A Logic for Semantic Interpretation¹

Eugene Charniak and Robert Goldman
Department of Computer Science
Brown University, Box 1910
Providence RI 02912

Abstract

We propose that logic (enhanced to encode probability information) is a good way of characterizing semantic interpretation. In support of this we give a fragment of an axiomatization for word-sense disambiguation, noun-phrase (and verb) reference, and case disambiguation. We describe an inference engine (Frail3) which actually takes this axiomatization and uses it to drive the semantic interpretation process. We claim three benefits from this scheme. First, the interface between semantic interpretation and pragmatics has always been problematic, since all of the above tasks in general require pragmatic inference. Now the interface is trivial, since both semantic interpretation and pragmatics use the same vocabulary and inference engine. The second benefit, related to the first, is that semantic guidance of syntax is a side effect of the interpretation. The third benefit is the elegance of the semantic interpretation theory. A few simple rules capture a remarkable diversity of semantic phenomena.

I. Introduction

The use of logic to codify natural language syntax is well known, and many current systems can parse directly off their axiomatizations (e.g., [1]). Many of these systems simultaneously construct an intermediate "logical form" using the same machinery. At the other end of language processing, logic is a well-known tool for expressing the pragmatic information needed for plan recognition and speech act recognition [2-4]. In between these extremes logic appears much less. There has been some movement in the direction of placing semantic interpretation on a more logical footing [5,6], but it is nothing like what has happened at the extremes of the language understanding process.

To some degree this is understandable. These "middle" parts, such as word-sense disambiguation, noun phrase reference, case disambiguation, etc. are notoriously difficult, and poorly understood, at least compared to things like syntax and the construction of intermediate logical form. Much of the reason these areas are so dark is that they are intimately bound up with pragmatic reasoning. The correct sense of a word depends on context, as does pronoun resolution, etc.

Here we rectify this situation by presenting an axiomatization of a fragment of semantic interpretation, notably including many aspects previously excluded: word-sense disambiguation, noun-phrase reference determination, case determination, and syntactic disambiguation. Furthermore we describe an inference engine, Frail3, which can use the logical formulation to carry out semantic interpretation. The description of Frail3 is brief, since the present paper is primarily concerned with semantic interpretation. For a more detailed description, see [7]. The work closest to what we present is that by Hobbs [5]; however, he handles only noun-phrase reference from the above list, and he does not consider intersentential influences at all.

Our system, Wimp2 (which uses Frail3), is quite pretty in two respects. First, it integrates semantic and pragmatic processing into a uniform whole, all done in the logic.

¹ This work has been supported in part by the National Science Foundation under grants IST 8416034 and IST 8515005 and the Office of Naval Research under grant N00014-79-C-0529.
Secondly, it provides an elegant and concise way to specify exactly what has to be done by a semantic interpreter. As we shall see, a system that is roughly comparable to other state-of-the-art semantic interpretation systems [6,8] can be written down in a page or so of logical rules. Wimp2 has been implemented and works on all of the examples in this paper.

II. Vocabularies

Let us start by giving an informal semantics for the special predicates and terms used by the system. Since we are doing semantic interpretation, we are translating between a syntactic tree on one hand and the logical, or internal, representation on the other. Thus we distinguish three vocabularies: one for trees, one for the internal representation, and one to aid in the translation between the two.

The vocabulary for syntactic trees assumes that each word in the sentence is represented as a word instance, which is represented as a word with a numerical postfix (e.g., boy22). A word instance is associated with the actual lexical entry by the predicate word-inst:

(word-inst word-instance part-of-speech lexical-item).

For example, (word-inst case26 noun case). (We use "part of speech" to denote those syntactic categories that are directly above the terminal symbols in the grammar, that is, directly above words.) The relations between word instances are encoded with two predicates: syn-pos and syn-pp. Syn-pos,

(syn-pos relation head sub-constituent),

indicates that the sub-constituent is the relation of the head. We distinguish between positional relations and those indicated by prepositional phrases, which use the predicate syn-pp but otherwise look the same.

The propositions denoting syntactic relations are generated during the parse. The parser follows all possible parses in a breadth-first search and outputs propositions on a word-by-word basis. If there is more than one parse and they disagree on the propositional output, a disjunction of the outputs is asserted into the database. The correspondence between trees and formulas is as follows:

Trees                                    Formulas
s -> np (vp ... head-v ...)              (syn-pos subject head-v np);
                                           head-v symbol is s symbol
vp -> ... head-v np ...                  (syn-pos object head-v np)
vp -> ... head-v np1 np2 ...             (syn-pos indirect-object head-v np1)
                                           (syn-pos object head-v np2)
vp -> ... head-v ... (pp prep ...)       (syn-pp head-prep head-v prep)
pp -> prep np                            (syn-pp prep-np prep np)
np -> ... head-n ...                     head-n symbol is np symbol
np -> pronoun                            pronoun symbol is np symbol
np -> propernoun                         propernoun symbol is np symbol
np -> ... adj head-n ...                 (syn-pos adj adj head-n)
np -> ... head-n ... (pp prep ...)       (syn-pp head-prep head-n prep)
np -> that s                             s symbol is np symbol
s -> np (vp ... copula (pp prep ...))    (syn-pp head-prep np prep)
s -> np (vp ... copula adj)              (syn-pos adj adj np)

This is enough to express a wide variety of simple declarative sentences. Furthermore, since our current parser implements a transformational account of imperatives, questions (both yes-no and wh), complement constructions, and subordinate clauses, these are automatically handled by the above as well. For example, given an account of "Jack wants to borrow the book." as derived from "Jack wants (np that (s Jack borrow the book))."
or something similar, then the above rules would produce the following for both (we also indicate after what word the formula is produced):

Words       Formulas
Jack        (word-inst jack1 propernoun jack)
wants       (word-inst want1 verb want)
            (syn-pos subject want1 jack1)
to
borrow      (word-inst borrow1 verb borrow)
            (syn-pos object want1 borrow1)
            (syn-pos subject borrow1 jack1)
the
book        (word-inst book1 noun book)
            (syn-pos object borrow1 book1)

This is, of course, a fragment, and most things are not handled by this analysis: negation, noun-noun combinations, particles, auxiliary verbs, etc.

Now let us consider the internal representation used for inference about the world. Here we use a simple predicate-calculus version of frames and slots. We assume only two predicates for this: == and inst. Inst,

(inst instance frame),

is a two-place predicate on an instance of a frame and the frame itself, where a "frame" is a set of objects, all of which are of the same natural kind. Thus

(inst boy1 boy-)

asserts that boy1 is a member of the set of boys, denoted by boy-. (Frames are symbols containing hyphens, e.g., supermarket-shopping. Where a single English word is sufficiently descriptive, the hyphen is put at the end.) The other predicate used to describe the world is the "better name" relation ==:

(== worse-name better-name).

This is a restricted use of equality. The second argument is a "better name" for the first, and thus may be freely substituted for it (but not the reverse). Since slots are represented as functions, == is used to fill slots in frames. To fill the agent slot of a particular action, say borrow1, with a particular person, say jack1, we say

(== (agent borrow1) jack1).

At an implementation level, == causes everything known about its first argument (the worse name) to be asserted about the second (the better name). This has the effect of concentrating all knowledge about all of an object's names as facts about the best name.

Frail will take as input a simple frame representation and translate it into predicate-calculus form. Figure 1 shows a frame for shopping along with the predicate-calculus translation. Naturally, a realistic world model requires more than these two predicates plus slot functions, but the relative success of fairly simple frame models of reasoning indicates that they are a good starting set.

The last set of predicates (word-sense, case, and role-inst) are used in the translation itself. They will be defined later.

(defframe shop-
  isa action                    ;(inst ?s.shop- action)
  slots (agent (person-))       ;(inst (agent ?s.shop-) person-)
        (store-of (store-))     ;(inst (store-of ?s.shop-) store-)
  acts (go-step (go- (agent (agent ?s.shop-))
                     (destination (store-of ?s.shop-)))))
                                ;(== (agent (go-step ?s.shop-)) (agent ?s.shop-))
                                ;(== (destination (go-step ?s.shop-))
                                ;    (store-of ?s.shop-))

Figure 1: A frame for shopping

III. Word-Sense Disambiguation

We can now write down some semantic interpretation rules. Let us assume that all words in English have one or more word senses as their meaning, that these word senses correspond to frames, and that any particular word instance has as its meaning exactly one of these senses. We can express this fact for the instances of any particular lexical entry as follows:

(word-inst inst part-of-speech word) =>
  (inst inst sense1) ∨ ... ∨ (inst inst sensen)

where sense1 through sensen are senses of word when it is used as a part-of-speech (i.e., as a noun, verb, etc.). Not all words in English have meanings in this sense. "The" is an obvious example.
Rather than complicate the above rules, we assign such words a "null" meaning, which we represent by the term garbage*. Nothing is known about garbage*, so this has no consequences. A better axiomatization would also include words which seem to correspond to functions (e.g., age), but we ignore such complications.

A minor problem with the above rule is that it requires us to be able to say at the outset (i.e., when we load the program) what all the word senses are, and new senses cannot be added in a modular fashion. To fix this we introduce a new predicate, word-sense:

(word-sense lex-item part-of-speech frame)
(word-sense straw noun drink-straw)
(word-sense straw noun animal-straw).

This states that lex-item when used as a part-of-speech can mean frame. We also introduce a pragmatically different form of disjunction, -->OR:

(-->OR formula1 formula2).

In terms of implementation, think of this as inferring formula1 in all possible ways and then asserting the disjunction of formula2 with each set of bindings. So if there are two sets of bindings, the result will be to assert

(OR formula2/bindings1 formula2/bindings2).

Logically, the meaning of -->OR is that if x1 ... xn are unbound variables in formula1, then there must exist x1 ... xn that make formula1 and formula2 true. We can now express our rule of word-sense ambiguity as:

(word-inst ?instance ?part-of-speech ?lex-item) =>
  (-->OR (word-sense ?lex-item ?part-of-speech ?frame)
         (inst ?instance ?frame))

IV. The Inference Engine

While it seems clear that the above rule expresses a rather simple-minded idea of how words relate to their meanings, its computational import may not be so clear. Thus we now discuss Wimp2, our language comprehension program, and its inference engine, Frail3.

Like most rule-based systems, Frail distinguishes forward and backward-chaining use of modus ponens. All of our semantic interpretation rules are forward-chaining rules:

(-> (word-inst ?instance ?part-of-speech ?lex-item)
    (-->OR (word-sense ?lex-item ?part-of-speech ?frame)
           (inst ?instance ?frame)))

Thus, whenever a new word instance is asserted, we forward-chain to a statement that the word denotes an instance of one of a set of frames.

Next, Frail uses an ATMS [9,10] to keep track of disjunctions. That is, when we assert (OR formula1 ... formulan) we create n assumptions (following de Kleer, these are simply integers) and assert each formula into the database, each with a label indicating that the formula is not true but only true given some assumptions. Here is an example of how some simple disjunctions come out. Given

A
(-> A (OR B C))
(-> B (OR D E))

we get

Formulas    Assumptions    Labels
A                          (())
B           1              ((1))
C           2              ((2))
D           3              ((1 3))
E           4              ((1 4))

Figure 2 represents this pictorially. Here D, for example, has the label ((1 3)), which means that it is true if we grant assumptions 1 and 3. If an assumption (or more generally, a set of assumptions) leads to a contradiction, the assumption is declared a "nogood" and formulas which depend on it are no longer believed. Thus if we learn (not D) then (1 3) is a nogood. This also has the consequence that E now has the label ((1)). It is as if different sets of assumptions correspond to different worlds. Semantic interpretation then is finding the "best" of the worlds defined by the linguistic possibilities.

Figure 2: Pictorial representation of disjunctions

We said "best" in the last sentence deliberately.
When alternatives can be ruled out on logical grounds, the corresponding assumptions become nogoods, and conclusions from them go away. But it is rare that all of the candidate interpretations (of words, of referents, etc.) reduce to only one that is logically possible. Rather, there are usually several which are logically consistent, but some are more "probable" than others. For this reason, Frail associates probabilities with sets of assumptions ("alternative worlds") and Wimp eventually "garbage collects" statements which remain low-probability alternatives because their assumptions are unlikely. Probabilities also guide which interpretation to explore. Exactly how this works is described in [7]. Here we will simply note that the probabilities are designed to capture the following intuitions:

1. Uncommon vs. common word-senses (marked vs. unmarked) are indicated by probabilities input by the system designer and stored in the lexicon.

2. Wimp prefers to find referents for entities (rather than not finding referents).

3. Possible reasons for actions and entities are preferred the more specific they are to the action or entity. (E.g., "shopping" is given a higher probability than "meeting someone" as an explanation for going to the supermarket.)

4. Formulas derived in two different ways are more probable than they would have been if derived in either way alone.

5. Disjunctions which lead to already considered "worlds" are preferred over those which do not hook up in this way. (We will illustrate this later.)

V. Case Disambiguation

Cases are indicated by positional relations (e.g., subject) and prepositional phrases. We make the simplifying assumption that prepositional phrases only indicate case relations. As we did for word-sense disambiguation, we introduce a new predicate that allows us to incrementally specify how a particular head (a noun or verb) relates to its syntactic roles. The new predicate,

(case head syntactic-relation slot),

states that head can have its slot filled by things which stand in syntactic-relation to it. For example,

(inst ?g go-) => (case ?g subject agent).

This can also be expressed in Frail using typed variables:

(case ?g.go- subject agent).

This says that any instance of a go- can use the subject position to indicate the agent of the go- event. These facts can be inherited in the typical way via the isa hierarchy, so this fact would more generally be expressed as

(case ?a.action- subject agent).

Using case and the previously introduced -->OR connective, we can express the rule of case relations. Formally, it says that for all syntactic positional relations and all meanings of the head, there must exist a case relation which is the significance of that syntactic position:

(syn-pos ?rel ?head ?val) ∧ (inst ?head ?frame) =>
  (-->OR (case ?head ?rel ?slot)
         (== (?slot ?head) ?val))

So, we might have

(syn-pos subject go1 jack1) ∧ (inst go1 go-) ∧ (case go1 subject agent)
  => (== (agent go1) jack1).

A similar rule holds for case relations indicated by prepositional phrases:

(syn-pp head-prep ?head ?pinst) ∧ (syn-pp prep-np ?pinst ?np)
  ∧ (word-inst ?pinst prep ?prep) ∧ (inst ?head ?frame) =>
  (-->OR (case ?head ?prep ?slot)
         (== (?slot ?head) ?np))

For example, "Jack went to the supermarket." would give us

(syn-pp head-prep go1 to1) ∧ (case go1 to destination)
  ∧ (syn-pp prep-np to1 supermarket1) ∧ (word-inst to1 prep to)
  ∧ (inst go1 go-) => (== (destination go1) supermarket1).
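To make the computational reading of this rule concrete, the following is a minimal Python sketch of how a -->OR case expansion could be evaluated. It is our illustration, not Frail3 code; the fact list and function names are invented, and the sketch ignores the ATMS bookkeeping entirely.

    # Toy evaluation of the -->OR case rule; all names are invented.
    # (case frame syntactic-relation slot) facts:
    CASE_FACTS = [
        ("go-",  "subject", "agent"),
        ("go-",  "to",      "destination"),
        ("die-", "subject", "agent"),   # assumed: no "to" case for die-
    ]

    def case_rule(head_inst, head_frame, syn_rel, filler):
        """Expand (syn-pos/syn-pp rel head val) with (inst head frame):
        bind (case ?head ?rel ?slot) in all possible ways and return
        the disjunction of (== (?slot head) filler) under those
        bindings."""
        return [f"(== ({slot} {head_inst}) {filler})"
                for frame, rel, slot in CASE_FACTS
                if frame == head_frame and rel == syn_rel]

    # "Jack went to the supermarket", go1 read as travel vs. as die:
    print(case_rule("go1", "go-",  "to", "supermarket1"))
    # -> ['(== (destination go1) supermarket1)']
    print(case_rule("go1", "die-", "to", "supermarket1"))
    # -> []  (an empty disjunction is false)

An empty result is the disjunction of zero disjuncts, which is false; as discussed next, this is exactly the mechanism by which case information eliminates word senses.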
We now have enough machinery to describe two ways in which word senses and case relations can help disambiguate each other. First consider the sentence

Jack went to the supermarket.

Wimp currently knows two meanings of "go," to travel and to die. After "Jack went" Wimp prefers travel (based upon probability rule 1 and the probabilities assigned to these two readings in the lexicon), but both are possible. After "Jack went to" the die reading goes away. This is because the only formulas satisfying (case go1 to ?slot) all require go1 to be a travel rather than a die. Thus "die" cannot be a reading, since it makes

(-->OR (case ?head ?prep ?slot) (== (?slot ?head) ?val))

false (a disjunction of zero disjuncts is false).

We also have enough machinery to see how "selectional restrictions" work in Wimp2. Consider the sentence

Jack fell at the store.

and suppose that Wimp knows two case relations for "at," loc and time. This will initially lead to the following disjunction:

(syn-pp head-prep fell1 at1) -> ((1)) (== (loc fell1) store1)
                                ((2)) (== (time fell1) store1)

However, Wimp will know that (inst (time ?a.action) time-). As we mentioned earlier, == statements cause everything known about the first argument to be asserted about the second. Thus Wimp will try to believe that store1 is a time, so (2) becomes a nogood and (1) becomes just true. It is important to note that both of these disambiguation methods fall out from the basics of the system. Nothing had to be added.

VI. Reference and Explanation

Definite noun phrases (np's) typically refer to something already mentioned. Occasionally they do not, however, and some, like proper names, may or may not refer to an already mentioned entity. Let us simplify by saying that all np's may or may not refer to something already mentioned. (We will return to indefinite np's later.) We represent np's by always creating a new instance which represents the entity denoted by the np. Should there be a referent, we assert equality between the newly minted object and the previously mentioned one. Thus, in "Jack went to the supermarket. He found some milk on the shelf.", the recognition that "He" refers to Jack would be indicated by

(== he24 jack3).

(Remember that == is a best-name relation, so this says that jack3 is a better name for the new instance we created to represent the "he," he24.)

As for representing the basic rule of reference, the idea is to see the call for a referent as a statement that something exists. Thus we might try to say

(inst ?x ?frame) => (Exists (y \ ?frame) (== ?x ?y)).

This is intended to say, if we are told of an object of type ?frame then there must exist an earlier one y of this same type to which the new one can be set equal. The trouble with this formula is that it does not say "earlier one." Exists simply says that there has to be one, whether or not it was mentioned. Furthermore, since we intend to represent an np like "the taxi" by (inst taxi27 taxi-) and then look for an earlier taxi, the Exists would be trivially satisfied by taxi27 itself. Our solution is to introduce a new quantifier called "previously exists" or PExists. (In [5] a similar end is achieved by putting weights on formulas and looking for a minimum-weight proof.) Using this new quantifier, we have

(inst ?x ?frame) => (PExists (y \ ?frame) (== ?x ?y)).

If there is more than one, a disjunction of equality statements is created. For example, consider the story

Jack went to the supermarket. He found the milk on the shelf. He paid for it.
The "it" in the last sentence could refer to any of the three inanimate objects mentioned, so initially the following disjunction is created: (== it8 shelf(}) (inst it8 inanimate-)~-(== it8 milk5) • " \(== it8 supermarket2). This still does not allow for the case when there is no referent for the np. To understand our solution to this problem it is necessary to note that we originally set out to create a plan-recognition system. That is to say, we wanted a program which given a sentence like "Jack got a rope. He wanted to kill himself." would recognize that Jack plans to hang himself. We discuss this aspect of Wimp2 in greater detail in [7]. Here we simply note that plans in Wimp2 are represented as frames (as shown in Figure 1.) and that sub tasks of plans are actions which fill certain slots of the frame. So the shop- plan has a go-step in Figure 1. and recognizing the import of "Jack went to the supermarket." would be to infer that (== (go-step shop-74) go61) where go61 represented the verb in "Jack went to the supermarket." We generalize this slightly and say that all inputs must be "explained"; by this we mean that we must find (or postulate) a frame in which the input fills a slot. Thus the go-step state- ment explains go61. The presence of a supermarket in the story would be explained by (== (store-of shop-74) super- market64). The rule that everything mentioned must be explained looks like this: (inst?x ?frame) ::~ (---,OR (roJe-inst ?x ?slot ?superfrm) (Exists (y \ ?superfrm) (== (?slot ?y) ?x))). (Some things cannot be explained, so this rule is not strict.) Here the role-inst predicate says that 7× can fill the ?slot role of the frame ?supedrm. E.g., (ro!e-inst ?r.store- store-of shop-) says that stores can fill the store- of slot in the shop- frame. Here we use Exists, not PExists since, as in the rope example, we explained the existence of the rope by postulating a new hanging event. The se- mantics of Exists is therefore quite standard, simply say- ing that one must exist, and making no commitment to whether it was mentioned earlier or not. As a matter of implementation, we note that it works simply by always creating a new instance. The impact of this will be seen i, a moment. We said that all inputs must be explained, and that we explain by seeing that the entity fills a slot in a pos- tulated frame. There is one exception to this. if a newly mentioned entity refers to an already extant one, then there is no need to explain it, since it was presumably explained the first time it was seen. Thus we combine our rule of reference with our rule of explanation. Or, to put it. slightly differently, we handle the exceptions to the rule of reference (some things do not refer to entities al- ready present) by saying that those which do not so refer must be explained instead. This gives the following rule: (inst ?x ?frame) A (not (= ?frame garbage*)) :=~ (OR (PExists (y \ ?frame) (== ?x ?y)) .9 (--,OR (role-inst ?x ?superfrm ?slot) (Exists (s \ ?superfrm) (== ( slot ?s) Here we added the restriction that the frame in question cannot be the garbage* frame, which has no properties by definition. We have also added probabilities to the dis- junctions that are intended to capture the preference for previously existing objects (probability rule 2). The rule of reference has several nice properties. First, it might seem odd that our rule for explaining things is expressed in terms of the Exists quantifier, which we said always cre- ates a new instance. 
What about a case like "Jack went to the supermarket. He found the milk on the shelf." where we want to explain the second line in terms of the shopping plan created in the first? As we have things set up, it simply creates a new shopping plan. But note what then occurs. First the system asserts (inst new-shopping5 shopping-). This activates the above rule, which must either find a referent for it, or try to explain it in terms of a frame for which it fills a role. In this case there is a referent, namely the shopping created in the course of the first line. Thus we get (== new-shopping5 shopping4) and we have the desired outcome. This example also shows that the reference rule works on event reference, not just np reference.

This rule handles reference to "related objects" rather well. Consider "Jack wanted to play the stereo. He pushed the on-off button." Here "the on-off button" is to be understood as the button "related" to the stereo mentioned in the first line. In Wimp this falls out from the rules already described. Upon seeing "the on-off button" Wimp creates a new entity which must then either have a referent or an explanation. It does not have the first, but one good explanation for the presence of an on-off button is that it fills the on-off-switch slot for some power-machine. Thus Wimp creates a machine and the machine then has to be explained. In this case a referent is found, the stereo from the first sentence.

VII. Pragmatic Influence

We finish with three examples illustrating how our semantic interpretation process easily integrates pragmatic influences: one example of pronoun reference, one of word-sense disambiguation, and one of syntactic ambiguity. First pronoun reference:

Jack went to the supermarket. He found the milk on the shelf. He paid for it.

In this example the "milk" of sentence two is seen as the purchased of shop-1, and the "pay" of sentence three is postulated to be the pay-step of a shopping event, and then further postulated to be the same shopping event as that created earlier. (In each case other possibilities will be considered, but their probabilities will be much lower.) Thus when "it" is seen Wimp is in the situation shown in Figure 3. The important thing here is that the statement (== it8 milk5) can be derived in two different ways, and thus its probability is much higher than the other possible referents for "it" (probability rule 4). (One derivation has it that since one pays for what one is shopping for, and Jack is shopping for milk, he must be paying for the milk. The other derivation is that "it" must refer to something, and the milk is one alternative.)

The second example is one of word-sense disambiguation:

Jack ordered a soda. He picked up the straw.

Here sentence one is seen as the order-step of a newly postulated eat-out1. The soda suggests a drinking event, which in turn can be explained as the eat-step of eat-out1. The straw in line two can be one of two kinds of straw, but the drink-straw interpretation suggests (via a role-inst statement) a straw-drinking event. This is postulated, and Wimp looks for a previous such event (using the normal reference rule) and finds the one suggested by the soda. Wimp prefers to assume that the drinking event suggested by "soda" and that from "straw" are the same event (probability rule 2), and this preference is passed back to become a preference for the drink-straw meaning of "straw" (by probability rule 5). The result is shown in Figure 4.
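Before the last example, here is a loose numerical sketch of probability rule 4 at work in the pronoun example above. The noisy-OR combination and the particular numbers are our assumptions for illustration only; the actual probability scheme is deferred to [7].

    # Illustrative only: combining support from multiple derivations.
    # We assume a noisy-OR combination; the real scheme is in [7].
    def combine(probs):
        """Probability that at least one derivation supports a formula."""
        q = 1.0
        for p in probs:
            q *= 1.0 - p
        return 1.0 - q

    # Support for each candidate referent of "it" in "He paid for it".
    # milk5 is derived twice: once by the reference rule, once via the
    # pay-step of the postulated shopping event (hypothetical numbers).
    support = {
        "milk5":        [0.3, 0.3],
        "shelf6":       [0.3],
        "supermarket2": [0.3],
    }
    scores = {ref: combine(ps) for ref, ps in support.items()}
    print(max(scores, key=scores.get))   # -> milk5 (0.51 vs. 0.3)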
Our third and last example shows how semantic guidance of syntax works:

Janet wanted to kill the boy with some poison.

Starting with the "with" there are two parses which disagree on the attachment of the prepositional phrase (pp). There are also two case relations the "with" can indicate if it modifies "kill," instrument and accompaniment. When Wimp sees "poison" it looks for an explanation of its presence and postulates a poisoning, which is found to be potentially coreferential with the "kill." The result looks like Figure 5. In this interpretation the poison can be inferred to be the instrument of the poisoning, so this option has higher probability (probability rule 4). This higher probability is passed back to the disjuncts representing a) the choice of instrument over accompaniment, and b) the choice of attaching to "kill" over "boy" (probability rule 5). This last has the effect of telling the parser where to attach the pp.

Figure 3: A pronoun example
Figure 4: A word-sense example
Figure 5: A syntactic disambiguation example

VIII. Future Research

This work can be extended in many ways: increased syntactic coverage, more realistic semantic rules, improved search techniques for possible explanations, etc. Here we will simply look at some fairly straightforward extensions to the model.

Our rule preferring finding a referent to not finding a referent is not reasonable for indefinite np's. Thus Wimp currently misinterprets

Jack bought a gun. Mary bought a gun.

since it wants to interpret the second gun as coreferential with the first. A simple change would be to have two rules of reference/explanation. The rule for indefinite np's would look like this:

(inst ?x ?frame) ∧ (not (= ?frame garbage*)) ∧ (syn-pos indef-det ?x ?det) =>
  (OR (PExists (y \ ?frame) (== ?x ?y)) .1
      (-->OR (role-inst ?x ?slot ?superfrm)
             (Exists (s \ ?superfrm) (== (?slot ?s) ?x))) .9)

This looks just like our earlier rule, except a check for an indefinite determiner is added, and the probabilities are reversed so as to prefer a new object over an already existing one. The earlier reference rule would then be modified to make sure that the object did not have an indefinite determiner.

Another aspect of language which fits rather nicely into this framework is metonymy. We have already noted that the work closest to ours is [5], and in fact we can adopt the analysis presented there without a wrinkle. This analysis assumes that every np corresponds to two objects in the story, the one mentioned and the one intended. For example:

I read Proust over summer vacation.

The two objects are the entity literally described by the np (here the person "Proust") and that intended by the speaker (here a set of books by Proust).
The syntactic analysis would be modified to produce the two objects, here proust1 and read-obj1 respectively:

(syn-pos direct-object read1 read-obj1)
(word-inst proust1 propernoun proust)
(syn-pos metonymy read-obj1 proust1)

It is then assumed that there are a finite number of relations that may hold between these two entities, most notably equality, but others as well. The rule relating the two entities would look like this:

(-> (syn-pos metonymy ?intended ?given)
    (OR (== ?intended ?given) .9
        (== (creator-of ?intended) ?given) .02
        ...))

This rule would prefer assuming that the two individuals are the same, but would allow other possibilities.

IX. Conclusion

We have presented logical rules for a fragment of the semantic interpretation (and plan recognition) process. The four simple rules we gave already capture a wide variety of semantic and pragmatic phenomena. We are currently working on diverse aspects of semantics, such as definite vs. indefinite np's, noun-noun combinations, adjectives, non-case uses of prepositions, metonymy and relative clauses.

References

[1] F. Pereira & D. Warren, "Definite clause grammars for language analysis - a survey of the formalism and a comparison with augmented transition networks," Artificial Intelligence 13 (1980), 231-278.

[2] Philip R. Cohen & C. Raymond Perrault, "Elements of a plan-based theory of speech acts," Cognitive Science 3 (1979), 177-212.

[3] Eugene Charniak, "A neat theory of marker passing," AAAI-86 (1986).

[4] Henry Kautz & James Allen, "Generalized plan recognition," AAAI-86 (1986).

[5] Jerry R. Hobbs & Paul Martin, "Local pragmatics," IJCAI-87 (1987).

[6] Graeme Hirst, Semantic Interpretation and the Resolution of Ambiguity, Cambridge University Press, Cambridge, 1987.

[7] Robert Goldman & Eugene Charniak, "A probabilistic ATMS for plan recognition," forthcoming.

[8] Barbara J. Grosz, Douglas E. Appelt, Paul A. Martin & Fernando C.N. Pereira, "Team: an experiment in the design of transportable natural-language interfaces," Artificial Intelligence 32 (1987), 173-243.

[9] Drew V. McDermott, "Contexts and data dependencies: a synthesis," IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-5 (1983).

[10] Johan de Kleer, "An assumption-based TMS," Artificial Intelligence 28 (1986), 127-162.
Interpretation as Abduction

Jerry R. Hobbs, Mark Stickel, Paul Martin, and Douglas Edwards
Artificial Intelligence Center
SRI International

Abstract

An approach to abductive inference developed in the TACITUS project has resulted in a dramatic simplification of how the problem of interpreting texts is conceptualized. Its use in solving the local pragmatics problems of reference, compound nominals, syntactic ambiguity, and metonymy is described and illustrated. It also suggests an elegant and thorough integration of syntax, semantics, and pragmatics.

1 Introduction

Abductive inference is inference to the best explanation. The process of interpreting sentences in discourse can be viewed as the process of providing the best explanation of why the sentences would be true. In the TACITUS Project at SRI, we have developed a scheme for abductive inference that yields a significant simplification in the description of such interpretation processes and a significant extension of the range of phenomena that can be captured. It has been implemented in the TACITUS System (Stickel, 1982; Hobbs, 1986; Hobbs and Martin, 1987) and has been and is being used to solve a variety of interpretation problems in casualty reports, which are messages about breakdowns in machinery, as well as in other texts.¹

It is well-known that people understand discourse so well because they know so much. Accordingly, the aim of the TACITUS Project has been to investigate how knowledge is used in the interpretation of discourse. This has involved building a large knowledge base of commonsense and domain knowledge (see Hobbs et al., 1986), and developing procedures for using this knowledge for the interpretation of discourse. In the latter effort, we have concentrated on problems in local pragmatics, specifically, the problems of reference resolution, the interpretation of compound nominals, the resolution of some kinds of syntactic ambiguity, and metonymy resolution. Our approach to these problems is the focus of this paper.

In the framework we have developed, what the interpretation of a sentence is can be described very concisely:

(1) To interpret a sentence:
      Derive the logical form of the sentence,
        together with the constraints that predicates
        impose on their arguments,
        allowing for coercions,
      Merging redundancies where possible,
      Making assumptions where necessary.

By the first line we mean "derive in the logical sense, or prove from the predicate calculus axioms in the knowledge base, the logical form that has been produced by syntactic analysis and semantic translation of the sentence."

In a discourse situation, the speaker and hearer both have their sets of private beliefs, and there is a large overlapping set of mutual beliefs. An utterance stands with one foot in mutual belief and one foot in the speaker's private beliefs. It is a bid to extend the area of mutual belief to include some private beliefs of the speaker's. It is anchored referentially in mutual belief, and when we derive the logical form and the constraints, we are recognizing this referential anchor. This is the given information, the definite, the presupposed. Where it is necessary to make assumptions, the information comes from the speaker's private beliefs, and hence is the new information, the indefinite, the asserted. Merging redundancies is a way of getting a minimal, and hence a best, interpretation.

¹ Charniak (1986) and Norvig (1987) have also applied abductive inference techniques to discourse interpretation.
In Section 2 of this paper, we justify the first clause of the above characterization by showing that solving local pragmatics problems is equivalent to proving the logical form plus the constraints. In Section 3, we justify the last two clauses by describing our scheme of abductive inference. In Section 4 we provide several examples. In Section 5 we describe briefly the type hierarchy that is essential for making abduction work. In Section 6 we discuss future directions.²

2 Local Pragmatics

The four local pragmatics problems we have addressed can be illustrated by the following "sentence" from the casualty reports:

(2) Disengaged compressor after lube-oil alarm.

Identifying the compressor and the alarm are reference resolution problems. Determining the implicit relation between "lube-oil" and "alarm" is the problem of compound nominal interpretation. Deciding whether "after lube-oil alarm" modifies the compressor or the disengaging is a problem in syntactic ambiguity resolution. The preposition "after" requires an event or condition as its object, and this forces us to coerce "lube-oil alarm" into "the sounding of the lube-oil alarm"; this is an example of metonymy resolution. We wish to show that solving the first three of these problems amounts to deriving the logical form of the sentence. Solving the fourth amounts to deriving the constraints predicates impose on their arguments, allowing for coercions. For each of these problems, our approach is to frame a logical expression whose derivation, or proof, constitutes an interpretation.

Reference: To resolve the reference of "compressor" in sentence (2), we need to prove (constructively) the following logical expression:

(3) (∃ c) compressor(c)

If, for example, we prove this expression by using axioms that say C1 is a starting air compressor, and that a starting air compressor is a compressor, then we have resolved the reference of "compressor" to C1.

In general, we would expect definite noun phrases to refer to entities the hearer already knows about and can identify, and indefinite noun phrases to refer to new entities the speaker is introducing. However, in the casualty reports most noun phrases have no determiner. There are sentences, such as

Retained oil sample and filter for future analysis.

where "sample" is indefinite, or new information, and "filter" is definite, or already known to the hearer. In this case, we try to prove the existence of both the sample and the filter. When we fail to prove the existence of the sample, we know that it is new, and we simply assume its existence.

Elements in a sentence other than nominals can also function referentially. In

Alarm sounded. Alarm activated during routine start of compressor.

one can argue that the activation is the same as, or at least implicit in, the sounding. Hence, in addition to trying to derive expressions such as (3) for nominal reference, for possible non-nominal reference we try to prove similar expressions,

(∃ ... e, a, ...) ... ∧ activate'(e, a) ∧ ...³

That is, we wish to derive the existence, from background knowledge or the previous text, of some known or implied activation.

² Interpreting indirect speech acts, such as "It's cold in here," meaning "Close the window," is not a counterexample to the principle that the minimal interpretation is the best interpretation, but rather can be seen as a matter of achieving the minimal interpretation coherent with the interests of the speaker.

³ See Hobbs (1985a) for explanation of this notation for events.
Most, but certainly not all, information conveyed non-nominally is new, and hence will be assumed.

Compound Nominals: To resolve the reference of the noun phrase "lube-oil alarm", we need to find two entities o and a with the appropriate properties. The entity o must be lube oil, a must be an alarm, and there must be some implicit relation between them. Let us call that implicit relation nn. Then the expression that must be proved is

(∃ o, a, nn) lube-oil(o) ∧ alarm(a) ∧ nn(o, a)

In the proof, instantiating nn amounts to interpreting the implicit relation between the two nouns in the compound nominal. Compound nominal interpretation is thus just a special case of reference resolution.

Treating nn as a predicate variable in this way seems to indicate that the relation between the two nouns can be anything, and there are good reasons for believing this to be the case (e.g., Downing, 1977). In "lube-oil alarm", for example, the relation is

λx, y [y sounds if pressure of x drops too low]

However, in our implementation we use a first-order simulation of this approach. The symbol nn is treated as a predicate constant, and the most common possible relations (see Levi, 1978) are encoded in axioms. The axiom

(∀ x, y) part(y, x) ⊃ nn(x, y)

allows interpretation of compound nominals of the form "<whole> <part>", such as "filter element". Axioms of the form

(∀ x, y) sample(y, x) ⊃ nn(x, y)

handle the very common case in which the head noun is a relational noun and the prenominal noun fills one of its roles, as in "oil sample". Complex relations such as the one in "lube-oil alarm" can sometimes be glossed as "for":

(∀ x, y) for(y, x) ⊃ nn(x, y)

Syntactic Ambiguity: Some of the most common types of syntactic ambiguity, including prepositional phrase and other attachment ambiguities and compound nominal ambiguities, can be converted into constrained coreference problems (see Bear and Hobbs, 1988). For example, in (2) the first argument of after is taken to be an existentially quantified variable which is equal to either the compressor or the disengaging. The logical form would thus include

(∃ ... e, c, y, a, ...) ... ∧ after(y, a) ∧ y ∈ {c, e} ∧ ...

That is, however after(y, a) is proved or assumed, y must be equal to either the compressor c or the disengaging e. This kind of ambiguity is often solved as a byproduct of the resolution of metonymy or of the merging of redundancies.

Metonymy: Predicates impose constraints on their arguments that are often violated. When they are violated, the arguments must be coerced into something related which satisfies the constraints. This is the process of metonymy resolution. Let us suppose, for example, that in sentence (2), the predicate after requires its arguments to be events:

after(e1, e2) : event(e1) ∧ event(e2)

To allow for coercions, the logical form of the sentence is altered by replacing the explicit arguments by "coercion variables" which satisfy the constraints and which are related somehow to the explicit arguments. Thus the altered logical form for (2) would include

(∃ ... k1, k2, y, a, rel1, rel2, ...) ... ∧ after(k1, k2) ∧ event(k1) ∧ rel1(k1, y) ∧ event(k2) ∧ rel2(k2, a) ∧ ...

As in the most general approach to compound nominal interpretation, this treatment is second-order, and suggests that any relation at all can hold between the implicit and explicit arguments. Nunberg (1978), among others, has in fact argued just this point.
However, in our implementation, we are using a first-order simulation. The symbol rel is treated as a predicate constant, and there are a number of axioms that specify what the possible coercions are. Identity is one possible relation, since the explicit arguments could in fact satisfy the constraints:

(∀ x) rel(x, x)

In general, where this works, it will lead to the best interpretation. We can also coerce from a whole to a part and from an object to its function. Hence,

(∀ x, y) part(x, y) ⊃ rel(x, y)
(∀ x, e) function(e, x) ⊃ rel(e, x)

Putting it all together, we find that to solve all the local pragmatics problems posed by sentence (2), we must derive the following expression:

(∃ e, x, c, k1, k2, y, a, o) Past(e) ∧ disengage'(e, x, c) ∧ compressor(c)
  ∧ after(k1, k2) ∧ event(k1) ∧ rel(k1, y) ∧ y ∈ {c, e}
  ∧ event(k2) ∧ rel(k2, a) ∧ alarm(a) ∧ nn(o, a) ∧ lube-oil(o)

But this is just the logical form of the sentence⁴ together with the constraints that predicates impose on their arguments, allowing for coercions. That is, it is the first half of our characterization (1) of what it is to interpret a sentence.

When parts of this expression cannot be derived, assumptions must be made, and these assumptions are taken to be the new information. The likelihood of different atoms in this expression being new information varies according to how the information is presented linguistically. The main verb is more likely to convey new information than a definite noun phrase. Thus, we assign a cost to each of the atoms: the cost of assuming that atom. This cost is expressed in the same currency in which other factors involved in the "goodness" of an interpretation are expressed; among these factors are likely to be the length of the proofs used and the salience of the axioms they rely on. Since a definite noun phrase is generally used referentially, an interpretation that simply assumes the existence of the referent and thus fails to identify it should be an expensive one. It is therefore given a high assumability cost. For purposes of concreteness, let's call this $10. Indefinite noun phrases are not usually used referentially, so they are given a low cost, say, $1. Bare noun phrases are given an intermediate cost, say, $5. Propositions presented non-nominally are usually new information, so they are given a low cost, say, $3. One does not usually use selectional constraints to convey new information, so they are given the same cost as definite noun phrases. Coercion relations and the compound nominal relations are given a very high cost, say, $20, since to assume them is to fail to solve the interpretation problem. If we superscript the atoms in the above logical form by their assumability costs, we get the following expression:

(∃ e, x, c, k1, k2, y, a, o) Past(e)^$3 ∧ disengage'(e, x, c)^$3 ∧ compressor(c)^$5
  ∧ after(k1, k2)^$3 ∧ event(k1)^$10 ∧ rel(k1, y)^$20 ∧ y ∈ {c, e}
  ∧ event(k2)^$10 ∧ rel(k2, a)^$20 ∧ alarm(a)^$5 ∧ nn(o, a)^$20 ∧ lube-oil(o)^$5

While this example gives a rough idea of the relative assumability costs, the real costs must mesh well with the inference processes and thus must be determined experimentally. The use of numbers here and throughout the next section constitutes one possible regime with the needed properties. We are at present working, and with some optimism, on a semantics for the numbers and the procedures that operate on them. In the course of this work, we may modify the procedures to an extent, but we expect to retain their essential properties.

⁴ For justification for this kind of logical form for sentences with quantifiers and intensional operators, see Hobbs (1983) and Hobbs (1985a).
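As a rough illustration of how these costs could be charged, here is a small Python sketch that totals the assumability costs of whatever a candidate interpretation fails to prove. Only the dollar figures come from the text; the scoring function itself is our invention and ignores the proof-length and salience factors just mentioned.

    # Hypothetical scoring sketch; only the dollar figures are from the
    # text.  Proved atoms cost nothing here.
    COSTS = {
        "Past(e)": 3, "disengage'(e,x,c)": 3,                 # non-nominal: $3
        "compressor(c)": 5, "alarm(a)": 5, "lube-oil(o)": 5,  # bare NPs: $5
        "event(k1)": 10, "event(k2)": 10,                     # constraints: $10
        "rel(k1,y)": 20, "rel(k2,a)": 20, "nn(o,a)": 20,      # coercions/nn: $20
    }

    def interpretation_cost(assumed_atoms):
        """Total cost of an interpretation: the sum of the assumability
        costs of the atoms it assumes rather than proves."""
        return sum(COSTS[atom] for atom in assumed_atoms)

    # Proving both references and the nn relation, assuming only the
    # new information conveyed by the verb:
    print(interpretation_cost(["Past(e)", "disengage'(e,x,c)"]))            # 6
    # Failing to interpret the compound nominal costs $20 more:
    print(interpretation_cost(["Past(e)", "disengage'(e,x,c)", "nn(o,a)"])) # 26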
4For justification for this kind of logical form for sentences with quantifiers and inteusional operators, see Hobbs(1983) and Hobbs (1985a). 97 3 Abduction We now argue for the last half of the characterization (I) of interpretation. Abduction is the process by which, from (Vz)p(z I D q(r) and q(A), one concludes p(A I. One can think of q(A) as the observable evidence, of (Vz)p(z) D q(z) as a gen- eral principle that could explain q(A)'s occurrence, and of p(A) as the inferred, underlying cause of q(A). Of course, this mode of inference is not valid; there may be many possible such p(A)'s. Therefore, other criteria are needed to choose among the possibilities. One obvious criterion is consistency of p(A I with the rest of what one knows. Two other criteria are what Thasard (1978) has called consilience and simplicity. Roughly, simplicity is that p(A) should be as small as possible, and consilience is that q(A) should be as big as possible. We want to get more bang for the buck, where q(A) is bang, and p(A) is buck. There is a property of natural language discourse, no- ticed by a number of linguists (e.g., Joos (19721, Wilks (1972)), that su~ests a role for simplicity and consilience in its interpretation--its high degree of redundancy. Con- sider Inspection of oll filter revealed metal particle~. An inspection is a looking at that causes one ~o learn a property relevant to the j~nc~io~ of the inspected object. The ~nc~io¢ of a falter is to capture p,~eticle~ from a fluid. To reveal is to os~e one ~o/earn. If we assume the two causings to learn are identical, the two sets of particles are identical, and the two functions are identical, then we have explained the sentence in a minimal fashion. A small number of inferences and assumptions have explained a large number of syntactically independent propositions in the sentence. As a byproduct, we have moreover shown that the inspector is the one to whom the particles are revealed and that the particles are in the filter. Another issue that arises in abduction is what might be called the "informativeness-correctness tradeotP'. Most previous uses of abduction in AI from a theorem-proving perspective have been in diagnostic reasoning (e.g., Pople, 1973; Cox and Pietrzykowski, 1986), and they have as- maned "most specific abduction". If we wish to explain chest palna~ it is not su~cient to assume the cause is sim- ply chest pains. We want something more specific, such as "pneumonia". We want the most specific possible expla- nation. In natural language processing, however, we often want the least specific assumption. If there is a mention of a fluid, we do not necessarily want to assume it is lube oil. Assuming simply the existence of a fluid may be the best we can do. s However, if there is corroborating evidence, we may want to make a more specific assumption. In Alarm sounded. Flow obstructed. SSometimes a cigar is just a cigar. we know the alarm is for the lube oil pressure, and this provides evidence that the flow is not merely of a fluid but of lube oil. The more specific our assumptions are, the more informative our interpretation is. The less specific they are, the more likely they are to be correct. We therefore need a scheme of abductive inference with three features. First, it should be possible for goal ex- pressions to be assumable~ at varying costs. Second, there should be the possibility of making assumptions at vari- ous levels of specificity. Third, there should be a way of exploiting the natural redundancy of texts. 
We have devised just such an abduction scheme.⁶ First, every conjunct in the logical form of the sentence is given an assumability cost, as described at the end of Section 2. Second, this cost is passed back to the antecedents in Horn clauses by assigning weights to them. Axioms are stated in the form

(4) P1^w1 ∧ P2^w2 ⊃ Q

This says that P1 and P2 imply Q, but also that if the cost of assuming Q is c, then the cost of assuming P1 is w1·c, and the cost of assuming P2 is w2·c. Third, factoring or synthesis is allowed. That is, goal wffs may be unified, in which case the resulting wff is given the smaller of the costs of the input wffs. This feature leads to minimality through the exploitation of redundancy.

Note that in (4), if w1 + w2 ≤ 1, most specific abduction is favored: why assume Q when it is cheaper to assume P1 and P2? If w1 + w2 > 1, least specific abduction is favored: why assume P1 and P2 when it is cheaper to assume Q? But in

P1^.6 ∧ P2^.6 ⊃ Q

if P1 has already been derived, it is cheaper to assume P2 than Q. P1 has provided evidence for Q, and assuming the "remainder" P2 of the necessary evidence for Q should be cheaper.

Factoring can also override least specific abduction. Suppose we have the axioms

P1^.6 ∧ P2^.6 ⊃ Q1
P2^.6 ∧ P3^.6 ⊃ Q2

and we wish to derive Q1 ∧ Q2, where each conjunct has an assumability cost of $10. Then assuming Q1 ∧ Q2 will cost $20, whereas assuming P1 ∧ P2 ∧ P3 will cost only $18, since the two instances of P2 can be unified. Thus, the abduction scheme allows us to adopt the careful policy of favoring least specific abduction while also allowing us to exploit the redundancy of texts for more specific interpretations.

In the above examples we have used equal weights on the conjuncts in the antecedents. It is more reasonable, however, to assign the weights according to the "semantic contribution" each conjunct makes to the consequent. Consider, for example, the axiom

(∀ x) car(x)^.8 ∧ no-top(x)^.4 ⊃ convertible(x)

We have an intuitive sense that car contributes more to convertible than no-top does.⁷ In principle, the weights in (4) should be a function of the probabilities that instances of the concept Pi are instances of the concept Q in the corpus of interest. In practice, all we can do is assign weights by a rough, intuitive sense of semantic contribution, and refine them by successive approximation on a representative sample of the corpus.

One would think that since we are deriving the logical form of the sentence, rather than determining what can be inferred from the logical form of the sentence, we could not use superset information in processing the sentence. That is, since we are back-chaining from the propositions in the logical form, the fact that, say, lube oil is a fluid, which would be expressed as

(5) (∀ x) lube-oil(x) ⊃ fluid(x)

could not play a role in the analysis. Thus, in the text

Flow obstructed. Metal particles in lube oil filter.

we know from the first sentence that there is a fluid. We would like to identify it with the lube oil mentioned in the second sentence. In interpreting the second sentence, we must prove the expression

(∃ x) lube-oil(x)

If we had as an axiom

(∀ x) fluid(x) ⊃ lube-oil(x)

then we could establish the identity. But of course we don't have such an axiom, for it isn't true. There are lots of other kinds of fluids. There would seem to be no way to use superset information in our scheme.

⁶ The abduction scheme is due to Mark Stickel, and it, or a variant of it, is described at greater length in Stickel (1988).
Fortunately, however, there is a way. We can make use of this information by converting the axiom into a biconditional. In general, axioms of the form

species ⊃ genus

can be converted into a biconditional axiom of the form

genus ∧ differentiae ≡ species

Often, of course, as in the above example, we will not be able to prove the differentiae, and in many cases the differentiae can not even be spelled out. But in our abductive scheme, this does not matter. They can simply be assumed. In fact, we need not state them explicitly. We can simply introduce a predicate which stands for all the remaining properties. It will never be provable, but it will be assumable. Thus, we can rewrite (5) as

(∀ x) fluid(x) ∧ etc1(x) ≡ lube-oil(x)

Then the fact that something is fluid can be used as evidence for its being lube oil. With the weights distributed according to semantic contribution, we can go to extremes and use an axiom like

(∀ x) mammal(x)^.2 ∧ etc2(x)^.9 ⊃ elephant(x)

to allow us to use the fact that something is a mammal as (weak) evidence that it is an elephant.

In principle, one should try to prove the entire logical form of the sentence and the constraints at once. In this global strategy, any heuristic ordering of the individual problems is done by the theorem prover. From a practical point of view, however, the global strategy generally takes longer, sometimes significantly so, since it presents the theorem prover with a longer expression to be proved. We have experimented both with this strategy and with a bottom-up strategy in which, for example, we try to identify the lube oil before trying to identify the lube-oil alarm. The latter is quicker since it presents the theorem prover with problems in a piecemeal fashion, but the former frequently results in better interpretations since it is better able to exploit redundancies. The analysis of the sentence in Section 4.2 below, for example, requires either the global strategy or very careful axiomatization. The bottom-up strategy, with only a view of a small local region of the sentence, cannot recognize and capitalize on redundancies among distant elements in the sentence. Ideally, we would like to have detailed control over the proof process to allow a number of different factors to interact in determining the allocation of deductive resources. Among such factors would be word order, lexical form, syntactic structure, topic-comment structure, and, in speech, pitch accent.⁸

4 Examples

4.1 Distinguishing the Given and New

We will examine two difficult definite reference problems in which the given and the new information are intertwined and must be separated. In the first, new and old information about the same entity are encoded in a single noun phrase.

⁷ To prime this intuition, imagine two doors. Behind one is a car. Behind the other is something with no top. You pick a door. If there's a convertible behind it, you get to keep it. Which door would you pick?

⁸ Pereira and Pollack's CANDIDE system (1988) is specifically designed to aid investigation of the question of the most effective order of interpretation.
The second example is from Clark (1975), and illustrates what happens when the given and new information are combined into a single lexical item.

John walked into the room. The chandelier shone brightly.

What chandelier is being referred to? Let us suppose we have in our knowledge base the fact that rooms have lights.

(6) (∀r)room(r) ⊃ (∃l)light(l) ∧ in(l,r)

Suppose we also have the fact that lights with numerous fixtures are chandeliers.

(7) (∀l)light(l) ∧ has-fixtures(l) ⊃ chandelier(l)

The first sentence has given us the existence of a room -- room(R). To solve the definite reference problem in the second sentence, we must prove the existence of a chandelier. Back-chaining on axiom (7), we see we need to prove the existence of a light with fixtures. Back-chaining from light(l) in axiom (6), we see we need to prove the existence of a room. We have this in room(R). To complete the derivation, we assume the light l has fixtures. The light is thus given by the room mentioned in the previous sentence, while the fact that it has fixtures is new information.

4.2 Exploiting Redundancy

We next show the use of the abduction scheme in solving internal coreference problems. Two problems raised by the sentence

The plain was reduced by erosion to its present level.

are determining what was eroding and determining what "it" refers to. Suppose our knowledge base consists of the following axioms:

(∀p,l,s)decrease(p,l,s) ∧ vertical(s) ∧ etc3(p,l,s) ≡ (∃e1)reduce′(e1,p,l)

or, e1 is a reduction of p to l if and only if p decreases to l on some vertical scale s (plus some other conditions).

(∀p)landform(p) ∧ flat(p) ∧ etc4(p) ≡ plain(p)

or, p is a plain if and only if p is a flat landform (plus some other conditions).

(∀e,y,l,s)at′(e,y,l) ∧ on(l,s) ∧ vertical(s) ∧ flat(y) ∧ etc5(e,y,l,s) ≡ level′(e,l,y)

or, e is the condition of l's being the level of y if and only if e is the condition of y's being at l on some vertical scale s and y is flat (plus some other conditions).

(∀x,l,s)decrease(x,l,s) ∧ landform(x) ∧ altitude(s) ∧ etc6(x,l,s) ≡ (∃e)erode′(e,x)

or, e is an eroding of x if and only if x is a landform that decreases to some point l on the altitude scale s (plus some other conditions).

(∀s)vertical(s) ∧ etc7(s) ≡ altitude(s)

or, s is the altitude scale if and only if s is vertical (plus some other conditions).

Now the analysis. The logical form of the sentence is roughly

(∃e1,p,l,x,e2,y)reduce′(e1,p,l) ∧ plain(p) ∧ erode′(e1,x) ∧ present(e2) ∧ level′(e2,l,y)

Our characterization of interpretation says that we must derive this expression from the axioms or from assumptions. Back-chaining on reduce′(e1,p,l) yields

decrease(p,l,s1) ∧ vertical(s1) ∧ etc3(p,l,s1)

Back-chaining on erode′(e1,x) yields

decrease(x,l2,s2) ∧ landform(x) ∧ altitude(s2) ∧ etc6(x,l2,s2)

and back-chaining on altitude(s2) in turn yields

vertical(s2) ∧ etc7(s2)

We unify the goals decrease(p,l,s1) and decrease(x,l2,s2), and thereby identify the object of the erosion with the plain. The goals vertical(s1) and vertical(s2) also unify, telling us the reduction was on the altitude scale. Back-chaining on plain(p) yields

landform(p) ∧ flat(p) ∧ etc4(p)

and landform(x) unifies with landform(p), reinforcing our identification of the object of the erosion with the plain. Back-chaining on level′(e2,l,y) yields

at′(e2,y,l) ∧ on(l,s3) ∧ vertical(s3) ∧ flat(y) ∧ etc5(e2,y,l,s3)

and vertical(s3) and vertical(s2) unify, as do flat(y) and flat(p), thereby identifying "it", or y, as the plain p. We have not written out the axioms for this, but note also that "present" implies the existence of a change of level, or a change in the location of "it" on a vertical scale, and a decrease of a plain is a change of the plain's location on a vertical scale. Unifying these would provide reinforcement for our identification of "it" with the plain. Now assuming the most specific atoms we have derived, including all the "et cetera" conditions, we arrive at an interpretation that is minimal and that solves the internal coreference problems as a byproduct.
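The identifications in this derivation all come from unifying derived goals, and a bare-bones unifier suffices to reproduce the step that equates the eroded object with the plain. The following fragment is our own sketch, not the TACITUS prover: lowercase strings are variables, tuples are predications, predicate symbols are assumed to match positionally, and the occurs check is omitted.

    def walk(t, s):
        """Follow variable bindings in substitution s."""
        while isinstance(t, str) and t in s:
            t = s[t]
        return t

    def unify(a, b, s):
        a, b = walk(a, s), walk(b, s)
        if a == b:
            return s
        if isinstance(a, str) and a.islower():        # a is a variable
            return {**s, a: b}
        if isinstance(b, str) and b.islower():        # b is a variable
            return {**s, b: a}
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            for x, y in zip(a, b):
                s = unify(x, y, s)
                if s is None:
                    return None
            return s
        return None

    # The two goals produced by back-chaining on reduce' and erode':
    g1 = ("decrease", "p", "l", "s1")
    g2 = ("decrease", "x", "l2", "s2")
    print(unify(g1, g2, {}))
    # -> {'p': 'x', 'l': 'l2', 's1': 's2'}: the plain p is the eroded object x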
4.3 A Thorough Integration of Syntax, Semantics, and Pragmatics

By combining the idea of interpretation as abduction with the older idea of parsing as deduction (Kowalski, 1980, pp. 52-53; Pereira and Warren, 1983), it becomes possible to integrate syntax, semantics, and pragmatics in a very thorough and elegant way.⁹ Below is a simple grammar written in Prolog style, but incorporating calls to local pragmatics. The syntax portion is represented in standard Prolog manner, with nonterminals treated as predicates having as two of their arguments the beginning and end points of the phrase spanned by the nonterminal. The one modification we would have to make to the abduction scheme is to allow conjuncts in the antecedents to take costs directly as well as weights. Constraints on the application of phrase structure rules have been omitted, but could be incorporated in the usual way.

(∀i,j,k,x,p,args,req,e,c,rel) np(i,j,x) ∧ vp(j,k,p,args,req) ∧ p′(e,c)^$3 ∧ rel(c,x)^$20 ∧ subst(req,cons(c,args))^$10 ⊃ s(i,k,e)

(∀i,j,k,e,p,args,req,e1,c,rel) s(i,j,e) ∧ pp(j,k,p,args,req) ∧ p′(e1,c)^$3 ∧ rel(c,e)^$20 ∧ subst(req,cons(c,args))^$10 ⊃ s(i,k,e&e1)

(∀i,j,k,w,x,c,rel) v(i,j,w) ∧ np(j,k,x) ∧ rel(c,x)^$20 ⊃ vp(i,k,λz[w(z,c)],⟨c⟩,Req(w))

(∀i,j,k,x,p) det(i,j,"the") ∧ cn(j,k,x,p) ∧ p(x)^$20 ⊃ np(i,k,x)

(∀i,j,k,x,p) det(i,j,"a") ∧ cn(j,k,x,p) ∧ p(x)^$1 ⊃ np(i,k,x)

(∀i,j,k,w,x,y,p,nn) n(i,j,w) ∧ cn(j,k,x,p) ∧ w(y)^$3 ∧ nn(y,x)^$20 ⊃ cn(i,k,x,p)

(∀i,j,k,x,p1,p2,args,req,c,rel) cn(i,j,x,p1) ∧ pp(j,k,p2,args,req) ∧ subst(req,cons(c,args))^$10 ∧ rel(c,x)^$20 ⊃ cn(i,k,x,λz[p1(z) ∧ p2(z)])

(∀i,j,w) n(i,j,w) ⊃ (∃x)cn(i,j,x,w)

(∀i,j,k,w,x,c,rel) prep(i,j,w) ∧ np(j,k,x) ∧ rel(c,x)^$10 ⊃ pp(i,k,λz[w(c,z)],⟨c⟩,Req(w))

⁹This idea is due to Stuart Shieber.

For example, the first axiom says that there is a sentence from point i to point k asserting eventuality e if there is a noun phrase from i to j referring to x and a verb phrase from j to k denoting predicate p with arguments args and having an associated requirement req, and there is (or, for $3, can be assumed to be) an eventuality e of p's being true of c, where c is related to or coercible from x (with an assumability cost of $20), and the requirement req associated with p can be proved or, for $10, assumed to hold of the arguments of p. The symbol e&e1 denotes the conjunction of eventualities e and e1 (see Hobbs (1985b), p. 35). The third argument of predicates corresponding to terminal nodes such as n and det is the word itself, which then becomes the name of the predicate.
The function Req returns the requirements associated with a predicate, and subst takes care of substituting the right arguments into the requirements. ⟨c⟩ is the list consisting of the single element c, and cons is the LISP function cons. The relations rel and nn are treated here as predicate variables, but they could be treated as predicate constants, in which case we would not have quantified over them.

In this approach, s(0,n,e) can be read as saying there is an interpretable sentence from point 0 to point n (asserting e). Syntax is captured in predicates like np, vp, and s. Compositional semantics is encoded in, for example, the way the predicate p′ is applied to its arguments in the first axiom, and in the lambda expression in the third argument of vp in the third axiom. Local pragmatics is captured by virtue of the fact that in order to prove s(0,n,e), one must derive the logical form of the sentence together with the constraints predicates impose on their arguments, allowing for metonymy.

Implementations of different orders of interpretation, or different sorts of interaction among syntax, compositional semantics, and local pragmatics, can then be seen as different orders of search for a proof of s(0,n,e). In a syntax-first order of interpretation, one would try first to prove all the "syntactic" atoms, such as np(i,j,x), before any of the "local pragmatic" atoms, such as p′(e,c). Verb-driven interpretation would first try to prove vp(j,k,p,args,req) by proving v(i,j,w) and then using the information in the requirements associated with the verb to drive the search for the arguments of the verb, by deriving subst(req,cons(c,args)) before trying to prove the various np atoms. But more fluid orders of interpretation are obviously possible. This formulation allows one to prove those things first which are easiest to prove. It is also easy to see how processing could occur in parallel.

It is moreover possible to deal with ill-formed or unclear input in this framework, by having axioms such as this revision of our first axiom above.

(∀i,j,k,x,p,args,req,e,c,rel) np(i,j,x)^.4 ∧ vp(j,k,p,args,req)^.8 ∧ p′(e,c)^$3 ∧ rel(c,x)^$20 ∧ subst(req,cons(c,args))^$10 ⊃ s(i,k,e)

This says that a verb phrase provides more evidence for a sentence than a noun phrase does, but either one can constitute a sentence if the string of words is otherwise interpretable.

It is likely that this approach could be extended to speech recognition by using Prolog-style rules to decompose morphemes into their phonemes and weighting them according to their acoustic prominence.
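A deliberately toy rendering of the first grammar axiom may clarify the division of labor; this is our own schematic, not the paper's system. The syntactic atoms np and vp are proved against the input (here by crude word tests standing in for real lexical axioms), while the pragmatic atoms are simply assumed at the dollar costs given in the axiom, so an "interpretable sentence" comes with the price of its assumptions.

    WORDS = ["John", "works"]            # toy input occupying positions 0..2

    def np(i, j):                        # np(i,j,x): here, any capitalized word
        return j == i + 1 and WORDS[i][0].isupper()

    def vp(j, k):                        # vp(j,k,p,...): here, any lowercase word
        return k == j + 1 and WORDS[j].islower()

    def s(i, k):
        """s(i,k,e) via the first axiom: prove np and vp, assume the rest."""
        for j in range(i + 1, k):
            if np(i, j) and vp(j, k):
                assumed = {"p'(e,c)": 3, "rel(c,x)": 20, "subst(req,[c|args])": 10}
                return sum(assumed.values()), assumed
        return None

    print(s(0, 2))   # -> (33, {...}): an interpretable sentence at cost $33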
5 Controlling Abduction: Type Hierarchy

The first example on which we tested the new abductive scheme was the sentence

There was adequate lube oil.

The system got the correct interpretation, that the lube oil was the lube oil in the lube oil system of the air compressor, and it assumed that that lube oil was adequate. But it also got another interpretation. There is a mention in the knowledge base of the adequacy of the lube oil pressure, so it identified that adequacy with the adequacy mentioned in the sentence. It then assumed that the pressure was lube oil.

It is clear what went wrong here. Pressure is a magnitude whereas lube oil is a material, and magnitudes can't be materials. In principle, abduction requires a check for the consistency of what is assumed, and our knowledge base should have contained axioms from which it could be inferred that a magnitude is not a material. In practice, unconstrained consistency checking is undecidable and, at best, may take a long time. Nevertheless, one can, through the use of a type hierarchy, eliminate a very large number of possible assumptions that are likely to result in an inconsistency. We have consequently implemented a module which specifies the types that various predicate-argument positions can take on, and the likely disjointness relations among types. This is a way of exploiting the specificity of the English lexicon for computational purposes. This addition led to a speed-up of two orders of magnitude.

There is a problem, however. In an ontologically promiscuous notation, there is no commitment in a primed proposition to truth or existence in the real world. Thus, lube-oil′(e,o) does not say that o is lube oil or even that it exists; rather it says that e is the eventuality of o's being lube oil. This eventuality may or may not exist in the real world. If it does, then we would express this as Rexists(e), and from that we could derive from axioms the existence of o and the fact that it is lube oil. But e's existential status could be something different. For example, e could be nonexistent, expressed as not(e) in the notation, and in English as "The eventuality e of o's being lube oil does not exist," or as "o is not lube oil." Or e may exist only in someone's beliefs. While the axiom

(∀x)pressure(x) ⊃ ¬lube-oil(x)

is certainly true, the axiom

(∀e1,x)pressure′(e1,x) ⊃ ¬(∃e2)lube-oil′(e2,x)

would not be true. The fact that a variable occupies the second argument position of the predicate lube-oil′ does not mean it is lube oil. We cannot properly restrict that argument position to be lube oil, or fluid, or even a material, for that would rule out perfectly true sentences like "Truth is not lube oil."

Generally, when one uses a type hierarchy, one assumes the types to be disjoint sets with cleanly defined boundaries, and one assumes that predicates take arguments of only certain types. There are a lot of problems with this idea. In any case, in our work, we are not buying into this notion that the universe is typed. Rather we are using the type hierarchy strictly as a heuristic, as a set of guesses not about what could or could not be but about what it would or would not occur to someone to say. When two types are declared to be disjoint, we are saying that they are certainly disjoint in the real world, and that they are very probably disjoint everywhere except in certain bizarre modal contexts. This means, however, that we risk failing on certain rare examples. We could not, for example, deal with the sentence, "It then assumed that the pressure was lube oil."

6 Future Directions

Deduction is explosive, and since the abduction scheme augments deduction with the assumptions, it is even more explosive. We are currently engaged in an empirical investigation of the behavior of this abductive scheme on a very large knowledge base performing sophisticated processing. In addition to type checking, we have introduced two other techniques that are necessary for controlling the explosion: unwinding recursive axioms and making use of syntactic noncoreference information. We expect our investigation to continue to yield techniques for controlling the abduction process.

We are also looking toward extending the interpretation processes to cover lexical ambiguity, quantifier scope ambiguity and metaphor interpretation problems as well.
We will also be investigating the integration proposed in Section 4.3 and an approach that integrates all of this with the recognition of discourse structure and the recognition of relations between utterances and the hearer's interests.

Acknowledgements

The authors have profited from discussions with Todd Davies, John Lowrance, Stuart Shieber, and Mabry Tyson about this work. The research was funded by the Defense Advanced Research Projects Agency under Office of Naval Research contract N00014-85-C-0013.

References

[1] Bear, John, and Jerry R. Hobbs, 1988. "Localizing the Expression of Ambiguity", Proceedings, Second Conference on Applied Natural Language Processing, Austin, Texas, February, 1988.

[2] Charniak, Eugene, 1986. "A Neat Theory of Marker Passing", Proceedings, AAAI-86, Fifth National Conference on Artificial Intelligence, Philadelphia, Pennsylvania, pp. 584-588.

[3] Clark, Herbert, 1975. "Bridging". In R. Schank and B. Nash-Webber (Eds.), Theoretical Issues in Natural Language Processing, pp. 169-174. Cambridge, Massachusetts.

[4] Cox, P. T., and T. Pietrzykowski, 1986. "Causes for Events: Their Computation and Applications", Proceedings, CADE-8.

[5] Downing, Pamela, 1977. "On the Creation and Use of English Compound Nouns", Language, vol. 53, no. 4, pp. 810-842.

[6] Hobbs, Jerry R., 1983. "An Improper Treatment of Quantification in Ordinary English", Proceedings of the 21st Annual Meeting, Association for Computational Linguistics, pp. 57-63. Cambridge, Massachusetts, June 1983.

[7] Hobbs, Jerry R., 1985a. "Ontological Promiscuity", Proceedings, 23rd Annual Meeting of the Association for Computational Linguistics, pp. 61-69.

[8] Hobbs, Jerry R., 1985b. "The Logical Notation: Ontological Promiscuity", manuscript.

[9] Hobbs, Jerry R., 1986. "Overview of the TACITUS Project", Computational Linguistics, Vol. 12, No. 3.

[10] Hobbs, Jerry R., William Croft, Todd Davies, Douglas Edwards, and Kenneth Laws, 1986. "Commonsense Metaphysics and Lexical Semantics", Proceedings, 24th Annual Meeting of the Association for Computational Linguistics, New York, June 1986, pp. 231-240.

[11] Hobbs, Jerry R., and Paul Martin, 1987. "Local Pragmatics", Proceedings, International Joint Conference on Artificial Intelligence, pp. 520-523. Milano, Italy, August 1987.

[12] Joos, Martin, 1972. "Semantic Axiom Number One", Language, pp. 257-265.

[13] Kowalski, Robert, 1980. The Logic of Problem Solving, North Holland, New York.

[14] Levi, Judith, 1978. The Syntax and Semantics of Complex Nominals, Academic Press, New York.

[15] Norvig, Peter, 1987. "Inference in Text Understanding", Proceedings, AAAI-87, Sixth National Conference on Artificial Intelligence, Seattle, Washington, July 1987.

[16] Nunberg, Geoffrey, 1978. "The Pragmatics of Reference", Ph.D. thesis, City University of New York, New York.

[17] Pereira, Fernando C. N., and Martha E. Pollack, 1988. "An Integrated Framework for Semantic and Pragmatic Interpretation", to appear in Proceedings, 26th Annual Meeting of the Association for Computational Linguistics, Buffalo, New York, June 1988.

[18] Pereira, Fernando C. N., and David H. D. Warren, 1983. "Parsing as Deduction", Proceedings of the 21st Annual Meeting, Association for Computational Linguistics, pp. 137-144. Cambridge, Massachusetts, June 1983.

[19] Pople, Harry E., Jr., 1973. "On the Mechanization of Abductive Logic", Proceedings, Third International Joint Conference on Artificial Intelligence, pp. 147-152, Stanford, California, August 1973.

[20] Stickel, Mark E., 1982.
"A Nonclausal Connection- Graph Theorem-Proving Program", ProcecdingJ, AAAI. 85 National Conference on Artificial Intelligence, Pitts- burgh, Pennsylvania, pp. 229-233. [21] Stickel, Mark E., 1988. "A Prolog-like Inference Sys- tem for Computing Minimum-Cost Abductive Explana- tions in Natural-Language Interpretation", forthcoming. [22] Thagard, Paul R., 1978. "The Best Explanation: Cri- teria for Theory Choice", The Journal of Philosophy, pp. 76-92. [23] Wilks, Yorick, 1972. Grammar, Meaning, and the Ma- chine Analy-.iJ of Language, Routledge and Kegan Paul, London. 103
PROJECT APRIL -- A PROGRESS REPORT

Robin Haigh, Geoffrey Sampson, Eric Atwell
Centre for Computer Analysis of Language and Speech, University of Leeds, Leeds LS2 9JT, UK

ABSTRACT

Parsing techniques based on rules defining grammaticality are difficult to use with authentic inputs, which are often grammatically messy. Instead, the APRIL system seeks a labelled tree structure which maximizes a numerical measure of conformity to statistical norms derived from a sample of parsed text. No distinction between legal and illegal trees arises: any labelled tree has a value. Because the search space is large and has an irregular geometry, APRIL seeks the best tree using simulated annealing, a stochastic optimization technique. Beginning with an arbitrary tree, many randomly-generated local modifications are considered and adopted or rejected according to their effect on tree-value: acceptance decisions are made probabilistically, subject to a bias against adverse moves which is very weak at the outset but is made to increase as the random walk through the search space continues. This enables the system to converge on the global optimum without getting trapped in local optima. An early version of the APRIL system is yielding analyses of authentic inputs with a mean accuracy of 75.3%, using a schedule which increases processing linearly with sentence length; modifications currently being implemented should eliminate a high proportion of the remaining errors.

INTRODUCTION

Project APRIL (Annealing Parser for Realistic Input Language) is constructing a software system that uses the stochastic optimization technique known as "simulated annealing" (Kirkpatrick et al. 1983, van Laarhoven & Aarts 1987) to parse authentic English inputs by seeking labelled tree-structures that maximize a measure of plausibility defined in terms of empirical statistics on parse-tree configurations drawn from a database of manually parsed English text. This approach is a response to the fact that "real-life" English, such as the material in the Lancaster-Oslo/Bergen Corpus on which our research focuses, does not appear to conform to a fixed set of grammatical rules. (On the LOB Corpus and the research background from which Project APRIL emerged, see Garside et al. (1987). A crude pilot version of the APRIL system was described in Sampson (1986).)

Orthodox computational linguistics is heavily influenced by a concept of language according to which the set of all strings over the vocabulary of the language is partitioned into a class of grammatical strings, which possess analyses all parts of which conform to a finite set of rules defining the language, and a class of strings which are ungrammatical and for which the question of their grammatical structure accordingly does not arise. Even systems which set out to handle "deviant" sentences commonly do so by referring them to particular "non-deviant" sentences of which they are deemed to be distortions. In our work with authentic texts, however, we find the "grammaticality" concept unhelpful. It frequently happens that a word-sequence occurs which violates some recognized rule of English grammar, yet any reader can understand the passage without difficulty, and it often seems unlikely that most readers would notice the violation.
Furthermore, a problem which is probably even more troublesome for the rule-based approach is that there is an apparently endless diversity of constructions that no-one would be likely to describe as ungrammatical or deviant. Impressionistically it appears that any attempt to state a finite set of rules covering everything that occurs in authentic English text is doomed to go on adding more rules as long as more text is examined; Sampson (1987) adduced objective evidence supporting this impression.

Our approach, therefore, is to define a function which associates a figure of merit with any possible tree having labels drawn from a recognized alphabet of grammatical category-symbols; any input sentence is parsed by seeking the highest-valued tree possible for that sentence. The analysis process works the same way, whether the input is impeccably grammatical or quite bizarre. No contrast between legal and illegal labelled trees arises: a tree which would ordinarily be described as thoroughly illegal is in our terms just a tree whose figure of merit is relatively very poor.

This conception of parsing as optimization of a function defined for all inputs seems to us not implausible as a model of how people understand language. But that is not our concern; what matters to us is that this model seems very fruitful for automatic language-processing systems. It has a theoretical disadvantage by comparison with rule-based approaches: if an input is perfectly grammatical but contains many out-of-the-way (i.e. low-frequency) constructions, the correct analysis may be assigned a low figure of merit relative to some alternative analysis which treats the sentence as an imperfect approximation to a structure composed of high-frequency constructions. However, our experience is that, in authentic English, "trick sentences" of this kind tend to be much rarer than textbooks of theoretical linguistics might lead one to imagine. Against this drawback our approach balances the advantage of robustness. No input, no matter how bizarre, can cause our system simply to fail to return any analysis. Our sponsors, the Royal Signals and Radar Establishment (an agency of the U.K. Ministry of Defence),¹ are principally interested in speech analysis, and arguably this robustness should be even more advantageous for spoken language, which makes little use of constructions that are legitimate but recherché, while it contains a great deal that is sloppy or incorrect.

¹Project APRIL has been sponsored since December 1986 under contract MOD2062/128(RSRE); we are grateful to the Ministry of Defence for permission to publish this paper.

PARSING SCHEME

Any automatic parser needs some external standard against which its output is judged. Our "target" parses are those given by a scheme previously evolved for analysis of LOB Corpus material, which is sketched in Garside et al. (1987, chap. 7) and laid down in minute detail in unpublished documentation. This scheme was applied in manually parsing sentences totalling ca 50,000 words drawn from the various LOB genres: this TreeBank, as we call it, also serves as our source of grammatical statistics. A major objective in the definition of the parsing scheme and the construction of the TreeBank was consistency: wherever alternative analyses of a complex construction might be suggested (as a matter of analytic style as opposed to genuine ambiguity in sense), the scheme aims to stipulate which of the alternatives is to be used.
It is this need to ensure the greatest possible consistency which sets a practical limit to the size of the available database; producing the TreeBank took most of one teacher's research time for two years.

The parses yielded by the TreeBank scheme are immediate-constituent analyses of conventional type: they were designed so far as possible to be theoretically uncontroversial. They were not designed to be especially convenient for stochastic parsing, which we had not at that time thought of. The prior existence of the TreeBank is also the reason why we are working with written language rather than speech: at present we have no equivalent resource for spoken English.

THE PRINCIPLES OF SIMULATED ANNEALING

To explain how APRIL works, two chief issues must be clarified. One is the simulated annealing technique used to locate the highest-valued tree in the set of possible labelled trees; the other is the function used to evaluate any such tree. We will begin by explaining the technique of simulated annealing.

This technique uses stochastic (randomizing) methods to locate good solutions; it is now widely exploited in domains where combinatorial explosion makes the search space too vast for exhaustive examination, where no algorithm is available which leads systematically to the optimal solution, and where there is a considerable degree of "frustration" in the sense of Toulouse (1977), meaning that a seeming improvement in one feature of a solution often at the same time worsens some other feature of the solution, so that the problem cannot be decomposed into small subproblems which can each be optimized separately. (Compare how, in parsing, deciding to attach a constituent A as a daughter of a constituent B may be a relatively attractive way of "using up" A, at the cost of making B a less plausible constituent than it would be without A.)

One simple optimization technique, iterative improvement, begins by selecting a solution arbitrarily and then makes a long series of small modifications, drawn from a class of modifications which is defined in such a way that any point in the solution-space can be reached from any other point by a chain of modifications each belonging to the class. At each step the value of the solution obtained by making some such change is compared with the value of the current solution. The change is accepted and the new solution becomes current if it is an improvement; otherwise the change is rejected, the existing solution retained, and an alternative modification is tried. The process terminates on reaching a solution superior to each of its neighbours, i.e. when none of the available modifications is an improvement.
This acceptance threshold is ran- domly generated at each step from a biassed distribution; it may at any lime be very high or very low, but its mean value is made to decrease in accordance with some defined schedule as the iteration proceeds, so that ini- tially almost atl moves are accepted, good or bad, but moves which are severely detrimental soon start to be rejected, and in the later stages almost all detrimental moves are avoided. This scheme was originally devised as a simulation of the thermodynamic processes involved in the slow cooling of certain materials, hence the name "simulated annealing". Accepting modifications which worsen the current tree is at first sight a surprising idea, but such moves prevent the system getting stuck and insteed open up new possibilities; at the same time, there is an inexorable overall trend towards improvement. As a result, the system tends to seek out high-valued areas of the solution space initially in terms of gross features, and later in terms of progressively finer detail. Again, the process terminates at a local optimum, but not before exploring the possibilities so thoroughly that this is in general the global optimum. With certain simplifying assumptions, it has been shown mathematically that the global optimum is always found (Lundy & Mees, 1986): in prac- tice, the procedure appears to work well under rather less stringent conditions than those demanded by mathematical treaunents that have so far appeared" and our application does in fact take several liberties with the "pure" algorithm as set out in the literature. ANNEALING PARSE-TREES To apply simulated annealing toa given problem, it is necessary to define (a) a space of possible solutions, Co) a class of solution modifications which provides a mute from any point in the space to any other, and (c) an annealing schedule (i.e. an initial value for the mean acceptance threshold, a specification of the rate at which this mean is reduced, and a criterion for terminating the Im3cess). Solution space For us, the solution space for an input son- tence n wc~ls long is the set of all rooted labelled trees having n leaves, in which the leaf nodes are labelled with the word-class codes corresponding to the words of the sentence (for test inputs drawn from LOB, these are the codes given in the Tagged version of the LOB corpus) and the non-terminal nodes have labels drawn from the set of grammatical-category labels specified in the parsing scheme. The root node of a tree is assigned a fixed label, but any other non-terminal node may bear any category label. Move set A set of possible parse-tree modifications allowing any tree to be reached from any other can be defined as follows. To generate a modification, pick a non-terminal node of the current tree at random. Choose at random one of the move-types Merge or Hive. If Merge is chosen, delete the chosen node by replacing it, in its mother's dAughter-sequence, with its own daughter-sequence. If the move-type is Hive, choose a random continuous subsequence of the 106 node's daughter-sequence, and replace that subsequence by a new node having the subse- quence as its own daughter-sequence; assign a label drawn from the non-terminal alphabet to the new node. 
R is easy to see that the class of Merge and Hive moves allows at least one route from any u~e to any other tree over the same leaf-sequence: repeated Merging will ultimately mm any tree into the "flat tree" in which evea 7 leaf is directly dominated by the root, and since Merge and Hive moves mirror one another, if it is possible to get from any tree to the flat Iree it is equally possible to get from the flat tree to any tree. (In reality, there will be numerous alternative mutes between a given pair of trees, most of which will not pass through the flat tree.) New labels for nodes created by Hive moves are chosen randomly, with a bias determined by the labels of the daughter-sequence. This bias attempts to increase the frequency with which correct labels are chosen, without limiting the choice to the label which is best for the daughter-sequence considered in isolation, which may not of course be the best in context. An early version of APRIL limited itself to just the Merge and Hive moves. However, a good move-set for annealing should not only permit any solution to be reached from any other solution, but should also be such that paths exist between good trees which do not involve passing through much inferior inter- mediate stages. (See for example the remarks on depth in Lundy & Mees (1986).) To strengthen this tendency in our system it has proved desirable to add a third class of Re, attach moves to the move-set. To generate a Reattach move, choose randomly any non-root node in the current tree, eliminate the arc linking the chosen node to its mother, and insert an arc linking it to a node randomly chosen fi'om the set of nodes topologically capable of being its mother. Currently, we are exploring the cost- effectiveness of adding a fourth move-type, which relabels a randomly-chosen node without changing the tree shape; a m~lr for the future is to investigate how best to determine the propor- tions in which different move-types are gen- erated. Schedule The annealing schedule is ultimately a compromise between processing time and qual- ity of results: although the process can be speeded up at will, inevitably speeding up too much will make the system more likely to con- verge on a false solution when presented with a difficult sentence. Optimizing the schedule is a topic to which much attention has been paid in the literature of simulated annealing, but it seems fair to say that the discussion remains inconclusive. Since it does not in general bear on the specifically linguistic aspects of our pro- ject' we have deferred detailed consideration of this issue. We intend however to look at the variation in rate with respect to type of input, exploiting the division of the TreeBank (like its parent LOB Corpus) into genres: we would expect that the simple if sometimes messy sen- tences of dialogue in fiction, for instance, can be dealt with more quickly than the precise but tor- tuons grammar of legal prose. At present, then, we reduce the acceptance threshold at a constant rate which errs on the slow side; we expect that important advances in efficiency will result from improvements in the schedule, but such improvements may be over- taken by other developments to be described in later sections. The rate of decrease of the acceptance threshold is varied inversely with the length of the sentence, with the consequence that the run time varies roughly linearly with sentence length. 
EVALUATING PARSE-TREES

The function of the evaluation system is to assign a value to any labelled tree whatsoever, in such a way that the correct parse-tree for any given sentence is the highest-valued tree which can be drawn over the sentence, and the values of other trees over the same sentence reflect their relative merit (though comparisons of values between trees drawn over different sentences are not required to be meaningful).

An advantage of the annealing technique is that in principle it makes no demands on the form of evaluation: in particular, we are not constrained by the nature of the parsing algorithm to assume that the grammar of English is context-free or has any other special property. Nevertheless, we have found it convenient in our early work to start with a context-free assumption and work forward from that. With this assumption, a tree can be treated as a set of productions m → d1 d2 ... dn corresponding to the various nodes in the tree, where m is a non-terminal label and each di is either a non-terminal label or a wordtag, and we can assign to any such production a probability representing the frequency of such productions, as a proportion of all productions having m as mother-label; the value assigned to the entire tree will be the product of the probabilities of its productions.

The statistic required for any production, then, is an estimate of its probability of occurrence, and this may be derived from its frequency in the manually-parsed TreeBank. (To avoid circularity, sentences in the TreeBank which are to be used to test the performance of the parser are excluded from the frequency counts.) Clearly, with a database of this size, the figures obtained as production probabilities will be distorted by sampling effects. In general, even quite large sampling errors have little influence on results, since the frequency contrasts between alternative tree-structures tend to be of a higher order of magnitude, but difficulties arise with very low frequency productions: in particular, as an important special case, many quite normal productions will fail to occur at all in the TreeBank, and are thus not distinguished in our raw data from virtually-impossible productions. But it seems reasonable to infer probability estimates for unobserved productions from those of similar, observed productions, and more generally to smooth the raw frequency observations using statistical techniques (see for instance Good (1953)). (One consequence of such smoothing is that no production is ever assigned a probability of zero.) A natural response by linguists would be to say that a relationship of "similarity" between productions needs to be defined in terms of subtle, complex theoretical issues. However, so far we have been impressed by results obtainable in practice using very crude similarity relationships.

Our current evaluation method is only slightly more elaborate than the technique described in Sampson (1986), whereby the probability of a production was derived exclusively from the observed frequencies of the various pairwise transitions between daughter-labels within the production (that is, for any production m → d0 d1 ... dn dn+1, where d0 and dn+1 are boundary symbols, the estimated probability was the product of the observed frequencies of the various transitions m → ... di di+1 ... (0 ≤ i ≤ n), with zeroes replaced by small positive values). This approach was suggested by the success of the CLAWS system for grammatically disambiguating words in context (Garside et al. 1987, chap. 3), which uses an essentially Markovian model, and by the success of Markovian techniques in automatic speech understanding research from the Harpy project onwards (e.g. Lea 1980, Cravero et al. 1984).
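As a concrete illustration of the context-free evaluation just described (our own sketch, not the APRIL code; the frequencies are invented), each node contributes the product of its daughter-label transition frequencies, and the value of the tree is the product over all its nodes, with unseen transitions floored at a small positive value.

    from math import prod

    # Transition frequencies f(m -> ... d_i d_i+1 ...) estimated from a
    # treebank; "<" and ">" are the boundary symbols d_0 and d_n+1.
    FREQ = {("S", "<", "NP"): 0.7, ("S", "NP", "VP"): 0.6, ("S", "VP", ">"): 0.8,
            ("NP", "<", "det"): 0.5, ("NP", "det", "n"): 0.6, ("NP", "n", ">"): 0.7}
    FLOOR = 1e-4      # the "small positive value" replacing zero frequencies

    def node_prob(mother, daughters):
        labels = ["<"] + daughters + [">"]
        return prod(FREQ.get((mother, a, b), FLOOR)
                    for a, b in zip(labels, labels[1:]))

    def tree_value(tree):
        """tree = (label, [subtrees]); a leaf is just its wordtag string."""
        if isinstance(tree, str):
            return 1.0
        label, kids = tree
        kid_labels = [k if isinstance(k, str) else k[0] for k in kids]
        return node_prob(label, kid_labels) * prod(tree_value(k) for k in kids)

    print(tree_value(("S", [("NP", ["det", "n"]), ("VP", ["v"])])))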
Subsequent versions of APRIL have begun to incorporate an evaluation measure which makes limited use of non-Markovian relationships. Each label in the non-terminal alphabet is associated with a transition network, each arc of which is assigned a probability as well as a (non-terminal or terminal) label: the probability estimate for a node labelled m is the product of the probabilities of the consecutive arcs in the transition network for m which carry the labels of the node's daughter-sequence. Unlike the FSAs commonly used in computational linguistics, ours are required to accept any label-sequence: a "crazy" sequence will be assigned a low but non-zero value. Indeed our networks make no attempt to reflect subtle nuances of grammaticality; they diverge from Markovian networks only to represent a limited number of fundamental issues that are lost in a pure Markovian system.

APRIL IN ACTION

It is rather difficult to convey non-mathematically a feel for the way in which the system converges from an arbitrary tree to the correct tree by a sequence of random moves. In the earliest stages, labelled nodes are being created, moved and destroyed at a rapid rate in all regions of the tree, but after a while it starts to become apparent that certain local features are tending to persist. These tend to be the most strongly marked features grammatically, such as constituents comprising a single pronoun or an auxiliary verb. While such a feature persists, surrounding developments are constrained by it: other new nodes can be created if they are compatible, but new nodes which would conflict cannot appear. Thus the grammatical words form a skeleton on which the phrases and clauses can start to hang, and we find there is a perceptible gradually increasing tendency for the tree to consist of nodes and substructures which fit together well into a coherent whole. Speaking anthropomorphically, the system tends to make the simplest and most clear-cut decisions first, and the more subtle decisions later. But the strength of the system lies in the fact that no such decision is final: each is constantly being reappraised in the light of developments in its surroundings.

CURRENT PERFORMANCE

In order to assess APRIL's performance we need an objective way to compare output with target parses, i.e. a measure of similarity between pairs of distinct trees over the same sequence of leaf nodes. We know of no standard measure for this, but we have evolved one that seems natural and fair. For each word of input we compare the chains of node-labels between leaf and root in the two trees, and compute the number of labels which match each other and occur in the same order in the two chains as a proportion of all labels in both chains; then we average over the words. (We omit discussion of a refinement included in order to ensure that only fully-identical tree-pairs receive 100% scores.) With respect to our parsing technique, this performance measure is conservative, since averaging over words means that high-level nodes, dominating many words, contribute more than low-level nodes to overall scores, but APRIL tends to discover structure in a broadly bottom-up fashion.
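Under one natural reading of this measure (our own sketch; the authors' refinement for non-identical trees is omitted, and the "proportion of all labels in both chains" is taken as twice the longest common subsequence over the two chain lengths), it can be computed as follows.

    def lcs(a, b):
        """Length of the longest common subsequence of two label chains."""
        m = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, x in enumerate(a):
            for j, y in enumerate(b):
                m[i + 1][j + 1] = m[i][j] + 1 if x == y else max(m[i][j + 1], m[i + 1][j])
        return m[len(a)][len(b)]

    def chains(tree, path=()):
        """Yield the root-to-leaf label chain for each word, left to right."""
        if isinstance(tree, str):                 # a leaf wordtag
            yield path + (tree,)
        else:
            label, kids = tree
            for k in kids:
                yield from chains(k, path + (label,))

    def similarity(t1, t2):
        per_word = [2 * lcs(c1, c2) / (len(c1) + len(c2))
                    for c1, c2 in zip(chains(t1), chains(t2))]
        return sum(per_word) / len(per_word)

    gold = ("S", [("NP", ["det", "n"]), ("VP", ["v"])])
    out  = ("S", [("NP", ["det"]), ("VP", ["n", "v"])])   # one misattached word
    print(round(similarity(gold, out), 3))                # -> 0.889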
With respect to our parsing technique, this performance measme is conservative, since averaging over words means that high-level nodes, dominating many weeds, contribute more than low-level nodes to overall scores, but APRIL tends to discover structure in a broadly bottom-up fashion. At the time of writing, our latest results were those of a test run carried out in esxly February 1988, 14 months into a 36-month pro- ject, over 50 LOB sentences drawn from techni- cal prose and fiction, with mean, minimum, and maximum lengths of 22.4, 3, and 140 words respectively. (Note that our parsing scheme, and therefore our word-counts, treat punctuation marks as separate "words".) The alphabet of non-terminal labels from which APRIL chooses when labelling new nodes included virtually all the distinctions required by our scheme in an adequately parsed output; and it included several of the more significant phrase- subeategory distinctions whose role in the scheme is to guide the parser towards the correct output rather than to appear in the out- put (Garside et al. 1987, p. 89). Altogether the non-terminal alphabet included 113 distinct labels. For a 22-word sentence, the number of dis- tinct trees with labels drawn from a 113- member alphabet (and obeying the resirictions our scheme places on the occurrence of nodes with only single daughters) is about 5×10103 . To put this in perspective, finding a particular labelled tree in a search space of this size is like finding a single atom of gold in a solid cube of gold a thousand million light-years on a side. Mean scoc¢ of the 50 output analyses was 75.3%. This is not yet good enough for incor- poration into practical language-processing application software, but bearing in mind the preliminary nature of the current version of the system we are heartened by how good the scores already are. Furthennct'e, above about 15 words there appears to be no correlation between sentence-length and output score, offering a measure of support fc¢ our decision to use an annealing schedule which increases processing time roughly linearly with input length. Kirkpalrick et al. (1983) suggest that lineax processing is adequate for simulated annealing in other domains, but orthodox deter- ministic approaches to computational linguistics do not permit linear parsing except for highly artificial well-behaved languages. The parse-trees prodir.~ in this test run typ- ically show a substantially correct overall slruc- ture, with isolated local areas of difficulty where some deviant analysis has been preferred, com- monly a constituent wrongly labelled or a con- stituent attached to the surrounding tree at the wrong level An encouraging point is that a number of these errors relate to debatable gram- matical issues and might not be seen as errors at all. In the years when our target parsing scheme was being evolved, we worded about the idiomatic construction to try and [do some- thing]: should try and Verb be grouped as a constituent equivalent to a single verb? We finally decided not: we chose to analyse such sequences as co-ordinated clauses. But, where the test sentences include the sequence I want to try and find properties that .... APRIL has parsed: I want [Ti to [VB& try and fred] proper- ties that...].--the analysis which we came close to choosing as correct. A sentence which raises less trivial issues is illustrated (this is from text E23 in the LOB Corpus). We show the manual parse in the TreeBank (Fig.l), and APRIL's current output (Fig. 2), which contains two errors. 
First, the final phrase of the human mind should be attached as a postmodifier of mysteries. At this stage no distinction was made in word-tagging between of and other prepositions: there is however a strong tendency (though no absolute rule, of course) for an of phrase following a noun to be a postmodifier of the noun, and it is correspondingly rare for such a phrase to be an immediate constituent of a clause. Distinguishing of from other prepositions will enable the evaluation system to incorporate a representation of this piece of statistical evidence in its transition probabilities, whereupon this error should be avoided.

Secondly, APRIL has rejected the interpretation of the clause beginning representing ... as a postmodifier of tulle, and has chosen to make this clause appositional to the clause beginning placing ... (our scheme represents apposition in a manner akin to subordination). This error can be avoided if we note the strong tendency in English (again, not an absolute rule) that postmodifiers of any kind are most often attached to the nearest element that they can logically postmodify, that is, that the chain-structure typified in Fig. 1 is preferred to the embedding-structure in Fig. 2. A preliminary statistical analysis of the TreeBank appears to support the conjecture -- developed from the hypothesis formulated by Yngve (1960) -- that "the greater the depth of a non-terminal constituent, the greater the probability that either (a) this constituent is the last daughter of its mother, or (b) the next daughter of its mother is a punctuation mark." (We adapt Yngve's notion of depth to non-binary trees.) With this formulation it is relatively easy to incorporate into our evaluation system the necessary adjustments to our transition probabilities, so that trees of the more common type will tend to be preferred; but note that nothing prevents an overriding local consideration from leading the parser to prefer, in any given case, an analysis that departs from this general principle. When this is done, the initial context-free assumption will have been abandoned, to the extent that depths of constituents are taken into account as well as their labels, but no change is needed in the parsing algorithm.

The erroneous parsings in this example flout no rules of syntax that we can formulate and seem to involve no impossible productions, so they could be regarded as valid alternatives in a syntactically ambiguous sentence: a generative grammar could be expected to generate this sentence in several different ways, of which APRIL's would be one. However, as our methods improve we find that more and more sentences which are in principle ambiguous have the same reading selected by purely statistical-syntactic considerations as is preferred by human readers, who also have access to semantic and pragmatic considerations.

FUTURE DEVELOPMENTS

Apart from improving the evaluation system as already discussed, we plan in the near future to adapt APRIL so that it accepts raw text rather than sequences of word-class codes as input, choosing tags for grammatically ambiguous words as part of the same optimization process by which higher structure is discovered. The availability of the (probabilistic but deterministic) CLAWS word-tagging system meant that this was not seen as an initial priority.
Raw text input involves a number of problems relating to orthographic matters such as capitalization and hyphenated words, but these problems have essentially been solved by our Lancaster colleagues (Garside et al. 1987, chap. 8). We also intend soon to move from the current static system whose inputs are isolated sentences to a dynamic system within which annealing will take place in a window that scans across continuous text, with the system discovering sentence-boundaries for itself along with lower-level structure. (If our system is in due course adapted to parse spoken rather than written input, it is clear that all constituent boundaries including those of sentences would need to be discovered rather than given, and a corollary appears to be that the processing time needed for any length of input must increase only linearly with input length.) As adumbrated in Sampson (1986), we expect to make the dynamic annealing parser more efficient by exploiting the insight of Marcus (1980) that backtracking is rarely needed in natural language parsing: a gradient of processing intensity will be imposed on the annealing window, with most processing occurring in the "newest" parts of the current tree where valuable moves are most likely to be found.

However, simulated annealing is necessarily costly in terms of amount of processing needed. (The schedule used for the run discussed above involved on the order of 30,000 steps generated per input word.) Particularly with a view to applications such as real-time speech analysis, it would be desirable to find a way of exploiting parallel processing in order to minimize the time needed for parse-tree optimization. Parallelizing our approach to parsing is not a straightforward matter; one cannot, for instance, simply associate a process with each node of a tree, since there is no natural identity relationship between nodes in different trees within the solution space for an input. However, we have evolved an algorithm for concurrent tree annealing which we believe should be efficient, and a research proposal currently under consideration will implement this algorithm, using a transputer array which is about to be installed by a consortium of Leeds departments. In view of the widespread occurrence of hierarchical structures in cognitive science, we hope that a successful solution to the problem of parallel tree-optimization should be of interest to workers in other areas, such as image processing, as well as to linguists.

Lastly, a reasonable criticism of our work so far is that our target parses are those defined by a purely "surfacy" parsing scheme. For some speech-processing applications surface parsing is adequate, but for many purposes deeper language analyses are needed. We see no issue of principle hindering the extension of our methods to deep parsing, but at present there is a serious practical hindrance: our techniques can only be applied after a target parsing scheme has been specified in sufficient detail to prescribe unambiguous analyses for all phenomena occurring in authentic English, and then applied manually to a large enough quantity of text to yield usable statistics. A second currently-pending research proposal plans to convert the Gothenburg Corpus (Ellegård 1978), which consists of relatively deep manual parsings of 128,000 words of the Brown Corpus of American English, into a database usable for this purpose.

REFERENCES

Cravero, M., et al. 1984. "Syntax driven recognition of connected words by Markov models".
Proceedings of the 1984 IEEE International Conference on Acoustics, Speech and Signal Processing.

Ellegård, A. 1978. The Syntactic Structure of English Texts. Gothenburg Studies in English, 43.

Garside, R. G., et al., eds. 1987. The Computational Analysis of English. Longman.

Good, I. J. 1953. "The population frequencies of species and the estimation of population parameters". Biometrika 40.237-64.

Kirkpatrick, S. E., et al. 1983. "Optimization by Simulated Annealing". Science 220.671-80.

van Laarhoven, P. J. M., & E. H. L. Aarts. 1987. Simulated Annealing: Theory and Applications. D. Reidel.

Lea, R. G., ed. 1980. Trends in Speech Recognition. Prentice-Hall.

Lundy, M., and A. Mees. 1986. "Convergence of an annealing algorithm". Mathematical Programming 34.111-24.

Marcus, M. P. 1980. A Theory of Syntactic Recognition for Natural Language. MIT Press.

Sampson, G. R. 1986. "A stochastic approach to parsing". Proceedings of the 11th International Conference on Computational Linguistics (COLING '86), pp. 151-5. [GRS wishes to take this opportunity to apologize for the inadvertent near-coincidence of title between this paper and an important 1984 paper by T. Fujisaki.]

Sampson, G. R. 1987. "Evidence against the 'grammatical'/'ungrammatical' distinction". In W. Meijs, ed., Corpus Linguistics and Beyond. Rodopi.

Toulouse, G. 1977. "Theory of the frustration effect in spin glasses. I." Communications on Physics, 2.115-119.

Yngve, V. 1960. "A model and an hypothesis for language structure". Proceedings of the American Philosophical Society, 104.444-66.
DISCOURSE DEIXIS: REFERENCE TO DISCOURSE SEGMENTS

Bonnie Lynn Webber
Department of Computer & Information Science
University of Pennsylvania
Philadelphia PA 19104-6389

ABSTRACT

Computational approaches to discourse understanding have a two-part goal: (1) to identify those aspects of discourse understanding that require process-based accounts, and (2) to characterize the processes and data structures they involve. To date, in the area of reference, process-based accounts have been developed for subsequent reference via anaphoric pronouns and reference via definite descriptors. In this paper, I propose and argue for a process-based account of subsequent reference via deictic expressions. A significant feature of this account is that it attributes distinct mental reality to units of text often called discourse segments, a reality that is distinct from that of the entities described therein.

1. INTRODUCTION

There seem to be at least two constructs that most current theories of discourse understanding have adopted in at least some form. The first is the discourse entity, first introduced by Lauri Karttunen in 1976 (under the name "discourse referent") [9] and employed (under various other names) by many researchers, including myself [18]. The other is the discourse segment.

Discourse entities provide these theories with a uniform way of explaining what it is that noun phrases (NPs) and pronouns in a discourse refer to. Some NPs evoke a new discourse entity in the listener's evolving model of the discourse (which I have called simply a discourse model), others refer to ones that are already there. Such entities may correspond to something in the outside world, but they do not have to. To avoid confusion with a sense of "referring in the outside world", I will use the terms refer_m here, meaning "refer in a model", and referent_m, for the entity in the model picked out by the linguistic expression. The basic features of a discourse entity are that (a) it is a constant within the current discourse model and that (b) one can attribute to it, inter alia, properties and relationships with other entities. (It is for this reason that Bill Woods once called them "conceptual coat hooks".) In some theories, different parts of the discourse model (often called spaces) may represent different modalities, including hypothetical contexts, quantified contexts, the belief contexts of different agents, etc. Depending on what space is currently being described, the same NP or pronoun may evoke and/or refer_m to very different discourse entities.

The other common construct is the discourse segment. While discourse segmentation is generally taken to be a chunking of a linguistic text into sequences of related clauses or sentences, James Allen notes:

... there is little consensus on what the segments of a particular discourse should be or how segmentation could be accomplished. One reason for this lack of consensus is that there is no precise definition of what a segment is beyond the intuition that certain sentences naturally group together [[1], pp. 398-9]
What is taken to unify a segment is different in different theories: for example, among computational linguists, Grosz & Sidner [5] take a discourse segment to be a chunk of text that expresses a common purpose (what they have called a discourse segment purpose) with respect to the speaker's plans; Hobbs [8] takes a discourse segment to be a chunk of text that has a common meaning; while Nakhimovsky [12], considering only narrative, takes a discourse segment to be a chunk of text that describes a single event from a single perspective.
[[6], p.176]

The obvious question is whether such reference_m involves the same processes used to explain how a pronoun or NP evokes and/or refers_m to a discourse entity, or whether some other sort of process is involved. In this paper I will argue for the latter, giving evidence for a separate reference_m process by which a linguistic expression is first interpreted as a pointer to the representation of a discourse segment and then further constrained to specify either (a) a particular aspect of the discourse segment (e.g., its form, interpretation, speech act, etc.) or (b) a particular entity within its interpretation. In Section 2, I will attempt to justify the existence of a second referring_m process linked to a representation of discourse segments per se. In Section 3, I will attempt to justify particular features of the proposed process, and Section 4 summarizes the implications of this work for discourse understanding.

2. Justifying a Second Referring_m Process

There is ample evidence that subsequent reference can be made to some aspect of a sequence of clauses in text. Besides Examples 1 and 2 above, several other examples will be presented later, and the reader should have no trouble finding more. So the existence of such a phenomenon is not in dispute. Also not in dispute is the fact that such subsequent reference is most often done via deictic pronouns: of 79 instances of pronominal reference_m to clausal material found in five written texts¹, only 14 (~18%) used the pronoun it while the other 65 (~82%) used either this or that (17 instances of that and 48 of this). On the other hand, looking at all instances of pronominal reference_m using it to discourse entities evoked by NPs², of 41 such references, 39 (~95%) used it while only 2 (~5%) used this or that. Because of this, I will call this type of reference discourse deixis.

The first thing to note about discourse deixis is that the referent_m is often distinct from the things described in the sequence. For example:

Example 3
There's two houses you might be interested in:
House A is in Palo Alto. It's got 3 bedrooms and 2 baths, and was built in 1950. It's on a quarter acre, with a lovely garden, and the owner is asking $425K. But that's all I know about it.
House B is in Portola Valley. It's got 3 bedrooms, 4 baths and a kidney-shaped pool, and was also built in 1950. It's on 4 acres of steep wooded slope, with a view of the mountains. The owner is asking $600K. I heard all this from a friend, who saw the house yesterday.
Is that enough information for you to decide which to look at?

In this passage, that in the second paragraph does not refer to House A (although all instances of it do); rather it refers to the description of House A presented there. Similarly (all) this in the third paragraph does not refer to House B (although again, all instances of it do); rather it refers to the description of House B presented there. That in the fourth paragraph refers to the descriptions of the two houses taken together. That in each case it is the given description(s) that this and that are accessing, and not the houses, can be seen by interleaving the two descriptions, a technique often used when comparing two items:

Example 4
There's two houses you might be interested in: House A is in Palo Alto, House B in Portola Valley. Both were built in 1950, and both have 3 bedrooms. House A has 2 baths, and B, 4. House B also has a kidney-shaped pool.
House A is on a quarter acre, with a lovely garden, while House B is on 4 acres of steep wooded slope, with a view of the mountains. The owner of House A is asking $425K. The owner of House B is asking $600K.
#That's all I know about House A. #This I heard from a friend, who saw House B before it came on the market.
Is that enough information for you to decide which to look at?

Here houses A and B are described together, and the failure of that and this to refer successfully in the second paragraph indicates that (a) it is not the houses being referred_m to and (b) the individual descriptions available for reference_m in Example 3 are no longer available here. One must conclude from this that it is something associated with the sequences themselves, rather than the discourse entities described therein, that this and that refer_m to here.

The next thing to note is that the only sequences of utterances that appear to allow such pronominal reference_m are ones that intuitively constitute a discourse segment (cf. Section 1), as in Example 1 (repeated here) and Example 5:

Example 1
It's always been presumed that [1 when the glaciers receded, the area got very hot. The Folsum men couldn't adapt, and they died out. 1] That's what is supposed to have happened. It's the textbook dogma. But it's wrong. They were human and smart. They adapted their weapons and culture, and they survived.

Example 5
...it should be possible to identify certain functions as being unnecessary for thought by studying patients whose cognitive abilities are unaffected by locally confined damage to the brain. For example, [1 binocular stereo fusion is known to take place in a specific area of the cortex near the back of the head. [2 Patients with damage to this area of the cortex have visual handicaps but show no obvious impairment in their ability to think. 2] This suggests that stereo fusion is not necessary for thought. 1] This is a simple example, and the conclusion is not surprising.... [[6], p.183]

In Example 1, that can be taken to refer_m to the narrative of the glaciers and the Folsum men, which is intuitively a coherent discourse segment. (Brackets have been added to indicate discourse segments. Subscripts allow for embedded segments.) In Example 5, the first this can be taken as referring to the observation about visual cortex-damaged patients. The second this can be taken as referring to the whole embedded "brain damage" example.

To summarize the current claim: in the process of discourse understanding, a referent_m must be associated with each discourse segment, independent of the things it describes. Moreover, as Example 6 shows, this referent_m must have at least three properties associated with it: the speech act import of the segment, the form of the segment, and its interpretation (e.g., as a situation, event, object description, etc.)

Example 6
A: Hey, they've promoted Fred to second vice president.
B1: That's a lie. (* that speech act *)
B2: That's a funny way to describe the situation. (* that expression *)
B3: When did that happen? (* that event *)
B4: That's a weird thing for them to do. (* that action *)

I have not said anything about whether or not these discourse segment referents_m should be considered discourse entities like their NP-evoked counterparts. This is because I do not believe there is enough evidence to warrant taking a stand. Part of the problem is that there is no precise criterion for "discourse entity-hood".³
However, if every discourse segment evokes a discourse entity, an account will be needed of (1) when in the course of processing a segment such a thing happens, and (2) what the 'focus' status of each of these entities is.

3. Features of Deictic Reference_m

I suggest that the process of resolving discourse segment reference_m involves the following steps:

1. An input pronoun is first interpreted as a pointer to a representation of a discourse segment on the right frontier (cf. Section 1).
2. As the rest of the clause containing the pronoun is interpreted, the pronoun's interpretation may be either
a. further constrained to some property of the discourse segment representation, or
b. extended to one of the discourse entities within the interpretation of the segment.
3. As a consequence of whether this or that was used, the listener characterizes the speaker's "psychological distance" to its referent_m as either "close" or "far away". That is, this well-known deictic feature of this/that is not used in the referent-finding process but rather afterwards, in attributing the speaker's relationship to that referent_m.

In this section, I will try to motivate each of the proposed steps.

I have already argued that some deictic pronouns must be interpreted with respect to a discourse segment. Here I claim that the only discourse segments so available are ones on the right frontier. My evidence for this consists of (a) it being true of the 69 clausally-referring instances of this and that found in the five texts, and (b) the oddity of examples like the following variation of Example 3, where that in paragraph 3 is intended to refer_m to the description of House A.

Example 3'
There's two houses you might be interested in:
House A is in Palo Alto. It's got 3 bedrooms and 2 baths, and was built in 1950. It's on a quarter acre, with a lovely garden, and the owner is asking $425K.
House B is in Portola Valley. It's got 3 bedrooms, 4 baths and a kidney-shaped pool, and was also built in 1950. It's on 4 acres of steep wooded slope, with a view of the mountains. The owner is asking $600K. I heard all this from a friend, who saw the house yesterday. #But that's all I know about House A.⁴
Is that enough information for you to decide which to look at?

(Note that this very limited availability of possible referents_m, and the ability to coerce referents to any of their parts, which I shall argue for shortly, suggests parallels between this phenomenon and definite NP and temporal anaphora.)

Because at any time there may be more than one discourse segment on the right frontier, part of the reference resolution process involves identifying which one is intended. To see this, re-consider the first part of Example 5.

Example 5
...it should be possible to identify certain functions as being unnecessary for thought by studying patients whose cognitive abilities are unaffected by locally confined damage to the brain. For example, binocular stereo fusion is known to take place in a specific area of the cortex near the back of the head. Patients with damage to this area of the cortex have visual handicaps but show no obvious impairment in their ability to think. This....

At this point in the discourse, there are several things that this can be taken as specifying.
Considering just the things associated with clauses (and just this segment of text, and not what it is embedded in), this can be taken as specifying either the segment associated with the previous sentence (as in the original text - "This suggests that stereo fusion is not necessary for thought.") or the segment associated with the description of the whole example (as in "This is only a simple example, and the conclusion is not surprising..."). The listener's choice depends on what is compatible with the meaning of the rest of the sentence.⁵ As with other types of ambiguity, there may be a default (i.e., context-independent) preference for one particular form of construal over the others (cf. [3]), but it is easily over-ridden by context.

This ambiguity as to the intended designatum of a pointer is very similar to the ambiguity associated with the more fundamental and historically prior use of deixis in pointing within a shared spatio-temporal context, as in the following example:

Example 7
[A and A-Junior are standing in A's art gallery]
A: Someday this will all be yours.

Here this could be interpreted as either the business, the pictures, or the physical gallery.⁶ Both Quine [14] and Miller [10] have observed in this regard that all pointing is ambiguous: the intended demonstratum of a pointing gesture can be any of the infinite number of points "intersected" by the gesture or any of the structures encompassing those points. (Or, one might add, any interpretation of those structures.) The ambiguity here as to how large a segment on the right frontier is encompassed by a this or that is very similar. (Another feature that Quine and Miller mention, that will come up later in this discussion, involves constraints on the demonstratum of a pointing gesture to being something present in the shared context or some mutually recognizable re-interpretation of it. The latter is what Quine has called deferred ostension. It enables one, given the right audience, to point to the ceiling, with wires dangling from the center, say "That's off being cleaned" and effectively refer to the chandelier. Most examples of deferred ostension, both in spatio-temporal deixis and discourse deixis, are not that extreme. However, as I will try to show, both these features - ambiguity and "required presence" - are characteristic of discourse deixis as well.)

Having taken the initial step of interpreting a pronoun as pointing to the representation of a discourse segment, the proposed process must then be able to further coerce [8,11] that interpretation to be some property of the discourse segment representation or to some entity within it. Example 6 (above) illustrates the first type of coercion; Example 8, the latter.

Example 8
A: In the Antarctic autumn, Emperor penguins migrate to Tasmania.
B1: That's where they wait out the long Antarctic winter. (* that place *)
B2: So that's what you're likely to see there in May. (* that species of birds *)
B3: That's when it begins to get too cold even for a penguin. (* that time *)

The reason for treating discourse segment identification and coercion as two separate steps in the process is to accommodate the fact that most instances of this and that occur as the first NP in a clause.⁷ Since the listener cannot say for sure what they refer_m to until more evidence comes in from the rest of the sentence, a two-stage process allows the first stage to be done immediately, with the second stage done as a subsequent constraint satisfaction process.
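The two-stage process lends itself to a procedural reading. The following is a minimal sketch in Python; the Segment record, the right_frontier traversal, and the compatible predicate (standing in for "what is compatible with the meaning of the rest of the sentence") are illustrative assumptions of this presentation, not an implementation given in the paper.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Segment:
    """A node in the discourse segmentation tree (cf. Figure 1)."""
    name: str
    is_open: bool = True                                  # still under construction?
    children: List["Segment"] = field(default_factory=list)
    properties: dict = field(default_factory=dict)        # form, speech act, interpretation, ...
    entities: List[object] = field(default_factory=list)  # entities within its interpretation

def right_frontier(root: Segment) -> List[Segment]:
    """The most recent closed segment plus all currently open segments,
    ordered from the most deeply embedded segment outward."""
    chain: List[Segment] = []
    node: Optional[Segment] = root
    while node is not None:
        chain.append(node)                                # walk the last-child spine
        node = node.children[-1] if node.children else None
    # The bottom of the spine is the most recent segment, open or closed;
    # everything above it must still be open to count.
    frontier = [s for s in chain if s.is_open or s is chain[-1]]
    return list(reversed(frontier))

def resolve_deictic(root: Segment, compatible: Callable[[object], bool]):
    """Stage 1: treat the pronoun as a pointer to a right-frontier segment.
    Stage 2: coerce it to (a) a property of that segment or (b) a discourse
    entity within its interpretation, as the rest of the clause comes in."""
    for segment in right_frontier(root):
        for prop in segment.properties.values():          # coercion (a)
            if compatible(prop):
                return prop
        for entity in segment.entities:                   # coercion (b)
            if compatible(entity):
                return entity
    return None                                           # no compatible construal
```

Searching from the most deeply embedded frontier segment outward encodes a default preference for the most recent construal, which, as noted above, the rest of the sentence can over-ride.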
This two-stage process would resemble spatio-temporal uses of this and that, where the listener recognizes the general pointing gesture, and then tries to figure out the intended demonstratum based on what the speaker says about it (and on general heuristics about what might be worth pointing to).

Notice that this step of further constraining a pointing gesture also allows for a uniform treatment of this and do this (that and do that). A preposed this/that may be the object of do or of some other verb, but the listener will not know which until s/he reaches the verb itself, as in Example 9. Considering actions as properties of their respective events, the listener should be able to coerce that to be some appropriate facet of the discourse segment (or to some entity within that segment, as I will discuss next) that can be said or done.⁸

Example 9
Gladys told Sam last night that Fred was a complete jerk.
a. Anyway, that's what Fred believes that Gladys said.
b. Anyway, that's what Fred believes that Gladys did.⁹

On the other hand, what appears to be an additional ambiguity in resolving this/that may not be one at all. That is, a listener who is asked what a given this/that refers_m to must describe the representation that s/he has created. This act of description is subject to a lot of variability. For example, given a segment in which a statement A is supported by several pieces of evidence {B,C,D}, the listener might just describe A (the top level of the representation) or s/he might verbalize the whole representation.

As with anaphoric pronouns, when a deictic pronoun specifies an NP-evoked discourse entity, it must actually be part of its corresponding discourse segment interpretation. The interesting thing is that the same holds for deictic NPs, distinguishing them from anaphoric definite NPs, which can easily refer_m to things associated in some way with an existing entity, as in

Example 10
John and Mary decided to go on a picnic. While they remembered most things, they forgot to put the picnic supplies in the cooler. So when they got to the park, the beer was warm.

By contrast, a similar example with a demonstrative NP sounds definitely odd -

Example 11
John and Mary decided to go on a picnic. While they remembered most things, they forgot to put the picnic supplies in the cooler. #So when they got to the park, that beer was warm.

Another example illustrates this in another way: given that both anaphoric reference and deictic reference are possible in a particular context, an anaphoric NP and a deictic NP will be interpreted differently, even if in all other ways the NPs are the same. The anaphoric NP may refer_m to something associated with the current focus, while the deictic NP must point to something already explicitly included there. For example,

Example 12
a. Some files are superfiles.
b. To screw up someone's directory, look at the files.
c. If one of them is a superfile.....

Example 13
a. Some files are superfiles.
b. To screw up someone's directory, look at those files.
c. They will tell you which of his files is absolutely vital to him.

In Example 12, the files is anaphoric, specifying the files in that person's directory, the entity currently in focus. In Example 13, those files is deictic, pointing to the files that are superfiles, i.e., to a discourse entity explicitly in the interpretation of the just-current discourse segment.

Now, nothing in the process so far described distinguishes this and that.
This is because, with respect to discourse segment reference_m, it is rarely the case that the two cannot be used interchangeably.¹⁰ Thus it must be the case that this "psychological distance" feature of the deictic only comes into play after the referent_m is found. This does not imply, though, that this and that cannot have different effects on the discourse: in Sidner's 1982 theory [17] and in Schuster's theory of reference to actions and events [16], this and that are also distinguished by their effect (or lack thereof) on the discourse focus. This is compatible with it being a side effect of judging the speaker's "distance" from the referent_m that the listener's beliefs about their shared discourse focus are revised.

To summarize, in Section 2 I argued for the existence of a second referring process associated with discourse segments per se rather than what they describe. In this section, I have argued for its having the features of pointing to the representation of a discourse segment on the right frontier, followed by further refinement to a property of the segment or an entity within its interpretation. Here I want to argue for the proposed process having one additional feature. I have separated it out because it is not essential to the above arguments. However, it does permit an account of the common pattern of reference illustrated in Examples 1, 2, 14 and 15.

Example 1
It's always been presumed that when the glaciers receded, the area got very hot. The Folsum men couldn't adapt, and they died out. That's what is supposed to have happened. It's the textbook dogma. But it's wrong. They were human and smart. They adapted their weapons and culture, and they survived.

Example 2
The tools come from the development of new types of computing devices. Just as we thought of intelligence in terms of servomechanism in the 1950s, and in terms of sequential computers in the sixties and seventies, we are now beginning to think in terms of parallel computers, in which tens of thousands of processors work together. This is not a deep, philosophical shift, but it is of great practical importance, since it is now possible to study large emergent systems experimentally. [[6], p.176]

Example 14
I don't think this can be taken seriously either. It would mean in effect that we had learned nothing at all from the evaluation, and anyway we can't afford the resources it would entail.

Example 15
The Texas attorney general said that the McDonald's announcement represented "a calculated effort to make the public think that they were doing this out of the goodness of their heart when, in fact, they were doing it because of pressure from our office." [Philadelphia Inquirer, 13 June 1986]

Suppose one assumes that the ability to specify something via an anaphoric pronoun is a sufficient criterion for "discourse entity-hood". Then I would claim that whether or not a discourse segment referent_m is initially created as a discourse entity, once the speaker has successfully referred to it via this/that, it must now have the status of a discourse entity, since it can be referenced via the anaphoric pronoun it.¹¹ Note that I do not mean to imply that one cannot refer deictically to the same thing more than once - one clearly can, for example

Example 16
They wouldn't hear to my giving up my career in New York. That was where I belonged. That was where I had to be to do my work.
[Peter Taylor, A Summons to Memphis, p.68]

Example 17
By this time of course I accepted Holly's doctrine that our old people must be not merely forgiven all their injustices and unconscious cruelties in their roles as parents but that any selfishness on their parts had actually been required of them if they were to remain whole human beings and not become merely guardian robots of the young. This was something to be remembered, not forgotten. This was something to be accepted and even welcomed, not forgotten or forgiven.

But of the (admittedly few) "naturally occurring" instances of this phenomenon that I have so far found, the matrix clauses are strongly parallel - comments on the same thing. Moreover, except in cases such as Example 17, where the second clause intensifies the predication expressed in the first, the two clauses could have been presented in either order, which does not appear to be the case in the deixis-anaphor pattern of reference.

4. SUMMARY

In this paper, I have proposed and argued for a process-based account of subsequent reference via deictic expressions. The account depends on discourse segments having their own mental reality, distinct from that of the entities described therein. As such, discourse segments play a direct role in this theory, as opposed to their indirect role in explaining, for example, how the referents of definite NPs are constrained. One consequence is that it becomes as important to consider the representation of entire discourse segments and their features as it is to consider the representation of individual NPs and clauses.

ACKNOWLEDGMENTS

This work was partially supported by ARO grant DAA29-884-9-0027, NSF grant MCS-8219116-CER and DARPA grant N00014-85K-0018 to the University of Pennsylvania, and an Alvey grant to the Centre for Speech Technology Research, University of Edinburgh. It was done while the author was on sabbatical leave at the University of Edinburgh in Fall 1987 and at Medical Computer Science, Stanford University in Spring 1988. My thanks to Jerry Hobbs, Mark Steedman, James Allen and Ethel Schuster for their helpful comments on many, many earlier versions of this paper.

REFERENCES

[1] Allen, J. Natural Language Understanding. Menlo Park: Benjamin/Cummings Publ. Co., 1987.
[2] Cohen, R. A Computational Theory of the Function of Clue Words in Argument Understanding. Proc. COLING-84, Stanford University, Stanford CA, July 1984, pp.251-258.
[3] Crain, S. and Steedman, M. On not being led up the garden path: the use of context by the psychological parser. In Natural Language Parsing, D. Dowty, L. Karttunen & A. Zwicky (eds.), Cambridge: Cambridge Univ. Press, 1985.
[4] Grosz, B. The Representation and Use of Focus in a System for Understanding Dialogs. In Elements of Discourse Understanding, A. Joshi, B. Webber & I. Sag (eds.), Cambridge: Cambridge Univ. Press, 1981. (Reprinted in Readings in Natural Language Processing, B. Grosz, K. Sparck Jones & B. Webber (eds.), Los Altos: Morgan Kaufmann Publ., 1986.)
[5] Grosz, B. & Sidner, C. Attention, Intention and the Structure of Discourse. Computational Linguistics, 12(3), July-Sept. 1986, pp.175-204.
[6] Hillis, W.D. Intelligence as an Emergent Behavior. Daedalus, Winter 1988, pp.175-190.
[7] Hirschberg, J. & Litman, D. Now Let's Talk about Now: Identifying Cue Phrases Intonationally. Proc. 25th Annual Meeting, Assoc. for Comp. Ling., Stanford Univ., Stanford CA, July 1987.
[8] Hobbs, J., Stickel, M., Martin, P. and Edwards, D. Interpretation as Abduction. Proc. 26th Annual Meeting, Assoc. for Comp. Ling., SUNY Buffalo, Buffalo NY, June 1988.
[9] Karttunen, L. Discourse Referents. In Syntax and Semantics, Volume 7, J. McCawley (ed.), New York: Academic Press, 1976.
[10] Miller, G. Problems in the Theory of Demonstrative Reference. In Speech, Place and Action, R. Jarvella & W. Klein (eds.), New York: Wiley, 1982.
[11] Moens, M. and Steedman, M. Temporal Ontology and Temporal Reference. Computational Linguistics, to appear Summer 1988.
[12] Nakhimovsky, A. Aspect, Aspectual Class and the Temporal Structure of Narrative. Computational Linguistics, to appear Summer 1988.
[13] Polanyi, L. The Linguistic Discourse Model: Towards a formal theory of discourse structure. TR-6409, BBN Laboratories Incorp., Cambridge MA, November 1986.
[14] Quine, W. The Inscrutability of Reference. In Semantics: An Interdisciplinary Reader, D. Steinberg & L. Jacobovits (eds.), Cambridge: Cambridge University Press, 1971, pp.142-154.
[15] Reichman, R. Getting Computers to Talk like You and Me. Cambridge MA: MIT Press, 1985.
[16] Schuster, E. Pronominal Reference to Events and Actions: Evidence from Naturally-occurring Data. MS-CIS-88-13, Computer & Information Science, Univ. of Pennsylvania, February 1988.
[17] Sidner, C. Focusing in the Comprehension of Definite Anaphora. In Computational Models of Discourse, M. Brady & R. Berwick (eds.), Cambridge MA: MIT Press, 1982, pp.267-330.
[18] Webber, B. So What can we Talk about Now? In Computational Models of Discourse, M. Brady & R. Berwick (eds.), Cambridge MA: MIT Press, 1982, pp.331-371.

¹ The five texts are (1) Peter Taylor's novel, A Summons to Memphis, Ballantine Books, 1986 (pp.1-21); (2) W.D. Hillis' essay, "Intelligence as an Emergent Behavior", Daedalus, Winter 1988, pp.175-189; (3) an editorial from The Guardian, 15 December 1987; (4) John Ryle's review of a set of books on drug use, "Kinds of Control", TLS, 23-29 October 1987, pp.1163-1164; (5) Phil Williams' review of a set of books on disarmament, "New threats, new uncertainties", TLS, 20-26 November 1987, p.1270. All instances of pronominal reference_m using it, this and that were tabulated. I specifically used written (primarily objective) expositions rather than spoken texts in order to avoid the common use of this/that in first-person accounts to refer to the outside world.

² That is, ignoring all syncategorematic uses of it (as in "It is possible that John is here").

³ As I shall argue at the end of Section 3, the ability to refer to something anaphorically might be a sufficient, though perhaps not a necessary, criterion for "entity-hood".

⁴ If the example were "That's all I know about it", that would be taken as referring to the description of House B, not the discourse segment associated with the clause "I heard all this from a friend, who saw the house yesterday". (Call this later segment DS-h.) However, this need not invalidate my claim about the accessibility of discourse segments, since DS-h can be understood as a parenthetical, and parentheticals are treated differently than non-parentheticals in theories of discourse - cf. [5]. While a parenthetical may itself contain a deictic pointer to a discourse segment on the right frontier, it doesn't influence the frontier. Thus that still has the same discourse segments accessible as it would without the parenthetical. Another example of discourse deixis from a parenthetical is this variation of Example 5.
...it should be possible to identify certain functions as being unnecessary for thought by studying patients whose cognitive abilities are unaffected by locally confined damage to the brain. For example, binocular stereo fusion is known to take place in a specific area of the cortex near the back of the head (This was discovered about 10 years ago). Patients with damage to this area of the cortex have visual handicaps but show no obvious impairment in their ability to think.

⁵ To get further data on this, I ran an informal "discourse completion" experiment, modelled on the above lines, presenting a short, multi-sentence text which I judged as having several segments on the right frontier at the point of the last sentence. As above, I asked subjects to complete a next sentence beginning "That...".

<The subject here is legends of the formation of the Grand Canyon>
<What follows is the second paragraph of the given text>

"Another legend tells of a great chief who could not cease from mourning the death of his beloved wife. Finally the gods offered to take him to visit his wife so that he could see she was contented in the happy hunting ground. In exchange, he was to stop grieving when he returned to the land of the living. That..."

I also asked subjects to paraphrase what they wrote, to see explicitly what they took that to specify. The responses I got showed them taking it to specify either the chief's action (expressed in the previous, single-sentence segment) or the whole "bargain" (expressed in the segment comprising both previous clauses). While this particular experiment was only informal and suggestive, well-controlled versions should be able to produce harder results.

⁶ Presumably A-Junior will have enough context to resolve this more precisely, or he will be smart enough to ask.

⁷ Of the 69 clausally-referring instances of this and that pronouns, 51 (~70%) were in subject position in standard SVO clauses (7 instances of that and 44 of this), 17 played some other role within their matrix clause, and 1 was a preposed adverbial ("after that"). Hence ~75% were first NPs.

⁸ This does not say which of those actions will be picked out. See [16] for a discussion of the choice of event/action referents of pronouns.

⁹ It is possible to construct quite acceptable examples in which a preposed that functions as the object of both do and some other verb - for example, "Several universities have made computer science a separate school. But that is not necessarily what we want or could even do." The conjunction of two forms usually means that at some level, both forms are taken as being the same.

¹⁰ That is because, with respect to discourse segment reference_m, it is rarely the case that the two cannot be used interchangeably!

¹¹ If one assumes that a discourse segment referent_m is also a discourse entity ab ovo, as it were, then this pattern might simply be interpreted as such an entity coming into focus as a result of the deictic reference. As I noted earlier, there is not enough evidence to argue either way yet, nor is it clear that the two accounts would have vastly different consequences anyway.
Cues and control in Expert-Client Dialogues

Steve Whittaker & Phil Stenton
Hewlett-Packard Laboratories
Filton Road, Bristol BS12 6QZ, UK.
email: sjw@hplb.csnet

April 18, 1988

Abstract

We conducted an empirical analysis into the relation between control and discourse structure. We applied control criteria to four dialogues and identified 3 levels of discourse structure. We investigated the mechanism for changing control between these structures and found that utterance type and not cue words predicted shifts of control. Participants used certain types of signals when discourse goals were proceeding successfully but resorted to interruptions when they were not.

1 Introduction

A number of researchers have shown that there is organisation in discourse above the level of the individual utterance (5, 8, 9, 10). The current exploratory study uses control as a parameter for identifying these higher level structures. We then go on to address how conversational participants co-ordinate moves between these higher level units, in particular looking at the ways they use to signal the beginning and end of such high level units.

Previous research has identified three means by which speakers signal information about discourse structure to listeners: cue words and phrases (5, 10); intonation (7); pronominalisation (6, 2). In the cue words approach, Reichman (10) has claimed that phrases like "because", "so", and "but" offer explicit information to listeners about how the speaker's current contribution to the discourse relates to what has gone previously. For example a speaker might use the expression "so" to signal that s/he is about to conclude what s/he has just said. Grosz and Sidner (5) relate the use of such phrases to changes in attentional state. An example would be that "and" or "but" signal to the listener that a new topic and set of referents is being introduced, whereas "anyway" and "in any case" indicate a return to a previous topic and referent set. A second indirect way of signalling discourse structure is intonation. Hirschberg and Pierrehumbert (7) showed that intonational contour is closely related to discourse segmentation, with new topics being signalled by changes in intonational contour. A final more indirect cue to discourse structure is the speaker's choice of referring expressions and grammatical structure. A number of researchers (4, 2, 6, 10) have given accounts of how these relate to the continuing, retaining or shifting of focus.

The above approaches have concentrated on particular surface linguistic phenomena and then investigated what a putative cue serves to signal in a number of dialogues. The problem with this approach is that the cue may only be an infrequent indicator of a particular type of shift. If we want to construct a general theory of discourse then we want to know about the whole range of cues serving this function. This study therefore takes a different approach. We begin by identifying all shifts of control in the dialogue and then look at how each shift was signalled by the speakers. A second problem with previous research is that the criteria for identifying discourse structure are not always made explicit. In this study explicit criteria are given: we then go on to analyse the relation between cues and this structure.

2 The data

The data were recordings of telephone conversations between clients and an expert concerning problems with software.
The tape recordings from four dialogues were then transcribed and the analysis conducted on the typewritten transcripts rather than the raw recordings. There was a total of 450 turns in the dialogues.

2.1 Criteria for classifying utterance types.

Each utterance in the dialogue was classified into one of four categories: (a) Assertions - declarative utterances which were used to state facts. Yes or no answers to questions were also classified as assertions on the grounds that they were supplying the listener with factual information; (b) Commands - utterances which were intended to instigate action in their audience. These included various utterances which did not have imperative form (e.g. "What I would do if I were you is to relink X") but were intended to induce some action; (c) Questions - utterances which were intended to elicit information from the audience. These included utterances which did not have interrogative form, e.g. "So my question is....". They also included paraphrases, in which the speaker reformulated or repeated part or all of what had just been said. Paraphrases were classified as questions on the grounds that the effect was to induce the listener to confirm or deny what had just been stated; (d) Prompts - these were utterances which did not express propositional content. Examples of prompts were things like "Yes" and "Uhu".

2.2 Allocation of control in the dialogues.

We devised several rules to determine the location of control in the dialogues. Each of these rules related control to utterance type (a sketch of the rules is given below): (a) For questions, the speaker was defined as being in control unless the question directly followed a question or command by the other conversant. The reason for this is that questions uttered following questions or commands are normally attempts to clarify the preceding utterance and as such are elicited by the previous speaker's utterance rather than directing the conversation in their own right. (b) For assertions, the speaker was defined as being in control unless the assertion was made in response to a question, for the same reasons as those given for questions; an assertion which is a response to a question could not be said to be controlling the discourse. (c) For commands, the speaker was defined as controlling the conversation. Indirect commands (i.e. utterances which did not have imperative form but served to elicit some actions) were also classified in this way. (d) For prompts, the listener was defined as controlling the conversation, as the speaker was clearly abdicating his/her turn. In cases where a turn consisted of several utterances, the control rules were only applied to the final utterance.

We applied the control rules and found that control did not alternate from speaker to speaker on a turn-by-turn basis, but that there were long sequences of turns in which control remained with one speaker. This seemed to suggest that the dialogues were organised above the level of individual turns into phases where control was located with one speaker. The mean number of turns in each phase was 6.63.

3 Mechanisms for switching control

We then went on to analyse how control was exchanged between participants at the boundaries of these phases.
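Before examining those boundaries, the control-allocation rules of Section 2.2 can be made concrete. The following is a minimal sketch; the Utterance record and the participant labels are illustrative encodings, not part of the original analysis.

```python
from dataclasses import dataclass
from typing import Optional

QUESTION, ASSERTION, COMMAND, PROMPT = "question", "assertion", "command", "prompt"

@dataclass
class Utterance:
    speaker: str   # "expert" or "client"
    utype: str     # one of the four utterance types above

def controller(current: Utterance, previous: Optional[Utterance]) -> str:
    """Return the participant in control at this utterance (rules a-d).
    For multi-utterance turns, apply this to the turn's final utterance only."""
    other = "client" if current.speaker == "expert" else "expert"
    elicited = previous is not None and previous.speaker != current.speaker
    if current.utype == QUESTION:
        # (a) a question directly following the other party's question or
        # command is a clarification, elicited rather than directing
        if elicited and previous.utype in (QUESTION, COMMAND):
            return other
        return current.speaker
    if current.utype == ASSERTION:
        # (b) an assertion that merely answers a question is not controlling
        if elicited and previous.utype == QUESTION:
            return other
        return current.speaker
    if current.utype == COMMAND:
        return current.speaker   # (c) commands, direct or indirect, control
    return other                 # (d) prompts abdicate control to the listener

# Example: a prompt by the expert leaves the client in control.
assert controller(Utterance("expert", PROMPT), None) == "client"
```

Runs of consecutive utterances assigned to the same participant by these rules correspond to the control phases whose boundaries are analysed below.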
We first examined the last utterance of each phase on the grounds that one mechanism for indicating the end of a phase would be for the speaker controlling the phase to give some cue that he (both participants in the dialogues were always male) no longer wished to control the discourse. There was a total of 56 shifts of control over the 4 dialogues, and we identified 3 main classes of cues used to signal control shifts. These were prompts, repetitions and summaries. We also looked at when no signal was given (interruptions).

3.1 Prompts.

On 21 of the 56 shifts (38%), the utterance immediately prior to the control shift was a prompt. We might therefore explain these shifts as resulting from the person in control explicitly indicating that he had nothing more to say. (In the following examples a line indicates a control shift.)

Example 1 - Prompt. Dialogue C:
1. E: "And they are, in your gen you'll find that they've relocated into the labelled common area" (E control)
2. C: "That's right." (E control)
3. E: "Yeah" (E abdicates control with prompt)
----
4. C: "I've got two in there. There are two of them." (C control)
5. E: "Right" (C control)
6. C: "And there's another one which is % RESA" (C control)
7. E: "OK um" (C control)
8. C: "VS" (C control)
9. E: "Right" (C control)
10. C: "Mm" (C abdicates control with prompt)
----
11. E: "Right and you haven't got - I assume you haven't got local labelled common with those labels" (E control)

3.2 Repetitions and summaries.

On a further 15 occasions (27%), we found that the person in control of the dialogue signalled that they had no new information to offer. They did this either by repeating what had just been said (6 occasions), or by giving a summary of what they had said in the preceding utterances of the phase (9 occasions). We defined a repetition as an assertion which expresses part or all of the propositional content of a previous assertion but which contains no new information. A summary consisted of concise reference to the entire set of information given about the client's problem or the solution plan.

Example 2 - Repetition. Dialogue C:
1. C: "These routines are filed as DS" (C control)
(a) "Now, I'm wondering how the two are related" in which "the two" refers to the two error messages which it had taken several utterances to describe previously. The other characteristic of summaries is that they con- trast strongly with the extremely concrete de- scriptions elsewhere in the dialogues, e.g. "err the system program standard call file doesn't complete this means that the file does not have a tail record" followed by "And I've no clue at all how to get out of the situation". Exam- ple 3 also illustrates this change from specific (1, 3, 5) to general (7). How then do rep- etitious and summaries operate as cues? In summarising, the speaker is indicating a nat- ural breakpoint in the dialogue and they also indicate that they have nothing more to add at that stage. Repetitions seem to work in a similar way: the fact that a speaker reiterates indicates that he has nothing more to say on a topic. 3.3 Interruptions. In the previous cases, the person controlling the dialogue gave a sig- nal that control might be exchanged. There were 20 further occasions (36% of shifts) on which no such indication is given. We there- fore went on to analyse the conditions in which such interruptions occurred. These seem to fall into 3 categories: (a) vital facts; (b) re- spouses to vital facts; (c) clarifications. 3.3.1 Vital facts. On a total of 6 occasions (11% of shifts) the client interrupted to con- tradict the speaker or to supply what seemed to be relevant information that he believed the expert did not know. 126 Example 4 Dialogue C - 1. E: ".... and it generates this warn- ing, which is now at 4.0 to warn you about the situation" (E control) 2. C: "It is something new though urn" (C assumes control by interrup- tion) 3. E: "Well" (C control) 4. C: "The programs that I've run before obviously LINK A's got some new features in it which er..." (C con- trol) 5. E: "That's right, it's a new warn- ing at 4.0" (E assumes control by in- terruption) Two of these 6 interjections were to supply ex- tra information and one was marked with the cue "as well". The other four were to con- tradict what had just been said and two had explicit markers "though" and "well actually": the remaining two being direct denials. 3.3.2 Reversions of control following vital facts. The next class of interruptions occur after the client has made some interjec- tion to supply a missing fact or when the client has blocked a plan or rejected an explanation that the expert has produced. There were 8 such occasions (14% of shifts). The interruption in the previous example il- lustrates the reversion of control to the expert after the client has suIiplied information which he (the client) believes to be highly relevant to the expert. In the following example, the client is already in control. Example 5 Dialogue B - 1. "I'11 take a backup first as you say" (C control) 2. E: "OK" (C control) 3. C: "The trouble is that it takes a long time doing all this" (C control) 4. E: "Yeah, yeah but er this kind of thing there's no point taking any short cuts or you could end up with no system at all." (E assumes control by interruption) On five occasions the expert explic- itly signified his acceptance or re- jection of what the client had said, e.g."Ah","Right", "indeed" , "that's right',"No',"Yeah but". On three occasions there were no markers. 3.3.3 Clarifications. Participants can also interrupt to clarify what has just been said. This happened on 6 occasions (11%) of shifts. Example 6 Dialogue C - 1. 
C: "If I put an SE in and then do an EN it comes up" (C control) 2. E: "So if you put in a ...?" ( E control) 3. C: "SE" (E control) On two occasions clarifications were prefixed by "now" and twice by "so". On the final two occasions there was no such marker, and a di- rect question was used. 3.3.4 An explanation of interruptions. We have just described the circumstances in which interruptions occur, but can we now ex- plain why they occur? We suggest the follow- ing two principles might account for interrup- 127 tions: these principles concern: (a) the infor- mation upon which the participants are basing their plans, and (b) the plans themselves. (A). Information quality: Both expert and client must believe that the informa- tion that the expert has about the prob- lem is true and that this information is sufficient to solve the problem. This can be expressed by the following two rules which concern the truth of the informa- tion and the ambiguity of the information: (A1) if the speaker believes a fact P and believes that fact to be relevant and either believes that the speaker believes not P or that the speaker does not know P then in- terrupt; (A2) If the listener believes that the speaker's assertion is relevant but am- biguous then interrupt. (B). Plan quality: Both expert and client must believe that the plan that the ex- pert has generated is adequate to solve the problem and it must be comprehensi- ble to the client. The two rules which ex- press this principle concern the effective- heSS of the plan and the ambiguity of the plan: (B1) If the listener believes P and either believes that P presents an obstacle to the proposed plan or believes that part of the proposed plan has already been sat- isfied, then interrupt; (B2) If the listener believes that an assertion about the pro- posed plan is ambiguous, then interrupt. In this framework, interruptions can be seen as strategies produced by either conversational participant when they perceive that a either principle is not being adhered to. 3.4 Cue reliability. We also investigated whether there were occasions when prompts, repetitions and summaries failed to elicit the control shifts we predicted. We considered two possible types of failure: either the speaker could give a cue and continue or the speaker could give a cue and the listener fall to re- spond. We found no instances of the first case; although speakers did produce phrases like "OK" and then continue, the "OK" was always part of the same intonational contour as that further information and there was no break between the two, suggesting the phrase was a prefix and not a cue. We did, how- ever, find instances of the second case: twice following prompts and once following a sum- mary, there was a long pause, indicating that the speaker was not ready to respond. We conducted a similar analysis for those cue words that have been identified in the liter- ature. Only 21 of the 35 repetitions, sum- maries and interruptions had cue words asso- ciated with them and there were also 19 in- stances of the cue words "now", "and", "so", "but" and "well" occurring without a control shift. 4 Control cues and global control The analysis so far has been concerned with control shifts where shifts were identified from a series of rules which related utterance type and control. Examination of the dialogues indicated that there seemed to be different types of control shifts: after some shifts there seemed to be a change of topic, whereas for others the topic remained the same. 
We next went on to examine the relationship between topic shift and the different types of cues and interruptions described earlier. To do this it was necessary first to classify control shifts ac- cording to whether they resulted in shifts of topic. 4.1 Identifying topic shifts. We iden- tified topic shifts in the following way: Five judges were presented with the four dialogues and in each of the dialogues we had marked where control shifts occurred. The judges were 128 asked to state for each control shift whether it was accompanied by a topic shift. All five judges agreed on 24 of the 56 shifts, and 4 agreed for another 22 of the shifts. Where there was disagreement, the majority judg- ment was taken. 4.2 Topic shift and type of control shift. Analysing each type of control shift, it is clear that there are differences" between the cues used for the topic shift and the no shift cases. For interruptions, 90% oc- cur within topic, i.e. they do not result in topic shifts. The pattern is not as obvious for prompts and repetitions/summaries, with 57% of prompts occurring within topic and 67% of repetitions/summaries occurring within topic. This suggests that change of topic is a care- fully negotiated process. The controlling par- ticipant signals that he is ready to close the topic by producing either a prompt or a rel>- etition/summary and this may or may not be accepted by the other participant. What is apparent is that it is highly unusual for a participant to seize control and change topic by interruption. It seems that on the ma- jority of occasions (63%) participants walt for the strongest possible cue (the prompt) before changing topic. 4.3 Other relations between topic and control. We also looked at more general aspects of control within and between top- ics. We investigated the number of utterances for which each participant was in control and found that there seemed to be organisation in the dialogues above the level of topic. We found that each dialogue could be divided into two parts separated by a topic shift which we labelled the central shift. The two parts of the dialogue were very different in terms of who controlled and initiated each topic. Be- fore the central shift, the client had control for more turns per topic and after it, the ex- pert had control for more turns per topic. The respective numbers of turns client and ex- pert are in control before and after the central shift are :Before 11-7,22-8,12-6,21-6; After 12- 33,16-23,2-11,0-5 for the four dialogues. With the exception of the first topic in Dialogues 1 and 4, the client has control of more turns in every topic before the central shift, whereas af- ter it, the expert has control for more turns in every topic. In addition we looked at who ini- tiated each topic, i.e. who produced the first utterance of each topic. We found that in each dialogue, the client initiates all the topics be- fore the central shift, whereas the expert initi- ates the later ones. We also discovered a close relationship between topic initiation and topic dominance. In 19 of the 21 topics, the per- son who initiated the topic also had Control of more turns. As we might expect, the point at which the expert begins to have control over more turns per topic is also the point at which the expert begins to initiate new topics. 5 Conclusions The main result of this exploratory study is the finding that control is a useful parameter for identifying discourse structure. 
Using this parameter we identified three levels of structure in the dialogues: (a) control phases; (b) topic; and (c) global organisation. For the control phases, we found that three types of utterances (prompts, repetitions and summaries) were consistently used to signal control shifts. For the low level structures we identified (i.e. control phases), cue words and phrases were not as reliable in predicting shifts. This result challenges the claims of recent discourse theories (5, 10) which argue for a close relation between cue words and discourse structure. We also examined how utterance type related to topic shift and found that few interruptions introduced a new topic. Finally there was evidence for high level structures in these dialogues as evidenced by topic initiation and control, with early topics being initiated and dominated by the client and the opposite being true for the later parts.

Another focus of current research has been the modelling of speaker and listener goals (1, 3), but there has been little research on real dialogues investigating how goals are communicated and inferred. This study identifies surface linguistic phenomena which reflect the fact that participants are continuously monitoring their goals. When plans are perceived as succeeding, participants use explicit cues such as prompts, repetitions and summaries to signal their readiness to move to the next stage of the plan. In other cases, where participants perceive obstacles to their goals being achieved, they resort to interruptions, and we have tried to make explicit the rules by which they do this.

In addition our methodology is different from other studies because we have attempted to provide an explanation for whole dialogues rather than fragments of dialogues, and used explicit criteria in a bottom-up manner to identify discourse structures. The number of dialogues was small and taken from a single problem domain. It seems likely therefore that some of our findings (e.g. the central shift) will be specific to the diagnostic dialogues we studied. Further research applying the same techniques to a broader set of data should establish the generality of the control rules suggested here.

References

[1] Allen, J.F. and Perrault, C.R. (1980). Analyzing intentions in utterances. Artificial Intelligence, 15, 143-178.
[2] Brennan, S. E., Friedman, M. W., and Pollard, C. (1987) A centering approach to pronouns. In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics.
[3] Cohen, P. R. and Levesque, H. J. (1985) Speech acts and rationality. In Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics.
[4] Grosz, B. J., Joshi, A. K., Weinstein, S. (1986) Towards a computational theory of discourse interpretation. Draft.
[5] Grosz, B. J., and Sidner, C. L. (1986) Attentions, intentions and the structure of discourse. Computational Linguistics, 12, 175-204.
[6] Guindon, R., Sladky, P., Brunner, H., and Conner, J. (1986). The structure of user-adviser dialogues: Is there method in their madness? In Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics.
[7] Hirschberg, J. and Pierrehumbert, J. B. (1986) The intonational structuring of discourse. In Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics.
[8] Levin, J. A. and Moore, J. A. (1977) Dialogue games: metacommunication structures for natural language interaction.
Cognitive Science, 4, 395-421.
[9] Polanyi, L. and Scha, R. (1983). Connectedness in Sentence, Discourse and Text. Tilburg University, Tilburg, 141-178.
[10] Reichman, R. (1985) Getting computers to talk like you and me. Cambridge, MA: MIT Press.
A COMPUTATIONAL THEORY OF PERSPECTIVE AND REFERENCE IN NARRATIVE

Janyce M. Wiebe and William J. Rapaport
Department of Computer Science
State University of New York at Buffalo
Buffalo, NY 14260
wiebe@cs.buffalo.edu, rapaport@cs.buffalo.edu

ABSTRACT

Narrative passages told from a character's perspective convey the character's thoughts and perceptions. We present a discourse process that recognizes characters' thoughts and perceptions in third-person narrative. An effect of perspective on reference in narrative is addressed: references in passages told from the perspective of a character reflect the character's beliefs. An algorithm that uses the results of our discourse process to understand references with respect to an appropriate set of beliefs is presented.

1. INTRODUCTION.

A narrative is often told from the perspective of one or more of its characters; it can also contain passages that are not told from the perspective of any character. We present a computational theory of how readers recognize the current perspective in third-person narrative, and of the effects of perspective on the way readers understand references in third-person narrative. We consider published novels and short stories, rather than artificially constructed narratives.

2. BANFIELD'S THEORY.

Our notion of perspective in narrative is based on Ann Banfield's (1982) categorization of the sentences of narration into subjective and objective sentences. Subjective sentences include those that portray a character's thoughts (represented thought) or present a scene as a character perceives it (represented perception). Objective sentences present the story directly, rather than through the thoughts or perceptions of a character. The language used to convey thoughts and perceptions is replete with linguistic elements that make no sense unless they are interpreted with respect to the thinking or perceiving character's consciousness. Banfield calls them subjective elements; they appear only in subjective sentences and cannot appear within objective sentences. Banfield identifies perspective in narrative with subjectivity, which is expressible via subjective elements. We call the thinking or perceiving character of a subjective sentence the subjective character.

3. A DISCOURSE-LEVEL APPROACH.

Our task of recognizing the current perspective is, therefore, to recognize subjective sentences and the subjective characters to whom they are attributed. However, we cannot take a sentence-by-sentence approach, deciding independently for each sentence whether it is objective or subjective, and, if subjective, who the subjective character is. First, although thoughts and perceptions are often reported (as by sentences beginning with "He thought that ..." or "She saw ..."), and thoughts are often accompanied by narrative parentheticals (such as "he thought" or "he realized"), many thoughts and perceptions are not marked in these ways. Second, subjective sentences do not always explicitly indicate who the subjective character is. For example:

(1) 1.1 He wanted to talk to Dennys. 1.2 How were they going to be able to get home from this strange desert land into which they had been cast and which was heaven knew where in all the countless solar systems in all the countless galaxies? [L'Engle, Many Waters, p. 91]

(2) 2.1 But what [Muhammad] had seen in those few moments made him catch his breath in amazement.
(2.2) On the floor of the cave, which curved back in a natural fault in the rock, there were several large cylindrical objects standing in a row. [John Allegro, The Dead Sea Scrolls]

Sentence (1.2) is a represented thought, and (2.2) is a represented perception, presenting what the character sees as he sees it; yet neither is explicitly marked as such. Also, neither indicates who the subjective character is. Finally, although a subjective element marks a sentence as subjective (cf. Section 4.2), not all subjective sentences contain subjective elements, and subjective elements do not in general indicate who the subjective character is.

However, subjective sentences that are not marked as such, or that do not indicate who the subjective character is, usually appear in the midst of other subjective sentences attributed to the same subjective character. That is, once a clearly marked subjective sentence appears for which the subjective character can be determined, unmarked subjective sentences attributed to the same subjective character often follow. Thus, to recognize subjective sentences in general we need to consider subjectivity at the level of the discourse. For this reason, we extend the notions of subjective and objective sentences to the notions of subjective and objective contexts, which consist of one or more subjective sentences attributed to the same subjective character, or one or more objective sentences, respectively.

Our algorithm for recognizing the current perspective is a discourse process that looks for the boundaries of subjective contexts. During narrative understanding, it maintains a stack, called the current perspective (CP). At the beginning of a narrative, the CP is initialized to be the reader. When a new subjective context is recognized, its subjective character is pushed onto the CP. When the end of a subjective context is recognized, a character is popped from the CP. More precisely, since SNePS (Shapiro 1979), our knowledge representation system, is fully intensional, only the reader's concepts of the characters are represented (Maida and Shapiro 1982, Shapiro and Rapaport 1987). So, it is actually the reader's concepts of the characters that are pushed onto the CP.
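As an illustration only (not part of the SNePS implementation described in this paper), the CP just described can be sketched as a simple stack in Python; the class and method names here are ours:

class CurrentPerspective:
    """Stack of the reader's concepts of subjective characters.
    The reader's own concept sits at the bottom and is never popped."""
    def __init__(self, reader):
        self.reader = reader
        self._stack = [reader]          # the CP is initialized to the reader
    def push(self, character):
        self._stack.append(character)   # a new subjective context is recognized
    def pop(self):
        if len(self._stack) > 1:        # the end of a subjective context is recognized
            return self._stack.pop()
    def top(self):
        return self._stack[-1]
    def stack(self):
        return list(self._stack)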
4. RECOGNIZING SUBJECTIVE CONTEXTS. To recognize subjective contexts, our discourse process relies exclusively on linguistic signals of subjective contexts. In this, it is incomplete: if a subjective context appears in which these linguistic signals are not present, then the subjective context is not recognized.

4.1. Psychological Verbs, Actions, Adjectives, and Perceptual Verbs. Reports involving psychological verbs (e.g., 'think', 'wonder', 'realize', 'want', 'remember') or perceptual verbs (e.g., 'see', 'hear') signal that a subjective context will follow. So do predicate-adjective sentences with psychological adjectives (e.g., 'delighted', 'happy', 'jealous', 'scared') (cf. Banfield (1982), Cohn (1978), Doležel (1973)). In addition, we have identified what we call psychological actions--e.g., "he smiled to himself", "she gasped", "she winced"--which function in the same way as psychological verbs. A sentence of one of these types is a typical way of establishing a subjective context. Examples (1) and (2), above, and (3), below, exhibit this pattern:

(3) (3.1) She [Hannah] winced as she heard them crash to the platform. (3.2) The lovely little mirror that she had brought for Ellen, and the gifts for the baby! [Franchere, Hannah Herself, p. 3]

In each example, the first sentence is a psychological or perceptual report, and the second is a represented thought or represented perception, respectively; the subjective character of the second sentence is taken to be the subject of the first. In our discourse process, the subject of a perceptual or psychological report, or of a predicate-adjective sentence with a psychological adjective, is pushed onto the CP if a character isn't already on the top of it. If a character is already on the top of the CP, then no change is made, and the sentence is understood to be part of the already established subjective context.

4.2. Subjective Elements. Many subjective elements mark a sentence in third-person narrative as subjective because they are expressive in nature. Some that Banfield identifies are exclamations, which express emotion; questions, which express wonder; epithets, such as 'the bastard', which express some qualification of the referent; and certain kinship terms, e.g., 'Daddy', 'Mom', and 'Aunt Margaret', which express a relationship to the referent. She also identifies evaluative adjectives, which express an attitude toward the referent, e.g., 'ghastly', 'surprising', 'poor', and 'damned', although some evaluative adjectives, such as 'poor' and 'damned', have their evaluative meanings only when they occur in certain parts of the sentence. Intensifiers such as 'too', 'quite', and 'so' are also evaluative (Banfield 1982), as in:

(4) He could tell they were tears because his eyes were too shiny. Too round. [Bridgers, All Together Now, p. 92]

So are emphasizers, such as 'really' and 'just'. An example is 'really' in (5.3):

(5) (5.1) Jody managed a frail smile. (5.2) She was a little bit ashamed. (5.3) She should really try to be more cheerful for Aunt Margaret's sake. (5.4) After all, Aunt Margaret had troubles of her own--she was the mother of that ghastly Dill. [Gage, Miss Osborne-the-Mop, pp. 16-17]

Modal verbs of obligation, possibility, and necessity are also expressive. For example, 'should', in (5.3), is a modal verb of obligation. So are many content (or attitudinal) disjuncts, which comment on the content of the utterance (Quirk et al. 1985). For example, 'likely', 'maybe', 'probably', and 'perhaps' express some degree of doubt:

(6) Something jingled--car keys probably. [Oneal, War Work, p. 132]

Conjuncts, which comment on the connection between items (Quirk et al. 1985), can also be expressive. For example, 'anyhow', 'anyway', 'still', and 'after all' express concession (Quirk et al. 1985). An example is 'After all' in (5.4). Other subjective elements are sentence fragments (Banfield 1982), such as (7.2),

(7) (7.1) His brain worked slowly through what he knew about this person. (7.2) David's kid. [Bridgers, All Together Now, p. 91]

and the uses of 'this', 'that', 'these', and 'those' that Robin Lakoff (1974) has identified as emotional deixis. In conversation, they are "generally linked to the speaker's emotional involvement in the subject-matter of his utterance" (Lakoff 1974: 347); in third-person narrative, they are linked to the subjective character's emotional involvement in the subject matter of his thoughts or perceptions. Examples are 'this' in (8.1) and 'That' in (9.2):

(8) (8.1) Ibrahim could remember every time this godless pig had patronized him ... [Clancy, Red Storm Rising, p. 13]

(9) (9.1) As she watched, a wave of jealousy spread through her.
(9.2) That insufferable stranger who had passed them on the road was receiving the welcome that she had been dreaming of all the way from Connecticut. [Franchere, Hannah Herself, p. 15]

In speech, the emotion, evaluation, etc., expressed by a subjective element is always attributed to the speaker; in third-person narrative, it is attributed to a character.(1)

[Footnote 1: In some third-person novels, particularly in the 19th century, an overt narrator (Chatman 1978) uses subjective elements. We do not consider novels with overt narrators.]

Clearly, many types of language-understanding abilities are needed to understand the range of subjective elements. Our purpose here is to show how our discourse process uses them as markers of subjective contexts, and how it determines the subjective character whose thoughts or perceptions they mark. However, recognizing the subjective character is always required before a subjective element can be understood. When a subjective element is encountered in the narrative, our discourse process updates the CP according to the following algorithm:

(A1) If there is currently a character on the CP,
  (1) then do not change the CP;
  else if there is an actor focus at the start of the current sentence who is a character in the scene,
  (2) then push him or her onto the CP;
  (3) else create a new and indeterminate concept and push it onto the CP.

4.2.1. Discussion of branch 1. Branch 1 is taken when a subjective element continues the current subjective context. For example, the exclamation in (3.2), which is a subjective element, continues the subjective context established by (3.1). The subjective elements 'should' and 'really' in (5.3) and 'After all', 'Aunt Margaret', 'that', and 'ghastly' in (5.4) continue the subjective context established in (5.1).

4.2.2. Discussion of branch 2. The actor focus used in branch 2 is one of the foci that need to be maintained for the comprehension of definite anaphora (Sidner 1983). It is whoever is the agent of the current sentence. (Note that quoted speech has its own foci, which must be maintained separately. In this sense, quoted speech constitutes a separate discourse segment (cf. Grosz and Sidner (1986)).) Consider the following example:

(10) (10.1) In the kitchen she [Jody] set the basket down on the table. (10.2) She put the thermos and the cups in the sink and filled them with cool water to soak. (10.3) Then she tiptoed upstairs to her room. (10.4) Perhaps Aunt Margaret was taking a nap. (10.5) It wouldn't do to disturb her. [Gage, Miss Osborne-the-Mop, p. 25]

Since Jody is the actor focus at the beginning of (10.4) (she is the actor focus of (10.1)-(10.3)), and she is a character in the scene, the subjective element 'Perhaps' is attributed to her when it is encountered, and she is pushed onto the CP.

Sidner (1983) has shown that, in anaphora comprehension, the current actor or discourse focus can be rejected as the co-specifier of an anaphor on the basis of pragmatic factors. Similarly, the actor focus may be rejected as the subjective character to whom the subjective element is attributed, in favor of another character in the scene. The pragmatic factors involved appear to be which characters have been subjective characters in the past and whose thoughts or perceptions the sentence containing the subjective element is likely to be reflecting.
Consider the following example, in which Adnarel, a seraph, has just appeared before Lamech and Sandy:

(11) (11.1) Lamech greeted him [Adnarel] respectfully. "Adnarel, we thank you." (11.2) Then he said to Sandy, "The seraph will be able to help you. Seraphim know much about healing." (11.3) So this was a seraph. [L'Engle, Many Waters, p. 39]

Lamech is the actor focus of (11.1) and (11.2). However, it would be clear to someone who had read the novel up to this point that (11.3) is Sandy's thought. First, Sandy is a visitor to a strange world, of which Lamech is an inhabitant; so it is Sandy, not Lamech, who is likely not to have known what a seraph is. Second, prior to this passage, subjective contexts have been attributed to Sandy, but not to Lamech. We are investigating the reasoning required by the reader in rejecting the current actor in favor of another character.

4.2.3. Discussion of branch 3. Branch 3 is taken when the reader cannot identify the subjective character and must read further in the text in order to do so. In this case, an indeterminate, intensional concept is pushed onto the CP. When the reader finally identifies the subjective character, the information that this character and the indeterminate one are co-extensional is built (that is, it is asserted that they are concepts of the same individual; cf. Maida and Shapiro (1982), Shapiro and Rapaport (1987)). Naicong Li (1986) uses this approach in her pronoun resolution algorithm if the information needed to resolve a pronoun is supplied after the pronoun is encountered.

Often, the subjective character is identified by a narrative parenthetical, as in the following example:

(12) (12.1) What was holding Washington up? the colonel asked himself. (12.2) All he needed was a simple yes or no. [Clancy, Red Storm Rising, p. 170]

Sentence (12.1) begins with a question, which is a subjective element. It occurs just after a shift in scene, so it does not continue a current subjective context, and there is no actor focus. When 'What' is encountered in (12.1), branch 3 pushes a new concept onto the CP: at this point, the reader has recognized someone's thought, but does not yet know whose. When the reader encounters the parenthetical, she identifies the subjective character as the colonel, and builds a proposition that the colonel and the new concept are co-extensional (that is, she comes to believe that the question was the colonel's thought).

4.2.4. Comparison of evaluative and psychological adjectives. Before leaving our discussion of subjective elements, it will be useful to contrast the ways that predicate-adjective sentences with psychological and with evaluative adjectives are treated by our discourse process. Compare the following sentences:

(A) Jody was delighted.
(B) Jody was ghastly.

Sentence (A) contains the psychological adjective 'delighted', and (B) contains the evaluative adjective 'ghastly'. In (A) (assuming no previous subjective context), Jody is pushed onto the CP, and (A) establishes a subjective context attributed to Jody. In (B), algorithm (A1) determines whose attitude toward Jody is being expressed, and it does not choose the subject of the sentence. Thus, psychological adjectives can establish the perspective of the subject, whereas evaluative adjectives express an attitude toward the subject.
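Pulling the branch discussions together, algorithm (A1) can be sketched in Python as follows. This is a minimal sketch under our own assumptions: the sentence is represented as a dictionary with an 'actor_focus' key, and the indeterminate concept of branch 3 is a fresh token; none of these names come from the paper's implementation.

def update_cp(cp, sentence, scene_characters, new_concept=object):
    """Algorithm (A1): update the CP when a subjective element is encountered."""
    if cp.top() is not cp.reader:            # branch 1: a character is already on the CP;
        return                               # the element continues the current subjective context
    focus = sentence.get("actor_focus")      # Sidner-style actor focus at the start of the sentence
    if focus is not None and focus in scene_characters:
        cp.push(focus)                       # branch 2: attribute the element to the actor focus
    else:
        cp.push(new_concept())               # branch 3: a new, indeterminate concept, later
                                             # asserted co-extensional with some character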
5. RECOGNIZING ENDING BOUNDARIES OF SUBJECTIVE CONTEXTS. Recognizing the ending boundaries of subjective contexts is a more difficult problem than recognizing the beginning boundaries. While not all subjective contexts are signaled in the ways discussed in Section 4, it is very common that they are. However, we have not found equally reliable or common signals for the ending boundaries. It appears that the reader often has to reason about the content of the current sentence and confirm that it can continue the subjective context; if it cannot, then the ending boundary has been found. Nevertheless, we have identified two reliable ways of recognizing the ending boundaries of subjective contexts. One way subjective contexts are ended is by a shift in scene, as in the following example:

(13) He [Sandy] wanted to talk to Dennys. How were they going to be able to get home from this strange desert land into which they had been cast and which was heaven knew where in all the countless solar systems in all the countless galaxies? <Chapter Break> Dennys was sleeping fitfully when he heard the tent flap move. [L'Engle, Many Waters, pp. 91-92]

Dennys and Sandy are not at the same place. The shift in scene at the chapter break ends the subjective context attributed to Sandy. A second way is by a negated perceptual report whose subject is the subjective character and whose object is something in the scene. For example,

(14) (14.1) She [Yalith] was not sure why she was hesitant. (14.2) She breathed in the strange odor of his wings, smelling of stone, of the cold, dark winds which came during the few brief weeks of winter. (14.3) Enveloped in Eblis's wings, she did not hear the rhythmic thud as a great lion galloped toward them across the desert, roaring as it neared them. (14.4) Then both Yalith and Eblis turned and saw the lion rising to its hind legs ... [L'Engle, Many Waters, p. 47]

The subjective context in the first paragraph is ended by the negated perceptual verb in (14.3). Sentence (14.4) then establishes a new subjective context attributed to Yalith and Eblis. In a similar way, subjective contexts can be ended by negated factive verbs, as in "He did not realize that ...".
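The two reliable ending signals just described can be condensed into a small sketch. The dictionary keys used for the sentence representation are our own assumptions, not the paper's:

def subjective_context_ends(cp, sentence, scene_changed):
    """Ending-boundary check: a scene shift, or a negated perceptual/factive report
    whose subject is the subjective character and whose object is in the scene."""
    if scene_changed:                                     # e.g. a chapter break to another place
        return True
    return (sentence.get("negated_perception_or_factive", False)
            and sentence.get("subject") is cp.top()       # subject is the subjective character
            and sentence.get("object_in_scene", False))   # its object is something in the scene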
6. BELIEF AND SUBJECTIVE CONTEXTS. Since subjective contexts portray thoughts and perceptions, the reader understands that the information they convey reflects the subjective character's beliefs (cf. Fillmore (1974), Banfield (1982), Uspensky (1973)). Whatever else the reader may infer that the characters believe, she has to attribute the information in subjective contexts to the subjective character.

Brian Reiser (1981) showed that one of the effects of perspective on a reader's understanding is that it focuses processing. In particular, he showed that the reader primarily infers the goals and plans of the character whose perspective the narrative is taking. In a similar way, perspective focuses the reader's attribution of beliefs to the characters. References in subjective contexts reflect just what the subjective character believes. (Cf. Clark and Marshall (1981), Cohen, Perrault, and Allen (1982), and Wilks and Bien (1983) for discussions of belief and reference in conversation.) The subjective character might be mistaken, or know less about the referent than the reader or the other characters know, or know more than the other characters. The remainder of this paper addresses the attribution of beliefs to characters in order to understand references in subjective contexts.

6.1. An Algorithm for Understanding References Using the CP. Our belief representation, described in Rapaport and Shapiro (1984), Rapaport (1986), and Wiebe and Rapaport (1986), is based on the notion of belief spaces. A belief space is accessed by a stack of individuals, and consists of what the bottom member of the stack believes that ... the top member believes. The reader is always the bottom member of the stack, and the belief space corresponding to a stack consisting only of the reader contains the set of propositions that the reader believes are true. All propositions in the knowledge base appear in at least one belief space, and a single proposition can appear in more than one belief space. This occurs, for example, if the reader believes a proposition and believes that a character believes it, too. The CP determines the current belief space with respect to which references are understood.

So far, our analysis extends only to non-anaphoric, specific references. The following is our algorithm for understanding a non-anaphoric, specific reference 'X' in third-person narrative (there may actually be more than one proposition found or built in order to understand 'X', for example, if 'X' is plural or a possessive):

(A2) If 'X' is an indefinite noun phrase of the form 'a Y',
  (1) then create a new concept, N; build in the CP's belief space the proposition that N is a Y; return N;
  else if 'X' is a definite noun phrase or proper name,
  then if a proposition that N is X can be found in the CP's belief space,
  (2) then return N;
  else if a proposition that N is X can be found in a belief space other than the CP's,
  (3) then add the found proposition to the CP's belief space; return N;
  (4) else create a new concept, N; build in the CP's belief space the proposition that N is X; return N.

6.1.1. Discussion of branch 1. Indefinite references introduce new individuals into the CP's belief space. We discuss this in Section 6.3, below.

6.1.2. Discussion of branches 2 and 3. The search for the referent of a non-anaphoric definite noun phrase or proper name starts in the CP's belief space. Branch 2 is taken if the referent can be found there. If the test in branch 2 fails, then the rest of the knowledge base must be searched. To see why this is so, suppose that the reference is 'Ellen' and that it occurs in a subjective context. It is possible that Ellen has been referred to previously in the narrative, but not under any circumstances that would have required the reader to explicitly attribute the belief that she is named 'Ellen' to the subjective character. Perhaps she has only been referred to in objective contexts, for example. So, to find the referent, other belief spaces than the CP's must be searched. Branch 3 is taken if the search is successful, and, before the referent is returned, the proposition that the referent is X is added to the CP's belief space. In the case just discussed, the fact that the reference occurs in a subjective context indicates that the belief that the referent is named 'Ellen' should now be attributed to the subjective character.

6.1.3. Discussion of branch 4. Branch 4 is taken in order to understand definite noun phrases and proper names that refer to individuals who have not been previously introduced into the narrative. For example, in (3.2), above, neither the mirror, the gifts, the baby, nor Ellen have been mentioned before in the novel. A new referent for each is introduced into the CP's belief space; that is, by virtue of understanding the references in (3.2), the reader comes to believe that Hannah, the subjective character, believes that there is a mirror, some gifts, a baby, and a person named 'Ellen'.
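For concreteness, algorithm (A2) and the belief-space bookkeeping behind it can be sketched as below. This is only an illustration under our own assumptions: belief spaces are modeled as sets keyed by the CP stack, descriptions are plain hashable values, and all class and function names are hypothetical, not the SNePS representation itself:

class BeliefSpaces:
    """Belief spaces keyed by a stack of individuals: the space for (reader, c)
    holds what the reader believes that c believes."""
    def __init__(self):
        self.spaces = {}                       # stack tuple -> set of (concept, description)
    def space(self, stack):
        return self.spaces.setdefault(tuple(stack), set())
    def find(self, stack, description):
        for concept, desc in self.space(stack):
            if desc == description:
                return concept
    def find_elsewhere(self, stack, description):
        for key, props in self.spaces.items():
            if key != tuple(stack):
                for concept, desc in props:
                    if desc == description:
                        return concept
    def assert_is(self, stack, concept, description):
        self.space(stack).add((concept, description))

def understand_reference(x, indefinite, cp, kb):
    """Algorithm (A2) for a non-anaphoric, specific reference described by x."""
    stack = cp.stack()
    if indefinite:                     # branch 1: 'a Y' introduces a new individual
        n = object()
        kb.assert_is(stack, n, x)
        return n
    n = kb.find(stack, x)              # definite NP or proper name
    if n is not None:
        return n                       # branch 2: found in the CP's belief space
    n = kb.find_elsewhere(stack, x)
    if n is not None:
        kb.assert_is(stack, n, x)      # branch 3: now attributed to the subjective character
        return n
    n = object()                       # branch 4: a previously unmentioned individual
    kb.assert_is(stack, n, x)
    return n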
6.1.4. An example. We now illustrate our algorithms on a passage that reflects a character's mistaken belief. The passage is from a novel in which a character, Dwayne, mistakenly believes that another character, Casey, is a boy:

(15) (15.1) His [Dwayne's] brain worked slowly through what he knew about this person [Casey]. (15.2) David's kid. (15.3) The name stumbled into place. (15.4) This was David's boy. (15.5) David was in the war, and here was his kid in the arcade scared of something. [Bridgers, All Together Now, p. 91]

Note that (15.1) and (15.3) are psychological reports that employ metaphor, rather than psychological verbs, to report the character's psychological experience ((9.1), above, employs metaphor in a similar way). Metaphor is beyond the scope of this work, so, before applying our algorithm to this passage, we paraphrase it as follows:

(15a) (15a.1) He [Dwayne] thought of what he knew about this person [Casey]. (15a.2) David's kid. (15a.3) He remembered the name. (15a.4) This was David's boy. (15a.5) David was in the war, and here was his kid in the arcade scared of something.

First, consider the operation of our discourse process. Sentence (15a.1) is a psychological report, and so Dwayne, its subject, is pushed onto the CP; this establishes a subjective context, attributed to Dwayne, which is continued throughout the passage. Note that when the sentence fragment (which is a subjective element) is encountered, no change is made to the CP because there is already a character on the top of it. Similarly, no change is made to the CP when (15a.3), a psychological report, and the second conjunct of (15a.5), a predicate-adjective sentence with a psychological adjective, are encountered, since there is already a character on the top of the CP.

Now, consider the reference to David in (15a.2). The reader knows that David is Casey's father. If, before reading (15a.2), the reader didn't explicitly believe that Dwayne knew about David too, then branch 3 of algorithm (A2) would be taken to understand this reference; the result is that the reader now explicitly believes that Dwayne knows about David.

'David's boy' in (15a.4) reflects Dwayne's mistaken belief about Casey, and branch 2 of algorithm (A2) is taken in order to understand it. To illustrate that information in subjective contexts is attributed to the subjective character, suppose that (15a.4) were "This was David's girl"; in that case, the reader would have to infer that Dwayne had somehow found out that Casey is a girl.

6.1.5. Further discussion of algorithm (A2). Note that if a reference is a subjective element, such as 'the bastard', it may be a non-classificatory noun (Banfield 1982); that is, it cannot be understood entirely propositionally, since it expresses subjectivity. How it should be understood depends on the particular subjective element. Thus, specific algorithms for nouns that are subjective elements must supersede algorithm (A2).

As mentioned above, algorithm (A2) is unable to understand anaphoric references. However, anaphor comprehension can be affected by perspective. Consider the following passage:

(16) (16.1) The man had turned. (16.2) He started to walk away quickly in the direction of the public library. (16.3) "O.K.," said Joe, "get Rosie." (16.4) Zoe crept back to the blinker. (16.5) She felt hollow in her stomach. (16.6) She'd never really expected to see the Enemy again. [Oneal, War Work, p. 64]

In (16.6), 'the Enemy' is an anaphoric reference that occurs in a subjective context (established by (16.5), which is a psychological report); it co-specifies 'the man' in (16.1) and 'He' in (16.2). It reflects Zoe's belief that the man is an enemy spy, although it is not at all clear to the reader, at this point, that he is. Personal pronouns can also reflect the beliefs of a character.
The following passage is a continuation of passage (15) (italics ours):

(17) He [Dwayne] wasn't sure of what. What in the arcade could scare a boy like that? He rubbed his head under his baseball cap. He could see tears in Casey's eyes. He could tell they were tears because *his* eyes were too shiny. Too round. Well, it was all right to cry. He'd cried when they took him to that place a few years back. Now Casey was in a new place, too, feeling maybe the same as him. If he just knew what to do about it. "Let's don't play that game anymore," he said. "I don't like that one." Casey wiped *her* face on her sleeve ... [Bridgers, All Together Now, p. 92]

Both italicized pronouns refer to Casey; the first occurs in a subjective context attributed to Dwayne, and the second occurs after the subjective context has ended (in this passage, the subjective context is ended by direct speech).

6.2. Assertive Indefinite Pronouns. Assertive indefinite pronouns--e.g., 'someone', 'something', 'somebody'--are specific, though unspecified (Quirk et al. 1985); that is, they generally refer to particular people, things, etc., without identifying them. When referring to a particular referent, a speaker typically uses an assertive indefinite pronoun if (1) she doesn't know the identity of the referent, (2) she doesn't want the addressee to know the identity of the referent, or (3) she doesn't believe that the identity of the referent is relevant to the conversation. A character's thoughts and perceptions are not directed toward an addressee, and so the first of these uses is the predominant one in subjective contexts. Used in this way, they express a lack of knowledge, and so are subjective elements. When one of them appears in a subjective context, the reader understands that the subjective character does not know who or what the referent is. Often, the pronoun is the only source of this information. Consider the following example:

(18) (18.1) Suddenly she [Zoe] gasped. (18.2) She had touched somebody! [Oneal, War Work, p. 129]

There is no explicit statement in the novel that Zoe does not know whom she touched; this has to be inferred from the use of 'somebody'. Sentences (6) and (15.5) provide further examples.

6.3. Indefinite References. In conversation, definite references are used only if the speaker believes that the addressee has enough information to interpret them. As mentioned above, thoughts and perceptions are not directed toward an addressee, and so the use of definite references in subjective contexts is not subject to this constraint; as illustrated by (3.2), they are used to refer to referents familiar to the subjective character, whether or not the reader has been told about them before. So, when a specific indefinite reference appears in a subjective context, the reader understands that the referent is unfamiliar to the subjective character; otherwise, a definite reference would have appeared (Fillmore 1974). However, the referent may not be unknown to the reader or to the other characters. For example,

(19) There they [the King and his men] saw close beside them a great rubble-heap; and suddenly they were aware of two small figures lying on it at their ease, grey-clad, hardly to be seen among the stones. [Tolkien, The Two Towers, p. 206]

The reader knows that the King and his men have come upon two hobbits, Merry and Pippin. The King and his men do not know the hobbits, but other characters also present in the scene do know them.
When the King and his men are on the top of the CP (after 'saw' and continued by 'were aware of'), the hobbits are not referred to by name, but as 'two small figures'. Branch 1 of algorithm (A2) creates new referents and, in the belief space of the King and his men, builds propositions that they are small figures. The new referents can be asserted to be co-extensional with the concepts that the reader and other characters believe are named 'Merry' and 'Pippin'.

Indefinite references can sometimes indicate that the subjective character doesn't even know what the referent is. This occurs when the head noun is a superordinate, rather than a basic-level, term (Rosch and Lloyd 1978). The basic level is the preferred level at which people identify things. If a superordinate, rather than a basic-level, term appears in an indefinite reference in a subjective context, the reader understands that the subjective character can't even identify the referent at the basic level. In example (19), the hobbits are referred to as 'two small figures', because the King and his men have never seen hobbits before. Here is an example that is not from a fantasy novel:

(20) Slowly Hannah raised her head and blinked her eyes. Small dots of purple covered the ground around her and she reached out to explore. Violets! [Franchere, Hannah Herself, p. 25]

When she first sees the violets, Hannah can only identify them as 'small dots of purple'. Another occurs in (2.2): the fact that the reference 'several large cylindrical objects' includes the superordinate term 'objects' indicates that Muhammad doesn't know what the referents are. Another example is (21):

(21) He felt firm restraints of some sort holding him in place. [Wu, Cyborg, p. 141]

Peters and Shapiro (1987ab) describe a SNePS representation for natural category systems in which superordinate categories can be distinguished from basic-level and subordinate categories. After an indefinite reference with a superordinate term in a subjective context has been parsed, the fact that the subjective character was able to identify the referent only at a superordinate level is represented in the knowledge base by using their representation.
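The extra belief attributions that Sections 6.2 and 6.3 draw from the form of a reference in a subjective context might be sketched as follows; the dictionary keys and relation labels are our own illustrative assumptions, not part of the SNePS representation:

def belief_attributions_from_form(ref, cp, attribute):
    """Record reader beliefs about the subjective character triggered by the
    form of a reference; attribute(character, relation, referent) is hypothetical."""
    sc = cp.top()                                      # the subjective character
    if ref.get("assertive_indefinite_pronoun"):        # 'someone', 'something', 'somebody'
        attribute(sc, "identity-unknown", ref["referent"])
    elif ref.get("indefinite"):                        # e.g. 'two small figures'
        attribute(sc, "unfamiliar-with", ref["referent"])
        if ref.get("head_noun_superordinate"):         # e.g. 'objects', 'restraints'
            attribute(sc, "not-identifiable-at-basic-level", ref["referent"])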
7. CONCLUSIONS AND FUTURE RESEARCH. Many problems remain to be solved. Our discourse process cannot recognize subjective contexts that are not established by the linguistic signals it relies on, and general principles are needed to explain how readers recognize the ending boundaries of subjective contexts. We are investigating how tense, deictic terms (cf. Bruder et al. (1986), Banfield (1982)), the characters' goals (cf. Wilensky (1983)), and the argument structure (cf. Cohen (1987)) often exhibited by thoughts might be used to recognize the boundaries of subjective contexts. Branches 2 and 3 of algorithm (A1) need to be expanded to determine who the subjective character is if the actor focus isn't a reasonable candidate and no parenthetical appears. We are investigating how focus of attention (cf. Grosz (1981), Sidner (1983)) can be incorporated into algorithm (A2) in such a way that anaphoric references reflecting the beliefs of a character can be understood. Finally, there is the general problem of revision. Our algorithms assume that signals occur at the beginning of subjective contexts. However, there are cases when a subjective context cannot be recognized until some of it has already been parsed. A difficult case is illustrated by the following: "(C1) Jody was rich and famous. (C2) Why wasn't she happy? Bill wondered." Only after reading (C2) can the reader recognize that (C1) is a represented thought.

We have argued that a discourse-level approach must be taken to the problem of recognizing characters' thoughts and perceptions in third-person narrative. Our discourse process, which is implemented in an ATN grammar interfaced to SNePS, recognizes subjective contexts that are linguistically signaled in ways frequently employed in naturally-occurring narratives. By using the results of the discourse process to determine the belief context needed to understand references, our reference algorithm demonstrates how perspective affects reference in third-person narrative.

8. ACKNOWLEDGMENTS. We are indebted to Mary Galbraith, David Zubin, Sandra Peters, Stuart Shapiro, and the other members of the SUNY Buffalo Graduate Group in Cognitive Science and the SNePS Research Group for many discussions and ideas. This research was supported in part by NSF grants IST-8504713 and IRI-8610517.

REFERENCES

Banfield, Ann (1982), Unspeakable Sentences: Narration and Representation in the Language of Fiction (Boston: Routledge & Kegan Paul).

Bruder, Gail A.; Duchan, Judy F.; Rapaport, William J.; Segal, Erwin M.; Shapiro, Stuart C.; and Zubin, David A. (1986), "Deictic Centers in Narrative: An Interdisciplinary Cognitive-Science Project," Technical Report 86-20 (Buffalo: SUNY Buffalo Dept. of Computer Science).

Chatman, Seymour (1978), Story and Discourse: Narrative Structure in Fiction and Film (Ithaca, NY: Cornell University Press).

Clark, Herbert H. and Marshall, Catherine R. (1981), "Definite Reference and Mutual Knowledge," in A. Joshi, B. Webber, and I. Sag (eds.), Elements of Discourse Understanding (Cambridge: Cambridge University Press): 10-63.

Cohen, Philip R.; Perrault, C. Raymond; and Allen, James F. (1982), "Beyond Question Answering," in W. Lehnert and M. Ringle (eds.), Strategies for Natural Language Processing (Hillsdale, NJ: Lawrence Erlbaum): 245-274.

Cohen, Robin (1987), "Analyzing the Structure of Argumentative Discourse," Computational Linguistics 13: 11-24.

Cohn, Dorrit (1978), Transparent Minds: Narrative Modes for Representing Consciousness in Fiction (Princeton: Princeton University Press).

Doležel, Lubomír (1973), Narrative Modes in Czech Literature (Toronto: University of Toronto Press).

Fillmore, Charles (1974), "Pragmatics and the Description of Discourse," in C. Fillmore, G. Lakoff, and R. Lakoff (eds.), Berkeley Studies in Syntax and Semantics I (Berkeley: University of California Dept. of Linguistics and Institute of Human Learning): V1-V21.

Grosz, Barbara J. (1981), "Focusing and Description in Natural Language Dialogues," in A. Joshi, B. Webber, and I. Sag (eds.), Elements of Discourse Understanding (Cambridge: Cambridge University Press): 84-105.

Grosz, Barbara J. and Sidner, Candace L. (1986), "Attention, Intentions, and the Structure of Discourse," Computational Linguistics 12: 175-204.

Lakoff, Robin (1974), "Remarks on 'this' and 'that'," Papers from the Tenth Regional Meeting of the Chicago Linguistic Society (Chicago: Chicago Linguistic Society): 345-356.

Li, Naicong (1986), "Pronoun Resolution in SNePS," SNeRG Technical Note 18 (Buffalo: SUNY Buffalo Dept. of Computer Science).

Maida, Anthony S. and Shapiro, Stuart C. (1982), "Intensional Concepts in Propositional Semantic Networks," Cognitive Science 6: 291-330.

Quirk, Randolph; Greenbaum, Sidney; Leech, Geoffrey; and Svartvik, Jan (1985), A Comprehensive Grammar of the English Language (New York: Longman).
Peters, Sandra L. and Shapiro, Stuart C. (1987a), "A Representation for Natural Category Systems," Proceedings of the 9th Annual Conference of the Cognitive Science Society (Seattle) (Hillsdale, NJ: Lawrence Erlbaum): 379-390.

Peters, Sandra L. and Shapiro, Stuart C. (1987b), "A Representation for Natural Category Systems," Proceedings of the 10th International Joint Conference on Artificial Intelligence (IJCAI-87; Milan) (Los Altos, CA: Morgan Kaufmann): 140-146.

Rapaport, William J. and Shapiro, Stuart C. (1984), "Quasi-Indexical Reference in Propositional Semantic Networks," Proceedings of the 10th International Conference on Computational Linguistics (COLING-84; Stanford Univ.) (Morristown, NJ: Assoc. for Computational Linguistics): 65-70.

Rapaport, William J. (1986), "Logical Foundations for Belief Representation," Cognitive Science 10: 371-422.

Reiser, Brian J. (1981), "Character Tracking and the Understanding of Narrative," Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI-81; Vancouver) (Los Altos, CA: Morgan Kaufmann): 209-211.

Rosch, Eleanor and Lloyd, B.B. (1978), Cognition and Categorization (Hillsdale, NJ: Lawrence Erlbaum Associates).

Shapiro, Stuart C. (1979), "The SNePS Semantic Network Processing System," in N.V. Findler (ed.), Associative Networks (New York: Academic): 179-203.

Shapiro, Stuart C. and Rapaport, William J. (1987), "SNePS Considered as a Fully Intensional Propositional Semantic Network," in N. Cercone and G. McCalla (eds.), The Knowledge Frontier (New York: Springer-Verlag): 262-315.

Sidner, Candace L. (1983), "Focusing in the Comprehension of Definite Anaphora," in M. Brady and R. Berwick (eds.), Computational Models of Discourse (Cambridge, MA: The MIT Press): 267-330.

Uspensky, Boris (1973), A Poetics of Composition (Berkeley: University of California Press).

Wiebe, Janyce M. and Rapaport, William J. (1986), "Representing De Re and De Dicto Belief Reports in Discourse and Narrative," Proc. IEEE 74: 1405-1413.

Wilensky, Robert (1983), Planning and Understanding (Reading, MA: Addison-Wesley).

Wilks, Yorick and Bien, Janusz (1983), "Beliefs, Points of View, and Multiple Environments," Cognitive Science 7: 95-119.

CITED TEXTS

Allegro, John (1977), The Dead Sea Scrolls (Harmondsworth, Eng.: Penguin).

Bridgers, Sue Ellen (1979), All Together Now (New York: Knopf).

Clancy, Tom (1986), Red Storm Rising (New York: G.P. Putnam's Sons).

Franchere, Ruth (1964), Hannah Herself (New York: Thomas Y. Crowell).

Gage, Wilson (1963), Miss Osborne-the-Mop (Cleveland: World Publishing).

L'Engle, Madeleine (1986), Many Waters (New York: Dell Publishing).

Oneal, Zibby (1971), War Work (New York: Viking Press).

Tolkien, J.R.R. (1965), The Two Towers (New York: Ballantine Books).

Wu, William (1987), Cyborg (New York: Ace Books).
PARSING JAPANESE HONORIFICS IN UNIFICATION-BASED GRAMMAR

Hiroyuki MAEDA, Susumu KATO, Kiyoshi KOGURE and Hitoshi IIDA
ATR Interpreting Telephony Research Laboratories
Twin 21 Bldg. MID Tower, 2-1-61 Shiromi, Higashi-ku, Osaka 540, Japan

Abstract

This paper presents a unification-based approach to Japanese honorifics based on a version of HPSG (Head-driven Phrase Structure Grammar)[1][2]. Utterance parsing is based on lexical specifications of each lexical item, including honorifics, and a few general PSG rules, using a parser capable of unifying cyclic feature structures. It is shown that the possible word orders of Japanese honorific predicate constituents can be automatically deduced in the proposed framework without independently specifying them. Discourse Information Change Rules (DICRs) that allow resolving a class of anaphors in honorific contexts are also formulated.

1. Introduction

Japanese has a rich grammaticalized system of honorifics to express the speaker's honorific attitudes toward discourse agents (i.e., persons who are related to the discourse). As opposed to such written texts as scientific or newspaper articles, where the author's rather 'neutral' honorific attitude is required, in spoken dialogues an abundant number of honorific expressions is used, and they play an important role in resolving human zero-anaphors. In this paper, a unification-based approach to Japanese honorifics is proposed. First, Mizutani's theory of honorific expression acts[3] is introduced to define basic honorific attitude types used in specifying pragmatic constraints on the use of Japanese honorifics. Then a range of honorifics are classified into subtypes from a morphological and syntactico-semantic perspective, and examples of their lexical specifications are shown. The main characteristics of the utterance parser and an approach to explaining possible word orders of honorific predicate constituents are described. Finally, Discourse Information Change Rules are formulated that resolve a class of anaphors in honorific contexts.

2. Speaker's Honorific Attitudes toward Discourse Agents

2.1. Grammatical Aspects of Honorifics

A distinction must be made between the speaker's honorific attitude as determined by the utterance situation (the social relationship between discourse agents, the atmosphere of the setting, etc.), and the honorific attitude as expressed by special linguistic means independent of the utterance situation. For example, by violating a usage principle for the determination of an honorific attitude (i.e., "one should not exalt oneself in front of others"), uses of an honorific expression about the speaker himself can function as a kind of joke. However, without the help of grammatical properties of honorifics independent of particular utterance situations, the violation of a usage principle itself could not be recognized at all, and thus the expression could not function as a joke. Though the former situational determination of honorific attitude is an interesting subject matter for socio- and psycho-linguistic researchers, the latter grammatical properties of honorifics are our concern here and are what is described with lexical specifications for honorifics.

2.2. Mizutani's Theory of Honorific Expression Act

Mizutani's theory of honorific expression act is introduced to define basic honorific attitude types that stipulate the pragmatic constraints on Japanese honorifics. In this model, discourse agents are positioned in an abstract two-dimensional honorific space (Fig. 1).
How they are positioned is a socio- and psycho-linguistic problem, which is not pursued here.

[Fig 1. Honorific Space: the Speaker at the origin (0,0), the Hearer at (hx,hy), Agent P at (px,py), and Agent Q at (qx,qy).]

An honorific expression act reflects the configuration of these discourse agent points. The speaker is set as the point of origin, and the speaker's honorific attitude toward a discourse agent, say P, is defined as the position vector of point P. The speaker's honorific attitude toward agent P relative to agent Q is defined as a vector from point Q to point P. The value and the direction of the vector are defined as follows:

Honorific Value: for v = (x, y), the honorific value of a vector v (written as |v|) is defined as: |v| = y if x = 0; |v| = 0 if x ≠ 0.

Honorific Direction:
a. up: |v| > 0;
b. down: |v| < 0;
c. flat: |v| = 0 and x = 0;
d. across: |v| = 0 and x ≠ 0.

[N.B.] Assuming an honorific space to be two-dimensional (not one-dimensional), an across direction can be distinguished from a flat direction. An across direction of a vector corresponds to the case where no positive honorific relation between the two agents (i.e., up, down, or flat) is recognized by the speaker.

Though the speaker's honorific attitudes can be characterized from several viewpoints (e.g., up/down, distant/close, formal/informal), Mizutani's model is appropriate for describing Japanese honorifics because the up/down aspect most relevantly characterizes Japanese honorifics. Moreover, it is not clear how the other aspects are independently grammaticalized in the Japanese honorific system. Based on the direction of the vector defined above, the following four subtypes of honorific attitude relations are distinguished.

Honorific Attitude Type: a. honor-up; b. honor-down; c. honor-flat; d. honor-across.
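The definitions above translate directly into code. The following Python sketch (our illustration, not part of the system described in this paper) computes the honorific value and direction of the vector from agent Q to agent P:

def honorific_value(v):
    """|v| for v = (x, y): y if x == 0, and 0 otherwise."""
    x, y = v
    return y if x == 0 else 0

def honorific_direction(p, q=(0, 0)):
    """Direction of the vector from agent Q's point to agent P's point;
    Q defaults to the speaker at the origin."""
    v = (p[0] - q[0], p[1] - q[1])
    value = honorific_value(v)
    if value > 0:
        return "up"                              # honor-up
    if value < 0:
        return "down"                            # honor-down
    return "flat" if v[0] == 0 else "across"     # |v| = 0 in both remaining cases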
3. Description of Japanese Honorifics

3.1. Classification of Japanese Honorifics

3.1.1. Morphological Viewpoint

In Japanese, words in a wide range of syntactic categories (i.e., nouns, verbs, adjectives, nominal-verbs, nominal-adjectives, etc.) are systematically put into their honorific forms. They are classified into two subtypes according to how they are derived from their nonhonorific forms.

Classification by the lexical derivation type:
honorific-word =
a. regular-form-honorific-word (e.g. "o-kak-i" from "kak-i" [writevinf]) [HP-[writevstem-CSinf]]
b. irregular-form-honorific-word (e.g. "ossyar-" from "iw-" [speakvstem])

[N.B.] HP and CS stand for 'Honorific Prefix' and 'Conjugation Suffix' respectively. Words are transcribed in their phonemic representations.

While regular-form honorific words share a common base with their nonhonorific forms because they are derived by the productive honorific-affixation process, irregular-form honorific words have special word forms that have no direct connection to their nonhonorific forms. This distinction plays an important role in the lexical specification of honorifics and in the possible word orders of Japanese honorific predicate constituents.

3.1.2. Syntactico-Semantic Viewpoint

In traditional school grammar, Japanese honorifics have been classified into three categories: respect words ('sonkeigo'), condescending words ('kenjougo'), and polite words ('teineigo'). However, in this traditional tripartite classification, common features of respect-words and condescending-words not shared by polite-words are not explicit. That is, while an agent toward whom the speaker's honorific attitude is expressed must be grammatically located in the sentence (i.e., as subject or object) in the case of respect or condescending words, this requirement does not apply to polite words. Thus a more elaborate classification is adopted. Conventional terms are replaced by Harada[4]'s more syntactico-semantically motivated ones.

Classification by the syntactic role of an agent to whom the speaker's honorific attitude is expressed:
honorific-word =
a. propositional-honorific-word =
  a.1. subject-honorific-word (respect-word) (e.g. "kudasar-u" [givevsenf])
  a.2. object-honorific-word (condescending-word) (e.g. "sasiage-ru" [givevsenf])
b. performative-honorific-word (polite-word) (e.g. 'des-u', 'mas-u')

[N.B.] For example, a verb which takes a nonanimate subject (e.g. "fur-u" in the sentence "Ame (rain) ga (SBJ) fur-u (fall)." [The rain falls.]) can be put into its performative honorific form ("Ame ga fur-i mas-u.") but not into its subject honorific form (*"Ame ga o-fur-i ni nar-u."). This is in accordance with the difference between propositional honorifics and performative honorifics.

[N.B.] There is a class of words which function in between the a.2 and b types of honorifics (e.g. "mair-u" [go/comev] in "Basu ga mair-i mas-u." [A bus will come.]). Let us call them propositional-performative-words.

Minus-honorifics are given no place in the traditional tripartite classification. However, they are classified in our approach as corresponding to the expressed honorific attitude types.

Classification by the expressed honorific attitude type:
honorific-word =
a. plus-honorific-word (e.g. "aw-a-re-ru" [meetregular-sbjhon]) [[meetvstem-CSvong]-PlusHonAuxvstem-CSsenf]
b. minus-honorific-word (e.g. "aw-i-yagar-u" [meetregular-sbjhon]) [[meetvstem-CSinf]-MinusHonAuxvstem-CSsenf]

[N.B.] The Japanese honorific system has no systematized means to positively express honor-flat or honor-across honorific attitudes. A non-honorific plain word form may express an honor-flat honorific attitude toward a discourse agent in a situation such as speaking to an old friend, while it may express an honor-across honorific attitude in a situation such as writing a technical paper.

Because the classifications of honorifics from different viewpoints as summarized above are cross-categorical, and thus independent of one another, a single honorific word (e.g. "hozak-u" [sayvsenf]) can function at the same time as an irregular-form-honorific-word, a subject-honorific-word, and a minus-honorific-word.

3.2. A Unification-based Lexical Approach

A unification-based lexicalist approach is adopted here for describing Japanese honorifics for the following reasons: (a) a unification-based approach enables the integrated description of information from various kinds of sources (syntax, semantics, etc.), thus allowing their simultaneous analysis; (b) a lexical approach helps to increase the modularity of the grammar. In this approach, a grammar has only a small number of general syntactic rule schemata, and most grammatical information is specified in the lexicon. Linguistic word-class generalizations can be formed by making grammatical categories complex, representing them with feature structures. The specification of verbal category honorifics is important because the verbal categories are the most productive in the honorification process, and are thus appropriate to clearly show how diverse aspects of the Japanese honorific system are described in this approach.

3.3. Examples of Lexical Specifications
3.3.1. Regular-Form Honorifics

Subject Honorification by "Vvong + (ra)re-ru"

Regular-form honorifics are compositionally analyzed by giving lexical specifications for each honorific-word formation formative. For example, most plain-form verbs can be put into their simple subject-plus-honorific form by postpositioning the auxiliary verb "(ra)re-ru" to them ("re-ru" and "rare-ru" are allomorphs of a single morpheme). Lexical information for these formatives is specified in the feature structure:

[[orth (orthography) ?orth]
 [head [[pos (part-of-speech) v]
        [ctype (conjugation-type) vowel]
        [cform (conjugation-form) stem]]]
 [adjacent ?pred]
 [subcat {?sbj[[head [[pos p]
                      [grf (grammatical-function) sbj]]]
              [subcat {}]
              [sem ?sbjsem]
              [semf [[human +]]]]
          ?pred[[head [[pos v]
                       [ctype ?predctype]
                       [cform vong (voice-negative)]]]
                [subcat {?sbj}]
                [sem ?predsem]]}]
 [sem ?predsem]
 [prag [[restrs {[[reln honor-up]
                  [origin *speaker*]
                  [goal ?sbjsem]]}]]]]
where <?orth ?predctype> is one of {<"re" cons> <"rare" (:or vowel kuru suru)>}

Fig 2. Lexical specification for a simple subject-plus-honorification morpheme ("(ra)re-ru")

[N.B.] "?" is a prefix for a tag-name used to represent a token identity of feature structures. *speaker* is a special global variable bound to a feature structure representing the speaker's information.

The 'prag' feature describes the pragmatic constraint on this expression (the "honor-up" relationship from the speaker to the subject agent of the predicate is required for this expression to be used in a pragmatically appropriate way). Description with the 'honor-up' honorific attitude relation shows that this expression is a 'plus-honorific' expression. Structure-sharing of the 'goal' feature value of this honorific attitude relation with the semantic value of the predicate's subject shows that this expression is a 'subject-honorific' expression. The requirement on the 'orth' feature value (?orth) and the 'ctype' value in the 'subcat' feature (?predctype) describes the morphophonemic characteristic of this morpheme by stipulating that "re-(ru)" subcategorize for either a regular consonant-stem ctype verb or an irregular ctype verb ("suru" [do]), and that "rare-(ru)" subcategorize for either a regular vowel-stem ctype verb or an irregular ctype verb ("kuru" [come]), correctly allowing (1a) and (1c) but not (1b).

(1) a. Sensei ga kyoositu e ika re ta.
      teacher SBJ classroom to go[ctype cons] Past
      "(The) teacher went to (the) classroom."
    b. *Sensei ga kyoositu e ika rare ta.
    c. Sensei ga kyoositu e ko rare ta.
      come[ctype kuru]
      "(The) teacher came to (the) classroom."
    d. *Kyoositu e ko Sensei ga rare ta.

The 'adjacent' feature is a special feature which assures that its value be the first element in the list when the set description in the 'subcat' value is expanded into list descriptions by a rule reader. The specification of this feature implies that this morph is a bound morph, and thus requires its adjacent element to be realized as a nonnull phonetic form. Though the set description in the 'subcat' value is introduced to allow word order variation among complement daughters in Japanese, without this kind of specification, ungrammatical sequences such as (1d) would also be allowed for auxiliary verbs.

[N.B.] A set description in the subcat feature of a feature structure, [[adjacent ?c][subcat {?a ?b ?c}]], for example, is expanded into its corresponding two possible list descriptions by a rule reader as follows: [[adjacent ?c][subcat (:or <?c ?b ?a> <?c ?a ?b>)]]. Furthermore, <?c ?b ?a>, for example, is expanded into a feature structure such as [[first ?c][rest [[first ?b][rest [[first ?a][rest end]]]]]].
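The rule reader's expansion just described can be sketched in a few lines of Python. This is only an illustration under our own assumptions (plain lists stand in for feature structures); the function names are ours:

from itertools import permutations

def expand_subcat(subcat, adjacent=None):
    """Expand a set-valued subcat into its possible ordered list descriptions;
    the 'adjacent' element, if present, must come first."""
    return [list(p) for p in permutations(subcat)
            if adjacent is None or p[0] == adjacent]

def to_first_rest(order):
    """Encode an ordered list as nested [[first ...][rest ...]] feature structures."""
    fs = "end"
    for element in reversed(order):
        fs = {"first": element, "rest": fs}
    return fs

# e.g. expand_subcat(["?a", "?b", "?c"], adjacent="?c")
#      --> [["?c", "?a", "?b"], ["?c", "?b", "?a"]], the two orders in the N.B. above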
Object Honorification by "HP + Vinf + suru"

Next, let us consider a more complicated formation pattern for deriving a regular object-plus-honorific form. As productive as the above "Vvong + (ra)re-ru" pattern, an "HP + Vinf + suru" pattern can put most verbs with two grammatical human arguments into their corresponding object honorific forms, as follows: "o + aw-i + suru" from "aw-" (meetvstem), "go + shoukai + suru" from "shoukai" (introduce, a nominal-verb).

[N.B.] "o-" and "go-" are two forms of a single morpheme (honorific prefix) that is prefixed to words in a variety of syntactic categories (see Appendix I). The choice depends on the following element's origin: if the element is a Sino-Japanese morpheme (kango), the honorific prefix takes the form "go-"; if it is a native one, the honorific prefix is realized as "o-", though there are exceptions.

In a naive analysis of Japanese honorifics, these honorific forms derive from their corresponding plain forms by a simple object honorification lexical rule that does not take into account their internal constituent structures (e.g., "aw-u" --> "o-aw-i-suru"). Such a naive analysis is inadequate for the following reasons: (a) it is arguable that "HP + Vinf" forms a unit at some structural level before forming the unit "HP + Vinf + suru", considering the existence of such constructions as "HP + Vinf + ni + nar-u" (normal-sbj-plus-hon-form), "HP + Vinf + negaw-u (request)", and "HP + Vinf + itadak-u (receive-favor, irregular-obj-plus-hon-form)", but this is not explicitly illustrated in a naive analysis; (b) though some adverbial postpositions such as "wa" (contrastive), "mo" (also) and "sae" (even) can appear inside the object honorific form (e.g., "o-aw-i-WA-suru", "go-shoukai-SAE-MO-suru"), it is difficult to derive these forms by a naive analysis in light of the generalization concerning adverbial postpositions appearing in other environments (e.g., "Sensei ga kyoositu DAKE e WA ko rare ta" [the teacher came only to the classroom]); (c) a naive analysis fails to explain what kind of elements can operate as the Vinf element in the pattern, which is automatically explained in the proposed framework, as will be shown in Section 5.

This regular object-plus-honorification process is compositionally analyzed in the proposed framework by giving each of its formatives a lexical specification, in the same manner as the "Vvong + (ra)re-ru" pattern subject-plus-honorific analysis. Here the expression "o-aw-i-suru" is analyzed. Fig 3.a represents the lexical information of the verb "aw-" (meet) in its infinitive form ("aw-i"):

[[orth "aw-i"]
 [can-take-hp +][lex +]
 [head [[pos v][ctype cons][cform inf]
        [hpform "o"]]]
 [subcat {[[head [[pos p][grf sbj][form ga]
                  [semf [[human +]]]]]
           [subcat {}]
           [sem ?sbjsem]]
          [[head [[pos p][grf obj][form ni]
                  [semf [[human +]]]]]
           [subcat {}]
           [sem ?objsem]]}]
 [sem [[reln meet]
       [agent ?sbjsem]
       [object ?objsem]]]]

Fig 3.a. Lexical information for "aw-i" (meetvinf)

First, the honorific prefixation lexical rule is applied to this infinitive-form verb. Fig 3.b represents the lexical information of an honorific prefix (HP), and Fig 3.c shows how this lexical rule is stated in the proposed framework.
[[orth ?hpform]
 [head [[pos hp]
        [coh [[can-take-hp +][lex +]
              [head [[pos v][cform inf]
                     [hpform ?hpform]]]]]]]
 [subcat {}]]

Fig 3.b. Lexical information for an HP preceding Vinf

(defrule x -> (hp x)
  (<0 can-take-hp> == -)
  (<1 head coh> == <2>)
  (<0 head> == <2 head>)
  (<0 subcat> == <2 subcat>)
  (<0 sem> == <2 sem>)
  (<0 prag restrs> == (:union <1 prag restrs> <2 prag restrs>)))

Fig 3.c. Honorific prefixation rule

[N.B.] The rule, stated in an extended version of PATR-II notation, consists of two parts: a CFG part and constraints. The CFG part is used to propose an efficient top-down expectation in the parser. The constraints are required for the rule application to end successfully. Here, all constraints are described by equations of two feature structures: "< >" is used to denote a feature structure path, and "==" to denote a token identity relation between two feature structures.
3.3.2. Irregular Form Honorifics

Irregular-form honorifics share most of their lexical information with their non-honorific counterparts. In our framework, redundant lexical specification for irregular-form honorifics is avoided by using a lexical inheritance mechanism from their superclasses. For example, the necessary lexical specification for the irregular object honorific form "(-te)itadak-" of the donatory auxiliary verb "(-te)moraw-" is reduced, as shown in Fig 4.a. This turns out to be equivalent to Fig 4.b by unifying pieces of information from its superclasses, te-receive-favor and obj-plus-hon.

(:superclasses te-receive-favor obj-plus-hon)
[[orth "itadak"]
 [head [[ctype cons][cform stem]]]]

Fig 4.a. Necessary lexical specification for the irregular-form donatory auxiliary verb "(-te)itadak-"

[[orth "itadak"]
 [head [[pos v][ctype cons][cform stem]]]
 [subcat {[[head [[pos p][grf sbj][form ga]]]
           [subcat {}]
           [sem ?sbjsem]]
          [[head [[pos p][grf obj][form ni]]]
           [subcat {}]
           [sem ?objsem]]
          [[head [[pos v][cform te]]]
           [subcat {[[head [[pos p][grf sbj]]]
                     [subcat {}]
                     [sem ?objsem]]}]
           [sem ?predsem]]}]
 [sem [[reln transfer-favor]
       [donator ?sbjsem]
       [donatee ?objsem]
       [accompanied-action ?predsem]]]
 [prag [[restrs {[[reln honor-up]
                  [origin ?sbjsem]
                  [goal ?objsem]]
                 [[reln empathy-degree]
                  [more ?sbjsem]
                  [less ?objsem]]}]]]]

Fig 4.b. Whole lexical information for "(-te)itadak-"

Lexical information for other irregular-form honorifics is likewise specified.

4. Unification-based CFG Parser

Fig 5 shows the organization of the unification-based CFG parser. The parser is essentially based on Earley's algorithm, and unifies feature structures in its completion process. The descriptions of grammatical rules and lexical items are compiled into feature structures by the rule reader.

[Figure 5: Organization of the Unification-based Parser — a source utterance and the grammatical/lexical descriptions, compiled into feature structures by the rule reader, are processed by a parser based on Earley's algorithm using feature structure unification]

Unification of cyclic feature structures might be necessary to analyze certain expressions. To give some examples: (a) frozen honorific words such as "o-naka" (belly) and "go-ran" (to look at) must always be prefixed by an HP (the element in bold face); (b) the polite form ("gozar-") of the verb "ar-"/"ir-" (to be) almost always needs to be followed by the polite honorific auxiliary verb "-masu" in modern Japanese. In describing the above linguistic phenomena, it is convenient if requirements for its head category can be specified not only for adjunct elements, but also for complement elements. In such cases, one more equation as follows needs to be added to the usual head-complement structure rule statement shown in Fig 3.d:

<1 head coh> == <2>

The compiled feature structure for the equations in Fig 3.d plus the above equation includes a cyclic structure, as shown in Fig 6. An extended version of Wroblewski[5]'s feature structure unification algorithm was developed to allow rule statements including cycles[6]. The extended algorithm can unify cyclic feature structures while avoiding unnecessary overcopying of feature structures.

[Figure 6: Cyclic part of the compiled feature structure — the complement's head|coh points back to the head category, which in turn contains the complement as the FIRST of its subcat (FIRST/REST) list]
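As an illustration of how the extra equation induces reentrancy, the following sketch (ours; plain Python dicts, no unification machinery) builds the cycle that Fig 6 depicts: the complement constrains its head via head|coh, while the head contains the complement in its subcat.

```python
# Sketch of the cycle created by adding <1 head coh> == <2> to Fig 3.d.
head_cat = {"subcat": None}
comp = {"head": {"coh": head_cat}}          # <1 head coh> == <2>
head_cat["subcat"] = {"first": comp,        # <1> == (:first <2 subcat>)
                      "rest": "..."}

# The structure is cyclic: following coh then subcat|first returns
# to the complement itself.
assert comp["head"]["coh"]["subcat"]["first"] is comp

# A naive structure-copying unifier would loop forever here; a
# Wroblewski-style algorithm keeps a table of already copied/visited
# nodes so that each node is processed exactly once.
```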
5. Word Order of Honorific Predicate Constituents

In Japanese, a verbal predicate is composed of one main verb and postpositioned auxiliary verbs (though possibly none exist). Because both main verbs and auxiliary verbs may have honorific forms, various sequences of honorifics might be expected to occur in a predicate as a simple matter of possible combinations. However, their possible word orders are restricted by grammatical principles. Traditionally, possible word orders were described in detail, and the explanations for them were given from a rather speculative perspective. In this research, it is shown how possible word orders can be deduced from the lexical specifications of honorifics.

5.1. Propositional and Performative Honorifics

A propositional honorific formative always precedes a performative honorific formative. For example, though "awa-re-masu" ([[[meet-Vvong]-SbjPlusHon]-PerformativeHon]) and "o-awi-si-masu" ([[[HP-meet-Vinf]-ObjPlusHon]-PerformativeHon]) are possible expressions, they would be impossible if their word orders were reversed (i.e. the performative honorific placed before the propositional honorific). This restriction on word order is considered a consequence of the lexical specifications for both types of honorifics. As shown in section 3, propositional honorification formatives subcategorize, as their adjacent element, a verbal category whose subject (and object) elements are not yet filled. On the other hand, a performative honorification formative subcategorizes a verbal category with saturated subcategorization. Fig 7 represents the lexical specification for "masu".

[[orth ""]
 [head [[pos v][ctype masu][cform stem]
        [irregular-cforms [[senf masu] ...]]]]
 [can-take-hp -]
 [adjacent ?pred]
 [subcat {?pred[[head [[pos v][cform masu]]]
               [subcat {}]
               [sem ?predsem]]}]
 [sem ?predsem]
 [prag [[restrs {[[reln honor-up]
                  [origin *speaker*]
                  [goal *hearer*]]}]]]]

Fig 7. Lexical specification for the performative honorification formative "masu"

The performative honorification formative "masu" cannot, therefore, immediately precede a propositional honorification formative, due to the requirement concerning the adjacent element of propositional honorifics. The opposite order, however, constitutes a syntactically legitimate structure.

5.2. Subject and Object Honorifics

An object honorific formative must precede a subject honorific formative, though there is an important class of exceptions (verbs that subcategorize a 'te'-form verb as an adjacent element, such as "(-te)itadak-" [receive-favor]). For example, "o-awi-sa-reru" ([[[HP-meet-Vinf]-ObjPlusHon]-SbjPlusHon]) is a possible word order, but "o-awa-re-suru" ([[HP-[meet-Vvong-SbjPlusHon]]-ObjPlusHon]) is not possible if "-re(ru)" is used as an honorification formative. This word order restriction can be explained in the same way as the above case: as shown in section 3, the normal object honorification formative "-suru" subcategorizes a verb whose subject and object are not yet filled. The simple subject honorification formative "-(ra)reru", which requires its object to be already filled, cannot therefore precede the normal object-plus-honorification formative, on account of conflicting specifications for the 'subcat' value. Otherwise, no conflict exists. Other kinds of restrictions on the possible word order of Japanese honorific predicate constituents can likewise be explained in the proposed framework.
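The deduction in 5.1 amounts to checking each formative's requirement on its adjacent verbal category. A toy rendering of that check (our simplification — the 'subcat'/'adjacent' specifications are reduced to a single saturation flag, and feature names are invented) could look like this:

```python
def propositional_hon_accepts(host):
    """-suru / -(ra)reru: the adjacent verb's human arguments
    (subject, and object if any) must still be unfilled."""
    return host["args_open"]

def performative_masu_accepts(host):
    """masu subcategorizes a verbal category whose subcat is
    already saturated (cf. Fig 7)."""
    return not host["args_open"]

# propositional-then-performative ("o-aw-i-si ... masu"): fine
assert propositional_hon_accepts({"orth": "aw-i", "args_open": True})
assert performative_masu_accepts({"orth": "o-aw-i-si ...",
                                  "args_open": False})  # args bound off

# *performative-then-propositional: masu's projection is saturated,
# so the propositional formative's adjacency requirement fails
assert not propositional_hon_accepts({"orth": "... masu",
                                      "args_open": False})
```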
6. Anaphora Resolution in Honorific Contexts

In Japanese honorific contexts, many human anaphors can be resolved by recourse to pragmatic constraints on the use of honorifics. This is an attempt to apply DR theory to anaphora resolution in Japanese honorific contexts. Discourse information is represented by a feature structure consisting of a set of reference markers (Universe) and a set of conditions, as in the standard version of DR (Discourse Representation) theory[7]. Fig 8.a is the initially posited DRS (Discourse Representation Structure). Addition of other discourse information to the initial DRS does not affect the theory.

[[univ {[[rm *speaker*[[type 'individual]]]
         [[rm *hearer*[[type 'individual]]]
         [[rm *now*[[type 'temporal-location]]]
         [[rm *here*[[type 'spatial-location]]]}]
 [conds {}]]

Fig 8.a. Initial DRS

[N.B.] Reference markers for the indexicals are directly anchored to objects in the world, but the anchoring information is not shown here.

Now let (3a) represent a discourse-initial utterance.

(3) a. Izen ACL-88 ga hiraka-re ta toki, watasi wa aru chomei-na keisan-gengogaku-sha ni o-ai si masi ta.
"Once when ACL-88 was held, I met (object-honorific and performative-honorific) a certain famous computational linguist."

From this, Fig 8.b is unified as its semantic/pragmatic information. The method of specifying the necessary lexical information was briefly explained in section 3. The initial discourse information is updated by the semantic/pragmatic information of a new utterance as follows: First, DICR 1, shown in Fig 9.a below, is applied to the semantic value of the new utterance. DICR 2 is then applied to the pragmatic value. Meanwhile, anaphoric expressions in the new utterance are resolved so that the NFC[8], shown in Fig 9.b below, is observed. In this case, Fig 8.c is obtained as the updated DRS, because the type of the sem|cont value is a 'basic-circumstance' and every index in the sem|cont|inds value has a [familiarity '-] attribute in Fig 8.b.

[[sem [[cont ?x01[[reln 'meet]
                  [agent *speaker*]
                  [object ?x02]
                  [tloc ?x03]]]
       [inds {?x04[[var ?x02[[type 'ind]]]
                   [familiarity '-]
                   [restrs {?x05[[reln 'computational-linguist]
                                 [instance ?x02]]
                            ?x06[[reln 'famous]
                                 [instance ?x02]]}]]
              ?x07[[var ?x03[[type 'tloc]]]
                   [familiarity '-]
                   [restrs {?x08[[reln 'hold]
                                 [object ?x09]
                                 [tloc ?x03]]
                            ?x10[[reln 'temporally-precedes]
                                 [ante ?x03]
                                 [post *now*]]}]]
              ?x11[[var ?x09[[type 'ind]]]
                   [familiarity '-]
                   [restrs {?x12[[reln 'naming]
                                 [name 'acl-88]
                                 [named ?x09]]}]]}]]]
 [prag [[restrs {?x13[[reln 'honor-up]
                      [agent *speaker*]
                      [object ?x02]]
                 ?x14[[reln 'honor-up]
                      [agent *speaker*]
                      [object *hearer*]]}]]]]

Fig 8.b. Resulting semantic information for (3a)

Let k be the current DRS, σ be the linguistic structure for an input utterance unified from lexical specifications, and k' be the DRS to be obtained.

DICR 1.
(i) If σ|sem|cont is typed as a 'non-quantified-circumstance', then k'|univ = k|univ ∪ σ|sem|inds|var, and k'|conds = k|conds ∪ σ|sem|cont ∪ σ|sem|inds|restrs.
(ii) If σ|sem|cont is typed as a 'universally-quantified-circumstance', then k'|univ = k|univ, and k'|conds = k|conds ∪ {[[reln '⇒][ante k1][post k2]]}, where k1 and k2 are newly introduced DRSs whose information contents are specified based on the σ|sem|cont|quant|ind value and the σ|sem|cont|scope value.

DICR 2.
k'|univ = k|univ, and k'|conds = k|conds ∪ σ|prag|restrs.

Fig 9.a. Discourse Information Change Rules (part)

For σ to be felicitous w.r.t. k, it is required for every index i in σ that:
(i) if i|familiarity = '-, then i|variable ∉ k|universe.
(ii) if i|familiarity = '+, then (a) i|variable ∈ k|universe, and (b) i|restriction is unifiable with k|conditions.

Fig 9.b. Novelty-Familiarity Condition

[[univ {[[rm *speaker*]] [[rm *hearer*]] [[rm *now*]] [[rm *here*]]
        [[rm ?x02]] [[rm ?x03]] [[rm ?x09]]}]
 [conds {?x01 ?x05 ?x06 ?x08 ?x10 ?x12 ?x13 ?x14}]]

Fig 8.c. Updated DRS
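A direct, if simplified, procedural reading of DICR 1(i) and DICR 2 is sketched below (ours; DRSs and utterance structures are plain dicts of sets, and indices are strings):

```python
def apply_dicr(k, sigma):
    """Update DRS k with utterance structure sigma (DICR 1(i) + DICR 2).

    k     : {"univ": set, "conds": set}
    sigma : {"sem": {"cont": ..., "inds": [{"var":..., "restrs": set}]},
             "prag": {"restrs": set}}
    Only the non-quantified case of DICR 1 is modelled here.
    """
    k2 = {"univ": set(k["univ"]), "conds": set(k["conds"])}
    k2["univ"] |= {i["var"] for i in sigma["sem"]["inds"]}     # DICR 1(i)
    k2["conds"] |= {sigma["sem"]["cont"]}
    for i in sigma["sem"]["inds"]:
        k2["conds"] |= i["restrs"]
    k2["conds"] |= sigma["prag"]["restrs"]                     # DICR 2
    return k2

k0 = {"univ": {"*speaker*", "*hearer*", "*now*", "*here*"}, "conds": set()}
sigma_3a = {"sem": {"cont": "?x01",
                    "inds": [{"var": "?x02", "restrs": {"?x05", "?x06"}},
                             {"var": "?x03", "restrs": {"?x08", "?x10"}},
                             {"var": "?x09", "restrs": {"?x12"}}]},
            "prag": {"restrs": {"?x13", "?x14"}}}
k1 = apply_dicr(k0, sigma_3a)
assert "?x02" in k1["univ"] and "?x13" in k1["conds"]   # cf. Fig 8.c
```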
"That computational linguist greeted (subject-minus-honorific and performative-honorific) me." [[sam ]]cent ?xlS[[reln 'greet] [agent ?xl6] [recipient *speaker*] [tloc 7x17 ]]] [tnds (?x18[Cvar ?xlG[(typa 'lnd)]] [familiarity '+] [restrs { ?xlg[[raln "computational- linguist) ]Instance ?xl6]])]] ?20[[var ?17[[typa 'tloc]] [restrs { ?21[[raln ' tlmpor811y- precedes] [ante 717] [post *noo']))])]]] [prag [[restrs (?22[[roln 'honor-down) [agent *speaker*] [object (16)]] ?23['[reln 'honor-up) [agent *speaker e] [object *hearer*]])]]]]]] Fig 8.c. Resulting Semantic Information for (3b) Because the index 7x18 for "song keisan-gengogaku-sha" (that computational linguist) has a ]familiarity '+] attribute based on the lexical specification for 'song', an attempt is made to resolve it by unifying 7x16 with an element of the kluniv value, requiring that their restrictions can also be unified. It stands to reason that it can be resolved because 7x16 and 7x02 are, semantically speaking, unifiable, because their semantic restrictions are {[]rein 'computational- linguist]!instance 7x16]]} and [[[reln 'computational- linguist]linstance ?x02]] Ilreln 'famous)[instance ?x02]]) respectively, and their variable types are both 'individual', which causes no incompatibility. However, their pragmatic restrictions ({llreln 'honor-downJlagent %peeker*)lob]act 7x16|] [[reln "honor- upJlagent %peaker*]lobject "hearer*]]}, and {([reln 'honor-up)[agent *speaker*)lob]act ?x02]] ]It*In 'honor-up]iagent *speeker*]lobject *hearer*)l}) prevent ?x16 from being unified with ?x02, due to the stipulation 'llreln 'honor-up][agent ?ailobject ?b]] A [Ireln 'honor- down)[agent ?el]object ?b)] - bottom'. This anaphoric resolution therefore fails. Other ways of resolving this anaphoric expression also fail because of the incompatibility of their variable types or semantic features. In any case, utterance (3b) turns out to be infelicitous by NFC. Unlike (3b), utterance (3b'), whose sem/prag values are the same as Fig 8.c except for [[rein 'honor-up)[agent *speaker*)lob]act ?x16]] instead of []rein 'honor-down)[agent *speaker*)]object ?x16]], can be given a felicitous reading, because anaphora resolution is possible without violating NFC in this case, (3) b'. Song keisan-gengogaku-sha wa watasi ni aisatu nasal masi ta. "That computational linguist greeted (subject-honorific and per for mative-honoriflc) me." IN.L) Our DICRI with NFC also explain the failure of coindexing "song keisan-gengogaku-she" in (4b) with a universally quantified expression °done ... me" (every ...) in a previous utterance, because the reference markers introduced for a universally quantified expression are in sul:mrdiate DRSs by OICR 1 end not accessible from "song keisan-gangogaku-she" as a possible antecedent. ) (4) e. Izen ALL-88 ni sanka sl ta toki, watad via done charnel.ha kelsan- gengogeku.sha rd me o-el si meg ta. "When I once took part in ACL-88, I met (object-honorific and per formative-honorific) every famous computational linguist." b. ? Song keisan-oenoooaku-sha we watasYniaisatunasaimesita.($b~ Though many issues rermain unaddressed concerning anaphora resolution in Japanese honorific contexts, these can be approached by use of the proposed model. This model regards discourse understanding as the process of unifying various kinds of partial information, including contextual information. 7. Condusion A unification-based approach to Japanese honorifics based on a version of HPSG was proposed. 
7. Conclusion

A unification-based approach to Japanese honorifics based on a version of HPSG was proposed. Utterance parsing is based on the lexical specifications of a range of honorifics, using a parser capable of unifying cyclic feature structures. The developed parser constitutes an important part of NADINE (NAtural DIalogue INterpretation Expert), an experimental system which translates Japanese-English telephone and inter-keyboard dialogues.

Acknowledgement

The authors are deeply grateful to Dr. Kurematsu, the president of ATR Interpreting Telephony Research Laboratories, Dr. Aizawa, the head of the Linguistic Processing Department, and all the members of the Linguistic Processing Department for their constant help and encouragement.

References
[1] Pollard, Carl & Ivan Sag, 1987, Information-Based Syntax and Semantics, vol. 1, CSLI Lecture Notes 13.
[2] Gunji, Takao, 1987, Japanese Phrase Structure Grammar, Reidel.
[3] Mizutani, Sizuo, 1983, "Taiguu Hyougen no Sikumi" (Structure of Honorific Expressions), in Unyou (Pragmatics), Asakura.
[4] Harada, S. I., 1976, "Honorifics," in Shibatani (ed.), Syntax and Semantics 5, Academic Press.
[5] Wroblewski, David A., 1987, "Nondestructive graph unification," in Proceedings of the Sixth National Conference on Artificial Intelligence.
[6] Kogure, Kiyoshi, et al., 1988 (forthcoming), "A Method of Analyzing Japanese Speech Act Types," in the 2nd Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages.
[7] Kamp, Hans, 1981, "A Theory of Truth and Semantic Representation," in Groenendijk et al. (eds.), Formal Methods in the Study of Language, Mathematisch Centrum.
[8] Heim, Irene, 1983, "File Change Semantics and the Familiarity Theory of Definiteness," in Bäuerle et al. (eds.), Meaning, Use and Interpretation of Language, Walter de Gruyter.
ASPECTS OF CLAUSE POLITENESS IN JAPANESE: AN EXTENDED INQUIRY SEMANTICS TREATMENT

John A. Bateman*
USC/Information Sciences Institute
4676 Admiralty Way, Suite 1001
Marina del Rey, CA 90292, U.S.A.
(e-mail: bateman@vaxa.isi.edu)

(* This research was supported by a post-doctoral research fellowship from the Japan Society for the Promotion of Science (Tokyo) and the Royal Society (London), and was principally carried out at the Nagao Laboratory of the Department of Electrical Engineering, Kyoto University.)

Abstract

The inquiry semantics approach of the Nigel computational systemic grammar of English has proved capable of revealing distinctions within propositional content that the text planning process needs to control in order for adequate text to be generated. An extension to the chooser and inquiry framework, motivated by a Japanese clause generator capable of expressing levels of politeness, makes this facility available for revealing the distinctions necessary among interpersonal, social meanings also. This paper shows why the previous inquiry framework was incapable of the kind of semantic control Japanese politeness requires and how the implemented extension achieves that control. An example is given of the generation of a sentence that is appropriately polite for its context of use, and some implications for future work are suggested.

1 Introduction - inquiry semantics

A crucial task in text generation is to be able to control linguistic resources so as to make what is generated conform to what is to be expressed. In the computational systemic-functional grammar (SFG) 'Nigel' (Mann, 1985; Matthiessen, 1985; Mann and Matthiessen, 1985), this task is the responsibility of the grammar's inquiry semantics. Nigel follows general systemic-functional linguistics (SFL) practice in presenting grammar as a resource for expressing meanings; meanings are realized by a network of interlocking options, and particular grammatical forms are arrived at by making choices in this network. Generating appropriate text is then a problem of making the choices in such a way that the distinct needs of individual texts to be expressed are satisfied. This is achieved by means of choice experts, or choosers, that collectively ensure that the choices made will be those appropriate for any particular text need. Each choice point in the grammar network has associated with it a chooser whose responsibility is to interrogate the text need in respect of just those aspects of meaning necessary for determining the appropriate option to take. These choosers are formalized as decision trees whose nodes consist of basic knowledge base interrogation primitives called inquiries. Each aspect of meaning to be expressed that the grammar needs to know about is made accessible to the choosers by means of a single inquiry whose function is to determine where any particular meaning to be expressed stands on that aspect. For example, should the grammar need to know whether the text need was for the expression of a unitary object (say, a lion) rather than a set object (lions), then at the appropriate choice points in the grammar choosers would appeal to the inquiry named MultiplicityQ to determine the text need. When fully specified, inquiries have two forms: an informal English gloss representing the function of the inquiry in terms of the theory of meaning adopted, and an implementation, currently in Lisp, of an actual interrogation of a knowledge base.
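As an illustration of the chooser/inquiry division of labour, a chooser can be rendered as a decision tree whose internal nodes are inquiries posed to the environment. The sketch below is ours; the inquiry, system, and feature names are simplified stand-ins, not Nigel's actual definitions.

```python
# A chooser as a decision tree of inquiries (illustrative only).
def multiplicity_q(text_need):
    """Inquiry: is the object to be expressed unitary or a set?"""
    return "set" if text_need.get("cardinality", 1) > 1 else "unitary"

def number_chooser(text_need):
    """Chooser for a hypothetical NUMBER system: returns the
    grammatical feature to select at this choice point."""
    if multiplicity_q(text_need) == "set":
        return "plural"
    return "singular"

print(number_chooser({"concept": "lion", "cardinality": 3}))  # plural
print(number_chooser({"concept": "lion"}))                    # singular
```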
Typically, constructing an inquiry proceeds first by means of successive approximations in informal terms, glossed in English, followed by an encoding of the understanding achieved of the semantic distinction at issue.

This inquiry semantics approach has been very successful in the Nigel grammar of English; the grammar now has a very wide coverage, all under inquiry control. The type of coverage has, however, been limited primarily to what in SFL terms is called the ideational component of meaning (Halliday, 1985). This is the component concerned with expressing our representation of the world in terms of propositional content and logical organization. It is natural, therefore, that the inquiry approach should be successful in this domain, since this is typically the kind of information that is stored in the knowledge base and so is readily retrievable. Another SFL component of meaning, however, is the interpersonal. This aspect concerns the expression of social relationships, an area that will become increasingly important as more natural interactions between people and machines are attempted. Although the Nigel grammar does contain a few inquiries that are termed interpersonal, there has not been enough work here really to determine whether the inquiry framework is going to provide the tools necessary for capturing the kind of meaning this involves.

If the inquiry framework can be used in this area also, then we can use it to investigate the knowledge base distinctions that will need to be represented in order to control interpersonal grammatical resources. This is a methodology that has already been applied with great success to ideational meaning in the Nigel project. There, projecting through the inquiry interface from the grammar on to context has allowed for the construction of a domain-independent knowledge organization hierarchy called the upper structure (e.g. Moore and Arens, 1985). Since inquiries rely upon specific semantic distinctions to control the grammatical choices for which they are responsible, the formulation of a chooser's inquiries amounts to a constraint on the organization and content of the knowledge base and the text planning that needs to be done, of the following form: if the linguistic distinction for which the present chooser is responsible is to be available as a resource for the text planner to exploit, then that text planner and the knowledge base have at least to support the semantic distinctions identified by the inquiries that constitute that chooser.

Thus, the semantic distinctions revealed to be necessary for the implementation of the inquiries that control ideational choices have guided the construction of the upper structure. To extend the kind of organizational resource the upper structure provides into the interpersonal arena would therefore be very beneficial for our understanding of what needs to be included in the interpersonal area of the knowledge base and the text planning process, and so would promise to improve the range and quality of the texts we can generate.

2 A new domain: The expression of politeness in Japanese clauses

As part of a proposed text generation project in Japanese at Kyoto University, some fragments of a systemic-functional grammar of Japanese have been constructed (Bateman, 1985; Bateman et al., 1987). In Japanese discourse the grammatical expression of various interpersonal relationships is quite common.
Gaining control of these resources was therefore an ideal way to test further the applicability of the inquiry semantics approach in a domain which was clearly not ideational.

The particular area of interpersonal meaning examined here is that concerned with the expression of appropriate degrees of humility and respect in references to one's own actions, to those of one's audience, and to those of third parties. Although the general rule of being humble about one's own actions and respectful about those of others is complicated by a number of factors, even this simplest case presents problems as far as controlling the grammar is concerned. In this section, I will briefly describe some of the forms involved and, in the next, how these create problems for the inquiry and chooser framework as used in Nigel.

A variety of clause forms are regularly employed in Japanese for the expression of interpersonal meanings related to 'politeness'. For example, the 'demotion' of the process information to a nominal-like form preceded by a normal nominal honorific prefix (e.g. o, as in o-cha: 'honorable' tea), supported by an auxiliary verb such as suru, 'to do', or naru, 'to become', often explicitly expresses the relative social statuses of the participants involved and the fact of those participants' acknowledgment of those statuses. This we can see in:

o-VERB suru ('do') — humble referral to self's action
o-VERB ni naru ('becomes') — respectful referral to action of other
o-VERB desu ('be') — more distant respect for action of other

Another type of form involves combinations of morphemes that conventionally represent distinctive ways of being polite. Here, there are a number of different interpersonal speech act types that may be performed. For example, both the expression of gratitude for favors received and the expression of the giving of favors are virtually obligatory in normal discourse; this is achieved by appending one of the many verbs expressing 'to give/receive' to the process performed. These verbs are highly sensitive to relative social positions and the perspective taken on the action performed (e.g. Kuno and Kaburaki, 1977; Inoue, 1979), and this aspect of their meaning is carried over for the expression of favors done or perceived.(1)

(Footnote 1: Thus, for verbs corresponding to the English 'give' and 'receive', there are seven Japanese verbs in common usage, and these differ for the most part according to the relative social positions of the participants in the giving.)

Typical combinations also express polite ways of seeking permission for actions; one here modifies the action to be performed by means of the morphemes for causation/allowing, receiving a favor, wishing for, and thinking: a rough literal gloss of this form would be along the lines of 'I think I want to humbly receive from you your allowing me to do X'. Thus, the following clause forms are also commonly required in normal discourse:
VERB-giving — doing a 'favor': respectfully or humbly
VERB-receiving — receiving a 'favor': respectfully or humbly
[VERB-cause-receive-wish]-think — deferential seeking of permission

This by no means exhausts the range of forms that are relevant to discussions of politeness, respect, and humility in present-day Japanese, but it will be sufficient as an indication of the kinds of structures and meanings addressed within the present grammar.(2) It should also be noted that there are different 'dimensions' of politeness involved in the use of these forms; for example the clause

yoku kite-kureta-ne
well come favor-to-speaker tag

which means 'thanks for coming', is in the familiar level of speech form, i.e. it could only be used between people who are on familiar terms. It is nevertheless still necessary for the favor being done to be explicitly acknowledged; not expressing it would result in a clause that would often be inappropriate. The present grammar also treats the range of distinctions that arise along this 'familiar'/'polite' levels-of-speech dimension, but this will not be of immediate concern here.

(Footnote 2: A very good introduction and summary of the range of meanings and forms devoted to aspects of politeness in Japanese is given in Mizutani and Mizutani (1987).)

The differences in meaning that these alternative politeness-related forms represent need to be made available to a text generation system. This may be done by offering a set of grammatical resources that serves to express interpersonal knowledge about the interactive situation. As has been the case in the systemic grammar approach employed in Nigel generally, it is desirable to factor the knowledge and meanings to be expressed in terms of a structured set of alternatives that may be selected from straightforwardly; for ideational meanings this is provided by the upper structure. The internal organization of the systemic grammar then takes care of the construction of linguistic structures appropriate to those meanings. Now we want to be able to do the same with the linguistic structures described here. Information which will need to be held in appropriately constructed speaker and hearer models should be factored according to the inquiries that are necessary for driving the grammatical distinctions concerned. A problem arises here, however, in that it is not possible to state within Nigel's grammar and chooser framework that the alternative grammatical forms available for the expression of politeness are alternatives at all. The next section explains why this is so.

3 Problems with the existing formalization of chooser-grammar interaction

The principal problem encountered with controlling the deployment of structures such as those introduced in the previous section by means of a chooser mechanism is that, formerly, all chooser decisions have been local. Each chooser determines which grammatical feature is to be selected as appropriate for the context of use from a single point of minimal grammatical alternation. For example, the grammatical system that presents the minimal grammatical alternation in Japanese between having a constituent express a circumstance of location, and not having such a constituent, has a chooser associated with it which interrogates the knowledge base and text plan by means of its inquiries in order to see which of the two alternatives is applicable in the case at hand.
If a location is to be expressed, a grammatical feature is selected that entails the insertion of a constituent characterized functionally as a location; if there is no location to be expressed, then a feature which does not have such an entailment is selected. This selection between the alternative grammatical choices, or features, that are offered by a single grammatical system is the only influence that the chooser of that system is permitted to have on the generation process. Thus, in the location case, the effects of the chooser responsible for insertion or not of a location constituent are entirely local to the portion of the generation process delimited by the location system of the grammar.

With the politeness forms we seem to be faced again with a set of alternative meanings concerning the level and type of politeness to be expressed. However, the problem as far as the previously implemented view of the possible effects of choosers is concerned is that these alternatives correspond to no single points of grammatical alternation. For example, if the process of reading (yomu) is to be expressed but we want to make a selection of politeness-related meaning between a simple respectful reference to another's actions and a more distanced, indirect and reserved respectful reference, then the choice of appropriate forms for that process is between

o-yomi ni naru
HONORIFIC reading CASE becoming

and

o-yomi desu
HONORIFIC reading COPULA-be

Now, while the distinction in meaning may be captured by a simple scale of the 'directness' of the sentence that is appropriate for the particular interactive situation in which it is to be used, there is no grammatical system in the grammar of Japanese that offers a direct choice between these two clause structures. The former structure is similar to the typical use of the verb 'become', as in X-ni naru, 'to become X'; the latter is similar to clauses such as X desu, 'it is X'. They are not normally, e.g. in contexts not involving this particular contrast of politeness, in grammatical contrast.

The distinction is, then, in the use and meaning of the structures rather than in their grammatical construction. Indeed, such distinctions may often cross-cut the distinctions that are made in the grammar; this is simply to accept that the semantic and pragmatic distinctions that a language draws need not be matched one-for-one by corresponding minimal points of grammatical alternation. The levels of coding are distinct and incorporate distinct aspects of the meaning and construction of the linguistic units involved. It is not then possible to associate a 'politeness' chooser with a grammatical system, as is done with the choosers for ideational meanings, because there is no grammatical system of 'politeness' to which it may be attached. A simple choice between minimal alternatives of politeness can result in radically different grammatical structures that differ by virtue of many features. This means that politeness of this kind cannot be made available as a controllable expressive resource for a text planner within the chooser framework as it is implemented within the Nigel project.

4 An implemented solution

In order to meet this problem and to allow full control of politeness phenomena, the following extension was implemented within the context of the computational systemic grammar framework supported at Kyoto. The chooser framework is maintained as a decision tree that selects between minimal points of semantic alternation.
However, it is no longer the case that this needs to be held in a one-to-one correspondence with the minimal alternations that the grammar network represents. The possibility of distinct patterns of organization at the two levels, as would be claimed by systemic linguistics proper, is therefore captured. Accordingly, any chooser is permitted to make any number of selections of grammatical features from anywhere in the grammatical network. Choosers are thereby permitted to take on more of the organizational work required during text planning.

This extension made it possible to construct a chooser decision tree that interrogates the text need concerning precisely those distinctions in meaning required to ascertain which level and form of politeness to employ. The inquiries of this decision tree are free to ask all the questions related to the aspects of the social relationships of the participants in the speech situation that are necessary, without being concerned about where in the grammatical network the consequences of those questions will be felt. This makes that reasoning available in a modular and easily comprehensible form. The result of any particular path through the decision tree is a set of grammatical features that the grammatical product being generated as a whole must bear. This can therefore call for very different structural results to be selected, which differ by many grammatical features drawn from many distinct grammatical points of alternation.

The present politeness 'chooser', or decision tree, has around 15 decision points where a distinct inquiry needs to be put to the knowledge base. These inquiries are still at the stage of informal approximation. For example, after traversal of the decision tree has already established a number of important facts concerning the text need - including that the actor is the hearer, that the situation is not one classifiable as formally 'official', and that there is considerable social 'distance' between the speaker and hearer, among others - the simple semantic distinction glossable in English as

Is the subject-matter of the process such that additional reserve should be shown?

is drawn. If the text need is classifiable as requiring a yes-response to this inquiry, then the grammatical features identifying, intensive, and special-grammatical-placing are constrained to appear. If a no-classification is possible, then the grammatical features becoming-attribute, intensive, and special-grammatical-placing appear. The former set results in clauses with a functional structure of the form:

o-VERB desu
HONORIFIC X COPULA-be

which, as we have seen, expresses additional distance between the action and its performance, as required. The latter set is sufficient to constrain the structure produced to be of the form:

o-VERB ni naru
HONORIFIC X CASE becoming

which is the less indirect expression of respect. By way of contrast, the portion of the 'politeness' chooser that is concerned with the expression of humility, rather than respect, is shown in figure 1.

[Figure 1: The humility portion of the politeness chooser — a decision tree whose inquiries include "Is the action independent of others, the audience in particular?", "Is the process of the kind that a special lexical verb exists that expresses humility?", "Would the performance of the process obligate the hearer in any way?", and "Is there a reason for explicitly making clear consideration of the other's wishes regarding the process, such as in seeking permission for an action which may benefit the actor as much as or more than it does the hearer?"; its branches preselect (++) feature sets such as {special-lexical-placing, positive-social-placing} and {wishfulness, favours, cause, modified-process, special-grammatical-placing}]

Formerly, any such decision tree would only have been able to call for the appearance of a single grammatical feature; here any number of features may be selected (as indicated by the '++' operator in figure 1) during the decision tree's traversal. Modelling the kind of non-local organization inherent in the expression of politeness would therefore have required numerous decision trees split according to the grammatical organization. This subordinates the semantic organization to the grammatical organization and necessarily obscures the unity of the politeness reasoning process. By allowing the two levels of grammar and semantics their own, not necessarily isomorphic, dimensions of organization, it is possible to express the unity and coherence of patterns at either level and to capture the relationship between those levels.

5 Example of the generation of appropriately polite clauses

In this section, the generation of an actual utterance exhibiting complex attributes of politeness is illustrated. The utterance is drawn from a corpus of telephone conversations concerning hotel reservations. The traces given are those actually produced by the currently implemented Japanese systemic grammar program, which is written in Symbolics Common Lisp and runs on a Symbolics 3600 Lisp Machine.

The context for the utterance is as follows. After a negotiation of precisely where, when, and how long the customer is to stay, the person responsible for hotel booking states that he will send the confirmation of the reservation to the customer 'today'. It is worth noting that the 'direct' translation of this statement in terms of its ideational content (perhaps glossable as a very neutral I will send it today), such as might be handled by current machine translation systems, would be quite inappropriate in a genuine interactive situation such as the one described. What was actually said was of the following form:

kyou hassou sasete itadaki-tai to omoimasu
today send do-cause humbly-receive wish think
'might I be permitted to send it today?'

During generation the grammar causes the politeness reasoning chooser network to be entered; this performs the classifications shown in figure 2, and the humility section of this reasoning may be followed through in figure 1 also. The grammatical features constrained to appear in this case, i.e. wishfulness, favours, cause, etc., then result in particular predetermined paths being taken through the grammar network. For example, figure 3 shows when the grammatical system responsible for the construction of the functional structure concerned with the expression of causality is entered.(3)

(Footnote 3: A number of experimental extensions over the computational systemic framework implemented in Nigel appear in this trace, e.g. the entering of grammatical systems 'recursively' and the insertion of multiple functions of the same type, as in AGENT.1 and AGENT.2. These are beyond the scope of this paper, however; their detail may be found in Bateman et al. (1987).)
For example, figure 3 shows when the grammatlcal system responsible for the construction of the functional structure con- cerned wlth the expression of causallty is entered, s S a number of experimental extensJo~ over the corn- 151 ENTERED SO CI AL-P LACING-REQU I REM ENTS;SY STEM CHOOSER: Inquiring Is tt possible f.or the -~peaker to Identify ~dth the actor of. the process SENDING (PROCESS)? ENVIRONHENT RESPONSE: YES CHOOSER: t nqut rtng Does the re1 art onsht p (e. g. one o£ great sot1 al distance) bergen the current speaker and the hearer requt re the expresst on oP spectal soctal post tl ont ng tnf.ormaMon during the statement of" SENDING (PROCESS)? ENVZRONHENT RESPONSE= YES CHOOSER: presel ectt ng f.eature HUHBI~NG CHOOSER: HUMILITY-REASONIN~ CHOOSER: I nqut rd ng Is the actl on SENDING (PROCESS) tndependent of others, ENVI RONHENT RESPON~ CHOOSER: t nqut H ng ENVIRONHENT RESPON~ CHOOSER: t nqut H ng ENVIRONHENT RESPONSE: CHOOSER: choostng CHOOSER: - I nqut rd ng ENVZRONHENT RESPON~ CHOOSEP- preselecttng feature • CHOOSER: preselectt ng feature CHOOSER: preselecM ng f.eature the audtence t n part| cular? NO Would the performance of. the process SENDING (PROCESS) obll gate the hearer, tn any way? (e. g. to carry for so¢~fteo .°°) NO Is the process SENDING (PROCESS) oF the ktnd that a spectal lextcal verb extsts that expresses huadltty? NO POSZI~VE-SOCIN.-PLACZ NG Is there a reason /'or explicitly maktng clear consideration of the other's wishes regarding the process SENDING (PROCESS). such as I n seekt ng petrol sst on for an action ~ht ch may beneftt the actor as much as more than tt does the hearer? YES VI StFIJLRESS FAVOURS HOOIFIED-PROCESS CHOOSER: preselectt ng feature CAUSE CHOOSER: presele~tt ng feature HOOTFIED-PRO(TcSS CHOOSER: preselec%t ng Feature SPECIN.-GRNqHAT~CAL-PLACING SELECTED FEATURE is POSITIVE-SOCIAL-PLACING Figure 2: Trace of the 8rammar's poHteneu reasoning ENTERED MOOIFIED-PROCESS-TYPE-SY-S~ P.M RECURSIVELY PRESELECTIONS OVERRIDING: =;electing feature CAUSE, . SELECTED FEATURE is CAUSE ~J;LZZEI~ tnserttng REALIZER: conflaM ng REN~ZER: pre.sel ectt ng REALZZER: preselectl fig REN.ZZER: or'dent ng ~ENTERED MODIFIED-EXPERIENCE-SYS/EM RECURSIVELY CHOOSER: t nqut H ng I= tht = use of the process SENDING (PROCESS) rood1 fted further t n some way? ENVZRONHENT RESPONS~ NO CHOOSER: selec¢tng feature CORE-PROCESS SELECTED FEATURE is CORE-PROCESS REN.IZ'F.~ preselecttng PROCE~ for SIMPLE-PROCESS INITJ[ATOR INI~ATOR and AGENT. 2 PROCESS For COPPL~-PROCESS PROCESS for CAUSATIVE AGENT. 2 before AGENT. 1 FiKure 3: ~raversal of the causaCivity region of the grammar 152 This grammatical system offer two alternative selec- tlous of feature: one which constrains the structure generated to be an expression of causation and one which does not. Here, since the grammatical feature cause has been constrained to appear by the polite- ness chooser, no further reasoning needs to be done at this point and the construction of the appropriate structure may proceed directly (via excution of the re- alization etatemerds associated wlth the cause feature, which call for a variety of operations to be performed on functionally-labelled constituents such as AGENT, PROCESS, etc.). Similarly prsssiscted grammatical decisions are made for each of the other regions of the grammar responsible for creating the structure required to ex- press the politeness need as determined during polite- ness reasoning. 
Thls serves to build the structure of the example sentence as an appropriate realization of the distinctions in politeness that were ascertained to be necessary by the politeness chooser inquiries. 6 Implications for further work It has been shown how a straightforward extension of the chooser and inquiry framework employed within the Nigel grammar permits its application to the control of the resources for expre~ing politeness in Japanese. In addition to the choice of humble and respectful forms of expression illustrated here, thk mechanism has been used in one current version of the grammar to support the selection of appropriate verbs of 'giving' and their combinations with other processes for the expression of favors done and con- slderation for other's actions, the selection of the par- ticipants or circumstances in the clause that are to be made 'thematic', and the selection of appropriate levels of speech (familiar, polite, deferential) across a variety of grammatical forms. The flexibility that this approach offers for cap- turing the semantic distinctions involved in interper- sonal meanings is allowing us to apply to interper- sonal knowledge the technique that was adopted for ideational meanings of determining the knowledge that needs to be maintained for satisfactory control of the resources of the grammar. An examination of how the inquiries informally glossed here may be implemented with respect to an actual knowledge base significantly constrains the types of constructs and their interre- laticnships that that knowledge base will be required to support. Thus notions of relative social position, obligatlons owed, favors done, social situation types, putational systemic framework implemented in Nigel ap- pear in this trace, e.g. the entering of granunatical s/stems 'recursively' and the insertion of multiple functions of the same type, as in AGENT.1 and AGENT.2. These are be- yond the scope of this paper however; their detail may be found in Bateman et a/. (1987). consequences of actions upon other people, and oth- ers that adequate inquiries have been found to rely upon are isolated in a linguistically-motivated and con- strained manner for incorporation in the interpersonal component of any knowledge base that is intended to support Japanese text generation. It is to be expected that similar results may be found with respect to En- gllsh also and so the identification of the interpersonal constructs necessary for knowledge bases for English text generation is now a clear priority. A more general application of the extension to the inquiry semantics approach illustrated here is that it opens up the possibility of using the chooser and in- quiry framework to capture the selection of grammat- ical forms according to the uses that are to be made of those forms, without imposing the grammar's organi- zation upon the decision trees that control that selec- tion. Since this non-isomorphism between distinctions that are to be drawn between uses and the distinctions that axe maintained in the grammar is as widespread across English as it is across Japanese, it is to be ex- pected that the mechanism proposed here could find wlde application. However, further experimentation into the mechanism's utility and appropriateness as a representation of what is involved in areas of language use where this occurs needs to be undertaken. 
Acknowledgments

Many thanks are due to Professors Makoto Nagao and Jun-ichi Tsujii, all the members of the Nagao laboratory, and to the staff and students of the Kyoto Japanese School for attempting to improve my understanding of the Japanese language and its situated use.

References

[1] Bateman, J.A. (1985) 'An initial fragment of a computational systemic grammar of Japanese'; Kyoto University, Dept. of Electrical Engineering.
[2] Bateman, J.A., Kikui, G., Tabuchi (1987) 'Designing a computational systemic grammar of Japanese for text generation: a progress report'; Kyoto University, Dept. of Electrical Engineering.
[3] Benson, J.D., Greaves, W.S. (eds.) (1985) Systemic Perspectives on Discourse, Volume 1: Selected Theoretical Papers from the 9th International Systemic Workshop, New Jersey: Ablex.
[4] Halliday, M.A.K. (1985) An Introduction to Functional Grammar; London: Edward Arnold.
[5] Inoue, K. (1979) '"Empathy and Syntax" re-examined: A case study from the verbs of giving in Japanese'. The 15th Annual Meeting of the Chicago Linguistics Society, pp. 149-159.
[6] Kuno, S., Kaburaki, E. (1977) 'Empathy and Syntax'. Linguistic Inquiry, 8, pp. 627-672.
[7] Mann, W.C. (1985) 'An introduction to the Nigel text generation grammar', in Benson, J.D. and Greaves, W.S. (eds.) (op. cit.), pp. 84-95.
[8] Mann, W.C., Matthiessen, C.M.I.M. (1985) 'A demonstration of the Nigel text generation computer program', in Benson, J.D. and Greaves, W.S. (eds.) (op. cit.), pp. 50-83.
[9] Matthiessen, C.M.I.M. (1985) 'The systemic framework in text generation', in Benson, J.D. and Greaves, W.S. (eds.) (op. cit.), pp. 96-118.
[10] Mizutani, O. and Mizutani, N. (1987) How to be polite in Japanese. Tokyo: The Japan Times, Ltd.
[11] Moore, J., Arens, Y. (1985) 'A Hierarchy for Entities'; USC/Information Sciences Institute, working draft ms.
EXPERIENCES WITH AN ON-LINE TRANSLATING DIALOGUE SYSTEM

Seiji MIIKE, Koichi HASEBE, Harold SOMERS*, Shin-ya AMANO
Research and Development Center
Toshiba Corporation
1, Komukai Toshiba-cho, Saiwai-ku, Kawasaki-City, Kanagawa, 210 Japan

(* the Centre for Computational Linguistics, University of Manchester Institute of Science and Technology, England)

ABSTRACT

An English-Japanese bi-directional machine translation system was connected to a keyboard conversation function on a workstation, and tested via a satellite link with users in Japan and Switzerland. The set-up is described, and some informal observations on the nature of the bilingual dialogues reported.

INTRODUCTION

We have been developing an English-Japanese bi-directional machine translation system implemented on a workstation (Amano 1986, Amano et al. 1987). The system, which is interactive and designed for use by a translator, normally runs in an interactive mode, and includes a number of special bilingual editing functions. We recently realized a real-time on-line communication system with an automatic translation function by combining a non-interactive version of our Machine Translation system with a keyboard conversation function just like talk in UNIX.** Using this system, bilingual conversations were held between members of our laboratory in Japan and visitors to the 5th World Telecommunications Exhibition Telecom 87, organized by the International Telecommunication Union, held in Geneva from 20th to 27th October 1987.

(** UNIX is a trademark of AT&T Bell Laboratories.)

In the first part of this paper, we discuss in detail the configuration of this system, and give some indications of its performance. In the second part, we report informally on what for us was an interesting aspect of the experiment, namely the nature of the dialogues held using the system. In particular we were struck by the amount of metadialogue, i.e. dialogue discussing the previous interchanges: since contributions to the conversation were being translated, this metadialogue posed certain problems which we think are of general interest. In future systems of a similar nature, we feel there is a need for users to be briefly trained in certain conventions regarding metadialogue, and typical system translation errors. Furthermore, an environment which minimizes such errors is desirable, and the system must be 'tuned' to make translations appropriate to conversation.

SYSTEM CONFIGURATION

A general idea of the system is illustrated in Figure 1. Workstations were situated in Japan and Switzerland, and linked by a conventional satellite telephone connection. The workstations at either end were AS3260C machines. Running UNIX, they support the Toshiba Machine Translation system AS-TRANSAC. On this occasion, the Machine Translation capability was installed only at the Japanese end, though in practice both terminals could run AS-TRANSAC. The workstation screens are divided into three windows, as shown in Figure 2, not unlike in the normal version of UNIX's talk. The top window shows the user's dialogue, the middle window the correspondent's replies. The important difference is that both sides of the dialogue are displayed in the language appropriate to the location of the terminal. However, in a third small window, a workspace at the bottom of the screen, the raw input is also displayed.
(This access to the English input at the Japanese end is significant in the case of Japanese users having some knowledge of English, and of course vice versa if appropriate.) The bottom window also served the purpose of indicating to the users that their conversation partners were transmitting.

[Figure 1. General Set-up — two users at keyboards, in Switzerland and Japan, holding a dialogue via a satellite link]

[Figure 2. Screen Display — side-by-side screens in Switzerland and Japan, each showing the dialogue in its local language; e.g. the Swiss screen's top window reads "Hello, Takeda. My name is suzanne. / I live in geneva, but I come from California. / Yes, but when I was 12 years old. / Very interesting, quick, and useful! / How many languages do you speak, Takeda? / That is ok.", its middle window the correspondent's translated replies "My name is Takeda. Please tell me your name. / Where do you live? / I see. Have you visited Japan? / Please tell me the impression of this machine. / Thank you. I can speak only Japanese.", with the raw input shown in the bottom workspace]

Figure 3 shows the set-up in more detail. At the Japanese end, the user inputs Japanese at the keyboard, which is displayed in the upper window of the workstation screen. The input is passed to the translation system, and the English output, along with the original input, is then transmitted via telecommunications links (KDD's Venus-P and the Swiss PTT's Telepac in this case) to Switzerland. There it is processed by the keyboard conversation function, which displays the original input in the workspace at the bottom of the screen, and the translated message in the middle window on the screen. The set-up at the Swiss end is similar to that at the Japanese end, with the important exception that only the original input message is transmitted, since the translation will take place at the receiving end.

TRANSLATION METHOD

An input sentence is translated by a morphological analyzer, dictionary look-up module, parser, semantic analyzer, and target sentence generator. Introducing a full-fledged semantic analyzer conflicts with avoiding increases in processing time and memory use. To resolve this conflict, a Lexical Transition Network Grammar (LTNG) has been developed for this system. LTNG provides a semantic framework for an MT system while at the same time satisfying processing time and memory requirements. Its main role is to separate parsing from semantic analysis, i.e., to make these processes independent of each other. In LTNG, parsing includes no semantic analysis. Any ambiguities in an input sentence remain in the syntactic structure of the sentence until processed by the semantic analyzer. Semantic analysis proceeds according to a lexical grammar consisting of rules for converting syntactic structures into semantic structures. These rules are specific to words in a pre-compiled lexicon. The lexicon consists of one hundred thousand entries for both English and Japanese.

SYSTEM PERFORMANCE

Once the connection has been established, conversation proceeds as in UNIX's talk. An important feature of the function is that conversers do not have to take turns or wait for each other to finish typing before replying, unlike with write. This has a significant effect on conversational strategy, and occasionally leads to disjointed conversations, both in monolingual and bilingual dialogues. For example, a user might start to reply to a message the content of which can be predicted after the first few words are typed in; or one user might start to change the topic of conversation while the other is still typing a reply.
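The asymmetric message flow just described — translation at the sending end for Japanese-to-English, and at the receiving end for English-to-Japanese — can be sketched as follows (ours; the function and field names are invented for illustration, with translate() standing in for AS-TRANSAC):

```python
def translate(text, source, target):
    return f"[{source}->{target}] {text}"      # stand-in for AS-TRANSAC

def send_from_japan(japanese_text, send):
    english = translate(japanese_text, source="ja", target="en")
    # Japanese end: both the translation and the raw original travel
    send({"translation": english, "original": japanese_text})

def receive_in_japan(message):
    # the Swiss end sent only the raw English; translate on arrival
    japanese = translate(message["original"], source="en", target="ja")
    print("middle window:", japanese)
    print("workspace    :", message["original"])

send_from_japan("今日発送します", send=print)
receive_in_japan({"original": "Hello, Takeda."})
```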
Transmission of input via the satellite was generally fast enough not to be a problem: the real bottleneck was the physical act of input. Novice users do not attain high speed or accuracy, a problem exacerbated at the Swiss end by a slow screen echo. But the problem is even greater for Japanese input: users typed either in romaji (i.e. using a standard transcription into the Roman alphabet) or in hiragana (i.e. using Japanese-syllable values for the keys). In either case, conversion into kanji (Chinese characters) is necessary (see Kawada et al. 1979 and Mori et al. 1983 on kana-to-kanji conversion), and this conversion is needed for between a third and a half of the input, on average (cf. Hayashi 1982:211).

[Figure 3. Configuration — at the Japanese end (AS3260C), Japanese input J2 passes through the translation system, and both J2 and its English translation E2 are transmitted over KDD's Venus-P and the PTT's Telepac; at the Swiss end (AS3260C), English input E1 is handled by the conversation function and transmitted untranslated]

Because of the large number of homophones in Japanese, this conversion can slow down the speed of input considerably. For example, even for professional typists, an input speed of 100 characters (including conversions) per minute is considered reasonable (compare expected speeds of up to 100 words/minute for English typing). It is of interest to note that this kana-to-kanji conversion, which is accepted as a normal part of Japanese word-processor usage, is in fact a natural form of pre-editing, given that it serves as a partial disambiguation of the input.

On the other hand, slow typing speeds are also encountered for English input, one side-effect of which is the use of abbreviations and shorthand. In fact, we did not encounter this phenomenon in Geneva, though in practice sessions (with native English speakers) in Japan it had been quite common. Examples included contractions (e.g. pls for please, u for you, cn for can), omissions of apostrophes (e.g. cant, wont, dont) and non-capitalization (e.g. i, tokyo, jal).

The translation time itself did not cause significant delays compared to the input time, thanks to a very fast parsing algorithm, which is described elsewhere (Nogami et al. 1988). Input sentences were typically rather short (English five to ten words, Japanese around 20 characters), and translation was generally about 0.7 seconds per word (5000 words/hour). Given users' typing speed and the knowledge that the dialogue was being transmitted half way around the world, what would, under other circumstances, be an unacceptably long delay of about 15 seconds (for translation and transmission) was generally quite tolerable, because users could observe in the third window that the correspondent was inputting something, even if they could not read it.

TRANSLATION QUALITY

This environment was a good practical test of our Machine Translation system, given that many of the users had little or no knowledge of the target language: the effectiveness of the translation could be judged by the extent to which communication was possible. Having said this, it should also be remarked that the Japanese-English half of the bilingual translation system is still in the experimental stage, and so translations in this direction were not always of a quality comparable to those in the other direction.
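To make the kana-to-kanji bottleneck mentioned above concrete: a converter must let the typist choose among the many kanji spellings of a single reading, and each such choice is an extra interaction step. The following toy converter is ours (the dictionary holds just one illustrative reading):

```python
# Toy kana-to-kanji conversion: one reading, many homophonous kanji.
HOMOPHONES = {
    "こうえん": ["公園", "講演", "公演", "後援"],  # park, lecture, performance, support
}

def convert(reading):
    candidates = HOMOPHONES.get(reading, [reading])
    if len(candidates) > 1:
        # in a real IME the typist cycles through the candidates;
        # this choice step is what slows Japanese input down
        print(f"{reading}: choose among {candidates}")
    return candidates[0]

print(convert("こうえん"))
```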
On the other hand, slow typing speeds are also encountered for English input, one side-effect of which is the use of abbreviations and shorthand. In fact, we did not encounter this phenomenon in Geneva, though in practice sessions (with native English speakers) in Japan, this had been quite common. Examples included contractions (e.g. pls for please, u for you, cn for can), omissions of apostrophes (e.g. cant, wont, dont) and non-capitalization (e.g. i, tokyo, jal).

The translation time itself did not cause significant delays compared to the input time, thanks to a very fast parsing algorithm, which is described elsewhere (Nogami et al. 1988). Input sentences were typically rather short (English five to ten words, Japanese around 20 characters), and translation was generally about 0.7 seconds per word (5000 words/hour). Given users' typing speed and the knowledge that the dialogue was being transmitted half way around the world, what would, under other circumstances, be an unacceptably long delay of about 15 seconds (for translation and transmission) was generally quite tolerable, because users could observe in the third window that the correspondent was inputting something, even if it could not be read.

TRANSLATION QUALITY

This environment was a good practical test of our Machine Translation system, given that many of the users had little or no knowledge of the target language: the effectiveness of the translation could be judged by the extent to which communication was possible. Having said this, it should also be remarked that the Japanese-English half of the bilingual translation system is still in the experimental stage and so translations in this direction were not always of a quality comparable to those in the other direction.

To offset this, the users at the Japanese end, who were mainly researchers at our laboratory and therefore familiar with some of the problems of Machine Translation, generally tried to avoid using difficult constructions, and tried to 'assist' the system in some other ways, notably by including subject and object pronouns which might otherwise have been omitted in more natural language.

We recognized that the translation of certain phrases in the context of a dialogue might be different from their translation under normal circumstances. For example, English I see should be translated as naruhodo rather than watashi ga miru, Japanese wakarimashita should be I understand rather than I have understood, and so on. Nevertheless, the variety of such conversational fillers is so wide that we inevitably could not foresee them all.

The English-Japanese translation was of a high quality, except of course where the users - being inexperienced and often non-native speakers of English - made typing mistakes, e.g. (1). (In these and subsequent examples, E: indicates English input, J: Japanese input, and T: translation. Translations into Japanese are not shown. Typing errors and mistranslations are of course reproduced from the original transmission. Japanese input lost in reproduction is indicated by bracketed placeholders.)

(1a) E: this moming i came fro st. galle to vizite the exosition.
     E: it is vwery inyteresti ng to see so many apparates here.
(1b) E: i arderd "today's menu'.
(1c) E: i would go tolike a girl.
     J: [Japanese input]
     T: I don't understand.
     J: [Japanese input asking about "tolike"]
     T: What is tolike?

These were sometimes compounded by the delay in screen echo of input characters, as in example (2).

(2) E: Sometimes, I chanteh the topic, suddenly.
    E: I change teh topic.
    J: [Japanese input]
    T: I understand.
    E: I had many mistakes.
    J: [Japanese input]
    T: Are you tired?
    E: A little.
    E: But the main reason is the delay fo dispaying.
    E: But the main reason is the delay of display.

Failure to identify proper names or acronyms often led to errors (by the system) or misunderstandings (by the correspondent), as in (3a), especially when the form not to be translated happens to be identical to a known word, as in (3b). In (3b), 'go men na sai' means in Japanese 'I'm sorry'.

(3a) E: lars engvall.
     J: [Japanese input asking about "lars engvall"]
     T: What is lars engvall?
     E: this is my name.
(3b) [having been asked if he knows Japanese]
     E: How about go men na sai?
     T: [Japanese translation, with "na sai" left untranslated]

This was avoided on the Japanese-English side where proper names were typed in romaji (4).

(4) J: [Japanese input containing the romaji name "Nogami"]
    T: My name is Nogami.

As with any system, there were a number of occasions when the translation was too literal, though even these were often successfully understood (5).

(5) E: Do you want something to drink?
    J: [Japanese input]
    T: Yes.
    E: What drink do you want?
    J: [Japanese input]
    T: I want to drink a warm coffee.
    E: warm coffee?
    E: Not a hot one?
    J: [Japanese input]
    T: It is a hot coffee.

One problem was that the system must always give some output, even when it cannot analyse the input correctly: in this environment failure to give some result is simply unacceptable. However, this is difficult when the input contains an unknown word, especially when the source language is Japanese and the unknown word is transmitted as a kanji. Our example (6) nevertheless shows how a cooperative user will make the most of the output. Here, the non-translation of tsuki mae ('months ago') is compounded by its mis-translation as a prepositional object.
The first Japanese sentence said that I married two months ago. But the English correspondent imagines the untranslated kanji might mean 'wives'!

(6) J: [Japanese input]
    T: I married to 2 [untranslated kanji].
    E: are married to 2 what???.
    J: [Japanese input]
    T: I married in this year June.
    E: now i understand.
    E: i thought you married 2 women.

In the reverse direction, the problem is less acute, since most Japanese users can at least read Roman characters, even if they do not understand them (7): this led in this case to an interesting metadialogue. Again, the English user was cooperative, and rephrased what he wanted to say in a way that the system could translate correctly.

(7) E: can you give me a crash course in japanese?.
    J: [Japanese input asking about "crash course"]
    T: What is crash course?
    E: it means learn much in a very short time.

Mistranslations were a major source of metadialogue, to be discussed below, though see particularly example (10).

THE NATURE OF THE DIALOGUES

There has been some interesting research recently (at ATR in Osaka) into the nature of keyboard dialogues (Arita et al. 1987; Iida 1987), mainly aimed at comparing telephone and keyboard conversations. They have concluded that keyboard conversation has the same fundamental conversational features as telephone conversation, notwithstanding the differences between written and spoken language. No mention is made of what we are calling here metadialogue, though it should be remembered that our dialogues are quite different from those reported by the ATR researchers in that we had a translation system as an intermediary. No comparable experiment is known to us, so it is difficult to find a yardstick against which to assess our findings.

Regarding the subject matter of our dialogues, this was of a very general nature, often about the local situation (time, weather), the dialogue partner (name, marital status, interests) or about recent news. A lot of the dialogue actually concerned the system itself, or the conversation. An obvious example of this would be a request to rephrase in the case of mistranslation, as we have seen in (6) above, though not all users seemed to understand the necessity of this tactic (8).

(8) E: how does your sistem work please.
    J: [Japanese input]
    T: I don't understand a meaning of the sentence.
    E: how does your sistem work?

Often, a user would seek clarification of a mis- or un-translated word as in (9), or (3) above.

(9) E: I could have riz in the dinner.
    J: [Japanese input asking if "riz" is French]
    T: Is riz French?
    E: May be. I'm not sure.
    J: [Japanese input]
    T: Is it rice?
    E: In my guess, you are right.
    J: [Japanese input]
    T: It is natural.
    E: What is natural?
    J: [Japanese input]
    T: I understand French.
    J: [Japanese input]
    T: Riz is rice.

The most interesting metadialogues however occurred when users failed to distinguish cited words - a problem linguists are familiar with - for example by quotation marks: these would then be re-translated, sometimes leading to further confusion (10).

(10) J1: [Japanese input]
     T: Please speak a Japanese impression.
     E1: ichibana.
     J2: [Japanese input]
     J3: [Japanese input asking about "ichibana"]
     T: What is ichibana?
     E2: i thought it means number one.
     J4: [Japanese input]
     T: What is the first?
     E3: the translation to you was incorrect.

This example may need explanation. First the translation of the Japanese question (J1) has been misunderstood: the translation should have been 'Please give me your impressions of Japan', but the English user (E-user) has understood Japanese to mean 'Japanese language'.
That is, E-user has understood J1 to be saying 'Please speak an impressive Japanese word.' Then E-user confused ichiban ('number 1' or 'the first') and ikebana ('flower arranging'). The word ichibana (E1) does not exist in Japanese. His explanation 'number one' was correctly translated (not shown here) as ichiban. But not realizing, of course, that the meaning of his first sentence (J1) had been incorrectly understood, the Japanese user (J-user) could not understand E1 (J2) and asked for its sense (J3). So E-user tried to explain the meaning of ichibana, which in fact was ichiban. From the answer, J-user identified what E-user meant; but since J-user still did not realize that his first sentence had been incorrectly understood - and hence understood E2 to be saying that something was 'number 1' - he tried to ask what was 'number 1' (J4). But in the translation of this question, ichiban was translated as 'the first'. At this point, it is not clear which comment E-user is referring to in E3; in any case, not realizing what answer J-user had expected, and not knowing enough Japanese to see what had happened - i.e. the connection between 'number one' and 'the first' - E-user gives up and changes the subject. If E-user had intended to speak of ikebana and had explained its meaning, J-user could have realized that J1 had been misunderstood, since it would be meaningless, in a sentence giving someone's impressions, to say that something is ikebana.

On the other hand, where the user knew a little of the foreign language (typically the Japanese user knowing English rather than vice versa), such a misunderstanding could be quickly dealt with (11).

(11) E: How is the weathere in Tokyo?
     J: [Japanese input asking whether "weathere" is "weather"]
     T: Is weathere weather?

CONCLUSIONS

There are a number of things to be learnt from this experiment, even if it was not in fact set up to elicit information of this kind. Clearly, typing errors are a huge source of problems, so an environment which minimizes these is highly desirable. Two obvious features of such an environment are fast screen echo, and delete and back-space keys which work in a predictable manner in relation to what is seen on the screen. For the correction of typing errors, the system should have a spelling-check function which works word-by-word as users are typing in. The main reasons for syntax errors are ellipsis and unknown words. Therefore, the system should have a rapid syntax-check function which can work before transmission or translation and can indicate to users that there is a syntax error, so that users can edit the input sentence or reenter the correct sentence. These are in themselves not new ideas, of course (e.g. Kay 1980 and others).

Conventions for citing forms not to be translated, especially in metadialogue, must be developed, and the Machine Translation system must be sensitive to these. The system must be 'tuned' in other ways to make translations appropriate to conversation, in particular in the translation of conversational fillers like I see and wakarimashita.

Finally, it seems to be desirable that users be trained briefly, not only to learn these conventions, but also so that they understand the limits of the system, and the kind of errors that get produced, especially since these are rather different from the errors occasionally produced by human translators or people conversing in a foreign language that they know only partially.

REFERENCES

AMANO Shin-ya, Hiroyasu NOGAMI & Seiji MIIKE (1988).
A Step towards Telecommunication with Machine Interpreter. Second International Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages, June 12-14, 1988 (CMU).

NOGAMI Hiroyasu, Yumiko YOSHIMURA & Shin-ya AMANO (1988). Parsing with look-ahead in a real time on-line translation system. To appear in COLING 88, Budapest.

AMANO Shin-ya, Kimihito TAKEDA, Koichi HASEBE & Hideki HIRAKAWA (1988). Experiment of Automatic Translation Typing Phone (in Japanese). Information Processing Society of Japan (1988.3.16-18).

TAKEDA Kimihito, Koichi HASEBE & Shin-ya AMANO (1988). System Configuration of Automatic Translation Typing Phone (in Japanese). Information Processing Society of Japan (1988.3.16-18).

ASAHIOKA Yoshimi, Yumiko YOSHIMURA, Seiji MIIKE & Hiroyasu NOGAMI (1988). Analysis of the Translated Dialogue by Automatic Translation Typing Phone (in Japanese). Information Processing Society of Japan (1988.3.16-18).

ARITA Hidekazu, Kiyoshi KOGURE, Izumi NOGAITO, Hiroyuki MAEDA & Hitoshi IIDA (1987). Media-dependent conversation manners: Comparison of telephone and keyboard conversations (in Japanese). Information Processing Society of Japan 87-M (1987.5.22).

IIDA Hitoshi (1987). Distinctive features of conversations and inter-keyboard interpretation. Workshop on Natural Language Dialogue Interpretation, November 27-28, 1987, Advanced Telecommunications Research Institute (ATR), Osaka.

AMANO Shin-ya (1986). The Toshiba Machine Translation system. Japan Computer Quarterly 64 "Machine Translation - Threat or Tool" (Japan Information Processing Development Center, Tokyo), 32-35.

AMANO Shin-ya, Hideki HIRAKAWA & Yoshinao TSUTSUMI (1987). TAURAS: The Toshiba Machine Translation system. MT Machine Translation Summit, Manuscripts & Program, Tokyo: Japan Electronic Industry Development Association (JEIDA), 15-23.

KAY Martin (1980). The proper place of men and machines in language translation. Report CSL-80-11, Xerox-PARC, Palo Alto, CA.

HAYASHI Ooki (kanshuu) (1982). [Japanese title and publisher].

KAWADA Tsutomu, Shin-ya AMANO, Ken-ichi MORI & Koji KODAMA (1979). Japanese word processor JW-10. Compcon 79 (Nineteenth IEEE Computer Society International Conference, Washington DC), 238-242.

MORI Ken-ichi, Tsutomu KAWADA & Shin-ya AMANO (1983). Japanese word processor. In T. KITAGAWA (Ed.) Japan Annual Reviews in Electronics, Computers & Telecommunications Volume 7: Computer Sciences & Technologies, Tokyo: Ohmsha, 119-128.

APPENDIX A. Overall Performance Data

sessions: 78
utterances: 1429 (100%), 18.3 per session
utterances that were successfully translated: 1289 (90%)
utterances that were mis-translated: 140 (10%)
metadialogues: 31, 0.4 per session

APPENDIX B. Subject Matter in Utterances

total utterances: 1429 (100%)
greeting and self introduction: 470 (33%)
response signals: 154 (11%)
about weather: 92 (6%)
about time: 56 (4%)
others: 657 (46%)

APPENDIX C. Type of Expressions in Metadialogue

total metadialogues: 31
repetition of a part of partner's utterances (e.g. What is ichibana?): 22 (2 by English users, 20 by Japanese users)
telling typing errors or mistranslations (e.g. Error in Translation.): 9 (6 by English users, 3 by Japanese users)

APPENDIX D. Distribution of Utterances

(1a), (1b), (2) and so on correspond to the examples in the text. Those numbers are placed in the area in which the main utterances in the examples are involved.
[Figure: Distribution of utterances]
total: 1429 utterances
1289 utterances that were successfully translated
140 utterances that were mis-translated
31 utterances that caused metadialogues
  A: by typing errors (7 times)
  B: by mistranslations (5 times)
  C: by unknown words to the partner and so on (19 times)
SENTENCE FRAGMENTS REGULAR STRUCTURES

Marcia C. Linebarger, Deborah A. Dahl, Lynette Hirschman, Rebecca J. Passonneau
Paoli Research Center
Unisys Corporation
P.O. Box 517
Paoli, PA

† This work has been supported in part by DARPA under contract N00014-85-C-0012, administered by the Office of Naval Research; by National Science Foundation contract DCR-85-02205; and by Independent R&D funding from Systems Development Corporation, now part of Unisys Corporation. Approved for public release, distribution unlimited.

ABSTRACT

This paper describes an analysis of telegraphic fragments as regular structures (not errors) handled by minimal extensions to a system designed for processing the standard language. The modular approach, which has been implemented in the Unisys natural language processing system PUNDIT, is based on a division of labor in which syntax regulates the occurrence and distribution of elided elements, and semantics and pragmatics use the system's standard mechanisms to interpret them.

1. INTRODUCTION

In this paper we discuss the syntactic, semantic, and pragmatic analysis of fragmentary sentences in English. Our central claim is that these sentences, which have often been classified in the literature with truly erroneous input such as misspellings (see, for example, the work discussed in [Kwasny1980, Thompson1980, Kwasny1981, Sondheimer1983, Eastman1981, Jensen1983]), are regular structures which can be processed by adding a small number of rules to the grammar and other components of the system. The syntactic regularity of fragment structures has been demonstrated elsewhere, notably in [Marsh1983, Hirschman1983]; we will focus here upon the regularity of these structures across all levels of linguistic representation. Because the syntactic component regularizes these structures into a form almost indistinguishable from full assertions, the semantic and pragmatic components are able to interpret them with few or no extensions to existing mechanisms. This process of incremental regularisation of fragment structures is possible only within a linguistically modular system. Furthermore, we claim that although fragments may occur more frequently in specialised sublanguages than in the standard grammar, they do not provide evidence that sublanguages are based on grammatical principles fundamentally different from those underlying standard languages, as claimed by [Fitzpatrick1986], for example.

This paper is divided into five sections. The introductory section defines fragments and describes the scope of our work. In the second section, we consider certain properties of sentence fragments which motivate a modular approach. The third section describes our implementation of processing for fragments, to which each component of the system makes a distinct contribution. The fourth section describes the temporal analysis of fragments. Finally, the fifth section discusses the status of sublanguages characterized by these telegraphic constructions.

We define fragments as regular structures which are distinguished from full assertions by a missing element or elements which are normally syntactically obligatory. We distinguish them from errors on the basis of their regularity and consistency of interpretation, and because they appear to be generated intentionally. We are not denying the existence of true errors, nor that processing sentences containing true errors may require sophisticated techniques and deep reasoning.
Rather, we are saying that fragments are distinct from errors, and can be handled in a quite general fashion, with minimal extensions to normal processing. Because we base the definition of fragment on the absence of a syntactically obligatory element, noun phrases without articles are not considered to be fragmentary, since this omission is conditioned heavily by semantic factors such as the mass vs. count distinction. However, we have implemented a pragmatically based treatment of noun phrases without determiners, which is briefly discussed in Section 3.

Fragments, then, are defined here as elisions. We describe below the way in which these omissions are detected and subsequently 'filled in' by different modules of the system.

The problem of processing fragmentary sentences has arisen in the context of a large-scale natural language processing research project conducted at Unisys over the past five years [Palmer1986, Hirschman1986, Dowding1987, Dahl1987]. We have developed a portable, broad-coverage text-processing system, PUNDIT.(1) Our initial applications have involved various message types, including: field engineering reports for maintenance of computers; Navy maintenance reports (Casualty Reports, or CASREPS) for starting air compressors; Navy intelligence reports (RAINFORMs); trouble and failure reports (TFRs) from Navy vessels; and recently we have examined several medical domains (radiology reports, comments fields from a DNA sequence database). At least half the sentences in these corpora are fragments; Table 1 below gives a summary of the fragment content of three domains, showing the percent of centers which are classified as fragments. (Centers comprise all sentence types: assertions, questions, fragments, and so forth.)

Table 1. Fragments in three domains

            Total centers   Percent fragments
CASREPS     153             53%
RAINFORM    41              75%
TFR         35              51%

The PUNDIT system is highly modular: it consists of a syntactic component, based on string grammar and restriction grammar [Sager1981, Hirschman1985]; a semantic component, based on inference-driven mapping, which decomposes predicating expressions into predicates and thematic roles [Palmer1983, Palmer1985]; and a pragmatics component which processes both referring expressions [Dahl1986] and temporal expressions [Passonneau1987, Passonneau1988].

(1) Prolog UNDerstander of Integrated Text.
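As a rough sketch of this modular organization (each function below is a toy stand-in for the corresponding component, not PUNDIT's actual Prolog code), each stage consumes only the previous stage's output:

# Minimal sketch of a three-way modular pipeline in the style described above.
def syntactic_component(words):
    # Restriction Grammar parsing would go here; we just record a toy parse.
    return {"parse": words, "fragment": len(words) < 3}

def semantic_component(parse):
    # Inference-driven mapping: decompose the predicate into thematic roles.
    return {"pred": parse["parse"][-1], "roles": {}, "syntax": parse}

def pragmatic_component(sem, discourse):
    # Reference and temporal resolution against the discourse context.
    discourse.append(sem)
    return sem

discourse = []
analysis = pragmatic_component(
    semantic_component(syntactic_component(["pressure", "decreasing"])), discourse)
print(analysis["pred"], analysis["syntax"]["fragment"])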
*New one ~ be larger titan _ was (where the elided object of t~an is under- stood to be old howe) is Ul-formed, whereas a comparable discourse First repairman ordered new air eonditiom~r. Second repairman will inltali_ (where the elided object of inJto//is understood to be air eoaditloasr) is acceptable. In both cases above, the referent of the elided element is avail- able from context, and yet only the second elilpsis sounds well-formed. Thus •n appreciation of where such ellipses may occur is part of the lingu, t/e knowledge of speakers of English and not simply a function of the contextual salience of elided elements. Since these restrictions con- cern structure rather than content, they would be d;~cult or impossible to state in • system such •s a 'pure' semantic grammar which only recognised such omissions at the level of semantic/pragmatic representation. Furthermore, it matters to semantics and pragmatic• HOW an argument is omitted. The syntactic component must tell sem•ntlcs whether a verb argument is re;Ring bec•use the verb is used intransitively (as in The tiger was eating, where the patient argument is not specified) or because of • fragment ellipsis (as in Eaten bl/ a tiger, where the patient argument is missing because the subject of a passive sentence has been elided). Only in the latter case does the missing argument of eat function •s •n antecedent subsequently in the discourse: compare Eaten by a tiler. Had mcreamed bloody murder right before tKe attack (where the victim and the screamer are the same) vs. TKe tiger teas eating. Had screamed bloody murder right before tKe attack (where it is dlmcnlt or impossible to get the reading in which the victim and the screamer are the same). Semantles and pragmstles fill the holes. In PUNDIT's treatment of fragments, each com- ponent contributes exactly what is appropriate to the specification of elided elements. Thus the syn- tax does not attempt to 'fill in' the holes that it discovers, unless that information is completely predictable given the structure at hand. Instead, it creates • dummy element. If the missing ele- ment is an elided subject, then the dummy ele- ment created by the syntactic component is assigned a referent by the pragmatics component. This referent is then assigned • thematic role by the semantics component llke any other referent, and is subject to any selectlonal restrictions atom- cinted with the thematic role assigned to it. If the missing element is a verb, it is specified in either the syntactic or the semantic component, depending upon the fragment type. |. PROCESSING FRAGMENTS IN PUN- DIT Although the initial PUNDIT system wu designed to handle full, as opposed to fragmen- tary, sentences, one of the interesting results of our work is that it has required only very minor changes to the system to handle the basic frag- ment types introduced below. These included the additions of: 6 fragment BNF definitions to the grammar (a 5~ increase in grammar size) and 7 context-sensitive restrictions (a 12~o increase in the number of restrictions); one semantic rule for the interpret••ion of the dummy element inserted for missing verbs; • minor modification to the reference resolution mechanism to treat elided noun phrases llke pronouns; and a small addition to the temporal processing mechanism to handle tenseless fragments. 
3. PROCESSING FRAGMENTS IN PUNDIT

Although the initial PUNDIT system was designed to handle full, as opposed to fragmentary, sentences, one of the interesting results of our work is that it has required only very minor changes to the system to handle the basic fragment types introduced below. These included the additions of: 6 fragment BNF definitions to the grammar (a 5% increase in grammar size) and 7 context-sensitive restrictions (a 12% increase in the number of restrictions); one semantic rule for the interpretation of the dummy element inserted for missing verbs; a minor modification to the reference resolution mechanism to treat elided noun phrases like pronouns; and a small addition to the temporal processing mechanism to handle tenseless fragments.

The small number of changes to the semantic and pragmatic components reflects the fact that these components are not 'aware' that they are interpreting fragmentary structures, because the regularisation performed by the syntactic component renders them structurally indistinguishable from full assertions.

Fragments present parsing problems because the ellipsis creates degenerate structures. For example, a sequence such as chest negative can be analysed as a 'zero-copula' fragment meaning the chest X-ray is negative, or a noun compound like the negative of the chest. This is compounded by the lack of derivational and inflectional morphology in English, so that in many cases it may not be possible to distinguish a noun from a verb (repair parts) or a past tense from a past participle (decreased medication). Adding fragment definitions to the grammar (especially if determiner omission is also allowed) results in an explosion of ambiguity. This problem has been noted and discussed by Kwasny and Sondheimer [Kwasny1981]. Their solution to the problem is to suggest special relaxation techniques for the analysis of fragments. However, in keeping with our thesis that fragments are normal constructions, we have chosen the alternative of constraining the explosion of parses in two ways (a sketch of the first appears after Table 2 below).

The first is the addition of a control structure to implement a limited form of preference via 'unbacktrackable' or (xor). This binary operator tries its second argument only if its first argument does not lead to a parse. In the grammar, this is used to prefer "the most structured" alternative. That is, full assertions are preferred over fragments - if an assertion or other non-fragment parse is obtained, the parser does not try for a fragment parse.

The second mechanism that helps to control generation of incorrect parses is selection. PUNDIT applies surface selectional constraints incrementally, as the parse is built up [Lang1988]. For example, the phrase air compressor would NOT be allowed as a zerocopula because the construction air is compressor would fail selection.(2)

3.1. Fragment Types

The fragment types currently treated in PUNDIT include the following:

Zerocopula: a subject followed by a predicate, differing from a full clause only in the absence of a verb, as in Impellor blade tip erosion evident.

Tvo (tensed verb + object): a sentence missing its subject, as in Believe the coupling from diesel to sac lube oil pump to be sheared.

Nstg_frag: an isolated noun phrase (noun-string fragment), as in Loss of oil pump pressure.

Objbe_frag (object-of-be fragment): an isolated complement appropriate to the main verb be, as in Unable to consistently start nr 1b gas turbine.

Predicate: an isolated complement appropriate to auxiliary be, as in Believed due to worn bushings, where the full sentence counterpart is Failure is believed (to be) due to worn bushings.(3)

Obj_gap_fragment: a center (assertion, question, or other fragment structure) missing an obligatory noun phrase object, as in Field engineer will replace _.

Note that we do not address here the processing of response fragments which occur in interactive discourse, typically as responses to questions.

The relative frequency of these six fragment types (expressed as a percentage of the total fragment content of each corpus) is summarised below.(4)

Table 2. Breakdown of fragments by type

           CASREPS   RAINFORM   TFR
TVO        17.5%     40.8%      61%
ZC         52.5%     50%        18.8%
NF         25%       8.2%       18.8%
OBJBE      3.7%      0%         5.5%
PRED       1.2%      3.1%       0%
OBJ_GAP    0%        3.1%       0%

(2) Similarly, the assertion parse for the title of this paper would fail selection (sentences don't fragment structures), permitting the zerocopula fragment parse.
(3) It is interesting to note that at least some of these types of fragments resemble non-fragmentary structures in other languages. Tvo fragments, for example, can be compared to zero-subject sentences in Japanese, zerocopulas resemble copular sentences in Arabic and Russian, and structures similar to predicate can be found in Cantonese (our thanks to K. Fu for the Cantonese data). This being the case, it is not surprising that analogous sentences in English can be processed without resorting to extragrammatical mechanisms.
(4) ZC = zerocopula; NF = nstg_frag; PRED = predicate; OBJBE = objbe_frag; OBJ_GAP = obj_gap_fragment.
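A minimal sketch of the xor preference mentioned above (the toy parse functions stand in for the grammar's real definitions):

def xor(first, second):
    """Unbacktrackable or: commit to first's parses if any exist;
    try second only when first yields nothing."""
    def parse(tokens):
        results = first(tokens)
        return results if results else second(tokens)
    return parse

def assertion(tokens):
    # toy stand-in: an assertion needs an overt subject and tensed verb
    return [("assertion", tokens)] if len(tokens) >= 3 else []

def fragment(tokens):
    return [("zerocopula", tokens)] if len(tokens) == 2 else []

center = xor(assertion, fragment)
print(center(["disk", "bad"]))            # no assertion parse -> fragment
print(center(["the", "disk", "failed"]))  # assertion parse blocks fragment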
The processing of these basic fragment types can be summarised briefly as follows: a detailed surface parse tree is provided which represents the overt lexical content in its surface order. At this level, fragments bear very little resemblance to full assertions. But at the level of the Intermediate Syntactic Representation (ISR), which is a regularized representation of syntactic structure [Dahl1987], fragments are regularized to parallel full assertions by the use of dummy elements standing in for the missing subject or verb. The CONTENT of these dummy elements, however, is left unspecified in most cases, to be filled in by the semantic or pragmatic components of the system.

Tvo. We consider first the tvo, a subjectless tensed clause such as Operates normally. This is parsed as a sequence of tensed verb and object: no subject is inferred at the level of surface structure. In the ISR, the missing subject is filled in by the dummy element elided. At the level of the ISR, then, the fragment operates normally differs from a full assertion such as It operates normally only by virtue of the element elided in place of an overt pronoun. The element elided is assigned a referent which subsequently fills a thematic role, exactly as if it were a pronoun; thus these two sentences get the same treatment from semantics and reference resolution [Dahl1986, Palmer1986].

Elided subjects in the domains we have looked at often refer to the writer of the report, so one strategy for interpreting them might be simply to assume that the filler of the elided subject is the writer of the report. This simple strategy is not sufficient in all cases. For example, in the CASREPS corpus we observe sequences such as the following, where the filler of the elided subject is provided by the previous sentence, and is clearly not the writer of the report.

(1) Problem appears to be caused by one or more of two hydraulic valves. Requires disassembly and investigation.

(2) Sac lube oil pressure decreases below alarm point approximately seven minutes after engagement. Believed due to worn bushings.

Thus, it is necessary to be able to treat elided subjects as pronouns in order to handle these sentences. The effect of an elided subject on subsequent focusing is the same as that of an overt pronoun. We demonstrated in section 2 that elided subjects, but not semantically implicit arguments, are expected foci (or forward-looking centers [Grosz1986]) for later sentences.
One piece of supporting evidence for this assumption is that in many languages, such as Japanese [Gundel1980, l-nnds1983, Kameyama1985] the functional equivalent of unstressed pronouns in English is a sere, or elided noun phrase, s If seres in other languages can correspond to unstressed pronouns in English, then we hypothesise that seres in a sublunguage of English can correspond functionally to pro- nouns in standard English. In addition, since pro- ceasing of pronouns is independently motlvated, it is a priori simpler to try to fit elision Into the pro- nominal paradigm, if possible, than to create an entirely separate component for handling elision. Under this hypothesis, then, tvo fragments represent 8~ply a realization of a grammatical strategy that is generally available to languages of the world, s Zeroeopula. For a serocopuia (e.g., D~Jk bad), the surface parse tree rather than the ISR inserts a dnmmy verb, In order to enforce sub- categorization constraints on the object. And In the ISR, this null verb is 'filled in' as the verb be. It is possible to fill in the verb at this level because no further semantic or pragmatic infor- mation is required in order to determ;ne its con- tent. 7 Hence the representation for D~k bad is nearly indistinguishable from that assigned to the corresponding/)/Ik/s bad; the only difference is in the absence of tense from the former. If the null verb represents an~llsLry be, then, like an overt an~I;ary, it does not appear in the regularised form. Sac .failing thus receives a regularisatlon with /ai/ as the main verb. Thus the null verb inserted in the syntax is treated in the ISR ill a fashion exactly parallel to the treatment of overt t Stressed pronouns in Eugiish corrupond to overt pro- nouns in lanzua,res like Japanese. u discummd in [Gun- dell980, Gundellg81J, and [Dahl1982J. t An interesting hypothesis, discussed by Gundel and Kameyama, is that the more topic prominent a language is, the more likely it is to have sero-NP's. Perhaps the fact that sublangusge mumn~J are characterised by rigid, contextualiy supplied, topics contributes to the availability of the rye fragment type in English. 7 In some restricted subdomains, however, other verbs may be omitted: for example, in certain radiology reports an omitted verb may be interpreted u ,hew rather than be. Hence we find Chemf Fdm* 1/.10 tittle cAa~e, paraphruable as Che#t .Fdme show Htffe cA~sge. occurrences of 6c. Nstg-.~ag. The syntactic parse tree for this fragment type contains no empty elements; it is a regular noun phrase, labeled as an nstg_f~aK. The ISR transforms it into a VSO sequence. This is done by treating it as the sub- Sect of an element empty_verb; in the semantic component, the subject of empty_verb is treated as the sole argument of a predicate exlstentlsl(X). As a result, the nstg_frag Fai/ure o[ see and a synonymous assertion such as Failure o.f sac occurred are eventually mapped onto s;rnil~r final representations by virtue of the temporal semantics of empty_verb and of the bead of the noun phrase. Objbe_/~ag and predicate. These are iso- inted complements; the same devices described above are utillsed in their processing. The sur- face parse tree of these fragment types contains no empty elements; as with seroeopula, the unteused verb be is inserted into the ISR; as with tvo, the dnr-my subject elided is also inserted in the ISR, to be filled in by reference resolution. 
Obj_gap_fragment. The final fragment type to be considered here is the elided noun phrase object. Such object elisions occur more widely in English in the context of instructions, as in Handle _ with care. Cookbooks are especially well-known repositories of elided objects, presumably because they are filled with instructions. Object elision also occurs in telegrammatic sublanguages generally, as in Took _ under fire with missiles from the Navy sighting messages. If these omissions occurred only in direct object position following the verb, one might argue for a lexical treatment; that is, such omissions could be treated as a lexical process of intransitivisation rather than by explicitly representing gaps in the syntactic structure. However, noun phrase objects of prepositions may also be omitted, as in Fragile. Do not tamper with _. Thus we have chosen to represent such elisions with an explicit surface structure gap. This gap is permitted in most contexts where nstgo (noun phrase object) is found: as a direct object of the verb and as an object of a preposition.(8)

In PUNDIT, elided objects are permitted only in a fragment type called obj_gap_fragment, which, like other fragment types, may be attempted only if an assertion parse has failed. Thus a sentence such as Pressure was decreasing rapidly will never be analysed as containing an elided object, because there is a semantically acceptable assertion parse. In contrast, John was decreasing gradually will receive an elided object analysis, paraphrasable as John was decreasing IT gradually, because John is not an acceptable subject of intransitive decrease; only pressure or some equally measurable entity may be said to decrease. This selectional failure of the assertion parse permits the elided object analysis. Our working hypothesis for determining the reference of object gaps is that they are, just like subject gaps, appropriately treated as pronouns. However, we have not as yet seen extensive data relevant to this hypothesis, and it remains subject to further testing.

These, then, are the fragment types currently implemented in PUNDIT. As mentioned above, we do not consider noun phrases without determiners to be fragments, because it is not clear that the missing element is syntactically obligatory. The interpretation of these noun phrases is treated as a pragmatic problem. In the style of speech characteristic of the CASREPS, determiners are nearly always omitted. Their function must therefore be replaced by other mechanisms. One possible approach to this problem would be to have the system try to determine what the determiner would have been, had there been one, insert it, and then resume processing as if the determiner had been there all along. This approach was taken by [Marsh1981]. However, it was rejected here for two reasons. The first is that it was judged to be more error-prone than simply equipping the reference resolution component with the ability to handle noun phrases without determiners directly.(9) The second reason for not selecting this approach is that it would eliminate the distinction between noun phrases which originally had a determiner and those which did not. At some point in the development of the system it may become necessary to use this information.

(8) Note, however, that there are some restrictions on the occurrence of these elements. They seem not to occur in predicative objects, in double dative constructions, and, perhaps, in sentence adjuncts rather than arguments of the verb. (Thus compare Patient very ill. Do not operate on _ with Operating room closed on Sunday. Do not perform surgery on _.) One possibility is that these expressions can occur only where a definite pronoun would also be acceptable. In general, object gaps seem most acceptable where they represent an argument of a verb, either as direct object or as object of a preposition selected for by a verb.
(9) This ability would be required in any case, should the system be extended to process languages which do not have articles.
The basic approach currently taken is to assume that the noun phrase is definite, that is, it triggers a search through the discourse context for a previously mentioned referent. If the search succeeds, the noun phrase is assumed to refer to that entity. If the search fails, a new discourse entity is created.

In summary, then, these fragment types are parsed 'as is' at the surface level; dummy elements are inserted into the ISR to bring fragments into close parallelism with full assertions. Because of the resulting structural similarity between these two sentence types, the semantic and pragmatic components can apply exactly the same interpretive processes to both fragments and assertions, using preexisting mechanisms to 'fill in' the holes detected by syntax.

4. TEMPORAL ANALYSIS OF FRAGMENTS

Temporal processing of fragmentary sentences further supports the efficacy of a modular approach to the analysis of these strings.(10) In PUNDIT's current message domains, a single assumption leads to assignment of present or past tense in untensed fragments, depending on the aspectual properties of the fragment.(11) This assumption is that the messages report on actual situations which are of present relevance. Consequently, the default tense assignment is present unless this prevents assigning an actual time.(12)

(10) For a discussion of the temporal component, cf. [Passonneau1987, Passonneau1988].
(11) Since the tvo fragment is tensed, its input to the time component is indistinguishable from that of a full sentence.
(12) PUNDIT does not currently take full advantage of modifier information that could indicate whether a situation has real time associated with it (e.g., potential sac failure), or whether a situation is past or present (e.g., sac failure yesterday; pump now operating normally).
That is, the corresponding full sentences in the present, tss~ are conducted and nelo sac ~ reeelved, are inter- preted as referring to types of situations that tend to occur, rather than to situations that have occurred. In order to permit actual temporal reference, these fragments are assigned • past tense reading. Nst~/~ag represents another case where present tense may conflict with lexical aspect. If • n nmtg_frag refers to • non-st•tire situation, the situation is interpreted as having an actual past time. This can be the case if the head of the noun phrase is • nom;nallsation, and is derived from • verb in the process or tr•nsltlon event aspectual class. Thus, ineestlgation of problem would be interpreted as an actual process which took place prior to the report time, and ~irnilurly, sac/ai/ure would be interpreted •s • past transi- t|on event. On the other hand, an nstff~raJ¢ which refers to • st•tire situation, as in i~opera- ~iee pump, is assigned present tense. 5. RELATION OF FRAGMENTS TO THE LARGER G ~ An important finding which has emerged from the investigation of sentence fragments in a variety of sublanguage domains is that the linguistic properties of these constructions are largely domain-independent. A~nrn|rlg that these sentence fragments remain constant across different sublanguages, what is their relationship to the language at large? As indicated above, we Is Mourelat~' class of occurrences [Mourelatoslg81]. believe that fragments should not be regarded as ERRORS, • position taken also by ~ehrberger1982, Marsh1983], and others. Fragments do occur with disproportionate frequency in some domains, such as field reports of mechanical failure or newspaper headlines. However, despite this fre- quency v•riatlon, it appears that the parser's preferences remain constant •cross domains. Therefore, even in telegraphic domains the prefer- ence is for • full assertion parse, if one is avail- able. As discussed above, we have enforced this preference by means of the xor ('unbacktrack- able' or) connective. Thus despite the greater frequency of fragments we do not require either • gr•mm*r or • preference structure different from that of standard English in order to apply the stable system ~rammlr to these telegraphic mes- sages. Others have argued against this view of the relationship between sublanguages and the language at large. For example, Fitspatrlck et al. ~itspatrick1986] propose that fragments are sub- ject to • constraint quite unlike any found in English generally. Their Tr*n*ltlvity Con- straint (TC) requires that if • verb occurs as • transitive in • sublanguage with fragmentary messages, then it may not also occur in an intran- sitive form, even if the verb is ambiguous in the language at large. This constraint, they argue, provides evidence that sublanguage gramm,,rs have "• llfe of their own", since there is no such principle governing standard languages. The TC would also cut down on ambiguities arising out of object deletion, since • verb would be permit- ted to occur transitively or intransltlve]y in • given subdomain, but not both. As the authors recogulse, this hypothesis runs into tllt~culty in the face of verbs such as resume (we find both Sac resumed norm~ opera- tlon and No~e ]~am resumed), since resume occurs both transitively and intransitively in these cases. 
5. RELATION OF FRAGMENTS TO THE LARGER GRAMMAR

An important finding which has emerged from the investigation of sentence fragments in a variety of sublanguage domains is that the linguistic properties of these constructions are largely domain-independent. Assuming that these sentence fragments remain constant across different sublanguages, what is their relationship to the language at large? As indicated above, we believe that fragments should not be regarded as ERRORS, a position taken also by [Lehrberger1982, Marsh1983], and others. Fragments do occur with disproportionate frequency in some domains, such as field reports of mechanical failure or newspaper headlines. However, despite this frequency variation, it appears that the parser's preferences remain constant across domains. Therefore, even in telegraphic domains the preference is for a full assertion parse, if one is available. As discussed above, we have enforced this preference by means of the xor ('unbacktrackable' or) connective. Thus despite the greater frequency of fragments we do not require either a grammar or a preference structure different from that of standard English in order to apply the stable system grammar to these telegraphic messages.

Others have argued against this view of the relationship between sublanguages and the language at large. For example, Fitzpatrick et al. [Fitzpatrick1986] propose that fragments are subject to a constraint quite unlike any found in English generally. Their Transitivity Constraint (TC) requires that if a verb occurs as a transitive in a sublanguage with fragmentary messages, then it may not also occur in an intransitive form, even if the verb is ambiguous in the language at large. This constraint, they argue, provides evidence that sublanguage grammars have "a life of their own", since there is no such principle governing standard languages. The TC would also cut down on ambiguities arising out of object deletion, since a verb would be permitted to occur transitively or intransitively in a given subdomain, but not both. As the authors recognise, this hypothesis runs into difficulty in the face of verbs such as resume (we find both Sac resumed normal operation and Noise has resumed), since resume occurs both transitively and intransitively in these cases.

For these cases, the authors are forced to appeal to a problematic analysis of resume as syntactically transitive in both cases; they analyse The noise has resumed, for example, as deriving from a structure of the form [Someone/something] resumed the noise; that is, it is analysed as underlyingly transitive. Other transitivity alternations which present potential counter-examples are treated as syntactic gapping processes. In fact, with these two mechanisms available, it is not clear what COULD provide a counter-example to the TC. The effect of all this insulation is to render the Transitivity Constraint vacuous. If all transitive/intransitive alternations can be treated as underlyingly transitive, then of course there will be no counter-examples to the transitivity constraint. Therefore we see no evidence that sublanguage grammars are subject to additional constraints of this nature.

In summary, this supports the view that fragmentary constructions in English are regular, grammatically constrained ellipses differing minimally from the standard language, rather than ill-formed, unpredictable sublanguage exotica. Within a modular system such as PUNDIT this regularity can be captured with the limited augmentations of the grammar described above.

ACKNOWLEDGMENTS

The system described in this paper has been developed by the entire natural language group at Unisys. In particular, we wish to acknowledge the contributions of John Dowding, who developed the ISR in conjunction with Deborah Dahl, and Martha Palmer's work on the semantics component. The ISR is based upon the work of Mark Gawron. We thank Tim Finin and Martha Palmer as well as the anonymous reviewers for useful comments on an earlier version of this paper.

References

[Dahl1987] Deborah A. Dahl, John Dowding, Lynette Hirschman, Francois Lang, Marcia Linebarger, Martha Palmer, Rebecca Passonneau, and Leslie Riley, Integrating Syntax, Semantics, and Discourse: DARPA Natural Language Understanding Program, R&D Status Report, Paoli Research Center, Unisys Defense Systems, May 14, 1987.

[Dahl1986] Deborah A. Dahl, Focusing and Reference Resolution in PUNDIT, Presented at AAAI, Philadelphia, PA, 1986.

[Dahl1982] Deborah A. Dahl and Jeanette K. Gundel, Identifying Referents for Two Kinds of Pronouns. In Minnesota Working Papers in Linguistics and Philosophy of Language, Kathleen Houlihan (ed.), 1982, pp. 10-29.

[Dahl1987] Deborah A. Dahl, Martha S. Palmer, and Rebecca J. Passonneau, Nominalizations in PUNDIT, Proceedings of the 25th Annual Meeting of the ACL, Stanford, CA, July, 1987.

[Dowding1987] John Dowding and Lynette Hirschman, Dynamic Translation for Rule Pruning in Restriction Grammar. In Proc. of the 2nd International Workshop on Natural Language Understanding and Logic Programming, Vancouver, B.C., Canada, 1987.

[Eastman1981] C.M. Eastman and D.S. McLean, On the Need for Parsing Ill-Formed Input. American Journal of Computational Linguistics 7, 1981.

[Fitzpatrick1986] E. Fitzpatrick, J. Bachenko, and D. Hindle, The Status of Telegraphic Sublanguages. In Analyzing Language in Restricted Domains, R. Grishman and R. Kittredge (ed.), Lawrence Erlbaum Associates, Hillsdale, NJ, 1986.

[Grosz1986] Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein, Towards a Computational Theory of Discourse Interpretation, Ms., 1986.

[Gundel1981] Jeanette K. Gundel and Deborah A. Dahl, The Comprehension of Focussed and Non-Focussed Pronouns, Proceedings of the Third Annual Meeting of the Cognitive Science Society, Berkeley, CA, August, 1981.
Dab], The Comprehension of Focussed and Non-Focussed Pronouns, Proceed- ings of the Third Annual Meeting of the Cognitive Science Society, Berke- ley, CA, August, 1981. 14 [Gunde11980] Jeanette K. Gundel, Zero-NP Anaphora in Russian. Chicago LingtJistic ";ocisty Parasession on Pronouns and AnapKora, 1980. [Hinds1983] John Hinds, Topic Continuity in Japanese. In Topic Continuit!! in Discourse, T. Givon (ed.), John Benja- mlns Publishing Company, Philadel- phla, 1983. nrsc n1983] Lynette Hirschman and Naomi Sager, Automatic Inforumtion Formatting of a Medical Sublanguage. In ~ub]anguagc: Studies of Languayc in Restricted Se- mantic Domains, R. Kittredge and J. Lehrberger (ed.), Series of Foundations of Communications, Walter de Gruyter, Berlin, 1983, pp. 27-80. ~-Iirschman1986] L. HL'schman, Conjunction in Meta- Restriction Grammar. ,I. of Lo~ Pro- grammin~4), 1986, pp. 299-328. [mnchman1985] L. H]zschxn~n and K. Puder, Restriction Gramm*r: A Prolog Implementation. In Logic Programming and its Applications, D.H.D. Warren and M. VanCaneghem (ed.), 1985. [Jensen1983] K. Jensen, G.E. Heidoru, L.A. ~uller, and Y. Ravin, Parse Fitting and Prose FlYing: Getting a Hold on Ill- Formedness. American Journal of Com- putational Linguistic8 9, 1983. ~ameyama1985] Megumi Kameyama, Zero Anaphora: The Case of Japanese, Ph.D. thesis, Stanford University, 1985. ~wasny1981] S.C. Kwasny and N~. Sondheimer, laxstlon Techniques for Parsing 111- Formed Input. Am J. of Computational Linguutica 7, 1981, pp. 99.108. ~wasny1980] Stan C. Kwasny, Treatment of Ungram- marie a[ and Eztra- Grammatie a[ Phenomena in 2Va~ural Language Under- standing Systems. Indiana University Linguistics Club, 1980. [Lang1988] Francois Lang and Lynette Hirschman, Improved Portability and Parsing Through Interactive Acquisition of Se- mantle Information, Proc. of the Second Conference on Applied Natural Language Processing, Austin, TX, February, 1988. ~,ehrberger1982] J. Lehrberger, Automatic Translation and the Concept of Sublanguage. In Sublangua~e: Studies of Languafe in Restricted Semantic Domains, R. Kit- tredge and J. Lehrberger (ed.), de Gruyter, Berlin, 1982. p rsh1983] Elaine Marsh, Utilislng Domain-Specific Infornmtion for Processing Compact Text. In Proceedings of tKe Conference on Applied Natured Language Process- ing, Santa Monlca, CA,, February, 1983, pp. 99-103. [Marsh1981] Elaine Marsh, A or THE? Reconstruc- tion of Omitted Articles in Medical Notes, lVlss., 1981. ~Vlourelatos1981] Alennder P. D. Mourelatos, Events, Processes and States. In Spntaz and Se- mantics: Tense and Aspect, P. J. Tedes- chi and A. Zaenen (ed.), Academic Press, New York, 1981, pp. 191-212. ~almer1983] M. Palmer, Inference Driven Semantic Analysis. In Proceedingm of tKe National Conference on Artificial Intelligence (A.d.A[-83), Washington, D.C., 1983. 15 ~'almer1986] Martha S. Palmer, Deborah A. Dahl, Rebecca J. [Passonnesu] Sch~man, Lynette Hirschmsn, Marcia Linebarger, and John Dowding, Recovering Implicit Information, Presented at the 24th An- nual Meeting of the Association for Computational Linguistics, Columbls University, New York, August 1986. ~almer1985] Martha S. Palmer, Driving Semantics for a L;mlted Domain, Ph.D. thesis, University of Edinburgh, 1985. ~assonnesu1988] Rebecca J. Passonneau, A Computa- tional Model of the Semantics of Tense and Aspect. Gomputatio~a/ Lingu~h~I, 1988. ~assonneau1987] Rebecca J. 
Passonueau, Situations and Intervals, Presented at the 25th Annu- al Meeting of the Association for Com- putational Linsuistics, Stanford University, California, July 1987. [Sager1981] N. Sager, Natur~ Laaeu~e In/orma~a Proceuing: A Computer Grammar o/ Engl~h and I~ Application. Addkon- Wesley, Reading, Mau., 1981. [Sondhelmer1983] N. K. Sondhelmer and R. M. Wekchedel, Meta-rules as a Basis for Processing m-Formed Input. Amerieaa .lour~a~ o~ Computa~iona~ Lingu/~ticm 9(3-4), 1983. [Thompson1980] Bosena H. Thompson, Linguistic Analysis of Natural Languase Com- munication with Computers. In Proceedings of O,c 8~, Intcrnatlonal Con/erer~ee on Computationag Li~gu~- ~icl, Tokyo, 1980. 16
PLANNING COHERENT MULTISENTENTIAL TEXT

Eduard H. Hovy
USC/Information Sciences Institute
4676 Admiralty Way, Suite 1001
Marina del Rey, CA 90292-6695, U.S.A.
HOVY@VAXA.ISI.EDU

* Supported by DARPA contract MDA903 81 C0335.

Abstract

Though most text generators are capable of simply stringing together more than one sentence, they cannot determine which order will ensure a coherent paragraph. A paragraph is coherent when the information in successive sentences follows some pattern of inference or of knowledge with which the hearer is familiar. To signal such inferences, speakers usually use relations that link successive sentences in fixed ways. A set of 20 relations that span most of what people usually say in English is proposed in the Rhetorical Structure Theory of Mann and Thompson. This paper describes the formalization of these relations and their use in a prototype text planner that structures input elements into coherent paragraphs.

1 The Problem of Coherence

The example texts in this paper are generated by Penman, a systemic grammar-based generator with larger coverage than probably any other existing text generator. Penman was developed at ISI (see [Mann & Matthiessen 83], [Mann 83], [Matthiessen 84]). The input to Penman is produced by PEA (Programming Enhancement Advisor; see [Moore 87]), a program that inspects a user's LISP program and suggests enhancements. PEA is being developed to interact with the user in order to answer his or her questions about the suggested enhancements. Its theoretical focus is the production of explanations over extended interactions in ways that are superior to the simple goal-tree traversal of systems such as TYRESIAS ([Davis 76]) and MYCIN ([Shortliffe 76]).

In answer to the question how does the system enhance a program?, the following text (not generated by Penman) is not satisfactory:

(a). The system performs the enhancement. Before that, the system resolves conflicts. First, the system asks the user to tell it the characteristic of the program to be enhanced. The system applies transformations to the program. It confirms the enhancement with the user. It scans the program in order to find opportunities to apply transformations to the program.

... because you have to work too hard to make sense of it. In contrast, using the same propositions (now rearranged and linked with appropriate connectives), paragraph (b) (generated by Penman) is far easier to understand:

(b). The system asks the user to tell it the characteristic of the program to be enhanced. Then the system applies transformations to the program. In particular, the system scans the program in order to find opportunities to apply transformations to the program. Then the system resolves conflicts. It confirms the enhancement with the user. Finally, it performs the enhancement.

Clearly, you do not get coherent text simply by stringing together sentences, even if they are related - note especially the underlined text in (b) and its corresponding three propositions in (a). The goal of this paper is to describe a method of planning paragraphs to be coherent while avoiding unintended spurious effects that result from the juxtaposition of unrelated pieces of text.

2 Text Structuring

This planning work, which can be called text structuring, must obviously be done before the actual generating of language can begin. Text structuring is one of a number of pre-generation text planning tasks.
For some of the other tasks Penman has special-purpose domain-specific solutions. They include:

• aggregation: determining, for input elements, the appropriate level of detail (see [Hovy 87]), the scoping of sentences, and the use of connectives

• reference: determining appropriate ways of referring to items (see [Appelt 87a, 87b])

• hypotheticals: determining the introduction, scope, and closing of hypothesis contexts (spans of text in which some values are assumed, as in "if you want to go to the game, then ...")

The problem of text coherence can be characterized in specific terms as follows. Assuming that input elements are sentence- or clause-sized chunks of representation, the permutation set of the input elements defines the space of possible paragraphs. A simplistic, brute-force way to achieve coherent text would be to search this space and pick out the coherent paragraphs. This search would be factorially expensive. For example, in paragraph (b) above, the 7 input clusters received from PEA provide 7! = 5,040 candidate paragraphs. However, by utilizing the constraints imposed by coherence, one can formulate operators that guide the search and significantly limit the search to a manageable size. In the example, the operators described below produced only 3 candidate paragraphs. Then, from this set of remaining candidates, the best paragraph can be found by applying a relatively simple evaluation metric.

The contention of this paper is that, exercising proper care, the coherence relations that hold between successive pieces of text can be formulated as the abovementioned search operators and used in a hierarchical-expansion planner to limit the search and to produce structures describing the coherent paragraphs.

To illustrate this contention, the Penman text structurer is a simplified top-down planner (as described first by [Sacerdoti 77]). It uses a formalized version of the relations of Rhetorical Structure Theory (see immediately below) as plans. Its output is one (or more) tree(s) that describe the structure(s) of coherent paragraphs built from the input elements. Input elements are the leaves of the tree(s); they are sent to the Penman generator to be transformed into sentences.
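As a rough illustration of this contrast between brute-force enumeration and operator-guided structuring (the licensing test below is a toy stand-in for the coherence-relation checks formalized later in the paper):

from itertools import permutations

def coherent_orderings(elements, licenses):
    """Grow orderings left to right, keeping only extensions in which
    some relation licenses the new adjacency; dead-end prefixes drop out."""
    frontier = [[e] for e in elements]
    for _ in range(len(elements) - 1):
        frontier = [seq + [e]
                    for seq in frontier
                    for e in elements
                    if e not in seq and licenses(seq[-1], e)]
    return frontier

# toy domain: element i may only be followed by i+1 or i+2
licenses = lambda a, b: b - a in (1, 2)
elements = list(range(5))
print(len(list(permutations(elements))))            # 120 candidates
print(len(coherent_orderings(elements, licenses)))  # far fewer survive

The same pruning idea, with relation constraints in place of the toy test, is what cuts the 5,040 candidate paragraphs of example (b) down to 3.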
Mann and Thompson, after a wide-ranging study involving hundreds of paragraphs, proposed that a set of 20 relations suffices to represent the relations that hold within the texts that normally occur in English ([Mann & Thompson 87, 86, 83]). These relations, called RST (Rhetorical Structure Theory), are used recursively; the assumption (never explicitly stated) is that a paragraph is only coherent if all its parts can eventually be made to fit into one overarching relation. The enterprise was completely descriptive; no formal definition of the relations or justification for their completeness was given. However, the relations do include most of Hobbs's relations and support McKeown's schemas.

A number of similar descriptions exist. The description of how parts of purposive text can relate goes back at least to Aristotle ([Aristotle 54]). Both Grimes and Shepherd categorize typical intersentential relations ([Grimes 75] and [Shepherd 26]). Hovy ([Hovy 86]) describes a program that uses some relations to slant text.

4 Formalizing RST Relations

As defined by Mann and Thompson, RST relations hold between two successive pieces of text (at the lowest level, between two clauses; at the highest level, between two parts that make up a paragraph)¹. Therefore, each relation has two parts, a nucleus and a satellite. To determine the applicability of the relation, each part has a set of constraints on the entities that can be related. Relations may also have requirements on the combination of the two parts. In addition, each relation has an effect field, which is intended to denote the conditions which the speaker is attempting to achieve.

In formalizing these relations and using them generatively to plan paragraphs, rather than analytically to describe paragraph structure, a shift of focus is required. Relations must be seen as plans -- the operators that guide the search through the permutation space. The nucleus and satellite constraints become requirements that must be met by any piece of text before it can be used in the relation (i.e., before it can be coherently juxtaposed with the preceding text). The effect field contains a description of the intended effect of the relation (i.e., the goal that the plan achieves, if properly executed). Since the goals in generation are communicative, the intended effect must be seen as the inferences that the speaker is licensed to make about the hearer's knowledge after the successful completion of the relation.

Since the relations are used as plans, and since their satellite and nucleus constraints must be reformulated as subgoals to the structurer, these constraints are best represented in terms of the communicative intent of the speaker. That is, they are best represented in terms of what the hearer will know -- i.e., what inferences the hearer would run -- upon being told the nucleus or satellite filler. As it turns out, suitable terms for this purpose are provided by the formal theory of rational interaction currently being developed by, among others, Cohen, Levesque, and Perrault. For example, in [Cohen & Levesque 85], Cohen and Levesque present a proof that the indirect speech act of requesting can be derived from the following basic modal operators:

• (BEL x p) -- p follows from x's beliefs

¹This is not strictly true; a small number of relations, such as Sequence, relate more than two pieces of text. However, for ease of use, they have been implemented as binary relations in the structurer.
• (BMB x y p) -- p follows from x's beliefs about what x and y mutually believe
• (GOAL x p) -- p follows from x's goals
• (AFTER a p) -- p is true in all courses of events after action a

as well as from a few other operators such as AND and OR. They then define summaries as, essentially, speech act operators with activating conditions (gates) and effects. These summaries closely resemble, in structure, the RST plans described here, with gates corresponding to satellite and nucleus constraints and effects to intended effects.

5 An Example

The RST relation Purpose expresses the relation between an action and its intended result:

Purpose
Nucleus Constraints:
1. (BMB S H (ACTION ?act-1))
2. (BMB S H (ACTOR ?act-1 ?agt-1))
Satellite Constraints:
1. (BMB S H (STATE ?state-1))
2. (BMB S H (GOAL ?agt-1 ?state-1))
3. (BMB S H (RESULT ?act-1 ?act-2))
4. (BMB S H (OBJ ?act-2 ?state-1))
Intended Effects:
1. (BMB S H (BEL ?agt-1 (RESULT ?act-1 ?state-1)))
2. (BMB S H (PURPOSE ?act-1 ?state-1))

For example, when used to produce the sentence The system scans the program in order to find opportunities to apply transformations to the program, this relation is instantiated as

Purpose
Nucleus Constraints:
1. (BMB S H (ACTION SCAN-1)) -- The program is scanned
2. (BMB S H (ACTOR SCAN-1 SYS-1)) -- The system scans it
Satellite Constraints:
1. (BMB S H (STATE OPP-1)) -- Opportunities to apply transformations exist
2. (BMB S H (GOAL SYS-1 OPP-1)) -- The system "wants" to find them
3. (BMB S H (RESULT SCAN-1 FIND-1)) -- Scanning will result in finding
4. (BMB S H (OBJ FIND-1 OPP-1)) -- the opportunities
Intended Effects:
1. (BMB S H (BEL SYS-1 (RESULT SCAN-1 OPP-1))) -- The system "believes" that scanning will disclose the opportunities
2. (BMB S H (PURPOSE SCAN-1 OPP-1)) -- This is the purpose of the scanning

Figure 1: Paragraph Structure Tree

The elements SCAN-1, OPP-1, etc., are part of a network provided to the Penman structurer by PEA. These elements are defined as propositions in a property-inheritance network of the usual kind written in NIKL ([Schmolze & Lipkis 83], [Kaczmarek et al. 86]), a descendant of KL-ONE ([Brachman 78]). Some input for this example sentence is:

(PEA-SYSTEM SYS-1)      (OPPORTUNITY OPP-1)
(PROGRAM PROG-1)        (ENABLEMENT ENAB-5)
(SCAN SCAN-1)           (DOMAIN ENAB-5 OPP-1)
(ACTOR SCAN-1 SYS-1)    (RANGE ENAB-5 APPLY-3)
(OBJ SCAN-1 PROG-1)     (APPLY APPLY-3)
(RESULT SCAN-1 FIND-1)  (ACTOR APPLY-3 SYS-1)
(FIND FIND-1)           (OBJ APPLY-3 TRANS-2)
(ACTOR FIND-1 SYS-1)    (RECIP APPLY-3 PROG-1)
(OBJ FIND-1 OPP-1)      (TRANSFORMATION TRANS-2)

The relations are used as plans; their intended effects are interpreted as the goals they achieve. In other words, in order to bring about the state in which both speaker and hearer know that OPP-1 is the purpose of SCAN-1 (and know that they both know it, etc.), the structurer uses Purpose as a plan and tries to satisfy its constraints.
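To make the generative reading concrete, the following is a minimal sketch -- in Python rather than the LISP/NIKL machinery actually used -- of a relation's constraints being matched against the input network. The (BMB S H ...) wrappers are dropped, an explicit ACTION fact is added (in the real system this comes from the type hierarchy), and all of the matching code is an invented illustration, not Penman code:

# Sketch: an RST relation as a plan operator over a proposition set.
# Propositions are tuples like ("ACTOR", "SCAN-1", "SYS-1");
# names starting with "?" are variables.

PURPOSE = {
    "nucleus":   [("ACTION", "?act-1"), ("ACTOR", "?act-1", "?agt-1")],
    "satellite": [("STATE", "?state-1"),
                  ("GOAL", "?agt-1", "?state-1"),
                  ("RESULT", "?act-1", "?act-2"),
                  ("OBJ", "?act-2", "?state-1")],
}

NETWORK = {("ACTION", "SCAN-1"), ("ACTOR", "SCAN-1", "SYS-1"),
           ("OBJ", "SCAN-1", "PROG-1"), ("RESULT", "SCAN-1", "FIND-1"),
           ("OBJ", "FIND-1", "OPP-1"), ("STATE", "OPP-1"),
           ("GOAL", "SYS-1", "OPP-1")}

def unify(pattern, fact, bindings):
    """Extend bindings so that pattern matches fact, or return None."""
    if len(pattern) != len(fact):
        return None
    b = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if b.setdefault(p, f) != f:
                return None
        elif p != f:
            return None
    return b

def satisfy(constraints):
    """All binding sets satisfying every constraint; a constraint with
    no match would instead be posted as a subgoal by the structurer."""
    results = [{}]
    for c in constraints:
        results = [b2 for b in results for fact in NETWORK
                   if (b2 := unify(c, fact, b)) is not None]
    return results

print(satisfy(PURPOSE["nucleus"] + PURPOSE["satellite"]))
# -> [{'?act-1': 'SCAN-1', '?agt-1': 'SYS-1',
#      '?state-1': 'OPP-1', '?act-2': 'FIND-1'}]

Running the matcher yields the single binding set used in the instantiation above, with ?act-2 bound to FIND-1 as the action whose object is the goal state.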
In this system, constraints and goals are interchangeable; for example, in the event that (RESULT SCAN-1 FIND-1) is believed not known by the hearer, satellite constraint 3 of the Purpose relation simply becomes the goal to achieve (BMB S H (RESULT SCAN-1 FIND-1)). Similarly, the propositions

(BMB S H (RESULT SCAN-1 ?ACT-2))
(BMB S H (OBJ ?ACT-2 OPP-1))

are interpreted as the goal to find some element that could legitimately take the place of ?ACT-2.

In order to enable the relations to nest recursively, some relations' nucleuses and satellites contain requirements that specify additional relations, such as examples, contrasts, etc. Of course, these additional requirements may only be included if such material can coherently follow the content of the nucleus or satellite. The question of ordering such additional constituents is still under investigation. The question of whether such additional material should be included at all is not addressed; the structurer tries to say everything it is given.

The structurer produces all coherent paragraphs (that is, coherent as defined by the relations) that satisfy the given goal(s) for any set of input elements. For example, paragraph (b) is produced to satisfy the initial goal (BMB S H (SEQUENCE ASK-1 ?NEXT)). This goal is produced by PEA, together with the appropriate representation elements (ASK-1, SCAN-1, etc.) in response to the question how does the system enhance a program?. Different initial goals will result in different paragraphs. Each paragraph is represented as a tree in which branch points are RST relations and leaves are input elements. Figure 1 is the tree for paragraph (b). It contains the relations Sequence (signalled by "then" and "finally"), Elaboration ("in particular"), and Purpose ("in order to"). In the corresponding paragraph produced by Penman, the relations' characteristic words or phrases (boldfaced below) appear between the blocks of text they relate:

[The system asks the user to tell it the characteristic of the program to be enhanced.](a) Then [the system applies transformations to the program.](b) In particular, [the system scans the program](c) in order to [find opportunities to apply transformations to the program.](d) Then [the system resolves conflicts.](e) [It confirms the enhancement with the user.](f) Finally, [it performs the enhancement.](g)

Figure 2: Hierarchical Planning Structurer (input, update agenda, get next bud, expand bud, grow tree, choose final plan; expansion draws on the RST relations, and the final plan feeds the sentence generator)

6 The Structurer

As stated above, the structurer is a simplified top-down hierarchical expansion planner (see Figure 2). It operates as follows: given one or more communicative goals, it finds RST relations whose intended effects match (some of) these goals; it then inspects which of the input elements match the nucleus and satellite constraints for each relation. Unmatched constraints become subgoals which are posted on an agenda for the next level of planning. The tree can be expanded in either depth-first or breadth-first fashion. Eventually, the structuring process bottoms out when either: (a) all input elements have been used and unsatisfied subgoals remain (in which case the structurer could request more input with desired properties from the encapsulating system); or (b) all goals are satisfied.
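A toy sketch of one such expansion step follows (hypothetical Python; the relation "effects" and element "features" are invented stand-ins for the real constraint matching, and the recursive agenda handling is omitted):

# One expansion step: a relation whose intended effect matches the
# current goal is filled with input elements that satisfy its nucleus
# and satellite slots; leftover elements are what the evaluation
# metric (below) later counts.

RELATIONS = {
    "SEQUENCE": {"effect": "sequence", "nucleus": "event",
                 "satellite": "next-event"},
    "PURPOSE":  {"effect": "purpose", "nucleus": "action",
                 "satellite": "goal-state"},
}

ELEMENTS = [("ASK-1", {"event"}),
            ("APPLY-2", {"event", "next-event", "action"}),
            ("OPP-1", {"goal-state"})]

def expand(goal, available):
    """Yield (tree, leftover) pairs achieving the goal; in the full
    structurer, unfilled slots would become agenda subgoals."""
    for name, rel in RELATIONS.items():
        if rel["effect"] != goal:
            continue
        for i, (nuc, nf) in enumerate(available):
            if rel["nucleus"] not in nf:
                continue
            rest = available[:i] + available[i + 1:]
            for j, (sat, sf) in enumerate(rest):
                if rel["satellite"] in sf:
                    yield (name, nuc, sat), rest[:j] + rest[j + 1:]

for tree, leftover in expand("sequence", ELEMENTS):
    print(tree, "unused:", [n for n, _ in leftover])
# -> ('SEQUENCE', 'ASK-1', 'APPLY-2') unused: ['OPP-1']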
If more than one plan (i.e., paragraph tree structure) is produced, the results are ordered by preferring trees with the minimum number of unused input elements and the minimum number of remaining unsatisfied subgoals. The best tree is then traversed in left-to-right order; leaves provide input to Penman to be generated in English, and relations at branch points provide typical interclausal relation words or phrases. In this way the structurer performs top-down goal refinement down to the level of the input elements.

7 Shortcomings and Further Work

This work is also being tested in a completely separate domain: the generation of text in a multimedia system that answers database queries. Penman produces the following description of the ship Knox (where CTG 070.10 designates a group of ships):

(c). Knox is en route in order to rendezvous with CTG 070.10, arriving in Pearl Harbor on 4/24, for port visit until 4/30.

In this text, each clause (en route, rendezvous, arrive, visit) is a separate input element; the structurer linked them using the relations Sequence and Purpose (the same Purpose as shown above; it is signalled by "in order to"). However, Penman can also be made to produce

(d). Knox is en route in order to rendezvous with CTG 070.10. It will arrive in Pearl Harbor on 4/24. It will be on port visit until 4/30.

The problem is clear: how should sentences in the paragraph be scoped? At present, avoiding any claims about a theory, the structurer can feed Penman either extreme: make everything one sentence, or make each input element a separate sentence. However, neither extreme is satisfactory; as is clear from paragraph (b), "short" spans of text can be linked and "long" ones left separate. A simple way to implement this is to count the number of leaves under each branch (nucleus or satellite) in the paragraph structure tree; a sketch of this test appears at the end of this section.

Another shortcoming is the treatment of input elements as indivisible entities. This shortcoming is a result of factoring out the problem of aggregation as a separate text planning task. Chunking together input elements (to eliminate detail) or taking them apart (to be more detailed) has received scant mention -- see [Hovy 87], and for the related problem of paraphrase see [Schank 75] -- but this task should interact with text structuring in order to provide text that is both optimally detailed and coherent.

At the present time, only about 20% of the RST relations have been formalized to the extent that they can be used by the structurer. This formalization process is difficult, because it goes hand-in-hand with the development of terms with which to characterize the relations' goals/constraints. Though the formalization can never be completely finalized -- who can hope to represent something like motivation or justification complete with all ramifications? -- the hope is that, by having the requirements stated in rather basic terms, the relations will be easily adaptable to any new representation scheme and domain. (It should be noted, of course, that, to be useful, these formalizations need only be as specific and as detailed as the domain model and representation requires.) In addition, the availability of a set of communicative goals more detailed than just say or ask (for example) should make it easier for programs that require output text to interface with the generator. This is one focus of current text planning work at ISI.
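As an illustration of the two heuristics mentioned above -- the tree-preference metric of Section 6 and the leaf-count scoping test -- here is a minimal Python sketch; it is not Penman code, and the threshold value is an invented assumption:

# Trees are ranked by (unused elements, open subgoals); a branch is
# realized as one sentence only if it spans few leaves.

def tree_score(candidate):
    return (len(candidate["unused"]), len(candidate["open_subgoals"]))

def best(candidates):
    return min(candidates, key=tree_score)

def leaves(node):
    if isinstance(node, str):        # a leaf: an input element
        return [node]
    _, nucleus, satellite = node     # (relation, nucleus, satellite)
    return leaves(nucleus) + leaves(satellite)

def one_sentence(node, threshold=3):
    # threshold is invented; the paper leaves the cutoff open
    return len(leaves(node)) <= threshold

tree = ("PURPOSE", "SCAN-1", ("ELABORATION", "FIND-1", "OPP-1"))
print(leaves(tree), one_sentence(tree))
# -> ['SCAN-1', 'FIND-1', 'OPP-1'] True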
8 Acknowledgments

For help with Penman, Robert Albano, John Bateman, Bob Kasper, Christian Matthiessen, Lynn Poulton, and Richard Whitney. For help with the input, Bill Mann and Johanna Moore. For general comments, all the above, and Cecile Paris, Stuart Shapiro, and Norm Sondheimer.

9 References

1. Appelt, D.E., 1987a. A Computational Model of Referring, SRI Technical Note 409.
2. Appelt, D.E., 1987b. Towards a Plan-Based Theory of Referring Actions, in Natural Language Generation: Recent Advances in Artificial Intelligence, Psychology, and Linguistics, Kempen, G. (ed), (Kluwer Academic Publishers, Boston) 63-70.
3. Aristotle, 1954. The Rhetoric, in The Rhetoric and the Poetics of Aristotle, W. Rhys Roberts (trans), (Random House, New York).
4. Brachman, R.J., 1978. A Structural Paradigm for Representing Knowledge, Ph.D. dissertation, Harvard University; also BBN Research Report 3605.
5. Cohen, P.R. & Levesque, H.J., 1985. Speech Acts and Rationality, Proceedings of the ACL Conference, Chicago (49-59).
6. Davis, R., 1976. Applications of Meta-Level Knowledge to the Construction, Maintenance, and Use of Large Knowledge Bases, Ph.D. dissertation, Stanford University.
7. Grimes, J.E., 1975. The Thread of Discourse (Mouton, The Hague).
8. Hobbs, J.R., 1978. Why is Discourse Coherent?, SRI Technical Note 176.
9. Hobbs, J.R., 1979. Coherence and Coreference, in Cognitive Science 3(1), 67-90.
10. Hobbs, J.R., 1982. Coherence in Discourse, in Strategies for Natural Language Processing, Lehnert, W.G. & Ringle, M.H. (eds), (Lawrence Erlbaum Associates, Hillsdale NJ) 223-243.
11. Hovy, E.H., 1986. Putting Affect into Text, Proceedings of the Cognitive Science Society Conference, Amherst (669-671).
12. Hovy, E.H., 1987. Interpretation in Generation, Proceedings of the AAAI Conference, Seattle (545-549).
13. Kaczmarek, T.S., Bates, R. & Robins, G., 1986. Recent Developments in NIKL, Proceedings of the AAAI Conference, Philadelphia (978-985).
14. Mann, W.C., 1983. An Overview of the Nigel Text Generation Grammar, USC/Information Sciences Institute Research Report RR-83-113.
15. Mann, W.C. & Matthiessen, C.M.I.M., 1983. Nigel: A Systemic Grammar for Text Generation, USC/Information Sciences Institute Research Report RR-83-105.
16. Mann, W.C. & Thompson, S.A., 1983. Relational Propositions in Discourse, USC/Information Sciences Institute Research Report RR-83-115.
17. Mann, W.C. & Thompson, S.A., 1986. Rhetorical Structure Theory: Description and Construction of Text Structures, in Natural Language Generation: New Results in Artificial Intelligence, Psychology, and Linguistics, Kempen, G. (ed), (Kluwer Academic Publishers, Dordrecht, Boston MA) 279-300.
18. Mann, W.C. & Thompson, S.A., 1987. Rhetorical Structure Theory: A Theory of Text Organization, USC/Information Sciences Institute Research Report RR-87-190.
19. Matthiessen, C.M.I.M., 1984. Systemic Grammar in Computation: the Nigel Case, USC/Information Sciences Institute Research Report RR-84-121.
20. McKeown, K.R., 1982. Generating Natural Language Text in Response to Questions about Database Queries, Ph.D. dissertation, University of Pennsylvania.
21. Moore, J.D., 1988. Enhanced Explanations in Expert and Advice-Giving Systems, USC/Information Sciences Institute Research Report (forthcoming).
22. Sacerdoti, E., 1977. A Structure for Plans and Behavior (North-Holland, Amsterdam).
23. Schank, R.C., 1975. Conceptual Information Processing (North-Holland, Amsterdam).
24. Schmolze, J.G. & Lipkis, T.A., 1983. Classification in the KL-ONE Knowledge Representation System, Proceedings of the IJCAI Conference, Karlsruhe (330-332).
25. Shepherd, H.R., 1926. The Fine Art of Writing (The Macmillan Co, New York).
26. Shortliffe, E.H., 1976. Computer-Based Medical Consultations: MYCIN.
A Practical Nonmonotonic Theory for Reasoning about Speech Acts

Douglas Appelt, Kurt Konolige
Artificial Intelligence Center and
Center for the Study of Language and Information
SRI International
Menlo Park, California

Abstract

A prerequisite to a theory of the way agents understand speech acts is a theory of how their beliefs and intentions are revised as a consequence of events. This process of attitude revision is an interesting domain for the application of nonmonotonic reasoning because speech acts have a conventional aspect that is readily represented by defaults, but that interacts with an agent's beliefs and intentions in many complex ways that may override the defaults. Perrault has developed a theory of speech acts, based on Reiter's default logic, that captures the conventional aspect; it does not, however, adequately account for certain easily observed facts about attitude revision resulting from speech acts. A natural theory of attitude revision seems to require a method of stating preferences among competing defaults. We present here a speech act theory, formalized in hierarchic autoepistemic logic (a refinement of Moore's autoepistemic logic), in which revision of both the speaker's and hearer's attitudes can be adequately described. As a collateral benefit, efficient automatic reasoning methods for the formalism exist. The theory has been implemented and is now being employed by an utterance-planning system.

1 Introduction

The general idea of utterance planning has been at the focus of much NL processing research for the last ten years. The central thesis of this approach is that utterances are actions that are planned to satisfy particular speaker goals. This has led researchers to formalize speech acts in a way that would permit them to be used as operators in a planning system [1,2]. The central problem in formalizing speech acts is to correctly capture the pertinent facts about the revision of the speaker's and hearer's attitudes that ensues as a consequence of the act. This turns out to be quite difficult because the results of the attitude revision are highly conditional upon the context of the utterance.

To consider just a small number of the contingencies that may arise, consider a speaker S uttering a declarative sentence with propositional content P to hearer H. One is inclined to say that, if H believes S is sincere, H will believe P. However, if H believes ¬P initially, he may not be convinced, even if he thinks S is sincere. On the other hand, he may change his beliefs, or he may suspend belief as to whether P is true. H may not believe ¬P, but simply believe that S is neither competent nor sincere, and so may not come to believe P. The problem one is then faced with is this: How does one describe the effect of uttering the declarative sentence so that, given the appropriate contextual elements, any one of these possibilities can follow from the description?

One possible approach to this problem would be to find some fundamental, context-independent effect of informing that is true every time a declarative sentence is uttered. If one's general theory of the world and of rational behavior were sufficiently strong and detailed, any of the consequences of attitude revision would be derivable from the basic effect in combination with the elaborate theory of rationality.
The initial efforts made along this path [3,5] entailed the axiomatization of the effects of speech acts as producing in the hearer the belief that the speaker wants him to recognize the latter's intention to hold some other belief. The effects were characterized by nestings of Goal and Bel operators, as in Bel(H, Goal(S, Bel(H, P))). If the right conditions for attitude revision obtained, the conclusion Bel(H, P) would follow from the above assumption.

This general approach proved inadequate because there is in fact no such statement about beliefs about goals about beliefs that is true in every performance of a speech act. It is possible to construct a counterexample contradicting any such effect that might be postulated. In addition, long and complicated chains of reasoning are required to derive the simplest, most basic consequences of an utterance in situations in which all of the "normal" conditions obtain -- a consequence that runs counter to one's intuitive expectations.

Cohen and Levesque [4] developed a speech act theory in a monotonic modal logic that incorporates context-dependent preconditions in the axioms that state the effects of a speech act. Their approach overcomes the theoretical difficulties of earlier context-independent attempts; however, if one desires to apply their theory in a practical computational system for reasoning about speech acts, one is faced with serious difficulties. Some of the context-dependent conditions that determine the effects of a speech act, according to their theory, involve statements about what an agent does not believe, as well as what he does believe. This means that for conclusions about the effect of speech acts to follow from the theory, it must include an explicit representation of an agent's ignorance as well as of his knowledge, which in practice is difficult or even impossible to achieve.

A further complication arises from the type of reasoning necessary for adequate characterization of the attitude revision process. A theory based on monotonic reasoning can only distinguish between belief and lack thereof, whereas one based on nonmonotonic reasoning can distinguish between belief (or its absence) as a consequence of known facts, and belief that follows as a default because more specific information is absent. To the extent that such a distinction plays a role in the attitude revision process, it argues for a formalization with a nonmonotonic character.

Our research is therefore motivated by the following observations: (1) earlier work demonstrates convincingly that any adequate speech-act theory must relate the effects of a speech act to context-dependent preconditions; (2) these preconditions must depend on the ignorance as well as on the knowledge of the relevant agents; (3) any practical system for reasoning about ignorance must be based on nonmonotonic reasoning; (4) existing speech act theories based on nonmonotonic reasoning cannot account for the facts of attitude revision resulting from the performance of speech acts.

2 Perrault's Default Theory of Speech Acts

As an alternative to monotonic theories, Perrault has proposed a theory of speech acts, based on Reiter's default logic [11], extended to include default-rule schemata. We shall summarize Perrault's theory briefly as it relates to informing and belief. The notation p ⇒ q is intended as an abbreviation of the default rule of inference (p : Mq) / q. Default theories of this form are called normal.
Every normal default theory has at least one extension, i.e., a mutually consistent set of sentences sanctioned by the theory.

The operator Bx,t represents agent x's beliefs at time t and is assumed to possess all the properties of the modal system weak S5 (that is, S5 without the schema Bx,tφ ⊃ φ), plus the following axioms:

Persistence: Bx,t+1Bx,tP ⊃ Bx,t+1P (1)
Memory: Bx,tP ⊃ Bx,t+1Bx,tP (2)
Observability: Dox,t(a) ∧ Doy,t(Obs(Dox,t(a))) ⊃ By,t+1Dox,t(a) (3)
Belief Transfer: Bx,tBy,tP ⇒ Bx,tP (4)
Declarative: Dox,t(Utter(P)) ⇒ Bx,tP (5)

In addition, there is a default-rule schema stating that, if p ⇒ q is a default rule, then so is Bx,tp ⇒ Bx,tq for any agent x and time t.

Perrault could demonstrate that, given his theory, there is an extension containing all of the desired conclusions regarding the beliefs of the speaker and hearer, starting from the fact that a speaker utters a declarative sentence and the hearer observes him uttering it. Furthermore, the theory can make correct predictions in cases in which the usual preconditions of the speech act do not obtain. For example, if the speaker is lying, but the hearer does not recognize the lie, then the hearer's beliefs are exactly the same as when the speaker tells the truth; moreover the speaker's beliefs about mutual belief are the same, but he still does not believe the proposition he uttered -- that is, he fails to be convinced by his own lie.

3 Problems with Perrault's Theory

A serious problem arises with Perrault's theory concerning reasoning about an agent's ignorance. His theory predicts that a speaker can convince himself of any unsupported proposition simply by asserting it, which is clearly at odds with our intuitions. Suppose that it is true of speaker s that ¬Bs,tP. Suppose furthermore that, for whatever reason, s utters P. In the absence of any further information about the speaker's and hearer's beliefs, it is a consequence of axioms (1)-(5) that Bs,t+1Bh,t+1P. From this consequence and the belief transfer rule (4) it is possible to conclude Bs,t+1P. The strongest conclusion that can be derived about s's beliefs at t+1 without using this default rule is Bs,t+1¬Bs,tP, which is not sufficient to override the default.
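The offending derivation is easy to reproduce mechanically. Below is a toy sketch (hypothetical Python, not Perrault's formalism): beliefs are nested tuples, time indices are dropped, and consistency checking is reduced to looking for an explicit syntactic negation:

# ("B", "s", P) means the speaker believes P; ("not", P) is negation.
# The second input fact stands in for what axioms (1)-(3) derive.

def neg(p):
    return p[1] if p[0] == "not" else ("not", p)

def close(facts):
    """Apply defaults (4) and (5) until no new consistent conclusion."""
    facts = set(facts)
    while True:
        new = set()
        for f in facts:
            if f[0] == "Utter":                    # Declarative (5)
                new.add(("B", f[1], f[2]))
            if f[0] == "B" and f[2][0] == "B":     # Belief Transfer (4)
                new.add(("B", f[1], f[2][2]))
        new = {c for c in new - facts if neg(c) not in facts}
        if not new:
            return facts
        facts |= new

P = ("p",)
print(close({("Utter", "s", P), ("B", "s", ("B", "h", P))}))
# ("B", "s", P) appears in the output: nothing blocks it, even for
# an agnostic speaker -- the self-convincing problem described above.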
This problem does not admit of any simple fixes. One clearly does not want an axiom or default rule of the form that asserts what amounts to "ignorance persists" to defeat conclusions drawn from speech acts. In that case, one could never conclude that anyone ever learns anything as a result of a speech act. The alternative is to weaken the conditions under which the default rules can be defeated. However, by adopting this strategy we are giving up the advantage of using normal defaults. In general, nonnormal default theories do not necessarily have extensions, nor is there any proof procedure for such logics.

Perrault has intentionally left open the question of how a speech act theory should be integrated with a general theory of action and belief revision. He finesses this problem by introducing the persistence axiom, which states that beliefs always persist across changes in state. Clearly this is not true in general, because actions typically change our beliefs about what is true of the world. Even if one considers only speech acts, in some cases one can get an agent to change his beliefs by saying something, and in other cases not. Whether one can or not, however, depends on what belief revision strategy is adopted by the respective agents in a given situation. The problem cannot be solved by simply adding a few more axioms and default rules to the theory. Any theory that allows for the possibility of describing belief revision must of necessity confront the problem of inconsistent extensions. This means that, if a hearer initially believes ¬p, the default theory will have (at least) one extension for the case in which his belief that ¬p persists, and one extension in which he changes his mind and believes p. Perhaps it will even have an extension in which he suspends belief as to whether p.

The source of the difficulties surrounding Perrault's theory is that the default logic he adopts is unable to describe the attitude revision that occurs in consequence of a speech act. It is not our purpose here to state what an agent's belief revision strategy should be. Rather we introduce a framework within which a variety of belief revision strategies can be accommodated efficiently, and we demonstrate that this framework can be applied in a way that eliminates the problems with Perrault's theory.

Finally, there is a serious practical problem faced by anyone who wishes to implement Perrault's theory in a system that reasons about speech acts. There is no way the belief transfer rule can be used efficiently by a reasoning system, even if it is assumed that its application is restricted to the speaker and hearer, with no other agents in the domain involved. If it is used in a backward direction, it applies to its own result. Invoking the rule in a forward direction is also problematic, because in general one agent will have a very large number of beliefs (even an infinite number, if introspection is taken into account) about another agent's beliefs, most of which will be irrelevant to the problem at hand.

4 Hierarchic Autoepistemic Logic

Autoepistemic (AE) logic was developed by Moore [10] as a reconstruction of McDermott's nonmonotonic logic [9]. An autoepistemic logic is based on a first-order language augmented by a modal operator L, which is interpreted intuitively as self-belief. A stable expansion (analogous to an extension of a default theory) of an autoepistemic base set A is a set of formulas T satisfying the following conditions:

1. T contains all the sentences of the base theory A
2. T is closed under first-order consequence
3. If φ ∈ T, then Lφ ∈ T
4. If φ ∉ T, then ¬Lφ ∈ T

Hierarchic autoepistemic logic (HAEL) was developed in response to two deficiencies of autoepistemic logic, when the latter is viewed as a logic for automated nonmonotonic reasoning. The first is a representational problem: how to incorporate preferences among default inferences in a natural way within the logic. Such preferences arise in many disparate settings in nonmonotonic reasoning -- for example, in taxonomic hierarchies [6] or in reasoning about events over time [12]. To some extent, preferences among defaults can be encoded in AE logic by introducing auxiliary information into the statements of the defaults, but this method does not always accord satisfactorily with our intuitions. The most natural statement of preferences is with respect to the multiple expansions of a particular base set; that is, we prefer certain expansions because the defaults used in them have a higher priority than the ones used in alternative expansions.

The second problem is computational: how to tell whether a proposition is contained within the desired expansion of a base set.
As can be seen from the above definition, a stable expansion of an autoepistemic theory is defined as a fixedpoint; the question of whether a formula belongs to this fixedpoint is not even semidecidable. This problem is shared by all of the most popular nonmonotonic logics. The usual recourse is to restrict the expressive power of the language, e.g., normal default theories [11] and separable circumscriptive theories [8]. However, as exemplified by the difficulties of Perrault's approach, it may not be easy or even possible to express the relevant facts with a restricted language.

Hierarchic autoepistemic logic is a modification of autoepistemic logic that addresses these two considerations. In HAEL, the primary structure is not a single uniform theory, but a collection of subtheories linked in a hierarchy. Subtheories represent different sources of information available to an agent, while the hierarchy expresses the way in which this information is combined. For example, in representing taxonomic defaults, more specific information would take precedence over general attributes. HAEL thus permits a natural expression of preferences among defaults. Furthermore, given the hierarchical nature of the subtheory relation, it is possible to give a constructive semantics for the autoepistemic operator, in contrast to the usual self-referential fixedpoints. We can then arrive easily at computational realizations of the logic.

The language of HAEL consists of a standard first-order language, augmented by an indexed set of unary modal operators Li. If φ is any sentence (containing no free variables) of the first-order language, then Liφ is also a sentence. Note that neither nesting of modal operators nor quantifying into a modal context is allowed. Sentences without modal operators are called ordinary.

An HAEL structure τ consists of an indexed set of subtheories τi, together with a partial order on the set. We write τi ≺ τj if τi precedes τj in the order. Associated with every subtheory τi is a base set Ai, the initial sentences of the structure. Within Ai, the occurrence of Lj is restricted by the following condition:

If Lj occurs positively (negatively) in Ai, then τj ≼ τi (τj ≺ τi). (6)
So we are justified in speaking of "the" theory of an HAEL structure and, from this point on, we shall identify the subtheory r~ of a structure with the set of sentences in the complex stable expansion for that subtheory. Here is a simple example, which can be inter- preted as the standard "typically birds fly" default i74 scenario by letting F(z) be "z flies," B(z) be "z is a bird," and P(z) be "z is a penguin." Ao -- {P(a), B(a)} AI - {LIP(a) A",LoF(a) D -,F(a)} A2 -" {L2B(a) A ",LI-,F(a) D F(a)} (7) Theory r0 contains all of the first-order con- sequences of P(a), B(a), LoP(a), and LoB(a). -~LoF(a) is not in r0, hut it is in rl, as is LoP(a), -LooP(a), etc. Note that P(a) is inherited by rl; hence L1P(a) is in rl. Given this, by first- order closure ",F(a) is in rl and, by inheritance, LI",F(a) is in r2, so that F(a) cannot be derived there. On the other hand, r2 inherits ",F(a) from rl. Note from this example that information present in the lowest subtheories of the hierarchy percolates to its top. More specific evidence, or preferred defaults, should be placed lower in the hierarchy, so that their effects will block the action of higher-placed evidence or defaults. HAEL can be given a constructive semantics that is in accord with the closure conditions. W'hen the inference procedure of each subtheory is decidable, an obvious decidable proof method for the logic exists. The details of this develop- ment are too complicated to be included here, but are described by Konolige [7]. For the rest of this paper, we shall use a propositional base language; the derivations can be readily checked. 5 A HAEL Theory of Speech Acts We demonstrate here how to construct a hierarchic autoepistemic theory of speech acts. We assume that there is a hierarchy of autoepisternic subthe- ories as illustrated in Figure i. The lowest subthe- ory, ~'0, contains the strongest evidence about the speaker's and hearer's mental states. For exam- ple, if it is known to the hearer that the speaker is lying, this information goes into r0. In subtheory vl, defaults are collected about the effects of the speech act on the beliefs of both hearer and speaker. These defaults can be over- ridden by the particular evidence of r0. Together r0 and rl constitute the first level of reasoning about the speech act. At Level 2, the beliefs of the speaker and hearer that can be deduced in rl are used as evidence to guide defaults about nested beliefs, that is, the speaker's beliefs about the heater's beliefs, and vice versa. These results are collected in r2. In a similar manner, successive levels contain the result of one agent's reflection upon his and his interlocutor's beliefs and inten- tions at the next lower level. We shall discuss here how Levels r0 and rl of the HAEL theory are ax- iomatized, and shall extend the axiomatization to the higher theories by means of axiom schemata. An agent's belief revision strategy is represented by two features of the model. The position of the speech act theory in the general hierarchy of theories determines the way in which conclusions drawn in those theories can defeat conclusions that follow from speech acts. In our model, the speech act defaults will go into the subtheory rl, while evidence that will be used to defeat these defaults will go in r0. In addition, the axioms that relate rl to r0 determine precisely what each agent is willing to accept from 1"0 as evidence against the default conclusions of the speech act theory. 
It is easy to duplicate the details of Perrault's analysis within this framework. Theory τ0 would contain all the agents' beliefs prior to the speech act, while the defaults of τ1 would state that an agent believed the utterance P if he did not believe its negation in τ0. As we have noted, this analysis does not allow for the situation in which the speaker utters P without believing either it or its opposite, and then becomes convinced of its truth by the very fact of having uttered it -- nor does it allow the hearer to change his belief in ¬P as a result of the utterance.

We choose a more complicated and realistic expression of belief revision. Specifically, we allow an agent to believe P (in τ1) by virtue of the utterance of P only if he does not have any evidence (in τ0) against believing it. Using this scheme, we can accommodate the hearer's change of belief, and show that the speaker is not convinced by his own efforts.

We now present the axioms of the HAEL theory for the declarative utterance of the proposition P. The language we use is a propositional modal one for the beliefs of the speaker and hearer. Agents s and h represent the speaker and hearer; the subscripts i and f represent the initial situation and the situation resulting from the utterance, respectively. There are two operators: [a] for a's belief and {a} for a's goals. The formula [hf]φ, for example, means that the hearer believes φ in the final situation, while {si}φ means that the speaker intended φ in the initial situation. In addition, we use a phantom agent u to represent the content of the utterance and certain assumptions about the speaker's intentions. We do not argue here as to what constitutes the correct logic of these operators; a convenient one is weak S5.

The following axioms are assumed to hold in all subtheories.

[u]P, P the propositional content of the utterance (8)
[u]φ ⊃ [u]{si}[hf]φ (9)
[a]{a}φ ≡ {a}φ, where a is any agent in any situation (10)

The contents of the u theory are essentially the same for all types of speech acts. The precise effects upon the speaker's and hearer's mental states are determined by the propositional content of the utterance and its mood. We assume here that the speaker utters a simple declarative sentence (Axiom 8), although a similar analysis could be done for other types of sentences, given a suitable representation of their propositional content. Propositions that are true in u generally become believed by the speaker and hearer in τ1, provided that these propositions bear the proper relationship to their beliefs in τ0. Finally, the speaker intends to bring about each of the beliefs the hearer acquires in τ1, also subject to the caveat that it is consistent with his beliefs in τ0.

Relation between subtheories:

τ0 ≺ τ1 (11)

Speaker's beliefs as a consequence of the speech act:

in A1: [u]φ ∧ ¬L0¬[sf]φ ⊃ [sf]φ (12)
Figure 1: A Hierarchic Autoepistemic Theory

Hearer's beliefs as a consequence of the speech act:

in A1: ([u]φ ∧ ¬L0¬[hf]φ ∧ ¬L0[hf]¬[sf]φ ∧ ¬L0[hf]¬{si}[hf]φ) ⊃ [hf]φ (13)

The asymmetry between Axioms 12 and 13 is a consequence of the fact that a speech act has different effects on the speaker's and hearer's mental states. The intuition behind these axioms is that a speech act should never change the speaker's mental attitudes with regard to the proposition he utters. If he utters a sentence, regardless of whether he is lying, or in any other way insincere, he should believe P after the utterance if and only if he believed it before. However, in the hearer's case, whether he believes P depends not only on his prior mental state with respect to P, but also on whether he believes that the speaker is being sincere. Axiom 13 states that a hearer is willing to believe what a speaker says if it does not conflict with his own beliefs in τ0, and if the utterance does not conflict with what the hearer believes about the speaker's mental state (i.e., that the speaker is not lying), and if he believes that believing P is consistent with his beliefs about the speaker's prior intentions (i.e., that the speaker is using the utterance with communicative intent, as distinct from, say, testing a microphone).

As a first example of the use of the theory, consider the normal case in which A0 contains no evidence about the speaker's and hearer's beliefs after the speech act. In this event, A0 is empty and A1 contains Axioms 8-13. By the inheritance conditions, τ1 contains ¬L0¬[sf]P, and so must contain [sf]P by Axiom 12. Similarly, from Axiom 13 it follows that [hf]P is in τ1. Further derivations lead to {si}[hf]P, {si}[hf]{si}[hf]P, and so on.

As a second example, consider the case in which the speaker utters P, perhaps to convince the hearer of it, but does not himself believe either P or its negation. In this case, τ0 contains ¬[sf]P and ¬[sf]¬P, and τ1 must contain L0¬[sf]P by the inheritance condition. Hence, the application of Axiom 12 will be blocked, and so we cannot conclude in τ1 that the speaker believes P. On the other hand, since none of the antecedents of Axiom 13 are affected, the hearer does come to believe it.

Finally, consider belief revision on the part of the hearer. The precise path belief revision takes depends on the contents of τ0. If we consider the hearer's belief to be stronger evidence than that of the utterance, we would transfer the hearer's initial belief [hi]¬P to [hf]¬P in τ0, and block the default Axiom 13. But suppose the hearer does not believe ¬P strongly in the initial situation. Then we would transfer (by default) the belief [hi]¬P to a subtheory higher than τ1, since the evidence furnished by the utterance is meant to override the initial belief. Thus, by making the proper choices regarding the transfer of initial beliefs in various subtheories, it becomes possible to represent the revision of the hearer's beliefs.
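These cases can be traced with a small sketch (hypothetical Python; atoms such as "sf:P" are invented shorthand for formulas like [sf]P, and each L0 query is reduced to a membership test on the evidence in τ0):

# tau0 holds prior evidence; Axioms 12 and 13 fire unless a blocking
# sentence is found there. Only the new conclusions are returned.

def speech_act(P, tau0):
    tau1 = set(tau0)                                 # inheritance
    if f"~sf:{P}" not in tau0:                       # Axiom 12
        tau1.add(f"sf:{P}")
    if (f"~hf:{P}" not in tau0 and                   # hearer's own beliefs
            f"hf:~sf:{P}" not in tau0 and            # "the speaker is lying"
            f"hf:~si-intends-hf:{P}" not in tau0):   # no communicative intent
        tau1.add(f"hf:{P}")                          # Axiom 13
    return tau1 - tau0

print(speech_act("P", set()))          # normal case: {'sf:P', 'hf:P'}
print(speech_act("P", {"~sf:P"}))      # agnostic speaker: {'hf:P'} only
print(speech_act("P", {"~hf:P"}))      # hearer believes ~P: {'sf:P'} only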
This theory of speech acts has been presented with respect to declarative sentences and representative speech acts. To analyze imperative sentences and directive speech acts, it is clear in what direction one should proceed, although the required augmentation to the theory is quite complex. The change in the utterance theory that is brought about by an imperative sentence is the addition of the belief that the speaker intends the hearer to bring about the propositional content of the utterance. That would entail substituting the following effect for that stated by Axiom 8:

[u]{sf}P, P the propositional content of the utterance (14)

One then needs to axiomatize a theory of intention revision as well as belief revision, which entails describing how agents adopt and abandon intentions, and how these intentions are related to their beliefs about one another. Cohen and Levesque have advanced an excellent proposal for such a theory [4], but any discussion of it is far beyond the scope of this article.

6 Reflecting on the Theory

When agents perform speech acts, not only are their beliefs about the uttered proposition affected, but also their beliefs about one another, to arbitrary levels of reflection.

If a speaker reflects on what a hearer believes about the speaker's own beliefs, he takes into account not only the beliefs themselves, but also what he believes to be the hearer's belief revision strategy, which, according to our theory, is reflected in the hierarchical relationship among the theories. Therefore, reflection on the speech-act-understanding process takes place at higher levels of the hierarchy illustrated in Figure 1. For example, if Level 1 represents the speaker's reasoning about what the hearer believes, then Level 2 represents the speaker's reasoning about the hearer's beliefs about what the speaker believes.

In general, agents may have quite complicated theories about how other agents apply defaults. The simplest assumption we can make is that they reason in a uniform manner, exactly the same as the way we axiomatized Level 1. Therefore, we extend the analysis just presented to arbitrary reflection of agents on one another's beliefs by proposing axiom schemata for the speaker's and hearer's beliefs at each level, of which Axioms 12 and 13 are the Level 1 instances. We introduce a schematic operator [(s,h)n], which can be thought of as n levels of alternation of s's and h's beliefs about each other. This is stated more precisely as

[(s,h)n]φ ≡ [s][h][s][h]...[s][h]φ  (n times) (15)

Then, for example, Axiom 12 can be restated as the general schema

in An+1: ([u]φ ∧ ¬Ln[(hf,sf)n]¬[sf]φ) ⊃ [(hf,sf)n][sf]φ (16)

7 Conclusion

A theory of speech acts based on default reasoning is elegant and desirable. Unfortunately, the only existing proposal that explains how this should be done suffers from three serious problems: (1) the theory makes some incorrect predictions; (2) the theory cannot be integrated easily with a theory of action; (3) there seems to be no efficient implementation strategy. The problems stem from the theory's formulation in normal default logic. We have demonstrated how these difficulties can be overcome by formulating the theory instead in a version of autoepistemic logic that is designed to combine reasoning about belief with autoepistemic reasoning. Such a logic makes it possible to formalize a description of the agents' belief revision processes that can capture observed facts about attitude revision correctly in response to speech acts. This theory has been tested and implemented as a central component of the GENESYS utterance-planning system.
Acknowledgements

This research was supported in part by a contract with the Nippon Telegraph and Telephone Corporation, in part by the Office of Naval Research under Contract N00014-85-C-0251, and in part under subcontract with Stanford University under Contract N00039-84-C-0211 with the Defense Advanced Research Projects Agency. The original draft of this paper has been substantially improved by comments from Phil Cohen, Shozo Naito, and Ray Perrault. The authors are also grateful to the participants in the Artificial Intelligence Principia seminar at Stanford for providing their stimulating discussion of these and related issues.

References

[1] Douglas E. Appelt. Planning English Sentences. Cambridge University Press, Cambridge, England, 1985.
[2] Philip R. Cohen. On Knowing What to Say: Planning Speech Acts. PhD thesis, University of Toronto, 1978.
[3] Philip R. Cohen and H. Levesque. Speech acts and rationality. In Proceedings of the 23rd Annual Meeting, pages 49-59, Association for Computational Linguistics, 1985.
[4] Philip R. Cohen and H. Levesque. Rational Interaction as the Basis for Communication. Technical Report, Center for the Study of Language and Information, 1987.
[5] Philip R. Cohen and C. Raymond Perrault. Elements of a plan-based theory of speech acts. Cognitive Science, 3:177-212, 1979.
[6] D. W. Etherington and R. Reiter. On inheritance hierarchies with exceptions. In Proceedings of AAAI, 1983.
[7] Kurt Konolige. A Hierarchic Autoepistemic Logic. Forthcoming technical note, 1988.
[8] Vladimir Lifschitz. Computing circumscription. In Proceedings of AAAI, pages 121-127, 1985.
[9] Drew McDermott. Nonmonotonic logic II: nonmonotonic modal theories. Journal of the Association for Computing Machinery, 29(1):33-57, 1982.
[10] Robert C. Moore. Semantical considerations on nonmonotonic logic. Artificial Intelligence, 25(1), 1985.
[11] Raymond Reiter. A logic for default reasoning. Artificial Intelligence, 13, 1980.
[12] Yoav Shoham. Reasoning about Change: Time and Causation from the Standpoint of Artificial Intelligence. MIT Press, Cambridge, Massachusetts, 1987.
TWO TYPES OF PLANNING IN LANGUAGE GENERATION

Eduard H. Hovy
USC/Information Sciences Institute
4676 Admiralty Way, Suite 1001
Marina del Rey, CA 90292-6695, U.S.A.
[email protected]

Abstract

As our understanding of natural language generation has increased, a number of tasks have been separated from realization and put together under the heading "text planning". So far, however, no-one has enumerated the kinds of tasks a text planner should be able to do. This paper describes the principal lesson learned in combining a number of planning tasks in a planner-realiser: planning and realization should be interleaved, in a limited-commitment planning paradigm, to perform two types of planning: prescriptive and restrictive. Limited-commitment planning consists of both prescriptive (hierarchical expansion) planning and of restrictive planning (selecting from options with reference to the status of active goals). At present, existing text planners use prescriptive plans exclusively. However, a large class of planner tasks, especially those concerned with the pragmatic (non-literal) content of text such as style and slant, is most easily performed under restrictive planning. The kinds of tasks suited to each planning style are listed, and a program that uses both styles is described.

1 Introduction

PAULINE (Planning And Uttering Language In Natural Environments) is a language generation program that is able to realize a given input in a number of different ways, depending on how its pragmatic (interpersonal and situation-specific) goals are set by the user. The program consists of over 12,000 lines of T, a dialect of LISP developed at Yale University.

(This work was done while the author was at the Yale University Computer Science Department, New Haven. This work was supported in part by the Advanced Research Projects Agency monitored by the Office of Naval Research under contract N00014-82-K-0149. It was also supported by AFOSR contract F49620-87-C-0005.)

PAULINE addresses simultaneously a wider range of problems than has been tried in any single language generation program before (with the possible exception of [Clippinger 74]). As is to be expected, no part of PAULINE provides a satisfactorily detailed solution to any problem; to a larger or smaller degree, each of the questions it addresses is solved by a set of simplified, somewhat ad hoc methods. However, this is not to say that the program does not provide some interesting insights about the nature of language generation and the way that generators of the future will have to be structured.

One insight pertains to the problems encountered when the various tasks of generation -- both of text planning and of realization -- are interleaved to provide planning-on-demand rather than strict top-down planning (which has been the approach taken so far). The planning tasks that are best performed on demand tend to have short-range effects on the text (compared to those best performed in full before realization). In order to achieve the types of communicative goals such tasks usually serve, the planner must ensure that they work together harmoniously so that their effects support one another rather than conflict. This requirement imposes constraints on the organization and architecture of a generation system. This paper describes PAULINE's architecture, the text planning tasks implemented, and how the tasks are managed.
Unfortunately many details have to be left unsaid; the interested reader is referred to relevant material at appropriate points. Overview descriptions appear in [Hovy 87a, 87b].

1.1 The Problem

Depending on how the user sets the communicative goals, PAULINE produces over 100 variations of an episode that took place at Yale University in April 1986 (it also produces multiple versions of episodes in two other domains; see [Hovy 86a, 86b]). In each case, PAULINE is also given a description of the hearer and the same three principal topics from a single underlying representation network. As a quick informal description of the episode, PAULINE says:

Example 1. YALE UNIVERSITY PUNISHED A NUMBER OF STUDENTS FOR BUILDING A SHANTYTOWN ON BEINECKE PLAZA BY ARRESTING 76 STUDENTS AND TEARING IT DOWN ONE MORNING IN EARLY APRIL. THE STUDENTS WANTED YALE TO DIVEST FROM COMPANIES DOING BUSINESS IN SOUTH AFRICA. FINALLY, THE UNIVERSITY GAVE IN AND ALLOWED THE STUDENTS TO REBUILD IT.

This is the kind of description one may hear from a passerby. In contrast, when PAULINE is given the goals to be more formal and to slant its description of the episode toward the protesters, it produces:

Example 2. AS A REMINDER TO YALE UNIVERSITY TO DIVEST FROM COMPANIES DOING BUSINESS IN SOUTH AFRICA, A LARGE NUMBER OF CONCERNED STUDENTS ERECTED A SHANTYTOWN --- NAMED WINNIE MANDELA CITY --- ON BEINECKE PLAZA IN EARLY APRIL. THE LOCAL COMMUNITY EXPRESSED SUPPORT FOR THE STUDENTS' ACTION. LATER, AT 6:30 AM ON APRIL 14, THE SHANTYTOWN WAS DESTROYED BY OFFICIALS; ALSO, AT THAT TIME, THE POLICE ARRESTED 76 STUDENTS. THE STUDENTS REQUESTED THAT YALE UNIVERSITY GIVE THEM PERMISSION TO REASSEMBLE IT ON BEINECKE PLAZA; CONCURRENTLY, SEVERAL LOCAL POLITICIANS AND FACULTY MEMBERS EXPRESSED CRITICISM OF YALE'S ACTIONS. FINALLY, YALE PERMITTED THE STUDENTS TO RECONSTRUCT THE SHANTYTOWN.

Of course, in face-to-face conversations a protester would probably be less formal and more openly opinionated. PAULINE (although now speaking in support of the university) illustrates:

Example 3. IT PISSES ME OFF THAT A FEW SHIFTLESS STUDENTS WERE OUT TO MAKE TROUBLE ON BEINECKE PLAZA ONE DAY. THEY TOOK OVER BEINECKE PLAZA AND BUILT A SHANTYTOWN, WINNIE MANDELA CITY, BECAUSE THEY WANTED YALE UNIVERSITY TO PULL THEIR MONEY OUT OF COMPANIES WITH BUSINESS IN SOUTH AFRICA. THE UNIVERSITY ASKED THE STUDENTS TO BUILD THE SHANTYTOWN ELSEWHERE, BUT THEY REFUSED. I AM HAPPY THAT OFFICIALS REMOVED THE SHANTYTOWN ONE MORNING. FINALLY, YALE GAVE IN AND LET THE IDIOTS PUT IT UP AGAIN. AT THE SAME TIME YALE SAID THAT A COMMISSION WOULD GO TO SOUTH AFRICA IN JULY TO CHECK OUT THE SYSTEM OF APARTHEID, BECAUSE THE UNIVERSITY WANTED TO BE REASONABLE.

The construction of such texts is beyond the capabilities of most generators written to date. Though many generators would be capable of producing the individual sentences, some of the pre-realization planning tasks have never been attempted, and others, though studied extensively (and in more detail than implemented in PAULINE), have not been integrated into a single planner under pragmatic control.

This paper involves the questions: what are these planning tasks? How can they all be integrated into one planner? How can extralinguistic communicative goals be used to control the planning process? What is the nature of the relation between text planner and text realiser?

2 Interleaving or Top-Down Planning?
2.1 The Trouble with Traditional Planning

In the text planning that has been done, two principal approaches were taken. With the integrated approach, planning and generation is one continuous process: the planner-realizer handles syntactic constraints the same way it treats all other constraints (such as focus or lack of requisite hearer knowledge), the only difference being that syntactic constraints tend to appear late in the planning-realization process. Typically, the generator is written as a hierarchical expansion planner (see [Sacerdoti 77]) -- this approach is exemplified by KAMP, Appelt's planner-generator ([Appelt 81, 82, 83, 85]). With the separated approach, planning takes place in its entirety before realization starts; once planning is over, the planner is of no further use to the realizer. This is the case in the generation systems of [McKeown 82], [McCoy 85], [Rösner 86, 87], [Novak 87], [Bienkowski 86], [Paris 87], and [McDonald & Pustejovsky 85].

Neither approach is satisfactory. Though conceptually more attractive, the integrated approach makes the grammar unwieldy (it is spread throughout the plan library) and is slow and impractical -- after all, the realization process proper is not a planning task -- and furthermore, it is not clear whether one could formulate all text planning tasks in a sufficiently homogeneous set of terms to be handled by a single planner. (This argument is made more fully in [Hovy 85] and [McDonald & Pustejovsky 85].) On the other hand, the separated approach typically suffers from the stricture of a one-way narrow-bandwidth interface; such a planner could never take into account fortuitous syntactic opportunities -- or even be aware of any syntactic notion! Though the separation permits the use of different representations for the planning and realization tasks, this solution is hardly better: once the planning stage is over, the realizer has no more recourse to it; if the realizer is able to fulfill more than one planner instruction at once, or if it is unable to fulfill an instruction, it has no way to bring about any replanning. Therefore, in practice, separated generators perform only planning that has little or no syntactic import -- usually, the tasks of topic choice and sentence order.

Furthermore, both these models run counter to human behavior: When we speak, we do not try to satisfy only one or two goals, and we operate (often, and with success) with conflicting goals for which no resolution exists. We usually begin to speak before we have planned out the full utterance, and then proceed while performing certain planning tasks in bottom-up fashion.

2.2 A Solution: Interleaving

Taking this into account, a better solution is to perform limited-commitment planning -- to defer planning until necessitated by the realization process. The planner need assemble only a partial set of generator instructions -- enough for the realization component to start working on -- and can then continue planning when the realization component requires further guidance. This approach interleaves planning and realization and is characterized by a two-way communication at the realizer's decision points. The advantages are: First, it allows the separation of planning and realization tasks, enabling them to be handled in appropriate terms. (In fact, it even allows the separation of special-purpose planning tasks with idiosyncratic representational requirements to be accommodated in special-purpose planners.)
Second, it allows planning to take into account unexpected syntactic opportunities and inadequacies. Third, this approach accords well with the psycholinguistic research of [Bock 87], [Rosenberg 77], [Danks 77], [De Smedt & Kempen 87], [Kempen & Hoenkamp 78], [Kempen 77, 76], and [Levelt & Schriefers 87]. This is the approach taken in PAULINE.

But there is a cost to this interleaving: the type of planning typically activated by the realizer differs from traditional top-down planning. There are three reasons for this.

1. Top-down planning is prescriptive: it determines a series of actions over an extended range of time (i.e., text). However, when the planner cannot expand its plan to the final level of detail -- remember, it doesn't have access to syntactic information -- then it has to complete its task by planning in-line, during realization. And in-line planning usually requires only a single decision, a selection from the syntactically available options. After in-line planning culminates in a decision, subsequent processing continues as realization -- at least until the next set of unprovided-for options. Unfortunately, unlike hierarchical plan steps, subsequent in-line planning options need not work toward the same goal (or indeed have any relation with each other); the planner has no way to guess even remotely what the next set of options and satisfiable goals might be.

2. In-line planning is different for a second reason: it is impossible to formulate workable plans for common speaker goals such as pragmatic goals. A speaker may, for example, have the goals to impress the hearer, to make the hearer feel socially subordinate, and yet to be relatively informal. These goals play as large a role in generation as the speaker's goal to inform the hearer about the topic. However, they cannot be achieved by constructing and following a top-down plan -- what would the plan's steps prescribe? Certainly not the sentence "I want to impress you, but still make you feel subordinate"! Pragmatic effects are best achieved by making appropriate subtle decisions during the generation process: an extra adjective here, a slanted verb there. Typically, this is a matter of in-line planning.

3. A third difference from traditional planning is the following: Some goals can be achieved, flushed from the goal list, and forgotten. Such goals (for example, the goal to communicate a certain set of topics) usually activate prescriptive plans. In contrast, other goals cannot ever be fully achieved. If you are formal, you are formal throughout the text; if you are friendly, arrogant, or opinionated, you remain so -- you cannot suddenly be "friendly enough" and then flush that goal. These goals, which are pragmatic and stylistic in nature, are well suited to in-line planning.

Generation, then, requires two types of planning. Certain tasks are most easily performed in top-down fashion (that is, under guidance of a hierarchical planner, or of a fixed-plan (schema or script) applier), and other tasks are most naturally performed in a bottom-up, selective, fashion. That is, some tasks are prescriptive -- they act over and give shape to long ranges of text -- and some are restrictive -- they act over short ranges of text, usually as a selection from some number of alternatives.
Prescriptive strategies are formative: they control the construction and placement of parts in the paragraph and the sentence; that is, they make some commitment to the final form of the text (such as, for example, the inclusion and order of specific sentence topics). Restrictive strategies are selective: they decide among alternatives that were left open (such as, for example, the possibility of including additional topics under certain conditions, or the specific content of each sentence). A restrictive planner cannot simply plan for, it is constrained to plan with: the options it has to select from are presented to it by the realizer.

2.3 Planning Restrictively: Monitoring

Since there is no way to know which goals subsequent decisions will affect, restrictive planning must keep track of all goals -- conflicting or not -- and attempt to achieve them all in parallel. Thus, due to its bottom-up, run-time nature, planning with restrictive strategies takes the form of execution monitoring (see, say, [Fikes, Hart & Nilsson 72], [Sacerdoti 77], [Miller 85], [Doyle, Atkinson & Doshi 86], [Broverman & Croft 87]); we will use the term monitoring here, appropriate for a system that does not take into account the world's actual reaction (in generation, the hearer's actual response), but that trusts, perhaps naively, that the world will react in the way it expects. Monitoring requires the following:

• checking, updating, and recording the current satisfaction status of each goal
• determining which goal(s) each option will help satisfy, to what extent, in what ways
• determining which goal(s) each option will thwart, to what extent, and in what ways
• computing the relative priority of each goal in order to resolve conflicts (to decide, say, whether during instruction to change the topic or to wait for a socially dominant hearer to change it)

When the planner is uncertain about which long-term goals to pursue and which sequence of actions to select, the following strategies are useful:

• prefer common intermediate goals (subgoals shared by various goals [Durfee & Lesser 86])
• prefer cheaper goals (more easily achieved goals; [Durfee & Lesser 86])
• prefer discriminative intermediate goals (goals that most effectively indicate the long-term promise of the avenue being explored) ([Durfee & Lesser 86])
• prefer least-satisfied goals (goals furthest from achievement)
• prefer least-recently satisfied goals (goals least recently advanced)
• combine the latter two strategies (a goal receives higher priority the longer it waits and the fewer times it has been advanced)

3 Planning in PAULINE

3.1 Program Architecture, Input and Opinions

The user provides PAULINE with input topics and a set of pragmatic goals, which activate a number of intermediate rhetorical goals that control the style and slant of the text. Whenever planning or realization require guidance, queries are directed to the activated rhetorical goals and their associated strategies (see Figure 1).

Prescriptive planning is mostly performed during topic collection and topic organization and restrictive planning is mostly performed during realization. Restrictive planning is implemented in PAULINE in the following way: None of the program's rhetorical goals (opinion and style) are ever fully achieved and flushed; they require decisions to be made in their favor throughout the text. PAULINE keeps track of the number of times each such goal is satisfied by the selection of some option (of course, a single item may help satisfy a number of goals simultaneously). For conflict resolution, PAULINE uses the least-satisfied strategy: the program chooses the option helping the goals with the lowest total satisfaction status. In order to do this, it must know which goals each option will help satisfy. Responsibility for providing this information lies with whatever produces the option: either the lexicon or the language specialist functions in the grammar.
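This least-satisfied regime can be sketched as follows. The sketch is ours, written in Lisp in the flavor of the T implementation; the structure definitions and function names are invented for illustration and are not PAULINE's actual code:

    ;; Rhetorical goals carry a running count of how often decisions
    ;; have been made in their favor; options know which goals they serve.
    (defstruct goal name (satisfactions 0))
    (defstruct option name served)   ; SERVED: the goals this option helps

    (defun option-score (option)
      ;; Total satisfaction status of the goals the option serves; a low
      ;; score means the option advances the most neglected goals.
      (reduce #'+ (option-served option)
              :key #'goal-satisfactions :initial-value 0))

    (defun choose-option (options)
      ;; Least-satisfied strategy: pick the lowest-scoring option, then
      ;; credit each goal it serves.
      (let ((best (first (sort (copy-list options) #'< :key #'option-score))))
        (dolist (g (option-served best))
          (incf (goal-satisfactions g)))
        best))

Under this scheme, if the formality goal has so far been served less often than the opinion goal, an option serving formality will tend to win the next conflict.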
[Figure 1: Program Architecture -- Input Topics and the pragmatic aspects of the conversation feed Topic Collection (the CONVINCE, RELATE, and DESCRIBE plans; interpretation; new topics), Topic Organization (juxtaposition, ordering), and Realization (sentence type, organization, clauses, words), producing Text; all three stages consult the Rhetorical Goals & Strategies.]

PAULINE's input is represented in a standard case-frame-type language based on Conceptual Dependency ([Schank 72, 75], [Schank & Abelson 77]) and is embedded in a property-inheritance network (see [Charniak, Riesbeck & McDermott 80], [Bobrow & Winograd 77]). The shantytown example consists of about 120 elements. No intermediate representation (say, one that varies depending on the desired slant and style) is created.

PAULINE's opinions are based on the three affect values GOOD, NEUTRAL, and BAD, as described in [Hovy 86b]. Its rules for affect combination and propagation enable the program to compute an opinion for any representation element. For instance, in example 2 (where PAULINE speaks as a protester), its sympathy list contains the elements representing the protesters and the protesters' goal that Yale divest, and its antipathy list contains Yale and Yale's goal that the university remain in an orderly state.

3.2 Text Planning Tasks

This section very briefly notes the text planning tasks that PAULINE performs: topic collection, topic interpretation, additional topic inclusion, topic juxtaposition, topic ordering, intrasentential slant, and intrasentential style.

Topic Collection (Prescriptive): This task -- collecting, from the input elements, additional representation elements and determining which aspects of them to say -- is pre-eminently prescriptive. Good examples of topic collection plans (also called schemas) can be found in [McKeown 82], [Paris & McKeown 87], and [Rösner 86]. In this spirit PAULINE has three plans -- the DESCRIBE plan to find descriptive aspects of objects, the RELATE plan to relate events and state-changes, and the CONVINCE plan to select topics that will help convince the hearer of some opinion. Whenever it performs topic collection, PAULINE applies the prescriptive steps of the appropriate collection plan to each candidate topic, and then in turn to the newly-found candidate topics, for as long as its pragmatic criteria (amongst others, the amount of time available) allow. The CONVINCE plan (described in [Hovy 85]) contains, amongst others, the steps to say good intentions, say good results, and appeal to authority. Example 1 presents the topics as given; in example 2, the CONVINCE plan prescribes the inclusion of the protesters' goal and the support given by the local community and faculty; and in example 3, with opposite sympathies, the same plan prescribes the inclusion of Yale's request and of the announcement of the investigation commission.
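The collection regime just described -- apply the plan's steps to each candidate topic, queue whatever new candidates the steps return, and stop when the pragmatic budget runs out -- might be sketched as follows. This is our own illustration, not PAULINE's code; each step stands for a plan step such as say-good-results, realized as a function from a topic to a list of newly found candidate topics:

    (defun collect-topics (plan-steps initial-topics budget)
      ;; PLAN-STEPS: e.g. the CONVINCE steps say-good-intentions,
      ;; say-good-results, appeal-to-authority. BUDGET stands in for
      ;; the pragmatic criteria, such as the amount of time available.
      (let ((collected '())
            (queue (copy-list initial-topics)))
        (loop while (and queue (plusp budget))
              do (let ((topic (pop queue)))
                   (push topic collected)
                   ;; Each step may propose further candidate topics,
                   ;; which are queued for collection in their turn.
                   (dolist (step plan-steps)
                     (setf queue (append queue (funcall step topic))))
                   (decf budget)))
        (nreverse collected)))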
Topic Interpretation (Prescriptive and Restrictive): As described in [Hovy 87c], generators that slavishly follow their input elements usually produce bad text. In order to produce formulations that are appropriately detailed and/or slanted, a generator must have the ability to aggregate or otherwise interpret its input elements, either individually or in groups, as instances of other representation elements. But finding new interpretations can be very difficult; in general, this task requires the generator (a) to run inferences off the input elements, and (b) to determine the expressive suitability of resulting interpretations. Though unbounded inference is not a good idea, limited inference under generator control can improve text significantly. One source of control is the generator's pragmatic goals: it should try only inferences that are likely to produce goal-serving interpretations. In this spirit, PAULINE has a number of prescriptive and restrictive strategies that suggest specific interpretation inferences slanted towards its sympathies. For example, in a dispute between "we" (the program's sympathies) and "they", some of its strategies call for the interpretations that

• coercion: they coerce others into doing things for them
• appropriation: they use ugly tactics, such as taking and using what isn't theirs
• conciliation: we are conciliatory; we moderate our demands

Interpretation occurred in examples 1 and 3: the notions of punishment in example 1, and of appropriation ("took over Beinecke Plaza") and conciliation ("Yale gave in") in example 3, did not appear in the representation network.

Additional Topic Inclusion (Restrictive): During the course of text planning, the generator may find additional candidate topics. When such topics serve the program's goals, they can be included in the text. But whether or not to include these instances can only be decided when such topics are found; the relevant strategies are therefore restrictive. For example, explicit statements of opinion may be interjected where appropriate, such as, in example 3, the phrases "It pisses me off" and "I am happy that".

Topic Juxtaposition (Restrictive): By juxtaposing sentence topics in certain ways, one can achieve opinion-related and stylistic effects. For example, in order to help slant the text, PAULINE uses multi-predicate phrases to imply certain affects. Two such phrases are "Not only X, but Y" and "X; however, Y"; depending on the speaker's feelings about X, these phrases attribute feelings to Y, even though Y may really be neutral (for more detail see [Hovy 86b]). With respect to stylistic effect, the juxtaposition of several topics into a sentence usually produces more complex, formal-sounding text. For example, consider how the phrases "as a reminder", "also, at that time", and "concurrently" are used in example 2 to link sentences that are separate in example 3. The task of topic juxtaposition is best implemented restrictively by presenting the candidate topics as options to strategies that check the restrictions on the use of phrases and select suitable ones. (The equivalent prescriptive formulation amounts to giving the program goals such as [find in the network two topics that will fit into a "Not only/but" phrase], a much less tractable task.)
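One plausible form for the restriction check on such a phrase -- our reading of the description above, not PAULINE's actual rule -- is that X must carry affect for the speaker while Y carries the same affect or none, since the phrase transfers X's affect to Y:

    (defun not-only-applicable-p (x y affect-of)
      ;; AFFECT-OF maps a representation element to GOOD, NEUTRAL, or
      ;; BAD under the speaker's sympathies (cf. [Hovy 86b]). "Not only
      ;; X, but Y" then slants Y with X's affect.
      (let ((ax (funcall affect-of x))
            (ay (funcall affect-of y)))
        (and (not (eq ax 'neutral))
             (member ay (list ax 'neutral)))))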
Topic Ordering (Prescriptive): The ordering of topics in the paragraph is best achieved prescriptively. Different circumstances call for different orderings; newspaper articles, for instance, often contain an introductory summarizing sentence. In contrast to the abovementioned schemas ([McKeown 82], etc.), steps in PAULINE's topic collection plans are not ordered; additional plans must be run to ensure coherent text flow. PAULINE uses one of two topic-ordering plans which are simplified scriptifications of the strategies discussed in [Hobbs 78, 79] and [Mann & Thompson 83, 87].

Intrasentential Slant (Restrictive): In addition to interpretation, opinion inclusion, and topic juxtaposition, other slanting techniques include the use of stress words, adjectives, adverbs, verbs that require idiosyncratic predicate contents, nouns, etc. Due to the local nature of most of these techniques and to the fact that options are only found rather late in the realization process, they are best implemented restrictively. In example 2, for example, the protesters are described as "a large number of concerned students". This is generated in the following way: The generator's noun group specialist produces, amongst others, the goals to say adjectives of number and of opinion. Then the specialist that controls the realization of adjectives of number collects all the alternatives that express number attributively (such as "a few", "many", "a number") together with the connotations each carries. The restrictive strategies activated by the rhetorical goals of opinion then select the options "many" and "a large number" for their slanting effect. Finally, the restrictive strategies that serve the rhetorical goals determining formality select the latter alternative. The opinion "concerned" is realized similarly, as are the phrases "as a reminder" and, in example 3, "a few shiftless students" and "idiots".

Intrasentential Style (Restrictive): Control of text style is pre-eminently a restrictive task, since syntactic alternatives usually have relatively local effect. PAULINE's rhetorical goals of style include haste, formality, detail, and simplicity (see [Hovy 87d]). Associated with each goal is a set of restrictive strategies or plans that act as criteria at relevant decision points in the realization process. Consider, for example, the stylistic difference between examples 2 and 3. The former is more formal: the sentences are longer, achieved by using conjunctions; they contain adverbial clauses, usually at the beginnings of sentences ("later, at 5:30 am one morning"); adjectival descriptions are relativized ("named Winnie Mandela City"); formal nouns, verbs, and conjunctions are used ("erected", "requested", "concurrently", "permitted"). In contrast, example 3 seems more colloquial because the sentences are shorter and simpler; they contain fewer adverbial clauses; and the nouns, verbs, and conjunctions are informal ("built", "asked", "at the same time", "let"). Indications of the formality of phrases, nouns, and verbs are stored in discriminations in the lexicon (patterned after [Goldman 75]).
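In outline, the two-stage choice in the "a large number of concerned students" example -- slant first, then formality -- might look as follows. The alternatives and their annotations are invented for the sketch; the actual lexicon stores such indications in discrimination nets:

    (defun choose-number-adjective (alternatives opinion formal-p)
      ;; First keep the alternatives whose connotation matches the
      ;; desired OPINION (GOOD, NEUTRAL, or BAD), then pick the one
      ;; agreeing with the formality goal, e.g. from
      ;;   ((:text "a few"           :opinion bad  :formal nil)
      ;;    (:text "many"            :opinion good :formal nil)
      ;;    (:text "a large number"  :opinion good :formal t))
      (let* ((slanted (or (remove-if-not
                            (lambda (alt) (eq (getf alt :opinion) opinion))
                            alternatives)
                          alternatives))
             (fit (find formal-p slanted
                        :key (lambda (alt) (getf alt :formal)))))
        (getf (or fit (first slanted)) :text)))

With the alternatives above, a GOOD opinion and a formal speaker yield "a large number"; the same opinion without the formality goal yields "many".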
4 Conclusion

The choices distributed throughout the generation process are not just a set of unrelated ad hoc decisions; they are grammatically related or, through style and slant, convey pragmatic information. Therefore, they require control. Since traditional top-down prescriptive planning is unable to provide adequate control, a different kind of planning is required. The limited-commitment planning organization of PAULINE illustrates a possible solution.

Text planning provides a wonderfully rich context in which to investigate the nature of prescriptive and restrictive planning and execution monitoring -- issues that are also important to general AI planning research.

5 Acknowledgement

Thanks to Michael Factor for comments.

6 References

1. Appelt, D.E., 1981. Planning Natural-Language Utterances to Satisfy Multiple Goals. Ph.D. dissertation, Stanford University.
2. Appelt, D.E., 1982. Planning Natural-Language Utterances. Proceedings of the Second AAAI Conference, Pittsburgh.
3. Appelt, D.E., 1983. Telegram: A Grammar Formalism for Language Planning. Proceedings of the Eighth IJCAI Conference, Karlsruhe.
4. Appelt, D.E., 1985. Planning English Sentences. Cambridge: Cambridge University Press.
5. Bienkowski, M.A., 1986. A Computational Model for Extemporaneous Elaborations. Princeton University Cognitive Science Laboratory Technical Report No. 1.
6. Bobrow, D.G. & Winograd, T., 1977. An Overview of KRL, a Knowledge-Representation Language. Cognitive Science, 1(1), 3-46.
7. Bock, J.K., 1987. Exploring Levels of Processing in Sentence Production. In Natural Language Generation: Recent Advances in Artificial Intelligence, Psychology, and Linguistics, Kempen, G. (ed), 351-364. Boston: Kluwer Academic Publishers.
8. Broverman, C.A. & Croft, W.B., 1987. Reasoning about Exceptions during Plan Execution Monitoring. Proceedings of the Sixth Conference of AAAI, Seattle.
9. Charniak, E., Riesbeck, C.K. & McDermott, D.V., 1980. Artificial Intelligence Programming. Hillsdale: Lawrence Erlbaum Associates.
10. Clippinger, J.H., 1974. A Discourse Speaking Program as a Preliminary Theory of Discourse Behavior and a Limited Theory of Psychoanalytic Discourse. Ph.D. dissertation, University of Pennsylvania.
11. Danks, J.H., 1977. Producing Ideas and Sentences. In Sentence Production: Developments in Research and Theory, Rosenberg, S. (ed), 226-258. Hillsdale: Lawrence Erlbaum Associates.
12. De Smedt, K. & Kempen, G., 1987. Incremental Sentence Production. In Natural Language Generation: Recent Advances in Artificial Intelligence, Psychology, and Linguistics, Kempen, G. (ed). Boston: Kluwer Academic Publishers.
13. Doyle, R.J., Atkinson, D.J. & Doshi, R.S., 1986. Generating Perception Requests and Expectations to Verify the Execution of Plans. Proceedings of the Fifth Conference of AAAI, Philadelphia.
14. Durfee, E.H. & Lesser, V.R., 1986. Incremental Planning to Control a Blackboard-Based Problem Solver. Proceedings of the Eighth Conference of the Cognitive Science Society, Amherst.
15. Fikes, R.E., Hart, P.E. & Nilsson, N.J., 1972. Learning and Executing Generalized Robot Plans. Artificial Intelligence, 3, 251-288.
16. Goldman, N.M., 1975. Conceptual Generation. In Conceptual Information Processing, Schank, R.C. (ed), 289-371. Amsterdam: North-Holland Publishing Company.
17. Hobbs, J.R., 1978. Why is Discourse Coherent? SRI Technical Note 176.
18. Hobbs, J.R., 1979. Coherence and Coreference. Cognitive Science, 3(1), 67-90.
19. Hovy, E.H., 1985. Integrating Text Planning and Production in Generation. Proceedings of the Ninth IJCAI Conference, Los Angeles.
20. Hovy, E.H., 1986a. Some Pragmatic Decision Criteria in Generation. In Natural Language Generation: New Results in Artificial Intelligence, Psychology, and Linguistics, Kempen, G. (ed), 3-18. Boston: Kluwer Academic Publishers, 1987.
21. Hovy, E.H., 1986b. Putting Affect into Text. Proceedings of the Eighth Conference of the Cognitive Science Society, Amherst.
22. Hovy, E.H., 1987a. Generating Natural Language under Pragmatic Constraints. Ph.D. dissertation, Yale University.
23. Hovy, E.H., 1987b. Generating Natural Language under Pragmatic Constraints. Journal of Pragmatics, 11(6), 689-719.
24. Hovy, E.H., 1987c. Interpretation in Generation. Proceedings of the Sixth Conference of AAAI, Seattle.
25. Hovy, E.H., 1987d. What Makes Language Formal? Proceedings of the Ninth Conference of the Cognitive Science Society, Seattle.
26. Kempen, G., 1976. Directions for Building a Sentence Generator which is Psychologically Plausible. Unpublished paper, Yale University.
27. Kempen, G., 1977. Conceptualizing and Formulating in Sentence Production. In Sentence Production: Developments in Research and Theory, Rosenberg, S. (ed), 259-274. Hillsdale: Lawrence Erlbaum Associates.
28. Kempen, G. & Hoenkamp, E., 1978. A Procedural Grammar for Sentence Production. University of Nijmegen Technical Report, Nijmegen.
29. Levelt, W.J.M. & Schriefers, H., 1987. Stages of Lexical Access. In Natural Language Generation: Recent Advances in Artificial Intelligence, Psychology, and Linguistics, Kempen, G. (ed), 395-404. Boston: Kluwer Academic Publishers.
30. Mann, W.C. & Thompson, S.A., 1983. Relational Propositions in Discourse. USC/Information Sciences Institute Research Report RS-83-115.
31. Mann, W.C. & Thompson, S.A., 1987. Rhetorical Structure Theory: Description and Construction of Text Structures. In Natural Language Generation: Recent Advances in Artificial Intelligence, Psychology, and Linguistics, Kempen, G. (ed), 85-96. Boston: Kluwer Academic Publishers.
32. McCoy, K.F., 1985. The Role of Perspective in Responding to Property Misconceptions. Proceedings of the Ninth IJCAI Conference, Los Angeles.
33. McDonald, D.D. & Pustejovsky, J.D., 1985. Description-Directed Natural Language Generation. Proceedings of the Ninth IJCAI Conference, Los Angeles.
34. McKeown, K.R., 1982. Generating Natural Language Text in Response to Questions about Database Queries. Ph.D. dissertation, University of Pennsylvania.
35. Miller, D.P., 1985. Planning by Search Through Simulations. Ph.D. dissertation, Yale University.
36. Novak, H-J., 1987. Strategies for Generating Coherent Descriptions of Object Motions in Time-Varying Imagery. In Natural Language Generation: Recent Advances in Artificial Intelligence, Psychology, and Linguistics, Kempen, G. (ed), 117-132. Boston: Kluwer Academic Publishers.
37. Paris, C.L. & McKeown, K.R., 1987. Discourse Strategies for Descriptions of Complex Physical Objects. In Natural Language Generation: New Results in Artificial Intelligence, Psychology, and Linguistics, Kempen, G. (ed), 97-118. Boston: Kluwer Academic Publishers.
38. Paris, C.L., 1987. The Use of Explicit User Models in Text Generation: Tailoring to a User's Level of Expertise. Ph.D. dissertation, Columbia University.
39. Rosenberg, S., 1977. Semantic Constraints on Sentence Production: An Experimental Approach. In Sentence Production: Developments in Research and Theory, Rosenberg, S. (ed), 195-228. Hillsdale: Lawrence Erlbaum Associates.
40. Rösner, D., 1986. Ein System zur Generierung von deutschen Texten aus semantischen Repräsentationen. Ph.D. dissertation, Universität Stuttgart.
41. Rösner, D., 1987. The Automated News Agency SEMTEX -- a Text Generator for German. In Natural Language Generation: New Results in Artificial Intelligence, Psychology, and Linguistics, Kempen, G. (ed), 133-148. Boston: Kluwer Academic Publishers.
42. Sacerdoti, E., 1977. A Structure for Plans and Behavior. Amsterdam: North-Holland Publishing Company.
43. Schank, R.C., 1972. 'Semantics' in Conceptual Analysis. Lingua, 30(2), 101-139. Amsterdam: North-Holland Publishing Company.
44. Schank, R.C., 1975. Conceptual Information Processing. Amsterdam: North-Holland Publishing Company.
45. Schank, R.C. & Abelson, R.P., 1977. Scripts, Plans, Goals and Understanding. Hillsdale: Lawrence Erlbaum Associates.
Assigning Intonational Features in Synthesized Spoken Directions*

James Raymond Davis
The Media Laboratory
MIT E15-325
Cambridge MA 02139

Julia Hirschberg
AT&T Bell Laboratories
2D-450, 600 Mountain Avenue
Murray Hill NJ 07974

Abstract

Speakers convey much of the information hearers use to interpret discourse by varying prosodic features such as PHRASING, PITCH ACCENT placement, TUNE, and PITCH RANGE. The ability to emulate such variation is crucial to effective (synthetic) speech generation. While text-to-speech synthesis must rely primarily upon structural information to determine appropriate intonational features, speech synthesized from an abstract representation of the message to be conveyed may employ much richer sources. The implementation of an intonation assignment component for Direction Assistance, a program which generates spoken directions, provides a first approximation of how recent models of discourse structure can be used to control intonational variation in ways that build upon recent research in intonational meaning. The implementation further suggests ways in which these discourse models might be augmented to permit the assignment of appropriate intonational features.

Introduction

DIRECTION ASSISTANCE(1) was written to provide spoken directions for driving between any two points in the Boston area[7] over the telephone. Callers specify their origin and destination via touch-tone input. The program finds a route and synthesizes a spoken description of that route. Earlier versions of Direction Assistance exhibited notable deficiencies in prosody when a simple text-to-speech system was used to produce such descriptions[6], because prosody depends in part on discourse-level phenomena such as topic structure and information status, which are not generally inferrable from text and thus cannot be correctly produced by the text-to-speech system.

*The intonational component described here was completed at AT&T Bell Laboratories in the summer of 1987. We thank Janet Pierrehumbert and Gregory Ward for valuable discussions.
(1)Direction Assistance was originally developed by Jim Davis and Tom Trobaugh in 1985 at the Thinking Machines Corporation of Cambridge.

To alleviate some of these problems, we modified Direction Assistance to make both attentional and intentional information about the route description available for the assignment of intonational features. With this information, we generate spoken directions using the Bell Laboratories Text-to-Speech System[21] in which pitch range, accent placement, phrasing, and tune can be varied to communicate attentional and intentional structure. The implementation of this intonation assignment component provides a first approximation of how recent models of discourse structure can be used to control intonational variation in ways that build upon recent research in intonational meaning. Additionally, it suggests ways in which these discourse models must be enhanced in order to permit the assignment of appropriate intonational features.

In this paper, we first discuss some previous attempts to synthesize speech from representations other than simple text. We next discuss the work on discourse structure, on English phonology, and on intonational meaning which we assume for this study. We then give a brief overview of Direction Assistance. Next we describe how Direction Assistance represents discourse structures and uses them to generate appropriate prosody.
Previous Studies

Only a few voice interactive systems have attempted to exploit intonation in the interaction. The Telephone Enquiry Service (TES) [19] was designed as a framework for applications such as database inquiries, games, and calculator functions. Application programmers specified text by phonetic symbols and intonation by a code which extended Halliday's[11] intonation scheme. While TES gave programmers a high-level means of varying prosody, it made no attempt to derive prosody automatically from an abstract representation.

Young and Fallside's[20] Speech Synthesis from Concept (SSC) system first demonstrated the gains to be had by providing more than simple text as input to a speech synthesizer. SSC passed a network representation of syntactic structure to the synthesizer. Syntactic information could thus inform accenting and phrasing decisions. However, structural information alone is insufficient to determine intonational features[10], and SSC does not use semantic or pragmatic/discourse information.

Discourse and Intonation

The theoretical foundations of the current work are three: Grosz and Sidner's theory of discourse structure, Pierrehumbert's theory of English intonation, and Hirschberg and Pierrehumbert's studies of intonation and discourse.

Modeling Discourse Structure

Grosz and Sidner[9] propose that discourse be understood in terms of the purposes that underly it (INTENTIONAL STRUCTURE) and the entities and attributes which are salient during it (ATTENTIONAL STRUCTURE). In this account, discourses are analyzed as hierarchies of segments, each of which has an underlying Discourse Segment Purpose (DSP) intended by the speaker. All DSPs contribute to the overall Discourse Purpose (DP) of the discourse. For example, a discourse might have as its DP something like 'intend that Hearer put together an air compressor', while individual segments might have as contributing DSPs 'intend that Hearer remove the flywheel' or 'intend that Hearer attach the conduit to the motor'. Such DSPs may in turn be represented as hierarchies of intentions, such as 'intend that Hearer loosen the allen-head screws', and 'intend that Hearer locate the wheel-puller'. DSPs a and b may be related to one another in two ways: a may DOMINATE b if the DSP of a is partially fulfilled by the DSP of b (equivalently, b CONTRIBUTES TO a). So, 'intend that Hearer remove the flywheel' dominates 'intend that Hearer loosen the allen-head screws', and the latter contributes to the former. Segment a SATISFACTION-PRECEDES b if the DSP of a must be achieved in order for the DSP of b to be successful. 'Intend that Hearer locate the wheel-puller' satisfaction-precedes 'intend that Hearer use the wheel-puller', and so on. Such intentional structure has been studied most extensively in task-oriented domains, such as instruction in assembling machinery, where speaker intentions appear to follow the structure of the task to some extent. In Grosz and Sidner's model, part of understanding a discourse is reconstructing the DP, DSPs, and relations among them.

Attentional structure in this model is an abstraction of 'focus of attention', in which the set of salient entities changes as the discourse unfolds.(2) A given discourse's attentional structure is represented as a stack of FOCUS SPACES, which contain representations of entities referenced in a given DS, such as 'flywheel' or 'allen-head screws', as well as the DS's DSP. The accessibility of an entity -- as, for pronominal reference -- depends upon the depth of its containing focus space. Deeper spaces are less accessible. Entities may be made inaccessible if their focus space is popped from the stack.

(2)See [1] and [3] for earlier AI work on global and local focus.

Intonational Features and their Interpretation

This model of discourse is employed for expository purposes by Hirschberg and Pierrehumbert[12] in their work on the relationship between intonational and discourse features. In Pierrehumbert's theory of English phonology[16], intonational contours are represented as sequences of high (H) and low (L) tones (local maxima and minima) in the FUNDAMENTAL FREQUENCY (f0). Pitch accents fall on the stressed syllables of some lexical items, and may be simple H or L tones or complex tones. The four bitonal accents in English (H*+L, H+L*, L*+H, L+H*) differ in the order of tones and in which tone is aligned with the stressed syllable of the accented item -- the asterisk indicates alignment with stress. Pitch accents mark items as intonationally prominent and convey the relative 'newness' or 'salience' of items in the discourse. For example, in (1a), right is accented (as 'new'), while in (1b) it is deaccented (as 'old').

(1) a. Take a right, onto Concord Avenue.
    b. Take another right, onto Magazine Street.

Different pitch accents convey different meanings: For example, a L+H* on right in (1a) may convey 'contrastiveness', as after the query So, you take a left onto Concord?. A simple H* is more likely when the direction of the turn has not been questioned. A L*+H, however, can convey incredulity or uncertainty about the direction.

INTERMEDIATE PHRASES are composed of one or more pitch accents, plus an additional PHRASE ACCENT (H or L), which controls the pitch from the last pitch accent to the end of the phrase. INTONATIONAL PHRASES consist of one or more intermediate phrases, plus a BOUNDARY TONE, also H or L, which falls at the edge of the phrase; we indicate boundary tones with an '%', as H%. Phrase boundaries are marked by lengthened final syllables and (perhaps) a pause -- as well as by tones. Variations in phrasing may convey structural relationships among elements of a phrase. For example, (2) uttered as two phrases favors a non-restrictive reading in which the first right happens to be onto Central Park.
Variation in pitch range can communicate the topic struc- ture of a discourse[12, 18]; increasing the pitch range of a phrase over prior phrases can convey the introduction of a new topic, and decreasing the pitch range over a prior phrase can convey the continuation of a subtopic. After any bitonal pitch accent pitch range is compressed. This compression, called catathesls, or downstep, extends to the nearest phrase boundary. Another process, called FI- NAL LOWEP~NG, involves a compression of the pitch range during the last half second or so of a 'declarative' utter- ances. The amount of final lowering present for utterance appears to correlate with the amount of 'finality' to be conveyed by the utterance. That is, utterances that end topics appear to exhibit more final lowering, while utter- ances within a topic segment may have little or none. Intonation in Direction-Giving To identify potential genre-specific intonational charac- teristics of direction-giving, we performed informal pro- duction studies, with speakers reading sample texts of directions similar to those generated by Direction As- sistance. From acoustic analysis of this data, we noted first that speakers tended to use H*+L accents quite frequently, in utterances like that whose pitch track ap- pears in Figure 1. The use of such contours has been associated in the literature with 'didactic' or 'pedantic' contexts. Hence, the propensity for using this contour in giving directions seems not inappropriate to emulate. We also noted tendencies for subjects to vary pitch range in ways similar to proposals mentioned above that is, to indicate large topic shifts by increasing pitch range and to use smaller pitch ranges where utterances appeared to 'continue' a previous topic. And we noted variation in pausal duration which was consistent with the notion that speakers produce longer pauses at major topic boundaries than before an utterance that contin- ues a topic. However, these informal studies were simply intended to produce guidelines. In the intonation assignment component we added to Direction Assistance, pitch accent placement, phrasing, tune, and pitch range and final lowering are varied as noted above to convey information status, structural information, relationships among utterances, and topic structure. We will now describe how Direction Assistance works in general, and, in particular, how it uses this com- ponent in generating spoken directions. Direction Assistance Direction Assistance has four major components. The Location Finder queries the user to obtain the origin and destination of the route. The Route Finder then finds a 'best' route, in terms of drivability and describabil- ity. Once a route is determined, the Describer generates a text describing the route, which the Narrator reads to the user. In the work reported here, we modified the Describer to generate an abstract representation of the route description and replaced the Narrator with a new component, the Talker, which computes prosodic values from these structures and passes text augmented with commands controlling prosodic variation to the speech synthesizer. 189 lO0 150 1|$ 100 7 5 =. =. i i i i i ! i i ! i i i i ! i i i i i i i i i i i ! ! ! i ! ! f ...i....i.-i-.,,-.-.i.,ti.-.i-i ...... ,...~....,....,. ...... ,....,....,...,.... r..I.....I....i....i ...... i...l-i._i-..-i.-.i....i_i ....... .i....i....I.....i ...... i.-i...i...i....I ....i. ...i...i...i..-i ..... .L.~.L..I....L ...... -...~....i....i. ...... i....i....i....i...] 
[Figure 1: Pitch Track of Subject Reading Directions -- pitch track not reproduced]

Generating text and discourse structures

The Describer's representation of a route is called a tour. A tour is a sequence of acts to be taken in following the route. Acts represent something the driver must do in following the route. Act types include start and stop, for the beginning and ending of the tour, and various kinds of turns. A rich classification of turns is required in order to generate natural text. A 'fork' should be described differently from a 'T' and from a highway exit. Turning acts include enter and exit from a limited access road, merge, fork, u-turn, and rotary.
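A tour might be represented along these lines (our own sketch of plausible data structures, not the program's actual definitions; in particular, the get-info accessor used by the rotary schema in figure 2 below is a guess at its behavior):

    ;; A tour is an ordered list of acts; each act records its type and
    ;; whatever act-specific detail the Describer will need.
    (defstruct act
      type    ; one of START, STOP, ENTER, EXIT, MERGE, FORK, U-TURN, ROTARY
      info)   ; act-specific details, e.g. ((rotary-angle . 90) ...)

    (defun get-info (act key)
      ;; Look up an act-specific detail, as in (get-info act 'rotary-angle).
      (cdr (assoc key (act-info act))))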
For each act type, there is a corresponding descriptive schema to produce text describing that act. Text generation also involves selecting an appropriate cue for the act. There are four types of cues: Action cues signal when to perform an act, such as "When you reach the end of the road, do x". Confirmatory cues are indicators that one is successfully following the route, such as "You'll cross x" or "You'll see y". Warning cues caution the driver about possible mistakes. Failure cues, to describe the consequences of mistakes (e.g. "If you see x, you have gone too far"), have not yet been implemented. In general, there will be several different items potentially useful as action or confirmatory cues. The Describer selects the one which is most easily recognized (e.g. a bridge crossing) and which is close to the act for which it is a cue.
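That selection policy might be rendered as follows (an illustrative sketch; the cue structure and its numeric recognizability and distance scales are our own invention, not the Describer's actual representation):

    (defstruct cue kind text recognizability distance)

    (defun select-cue (candidates)
      ;; Prefer the most easily recognized cue; among equally
      ;; recognizable ones, take the one closest to the act it signals.
      (first (sort (copy-list candidates)
                   (lambda (a b)
                     (if (= (cue-recognizability a) (cue-recognizability b))
                         (< (cue-distance a) (cue-distance b))
                         (> (cue-recognizability a) (cue-recognizability b)))))))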
The choice of referring expression for the street name depends upon the type of street. No cues are generated here, on the grounds that a rotary is unmistakable. Assigning Intonational Features The TAlicer employes variation in pitch range, pausal du- ration, and final lowering ratio to reflect the topic struc- ture of the description, or, the relationship among DS's as reflected in the relationship among DSP's. Following the proposals of [12], we implement this variation by assigned each DS an embeddedness level, which is just the depth of the DS within the discourse tree. Pitch range decreases with embeddedness. In Grosz and Sidner's terms, for ex- ample, for DS1 and DS2, with DSPz dominating DSP2, we assign DS1 a larger pitch range than DS2. Similarly, if DSP2 dominates DSP3, DSs will have a still smaller pitch range than DS2. Sibling DS's will thus share a common pitch range. Pitch variation is perceived logarithmically, so pitch range decreases as a constant fraction (.9) at each (defun disc-seg-rotary (act) (list (make-sentence "You'll" "come" "to" (make-np-constil;uenl; ' ("rotary") :article :indefinite)) (make-conjunction-sentence (make-sentence "Go" (rotary-angle-amount (get-info act 'rotary-angle)) "eay .... around" (make-anaphora nil "it")) (make-sentence "l;nrn" "onto" (make-street-constituent (move-to-segment act) act)) ) )) Figure 2: Generator for Rotary Act Type level, but never falls below a minimum value above the baseline. Also following [12], we vary final lowering to indicate the level of embeddedness of the segment com- pleted by the current utterance. We largely suspend final lowering for the current utterance when it is followed by an utterance with greater embedding, to produce a sense of topic continuity. Where the subsequent utterance has a lesser degree of embedding than the current utterance, we increase final lowering proportionally. So, for example, if the current utterance were followed by an utterance with embedding level 0 (i.e., no embedding, indicating a major topic shift), we would give the current utterance maxi- mal final lowering (here, .87). Pansal duration is greatest (here, 800 msec) between segments at the least embedded level, and decreases by 200 msec for each level of embed- ding, to a minimum of 100 msec between phrases. Of course, the actual values assigned in the current applica- tion are somewhat arbitrary. In assigning final lowering, as pitch range and intervening pausal duration, it is the relative differences that are important. Accent placement is determined according to relative salience and 'newness' of the mentioned item.[12, 14, 5] (We employ Prince's[17] Givens, or given-salient notion here to distinguish 'given' from 'new' information. How- ever, it would be possible to extend this to include hi- erarchically related items evoked in a discourse as also given, or 'Chafe-given'[17], were such possibilities present in our domain.) Certain object types and modifier types in the domain have been declared to be potentially salient. When such an item is to be mentioned in the path descrip- tion, it is first sought in the current focus space and its ancestors. In general, if it is found, it is deaccented; oth- erwise it receives a pitch accent. If the object is not a 191 potentially salient type, then, if it is a function word, it is deaccented, otherwise it is taken to be a miscellaneous content word and receives an accent by default. 
In some cases, we found that -- contra current theories of focus -- items should remain deaccentable even when the focus spaces containing them have been popped from the focus stack. In particular, items in the current focus space's preceding sibling appear to retain their 'givenness'. Reanalysis to place both occurrences in the same segment or to ensure that the first is in a parent segment seemed to lack independent justification. So, we decided to allow items to remain 'given' across sibling segment boundaries, and extended our deaccenting possibilities accordingly.

We vary phrasing primarily to convey structural information. Structural distinctions such as those presented by example (2) are accomplished in this way.

Intentional structure is conveyed by varying intonational contour as well as pitch range, final lowering, and pausal duration. A phrase which requires 'completion' by another phrase is assigned a low phrase accent and a high boundary tone (this combination is commonly known as CONTINUATION RISE)[15]. For example, since we generate VP conjunctions primarily to indicate temporal or causal relationship (e.g. Stay on Main Street for about ninety yards, and cross the Longfellow Bridge.), we use continuation rise in such cases on the first phrase.

The sample text in Figure 3 is generated by the system. Note that commands to the speech synthesizer have been simplified for readability as follows: 'T' indicates the topline of the current intonational phrase; 'F' indicates the amount of final lowering; 'D' corresponds to the duration of pause between phrases; 'N*' indicates a pitch accent of type N; other words are not accented. Phrase accents are represented by simple H or L, and boundary tones are indicated by %. The topic structure of the text is indicated by indentation.

    T[170] H*+L If your H*+L car is on the H*+L same H*+L side of the H*+L street as H*+L 7 H*+L Broadway Street L H%,
      D[600] T[153] H*+L turn H*+L around L H%,
      T[153] F[.90] and H*+L start H*+L driving L L%.
    D[600] T[153] F[.90] H*+L Merge with H*+L Main Street L L%.
    D[600] T[153] H*+L Stay on Main Street for about H*+L one H*+L quarter of a H*+L mile L H%,
      D[800] T[153] F[.90] and H*+L cross the Longfellow H*+L Bridge L L%.
    D[600] T[153] F[.96] You'll H*+L come to a H*+L rotary L L%,
      D[400] T[137] H*+L Go about a H*+L quarter H*+L way H*+L around it L H%,
      D[400] T[137] F[.90] and H*+L turn onto H*+L Charles Street L L%.
    D[600] T[153] H*+L Number H*+L 130 is about H*+L one H*+L eighth of a H*+L mile H*+L down L H%,
      D[400] T[137] F[.87] on your L+H* right H* side L L%.

    Figure 3: A Sample Route Description from Direction Assistance

Note that pitch range, final lowering, and pauses between phrases are manipulated to enforce the desired topic structure of the text. Pitch range is decreased to reflect the beginning of a subtopic; phrases that continue a topic retain the pitch range of the preceding phrase. Final lowering is increased to mark the end of topics; for example, the large amount of final lowering produced on the last phrase conveys the end of the discourse, while lesser amounts of lowering within the text enhance the sense of connection between its parts. Pauses between clauses are also manipulated so that lesser pauses separate clauses which are to be interpreted as more closely related to one another. For example, the segment beginning with You'll come to a rotary... is separated from the previous discourse by a pause of 600 msec, but phrases within this segment describing the procedure to follow once in the rotary are separated by pauses of only 400 msec.
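The mapping from embedding level to the T, D, and F values seen in Figure 3 can be sketched as follows. The .9 decay, the 800/200/100 msec pause schedule, and the .87 maximal lowering come from the text; the 170 Hz starting topline, the floor value, and the interpolation used for intermediate lowering values are assumptions of ours:

    (defun phrase-prosody (level next-level)
      ;; LEVEL: embedding depth of the current phrase's segment;
      ;; NEXT-LEVEL: depth of the following phrase's segment, or NIL at
      ;; the end of the discourse. Returns topline (Hz), preceding
      ;; pause (msec), and final-lowering ratio (1.0 = lowering suspended).
      (values
       (max 110.0 (* 170.0 (expt 0.9 level)))     ; pitch range topline
       (max 100 (- 800 (* 200 level)))            ; pause before the phrase
       (cond ((null next-level) 0.87)             ; end of discourse: maximal
             ((> next-level level) 1.0)           ; deeper next phrase: suspend
             (t (- 1.0 (* 0.13 (/ (- level next-level)
                                  (max level 1)))))))) ; bigger pop, more lowering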
Summary

We have described how structural, semantic, and discourse information can be represented to permit the principled assignment of pitch range, accent placement and type, phrasing, and pause in order to generate spoken directions with appropriate intonational features. We have tested these ideas by modifying the text generation component of Direction Assistance to produce an abstract representation of the information to be conveyed. This 'message-to-speech' approach to speech synthesis has clear advantages over simple text-to-speech synthesis, since the generator 'knows' the meanings to be conveyed. This application, while over-simplifying the relationship between discourse information and intonational features to some extent, nonetheless demonstrates that it should be possible to assign more appropriate prosodic features automatically from an abstract representation of the meaning of a text. Further research in intonational meaning and in the relationship of that meaning to aspects of discourse structure should facilitate progress toward this goal.

References

[1] Barbara Grosz. The Representation and Use of Focus in Dialogue Understanding. PhD thesis, University of California at Berkeley, 1976.
[2] B. Grosz, A. K. Joshi, and S. Weinstein. Providing a Unified Account of Definite Noun Phrases in Discourse. Proceedings of the Association for Computational Linguistics, pages 44-50, June 1983.
[3] Candace Sidner. Towards a computational theory of definite anaphora comprehension in English discourse. PhD thesis, MIT, 1979.
[4] M. Anderson, J. Pierrehumbert, and M. Liberman. Synthesis by rule of English intonation patterns. Proceedings of the conference on Acoustics, Speech, and Signal Processing, pages 2.8.1 to 2.8.4, 1984.
[5] Gillian Brown. Prosodic structure and the given/new distinction. In Cutler and Ladd, editors, Prosody: Models and Measurements, chapter 6, Springer Verlag, 1983.
[6] James R. Davis. Giving directions: a voice interface to an urban navigation program. In American Voice I/O Society, pages 77-84, Sept 1986.
[7] James R. Davis and Thomas F. Trobaugh. Direction Assistance. Technical Report, MIT Media Technology Lab, Dec 1987.
[8] Marcia A. Derr and Kathleen R. McKeown. Using focus to generate complex and simple sentences. Proceedings of the Tenth International Conference on Computational Linguistics, pages 319-325, 1984.
[9] Barbara J. Grosz and Candace L. Sidner. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204, 1986.
[10] Dwight Bolinger. Accent is predictable (if you're a mind-reader). Language, 48:633-644, 1972.
[11] M. A. K. Halliday. Intonation and Grammar in British English. Mouton, 1967.
[12] J. Hirschberg and J. Pierrehumbert. The intonational structure of discourse. Proceedings of the Association for Computational Linguistics, pages 136-144, July 1986.
[13] Kathleen R. McKeown. Discourse strategies for generating natural-language text. Artificial Intelligence, 27(1):1-41, 1985.
[14] S. G. Nooteboom and J. M. B. Terken. What makes speakers omit pitch accents? An experiment. Phonetica, 39:317-336, 1982.
[15] J. Pierrehumbert and J. Hirschberg. The meaning of intonation contours in the interpretation of discourse. In Plans and Intentions in Communication, SDF Benchmark Series in Computational Linguistics, MIT Press, forthcoming.
[16] Janet B. Pierrehumbert. The Phonology and Phonetics of English Intonation. PhD thesis, MIT, Dept of Linguistics, 1980.
[17] Ellen F. Prince. Toward a taxonomy of given-new information. In Peter Cole, editor, Radical Pragmatics, pages 223-256, Academic Press, 1981.
[18] Kim E. A. Silverman. Natural prosody for synthetic speech. PhD thesis, Cambridge University, 1987.
[19] L. Witten and P. Madams. The telephone inquiry service: a man-machine system using synthetic speech. International Journal of Man-Machine Studies, 9:449-464, 1977.
[20] S. J. Young and F. Fallside. Speech synthesis from concept: a method for speech output from information systems. Journal of the Acoustic Society of America, 66(3):685-695, Sept 1979.
[21] J. P. Olive and M. Y. Liberman. Text to speech - An overview. Journal of the Acoustic Society of America, Suppl. 1, 78(3):s6, Fall 1985.
ATOMIZATION IN GRAMMAR SHARING

Megumi Kameyama, Microelectronics and Computer Technology Corporation (MCC)
3500 West Balcones Center Drive, Austin, Texas 78759
megumi@mcc.com

ABSTRACT

We describe a prototype SHARED GRAMMAR for the syntax of simple nominal expressions in Arabic, English, French, German, and Japanese implemented at MCC. In this grammar, a complex inheritance lattice of shared grammatical templates provides parts that each language can put together to form language-specific grammatical templates. We conclude that grammar sharing is not only possible but also desirable. It forces us to reveal cross-linguistically invariant grammatical primitives that may otherwise remain conflated with other primitives if we deal only with a single language or language type. We call this the process of GRAMMATICAL ATOMIZATION. The specific implementation reported here uses categorial unification grammar. The topics include the mono-level nominal category N, the functional distinction between ARGUMENT and NON-ARGUMENT of nominals, grammatical agreement, and word order types.

Is grammar sharing possible?

The multilingual project of MCC attempts to build a grammatical system hierarchically shared by multiple languages (Slocum & Justus 1985). A shared grammar as proposed should have an advantage over a system with separate grammars for different languages: it should reduce the size of a multilingual rule base, and facilitate the addition of new languages. Before presenting evidence for such advantages, however, there is the basic question to be answered: is grammar sharing at all possible? Although it is well known that languages possess similarities based on genetic, typological, or areal grounds, the question remains whether and how these similarities translate into computational techniques. In this paper, we will describe a prototype shared grammar for simple nominal expressions in Arabic, English, French, German, and Japanese.[1] We conclude that grammar sharing is not only possible but also desirable. It forces us to reveal cross-linguistically invariant grammatical primitives that may otherwise remain conflated with other primitives if we deal only with a single language or language type. We call this the process of GRAMMATICAL ATOMIZATION[2] forced by grammar sharing. Each language or language type is then characterized by particular combinations of such primitives, often providing new insights with which to account for certain linguistic problems.

[1] Preliminary investigations have also been made on Spanish, Russian, and Chinese.
[2] The verb atomize means "to separate or be separated into free atoms" (The Collins English Dictionary, 2nd edition, 1986).

Before we go into more detail, the following is our view of what general components and mechanisms constitute a shared grammatical system.

Basic mechanisms in a shared grammar: The process of building a shared grammar, in our view, requires (i) linguistic description of a set of languages in a common theoretical framework, (ii) a mechanism for EXTRACTING a common grammatical assertion from two or more assertions, and (iii) a mechanism for MERGING grammatical assertions. The linguistic description should define certain string-combination operations (defined on string TYPES) associated with information structures. Then what we do is identify sharable packages of common string-types and information structures among independently motivated language-specific grammatical assertions. These packages are then put into the shared part of the grammar, and the remaining language-specifics are potential sources for more sharing. This extraction is essential in what we call ATOMIZATION, which is basically "breaking up of grammatical assertions into smaller independent parts" (i.e. decomposition). If we assume that all grammatical assertions are expressed in terms of FEATURE STRUCTURES (Shieber 1986), the atomization process would be defined around the notion of GENERALIZATION (i.e. the reverse of UNIFICATION) as follows:

basic atomization: Given two feature structures, Xa for category X in language A and Xb for category X in language B, the shared structure Xs for category X is the GENERALIZATION of Xa and Xb (i.e., the most specific feature structure in common with both Xa and Xb). Xs is separated out of either Xa or Xb, and placed into the shared space. Consequently, a subsumption ordering is established wherein Xs subsumes Xa and Xb, respectively.
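As a concrete illustration of this definition, here is a minimal sketch, not the MCC implementation: feature structures are assumed to be encoded as nested Python dictionaries with atomic string values, and reentrancies are ignored for simplicity.

    # A minimal sketch (ours) of GENERALIZATION, the reverse of unification,
    # over feature structures encoded as nested dicts with atomic values.

    def generalize(fa, fb):
        """Return the most specific feature structure subsuming both."""
        if isinstance(fa, dict) and isinstance(fb, dict):
            shared = {}
            for feat in fa.keys() & fb.keys():       # features present in both
                g = generalize(fa[feat], fb[feat])
                if g is not None:                    # keep only common content
                    shared[feat] = g
            return shared
        return fa if fa == fb else None              # atomic values must match

    # English vs. French N: both are saturated nominals, but only French
    # carries gender agreement, so GDR does not survive generalization.
    en_n = {"result": {"cat": "N", "agr": {"nbr": "[]"}}, "arguments": "#"}
    fr_n = {"result": {"cat": "N", "agr": {"nbr": "[]", "gdr": "[]"}},
            "arguments": "#"}
    print(generalize(en_n, fr_n))
    # -> {'result': {'cat': 'N', 'agr': {'nbr': '[]'}}, 'arguments': '#'}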
There is an underlying assumption that two language-specific definitions of a common grammatical category share something in common, no matter how small it is. This means that the linguistic descriptive basis is questionable if the content of Xs above is null. Conversely, if closely common information structures appear under language-specific definitions of distinct grammatical categories, we may suspect a basis for a new common grammatical category.

Once the shared and language-specific parts are separated out, a mechanism for merging them is necessary for successfully incorporating the shared assertion into the language-specific assertion. UNIFICATION by INHERITANCE is such a merging mechanism that we employ in our system (see below). The shared space is a complex inheritance lattice that provides various predefined grammatical assertions that can be freely merged to create language-specific ones.

[Figure 1. A simplified shared lattice. The lattice diagram itself is not recoverable from the source; its leaves include Arabic qit..tun/qit..ta 'cats/cat', Japanese neko, English cats/cat, German Katzen/Katze, and the interrogatives which/welcher/quel.]

Shared inheritance lattice: Let us now take a look at a grossly simplified shared inheritance lattice that results from the process described above. See Figure 1. There is a universal notion N(ominal) in all five languages under consideration. This common notion is part of the N definition of each language by inheritance. There are some nominals that are 'complete' in the sense that they can be used as subjects or objects (e.g. I saw cats/the cat.). Some others are 'incomplete' in that they cannot be used as such (e.g. *I saw cat.). General notions Complete and Incomplete are thereby defined for characterizing relevant nominal classes of each language (see the discussion on ARG vs. NON-ARG below). Since Determiners in English, German, and French make such incomplete nominals complete, the Determiner definition inherits (i.e. includes) the definition of Complete. Lexical items in these languages are defined by multiply inheriting relevant assertions.

In what follows, we will first describe the specific linguistic and computational approaches that we employed to build our first shared grammar. We will then discuss the grammatical primitives for characterizing general nominals, adnominal modifiers, agreement, and word order types, illustrating solutions to specific cross-linguistic problems. We will end with prospects for further work.
Framework

Grammatical framework: We use a categorial unification grammar (CUG) (Wittenburg 1986a; Karttunen 1986; Uszkoreit 1986b). The one described here is a non-directional categorial system (e.g. Montague 1974; Schmerling 1983; van Benthem 1986: Ch. 7) with a non-directed functional application rule as the only reduction rule (i.e., a functor X|Y may combine with adjacent Y in either direction to build X). Non-directionality allows for desired flexibility in the shared part of the grammar. A separate component constrains the linear order of elements in each language (see Aristar 1988 for motivation).

Unification and template inheritance: CUG's lexical orientation and unification are employed. In the LEXICON of each language, lexical items are defined to be the unification of language-specific GRAMMATICAL TEMPLATES (Shieber 1984, 1986; Flickinger et al. 1985; Pollard & Sag 1987). These language-specific templates, prefixed with AR(abic), EN(glish), FR(ench), GE(rman), and JA(panese), are feature structures composed by multiple inheritance from shared grammatical templates prefixed with SG (for "Shared Grammar"). SG-templates are themselves composed by multiple inheritance in a complex INHERITANCE LATTICE, whose bottom end feeds into language-specific templates. The CUG parser (MCC's Astm, Wittenburg 1986b) applies reduction rules to the feature structures of words in the input string.[3] Arabic and Japanese strings are currently represented in Roman letters (augmented for Arabic) with spaces between 'words'.[4]

[3] The parser is linked to an independently developed morphology analyzer (Slocum 1988). This enables each word to undergo a morphological analysis including a dictionary look-up of the root morpheme, and to output a list (or alternative lists) of grammatical template names that, when their contents are unified, produce a single feature structure (or more than one if the word is ambiguous) for that particular token word.

[4] If we were to process Japanese texts directly, the system would have to perform morphological and syntactic analyses simultaneously since there are no explicit word boundaries. (This is one of the strong motivations for our recent movement toward building a new CUG-based morphology system.)

Present linguistic coverage

Simple nominals: The present linguistic coverage is the syntax of SIMPLE NOMINALS: nouns and nominal expressions with lexical or phrasal modifiers such as attributive adjectives (e.g. long), demonstratives (e.g. this), articles (e.g. the), quantifiers (e.g. all), numerals (e.g. three), genitives (e.g. of the Sun), and pp-modifiers (e.g. in the ocean). Complex nominals including conjunctions, derived nominals, gerunds, nominal compounds, and relative clause modification have not been handled yet.

Data analysis: We first analyzed a data chart of simple nominals in each language. The chart focused on the syntactic well-formedness of nominal expressions, in particular, the order and dispensability of elements when the nominal expression acts as an argument (e.g. subject, object) to a verb or an adposition (i.e. preposition or postposition).

Shared templates overview

By design, the SG-LATTICE captures shared grammatical features in the given set of languages, whether they are due to universal, typological, genetic, or areal bases. As our research proceeded, we observed an atomization process whereby more and more grammatical properties were distinguished. This was because certain grammatical characterizations that seemed most natural for some language(s) were only partially relevant to others, which forced us to break them down into smaller parts so that other languages can use only the relevant parts.
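The merging mechanism just described, unification by inheritance, can be sketched as follows. This is our illustration under the same nested-dict encoding as before, not MCC's code; the template names mirror those introduced later in the paper.

    # A minimal sketch (ours) of composing a language-specific template by
    # MULTIPLE INHERITANCE, i.e. by unifying the SG-templates it inherits.

    def unify(fa, fb):
        """Non-destructive unification of nested-dict feature structures;
        returns None on a feature clash."""
        if isinstance(fa, dict) and isinstance(fb, dict):
            out = dict(fa)
            for feat, vb in fb.items():
                if feat in out:
                    u = unify(out[feat], vb)
                    if u is None:
                        return None                  # incompatible values
                    out[feat] = u
                else:
                    out[feat] = vb
            return out
        return fa if fa == fb else None

    def inherit(*templates):
        """Unify a sequence of templates into one composite template."""
        result = {}
        for t in templates:
            result = unify(result, t)
            assert result is not None, "incompatible templates"
        return result

    SG_NO_ARGUMENTS = {"arguments": "#"}             # saturates the category
    SG_NBR_AGR = {"result": {"agr": {"nbr": "[]"}}}  # number agreement slot
    SG_GDR_AGR = {"result": {"agr": {"gdr": "[]"}}}  # gender agreement slot

    # FR-N inherits saturation plus number and gender agreement:
    FR_N = inherit(SG_NO_ARGUMENTS, SG_NBR_AGR, SG_GDR_AGR)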
Modules in the SG-lattice: As the shared templates underwent atomization, we created sublattices corresponding to independent grammatical modules so that a grammar writer can make a language-specific combination of shared templates by consciously selecting one or more from each group. The existing subgroups are: (i) categorial grammar categories (the theory-dependent aspect of the shared grammar), (ii) common syntactic categories (theory-independent linguistic notions), (iii) grammatical agreement (to handle grammatical agreement within nominals), (iv) reference types (semantic features of the nominals, e.g. definite, indefinite, specific), (v) determiner types (to handle co-occurrence and order restrictions among determiners), and (vi) attributive modifier types (to handle order restrictions among attributive modifiers). We will focus on (i)-(iii) in this paper.

Kinds of SG-templates: SG-templates as they exist fall under the following types. The most general distinction can be made between ATOMIC and COMPOSITE templates. Atomic templates inherit from no other template. They result from the atomization process, and are primitive parts that a grammar writer can put together to create more complex templates. A composite template inherits from at least one other, to which a partial structure defined for itself may be added. We may also distinguish between UTILITY and SUBSTANTIVE templates. Utility templates contribute integral parts of categorial grammar categories such as how many arguments they need to combine with: none for a BASIC CATEGORY, and one or more for a FUNCTOR CATEGORY. Substantive templates supply grammatical categories and features expressed in terms of various linguistic notions. Specific examples are discussed below.

Highlights of shared grammatical atoms

The basic graph structure

Each word must be associated with a complete CUG feature structure. The current implementation uses a matrix notation for ACYCLIC DIRECTED GRAPHS. See Figure 2:

    [result:    [cat: [ ]        <- the syntactic type of α
                 index: [ ]      <- relative linear position of α
                 agr: [ ]        <- grammatical agreement features of α (optional)
                 feats: [ ]      <- pragmatic agreement features of α
                 type: [ ]       <- the functional type of α (see below)
                 elements: [ ]   <- elements within α
                 order: [ ]]     <- order of elements (see below)
     arguments: [ ]]             <- arguments sought (see below)

    Figure 2. The notation for a word whose resulting structure is α

A category is either SATURATED (looking for no argument) or UNSATURATED (needing to combine with one or more arguments). It is saturated when the value of ARGUMENTS is 'closed' with symbol #. An unsaturated category may seek one or more arguments, each of which is either unspecified ([ ]) or typed (e.g. [cat: N]). Overall saturation is sought in parsing. The parser assigns index numbers to words in the input string from left to right, and coindexes corresponding substructures under ELEMENTS. The ELEMENTS component currently has A for the word for which this structure is defined, B for the first argument, and C for the second argument. These labels simply flag PATHS for accessing particular elements. There can be any number of order-relevant labels corresponding to an element. These labels, with coindices with respective elements, are in the ORDER component, which is subject to the Word Order Constraint (discussed later). TYPE is the slot for assigning the pseudo-functional category ARG or NON-ARG that we found significant in the present cross-linguistic treatment of nominals (see below).
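The saturation test on this record is straightforward. The sketch below is our illustration: the field names mirror Figure 2, but the constructor and its encoding are assumptions, not the MCC data structures.

    # A sketch (ours) of the Figure 2 record with the saturation check.

    def make_category(cat, arguments="#", **slots):
        return {"result": {"cat": cat, "index": None, "agr": {},
                           "feats": {}, "type": None,
                           "elements": {}, "order": [], **slots},
                "arguments": arguments}

    def saturated(fs):
        """A category is saturated when ARGUMENTS is closed with '#'."""
        return fs["arguments"] == "#"

    ja_n = make_category("N")                                # basic category
    ge_n = make_category("N",                                # functor: a
                         arguments={"first": {"result": {"cat": "N"}},
                                    "rest": "#"})            # genitive N may
                                                             # take one N
    assert saturated(ja_n) and not saturated(ge_n)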
AGR(eement) and FEATS subgraphs contain grammatical and pragmatic agreement features, respectively (discussed later).

    atomic templates

    %SG-NO-ARGUMENTS: [arguments: #]                 <- saturates the category
    $SG-LEX: [result: [elements: [a: [lex: [ ]]]]]   <- has a slot for the word form
    %SG-WORD-FEATS-ARE-TOP-FEATS:                    <- passes the word's own features to the top
        [result: [feats: <1>
                  elements: [a: [feats: 1[ ]]]]]

    inheritance of composite templates

    %SG-WORD-FEATS-ARE-TOP-FEATS and $SG-LEX are unified into the general
    nominal template $SG-N, from which the language-specific templates
    JA-N, EN-N, FR-N, GE-N, and AR-N inherit.

    Figure 3. General N

A few more remarks about the notation follow. A value can be either atomic (e.g. N), a disjunction of atomic values enclosed in curly brackets (e.g. {N P}), or a complex feature structure. It can also be unspecified ([ ]). The identity of two or more values is forced by reentrant structures indicated by coindexing (e.g. 1[ ] and <1>). Such coreferring value slots automatically point to a single data structure entered through any one of the slots.

Universal mono-level category N

Category N: We posit the universal category N for nominals. Nominals here are those that realize ARGUMENTS such as subjects and objects. Nominals are more commonly labeled NP, a phrase typically built around N or CN (common noun), as in the phrase structure rule NP -> DET N as well as in the categorial grammar characterization of DET as a functor NP|CN (i.e. it combines with CN and builds NP) (e.g. Ades & Steedman 1982; Wittenburg 1986a). This BI-LEVEL view of nominals is motivated by facts in western European languages. In English, for instance, while cat or white cat cannot fill a subject position, a cat and this cat can. In contrast, while he can be a subject, it cannot be modified as in *this he or *strange he. This motivates the following category assignments, with a constraint that only NPs can be arguments: cat is CN, he is NP, a and this are NP|CN, and white and strange are CN|CN. This, however, requires that plurals and mass nouns be CN and NP at the same time, since cats, gold, white cats, white gold, these cats, and this gold can all be arguments. The count/mass distinction is also often blurred, since a singular count noun like cat may be used as a mass noun referring to the meat of the cat, and a mass noun like gold may be used as a singular count noun referring to a UNIT of gold or a KIND of gold (see e.g. Bach 1986). The boundary between NP and CN is at best FUZZY.

When we turn to other languages, the basis for the bi-level view vanishes. In Japanese, for instance, neko 'cat' can be an argument on its own, and the pronoun kare 'he' can be modified as in ano kare 'that he' and okashina kare 'strange he'. In short, there is no basic syntactic difference among count nouns, pronouns, and mass nouns (and no singular/plural distinction on a 'count' noun). All of them behave like plural and mass nouns in English. This supports a mono-level view of nominals, which we intend to capture with category N. Figure 3 shows the SG-templates relevant to the most general characterization of N in each language. SG-templates in the following illustrations are marked as follows: atomic templates SG-x (boldface), utility templates %SG-x, and substantive templates $SG-x.

At the most general level, the basic nominals in German (GE-N) and Arabic (AR-N) must be unsaturated because genitive-inflected Ns may take arguments. The basic nominals in Japanese (JA-N), English (EN-N), and French (FR-N), on the other hand, are basic categories that are saturated.[5] In addition, all but JA-N inherit relevant AGR(eement) templates (see below).

[5] Note that the English possessive marker 's is not treated as an inflection here.
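The contrast between the bi-level and mono-level analyses can be made concrete with the non-directed functional application rule introduced earlier. The sketch below is our illustration, with categories reduced to plain strings; it is not the parser's representation.

    # A sketch (ours) of the single reduction rule: a functor X|Y combines
    # with an adjacent Y on either side to build X.

    def apply_functor(left, right):
        """Try non-directed functional application on an adjacent pair."""
        for functor, arg in ((left, right), (right, left)):
            if "|" in functor:
                result, wanted = functor.split("|", 1)
                if wanted == arg:
                    return result
        return None                        # no reduction possible

    # Bi-level analysis: determiner NP|CN + CN -> NP
    assert apply_functor("NP|CN", "CN") == "NP"
    # Mono-level analysis: any adnominal modifier is N|N, so
    # 'ano kare' ("that he") reduces just like 'ano neko' ("that cat"):
    assert apply_functor("N|N", "N") == "N"
    # Direction does not matter for the reduction itself; a separate
    # component constrains linear order per language:
    assert apply_functor("N", "N|N") == "N"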
Crucially, note that what looks like a reasonable characterization of N in each language actually consists of a particular selection from the common set of primitives.

ARGUMENT and NON-ARGUMENT: We posit a pseudo-functional level of description in terms of ARG(ument) and NON-ARG for category N instead of the category-level distinction between NP and CN. ARG may function as an argument alone, and NON-ARG cannot. NON-ARG becomes ARG only by being combined with a certain modifier or by undergoing a semantic change (e.g. massifying). In this view, the ARG/NON-ARG distinction is grounded on a complex interaction of morphology, semantics, and syntax.

In English and German, singular count nouns (e.g. tree, Baum) are NON-ARG while plurals, mass (singular) nouns, proper names, and pronouns are ARG. The NON-ARG nouns become 'complete' ARG nominals either by being modified with determiners or by changing into mass nouns (typically changing an object reference into a property/substance reference, e.g., I used apple in my pie.).[6] In French, all forms of common nouns (i.e. singular, plural, and mass) are NON-ARG, in need of determiners to become ARG (e.g. J'ai vu *arbres 'I saw trees'; *Amour / L'amour est delicat 'Love is delicate'). In Japanese, there are few NON-ARG nouns (e.g., kata 'person' (HONORIFIC)), which can become ARG with any modifier such as a relative clause or an adjective (e.g. himana kata 'free person (HON.)').[7] In Arabic, the morphological distinction of nouns between ANNEXED vs. UNANNEXED corresponds to NON-ARG and ARG statuses, respectively.[8] For instance, the unannexed form qit..taani CAT-DUAL-NOM-UNANNEX 'two cats' may occur as subject alone whereas the annexed form qit..taa CAT-DUAL cannot. The latter must be modified with a noun-based modifier such as a genitive phrase, and this modifier must be unannexed (e.g. with rajulin MAN-GEN-UNANNEX: qit..taa rajulin 'man's two cats'). These facts in Japanese and Arabic show that the proposed functional distinction for nominals is motivated independently from the syntactic role of determiners, since neither language has modifiers of category DET of the kind we find in English, French, and German (more discussed later).

We realize that the ARG/NON-ARG distinction itself is not a final solution until fine-grained syntactic-semantic interdependence is fleshed out. For now, we simply posit pseudo-functional types ARG and NON-ARG, which are either changed or passed up within the nominal structure:[9]

    $SG-ARG:     [result: [type: arg]]
    $SG-NON-ARG: [result: [type: non-arg]]

Category N|N: Adnominal modifiers (N-MODs) are now universally N|N (i.e. a functor that combines with N and builds N). This includes both determiners and attributive modifiers. Figure 4 shows the SG-templates for the basic N-MOD. Different kinds of N-MOD must then distinguish whether they take one or two arguments and whether the resulting nominal with modification is ARG or NON-ARG. Each distinction is briefly illustrated below.

Two kinds of genitive: Genitive N-MOD functors may take different numbers of arguments cross-linguistically. An inflected genitive nominal (e.g. GE: Marias, AR: rajulin 'man's') takes one, while a genitive adposition (e.g. EN: of, JA: no) takes two. The former is captured with SG-INFLECTIONAL-GENITIVE-CASE-MOD, and the latter with SG-PARTICLE-GENITIVE-CASE-MOD. See Figure 5.

[6] In implementation, this latter process may be triggered by a unary rule COUNT->MASS.
[7] They are assigned a NON-ARG category MN (for 'modified noun') separate from the ARG category N. Any modifier changes it into ARG.
[8] ANNEXED here means 'needing to be annexed to a noun-based modifier', and UNANNEXED means 'completed'. These are also called NON-NUNATED and NUNATED forms, respectively, in Semitic linguistics (Aristar, personal communication).
[9] An intriguing direction is shown in Krifka's (1987) categorial grammar treatment. He assigns the singular count noun in English (i.e. our NON-ARG) an unsaturated nominal category looking for its numerical value both in syntax and semantics. The significance of determiners is here as suppliers of numerical values. How this approach can be extended to cover the NON-ARG nominals in Arabic and Japanese (which are not in need of numerical values per se) remains to be seen. Although it makes sense to see NON-ARG as a functor looking for more semantic determination, implementing it would require a reduction rule for TWO FUNCTORS LOOKING FOR EACH OTHER. The current system would cause an infinite regression with such a rule.
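The one- vs. two-argument distinction can be illustrated as follows. The encoding extends our earlier sketches and is an assumption; '#' closes the argument list as in Figure 2.

    # A sketch (ours) of the two genitive functor types: an inflected
    # genitive seeks one argument (the head N), while a genitive
    # adposition seeks two (the inner N and the head N).

    INFLECTIONAL_GENITIVE = {            # e.g. German 'Marias'
        "result": {"cat": "N"},
        "arguments": {"first": {"result": {"cat": "N"}},   # the head N
                      "rest": "#"}}

    PARTICLE_GENITIVE = {                # e.g. English 'of', Japanese 'no'
        "result": {"cat": "N"},
        "arguments": {"first": {"result": {"cat": "N"}},   # inner N
                      "rest": {"first": {"result": {"cat": "N"}},  # head N
                               "rest": "#"}}}

    def arity(fs):
        """Count the arguments a functor seeks."""
        n, args = 0, fs["arguments"]
        while args != "#":
            n, args = n + 1, args["rest"]
        return n

    assert arity(INFLECTIONAL_GENITIVE) == 1
    assert arity(PARTICLE_GENITIVE) == 2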
    atomic templates

    %SG-HEAD-FEATS-ARE-TOP-FEATS:        <- passes the features of the second element to the top
        [result: [feats: <1>
                  elements: [b: [feats: 1[ ]]]]]
    %SG-FIRST-ARGUMENT:                  <- slot for the first argument
        [result: [elements: [b: <1>]]
         arguments: [first: [result: 1[ ]]]]
    %SG-GET-ORDER:                       <- passes the ORDER content of the first argument to the top
        [result: [order: [<1>]]
         arguments: [first: [result: [order: 1[ ]]]]]
    $SG-MOD:                             <- for a category-constant functor MOD (see below)
        [result: [cat: 4[ ]
                  elements: [a: [index: <1>]
                             b: <3>]
                  order: [[mod: 1[ ]] [head: 2[ ]]]]
         arguments: [first: [result: 3[cat: <4> index: <2>]]]]

    inheritance of composite templates

    $SG-N (above), %SG-HEAD-FEATS-ARE-TOP-FEATS, %SG-FIRST-ARGUMENT,
    %SG-GET-ORDER, and $SG-MOD are unified into $SG-N-MOD, the template
    for the general adnominal modifier.

    Figure 4. General N-MOD

    atomic templates

    %SG-ARGUMENTS-REST-SATURATED:        <- saturates the second argument
        [arguments: [rest: #]]
    %SG-ONLY-TWO-ARGUMENTS:              <- no more than two arguments sought
        [arguments: [rest: [first: [arguments: #] rest: #]]]
    $SG-GENITIVE:                        <- assigns the genitive case feature
        [result: [elements: [a: [feats: [case: genitive]]]]]

    inheritance of composite templates

    $SG-N-MOD (above) and $SG-CASE-MOD, the general case-mod,
        [result: [elements: [a: [cat: {P N}          <- P or N
                                 feats: [mod-type: case-mod]]]]]
    feed, together with $SG-GENITIVE, into
    $SG-INFLECTIONAL-GENITIVE-CASE-MOD (chooses category N; e.g. GE: Marias,
    AR: rajulin 'man's') and $SG-PARTICLE-GENITIVE-CASE-MOD (chooses
    category P; e.g. EN: of, JA: no).

    Figure 5. Genitive Case MOD

Non-universal determiner category: In the present approach, DET(erminer) is a modifier type (including articles, demonstratives, quantifiers, numerals, and possessives) such that at least one of its members is needed for making an ARG nominal out of a NON-ARG. The fact that a nominal with a determiner is always ARG translates into SG-DET inheriting from SG-ARG among others. DET is present in English, German, and French, but not in Japanese or Arabic (or Russian or Chinese). Demonstratives, quantifiers, numerals, and possessives in the latter languages do not share the syntactic function of DET. We suspect that the presence of DET is an areal property of western European languages.

The sublattice in Figure 6 highlights two aspects of DET. One is the difference between DET and ADJ(ective) in English, German, and French with respect to the ARG status of the resulting nominal. DET always builds ARG, cancelling whatever the type of the incoming nominal, whereas ADJ passes the type of the incoming nominal to the top. The other is the place of demonstratives in relation to DET. Every language has demonstratives encoding two or three degrees of speaker proximity (e.g. JAPANESE: kono (close to the speaker), sono (close to the addressee), and ano (away from either)), but they belong to the class of determiners only if the language has DET.

    inheritance of composite templates

    $SG-DET inherits $SG-ARG (see above), %SG-ARGUMENTS-REST-SATURATED
    (see above), $SG-N-MOD (see above), and various templates for
    constraining the cooccurrence and order inside DET; $SG-DEM(onstrative)
    in turn inherits from $SG-DET in English, German, and French.
    $SG-ATTRIBUTIVE-ADJECTIVE inherits $SG-N-MOD and
    %SG-HEAD-TYPE-IS-TOP-TYPE: [result: [type: <1>
                                         elements: [b: [type: 1[ ]]]]]
    Language-specific leaves include EN-ATTRIB-ADJ big, GE-ATTRIB-ADJ gross,
    FR-ATTRIB-ADJ grand, AR-ATTRIB-ADJ kabiyr, and JA-ATTRIB-ADJ ookii.

    Figure 6. DEM and ATTRIB-ADJ in relation to DET
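The DET/ADJ contrast just described can be sketched as a single type-propagation rule. This is our illustration of the behavior of $SG-ARG and %SG-HEAD-TYPE-IS-TOP-TYPE, not the template machinery itself.

    # A sketch (ours): DET always makes the resulting nominal ARG, while
    # ADJ passes the incoming nominal's type up unchanged.

    def modify(functor, nominal_type):
        """Return the type of the nominal after modification."""
        if functor == "DET":
            return "arg"                  # inherits $SG-ARG: cancels input type
        if functor == "ADJ":
            return nominal_type           # head's type becomes the top type
        raise ValueError(functor)

    assert modify("DET", "non-arg") == "arg"      # 'a tree' is complete
    assert modify("ADJ", "non-arg") == "non-arg"  # '*white tree' still is not
    assert modify("ADJ", "arg") == "arg"          # 'white trees' stays complete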
Grammatical agreement (AGR)

Two kinds of features are distinguished: linguistic features relevant to GRAMMATICAL AGREEMENT (e.g. French grammatical gender, la /*le table 'a table', f.), and referent features relevant to PRAGMATIC AGREEMENT (e.g. using she to refer to a female person; using appropriate numeral classifiers for counting objects in Japanese). The former is under attribute AGR, and the latter is under FEATS. The N-internal grammatical agreement (AGR) requires that certain features of the HEAD nominal must agree with those of MOD. For instance, English has number agreement (e.g. this book, *those book, *this books). Among the five languages under consideration, all but Japanese have AGR. Although there is cross-linguistic variation in AGR features, it is not random (Moravcsik 1978). Table I sums up the N-internal AGR features in the four languages.

              NUMBER      GENDER    CASE               DEFINITE   ANNEXED
    ARABIC:   SG DU PL    M F       NOM ACC GEN        + -        + -
    GERMAN:   SG PL       M F N     NOM ACC GEN DAT
    FRENCH:   SG PL       M F
    ENGLISH:  SG PL

    Table I. N-internal Agreement Features

All AGR features go under attribute AGR, so that its presence simply corresponds to the presence of grammatical agreement in a language. EN-N, for instance, inherits the shared template for number agreement, and FR-N those for number and gender agreement. See below:

    $SG-NBR-AGR: [result: [agr: [nbr: <1>]
                           elements: [a: [feats: [nbr: 1[ ]]]]]]
    $SG-GDR-AGR: [result: [agr: [gdr: <1>]
                           elements: [a: [feats: [gdr: 1[ ]]]]]]

Separating AGR and FEATS enables us to create SG-templates that impose the most general agreement constraint regardless of the precise content of agreement features. Three agreement templates produce the combined effect of the N-internal agreement constraint: SG-AGR, SG-AGR-ARGUMENTS, and the composite of the two, SG-AGR-WITH-ARGUMENTS. See Figure 7.

    atomic templates

    %SG-AGR: [result: [agr: <1>
                       elements: [a: [agr: 1[ ]]]]]
    %SG-AGR-ARGUMENTS: [result: [agr: <1>]
                        arguments: [first: [result: [agr: 1[ ]]]]]

    Figure 7. AGREEMENT (the composite templates inherit these together
    with $SG-NBR-AGR, $SG-GDR-AGR, etc.; example leaves in the source
    include Japanese inu, English dogs/these/small, and French
    chiens/ces/petits)

The reentrancies impose the strict identity of AGR features: (i) $SG-AGR -- between the topmost structure and the element that the graph is defined for, (ii) $SG-AGR-ARGUMENTS -- between the topmost structure and the first argument, and (iii) $SG-AGR-WITH-ARGUMENTS -- among all three. (i) goes into ALL NOMINALS, passing the nominal's AGR features to the top level. This is because the AGR features must always be available at the top level of a nominal so that they can be used when the nominal is further modified. (ii) goes into ADNOMINAL MODIFIERS, passing the head nominal's AGR features to the top level. (iii) goes into ONLY THOSE ADNOMINAL MODIFIERS SUBJECT TO THE AGR CONSTRAINT: for instance, demonstratives (e.g. these) but not attributive adjectives (e.g. small) in English, and both demonstratives and adjectives in French (see this difference in the above inheritance). This is an example where a better language-specific treatment is obtained from the grammar-sharing perspective. If only English were handled, one might simply force the identity of NBR features amidst all kinds of other features, but in the light of cross-linguistic variation and invariants, it lends itself naturally to separating out two kinds of features that correspond to different semantic interpretation processes.
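The effect of the AGR reentrancy can be sketched directly. This is our illustration of the constraint, with flat feature sets standing in for the AGR subgraphs; the clash behavior follows the English examples above.

    # A sketch (ours): a modifier subject to the agreement constraint must
    # unify its AGR features with those of the head nominal, and the
    # merged features percolate to the top level.

    def agree(mod_agr, head_agr):
        """Unify two flat AGR feature sets; None signals an agreement clash."""
        merged = dict(head_agr)
        for feat, value in mod_agr.items():
            if feat in merged and merged[feat] != value:
                return None                       # e.g. *this books
            merged[feat] = value
        return merged                             # available for further MODs

    assert agree({"nbr": "pl"}, {"nbr": "pl"}) == {"nbr": "pl"}   # these books
    assert agree({"nbr": "sg"}, {"nbr": "pl"}) is None            # *this books
    # An English adjective carries no AGR at all, so it never clashes:
    assert agree({}, {"nbr": "pl"}) == {"nbr": "pl"}              # small books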
Category constancy and word order typology

In connecting word order typology and categorial grammar, we have benefited from the work of Greenberg (1966), Lehmann (1973), Vennemann (1974, 1976, 1981), Keenan (1979), Flynn (1982), and Hawkins (1984). Among these, we have a first-cut implementation of Vennemann's (1981) and Flynn's (1982) view that functor types based on CATEGORY CONSTANCY have a significant relation to the default word order of a language. A functor is CATEGORY-CONSTANT if it builds the same category as its argument(s). It is CATEGORY-NON-CONSTANT if it builds a different category from its argument(s). These notions are also called ENDOTYPIC and EXOTYPIC, respectively, by Bar-Hillel (1953), and are crucially used in Flynn's high-level word order conventions. The definitions of the notions MOD (modifier), HEAD (head), FN (function), and ARG (argument) follow:

- MOD is a category-constant functor (X|X) that combines with HEAD (X) (see above for $SG-MOD).
- FN is a category-non-constant functor (Y|X) that combines with ARG (X).

    category-constant              category-non-constant
           X                              Y
          / \                            / \
       X|X   X                        Y|X   X
       MOD   HEAD                     FN    ARG
    e.g. N|N  N                   e.g. PP|N  N
        adj   noun                    prep   noun
        red   roof                    for    Max

There is cross-linguistic evidence that MOD-HEAD and FN-ARG orders tend to go in opposite directions. This amounts to two basic word order types in languages:

    ORDER TYPE 1: ARG < FN and MOD < HEAD
    ORDER TYPE 2: FN < ARG and HEAD < MOD
    (where < reads as 'precedes')

The N-level default word order in a language is determined as follows. Every language has ADPOSITIONS (prepositions and postpositions), universally a category-non-constant functor PP|N. A postpositional language (i.e. a language that uses only or predominantly postpositions) then belongs to TYPE 1 (ARG < FN), and a prepositional language belongs to TYPE 2 (FN < ARG). In the present case, EN, GE, FR, and AR are prepositional while JA is postpositional. The default MOD order is most faithfully observed in Arabic (HEAD < MOD) and Japanese (MOD < HEAD), with few exceptions. The three European languages, however, observe the default order only with 'heavier' (i.e. phrasal or clausal) modifiers, namely genitives, pp-modifiers, and relative clauses.
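The typological default just stated reduces to a small table, sketched below under our encoding; the override for lexical modifiers is taken up next.

    # A sketch (ours): the adposition type of a language fixes its ORDER
    # TYPE, which fixes the default FN/ARG and MOD/HEAD linearizations.

    def default_orders(adposition_type):
        """Map 'prepositional'/'postpositional' to default orderings."""
        if adposition_type == "postpositional":      # ORDER TYPE 1
            return {"FN-ARG": "ARG < FN", "MOD-HEAD": "MOD < HEAD"}
        if adposition_type == "prepositional":       # ORDER TYPE 2
            return {"FN-ARG": "FN < ARG", "MOD-HEAD": "HEAD < MOD"}
        raise ValueError(adposition_type)

    LANGUAGES = {"EN": "prepositional", "GE": "prepositional",
                 "FR": "prepositional", "AR": "prepositional",
                 "JA": "postpositional"}

    for lang, adp in LANGUAGES.items():
        print(lang, default_orders(adp))
    # Lexical modifiers in EN/GE/FR override the HEAD < MOD default; the
    # paper frees them from the MOD order constraint via a separate
    # category (MOD2), as discussed below.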
Lexical modifiers, including numerals, demonstratives, and adjectives (more or less), go in the opposite ordering. The exceptionally ordered MODs of the five languages revealed an implicational chain among modifiers: Numerals < Demonstratives < Adjectives < Genitives < Relative clauses. Exceptional order was found with those MODs starting from the left end of this hierarchy: JA: marked use of Numerals; AR: unmarked use of Numerals and Demonstratives; FR: Numerals, Demonstratives, and marked use of Adjectives; EN & GE: Numerals, Demonstratives, and Adjectives. The generalization is that a non-default order for a modifier type x implies the non-default order for other types located to the LEFT of x in the given chain. What we found supports the general implicational hierarchy that Hawkins (1984) found in his cross-linguistic study. We can still maintain, therefore, that there is such a thing as the default order, with the qualification that it may be overridden by non-random subclasses. In our current implementation, we simply assign another category MOD2 to those 'exceptional' modifiers in order to free them from the general order constraint on MOD, which we hope to improve in the future.[10]

Potential problems and solutions

There are two potential problems in an effort to develop a shared grammar as described here. One is the need for serious cooperation among the developers. A small change in shared templates can always affect language-specific templates that someone else is working on. The other problem is the sheer complexity of the inheritance lattice. Both problems can be most effectively reduced by a sophisticated editor tool.

Conclusions and future prospects

We have shown a specific implementation of grammar sharing using graph unification by inheritance. Although the case discussed covers only simple nominals in five languages, we believe that the fundamental process that we call GRAMMATICAL ATOMIZATION will remain crucial in developing a shared grammar of any structural complexity and linguistic coverage. The specific merits of this process are that (a) it tends to prevent the grammar writer from implementing treatments that work only for a language or a language type, and that (b) it provides insights as to how certain conflated properties in a language actually consist of smaller independent parts. In the end, when a prototype shared grammar attains a reasonable scale, we hope to verify the prediction that it will facilitate adding coverage for new languages.

The purpose of this work at MCC was to demonstrate the feasibility of a shared syntactic rule base for dissimilar languages. We only assumed that languages are used to convey information contents that can be represented in a common knowledge base. As the next step, therefore, we have chosen to connect syntax with 'deeper' levels of information processing (i.e. semantics, discourse, and knowledge base) rather than continuing to increase the syntactic coverage alone. Our current effort is on developing a blackboard-like system for controlling various knowledge sources (i.e. morphology, syntax, semantics, discourse, and a commonsense knowledge base (MCC's CYC, Lenat and Feigenbaum 1987)). In the future, we hope to see a shared grammar integrated in a full-blown interface tool for man-machine communication.

Acknowledgments

This shared grammar work is a collaborative effort of a team at MCC. I am especially indebted to my fellow linguists, Anthony Aristar and Carol Justus, for their insights into multilingual facts and numerous discussions. I would also like
to thank Rich Cohen, Martha Morgan, Elaine Rich, Jonathan Slocum, Krystyna Wachowicz, and Kent Wittenburg for valuable comments and discussions at various phases of the work. Thanks also go to Al Mendall and Michael O'Leary for implementing the interface tool, and to anonymous ACL reviewers for helpful comments. I am responsible, however, for this particular exposition of the work and remaining shortcomings.

[10] We envision using a data structure of type inheritance lattice defined for each language to express word order constraints in order to handle non-default ordering. The basic idea is that an order constraint stated on a descendant (e.g. DEM < head) overrides that stated on its ancestors (e.g. head < MOD). This differs from GPSG's LP rules (Gazdar & Pullum 1981; Gazdar et al. 1985; Uszkoreit 1986) in that the order constraints apply to items located anywhere in the derivational tree structure, not limited to sister constituents, and the pieces of an item can be scattered in the tree. It is in spirit similar to LFG's functional precedence constraints (Kaplan 1988; Kameyama forthcoming).

References

Ades, Anthony and Mark Steedman. 1982. On the order of words. Linguistics and Philosophy, 4, 517-558.
Aristar, Anthony. 1988. Word-order constraints in a multilingual categorial grammar. To appear in the Proceedings for the 12th International Conference on Computational Linguistics, Budapest.
Bach, Emmon. 1986. The algebra of events. Linguistics and Philosophy, 9, 5-16.
Bar-Hillel, Y. 1953. A quasi-arithmetical notation for syntactic description. Language, 29(1), 47-58.
van Benthem, Johan. 1986. Categorial grammar. Essays in Logical Semantics (Chapter 7). Dordrecht: Reidel, 123-150.
Flickinger, Daniel, Carl Pollard, and Thomas Wasow. 1985. Structure-sharing in lexical representation. The Proceedings for the 24th Annual Meeting of the Association for Computational Linguistics.
Flynn, Michael. 1982. A categorial theory of structure building. In G. Gazdar, G. Pullum, and E. Klein (eds), Order, Concord, and Constituency. Dordrecht: Foris.
Gazdar, Gerald and Geoffrey K. Pullum. 1981. Subcategorization, constituent order, and the notion 'head'. In Moortgat, M., H. v.d. Hulst, and T. Hoekstra (eds), The Scope of Lexical Rules. Dordrecht, Holland: Foris, 107-123.
-----; Ewan Klein; Geoffrey K. Pullum; and Ivan A. Sag. 1985. Generalized Phrase Structure Grammar. Oxford, England: Blackwell Publishing and Cambridge, Mass.: Harvard University Press.
Greenberg, Joseph. 1966. Some universals of grammar with particular reference to the order of meaningful elements. In J. Greenberg (ed.), Universals of Language (2nd edition). Cambridge, Mass.: The MIT Press, 73-113.
Hawkins, John. 1984. Modifier-head or function-argument relations in phrase structure? The evidence of some word order universals. Lingua, 63, 107-138.
Kameyama, Megumi. forthcoming. Functional precedence conditions on overt and zero pronominals. Manuscript.
Kaplan, Ronald M. 1988. Three seductions of computational psycholinguistics. In Whitelock, Peter; Harold Somers, Paul Bennett, Rod Johnson, and Mary McGee Wood (eds), Linguistic Theory and Computer Applications. Academic Press.
Karttunen, Lauri. 1986. Radical lexicalism. Paper presented at the Workshop on Alternative Conceptions of Phrase Structure at the Summer Linguistic Institute, New York. [To appear in Kroch, Anthony et al. (eds), Alternative Conceptions of Phrase Structure.]
Keenan, Edward. 1979. On surface form and logical form. Studies in the Linguistic Sciences (special issue), 8(2).
Krifka, Manfred. 1987. Nominal reference and temporal constitution: towards a semantics of quantity. In J. Groenendijk, M. Stokhof, and F. Veltman (eds), Proceedings of the Sixth Amsterdam Colloquium, University of Amsterdam, Institute for Language, Logic, and Information, 153-173.
Lehmann, Winfred P. 1973. A structural principle of language and its implications. Language, 49, 47-66.
Lenat, Douglas B. and Edward A. Feigenbaum. 1987. On the thresholds of knowledge. Paper presented at the Workshop on Foundations of AI, MIT, June. Also in the Proceedings for the International Joint Conference on Artificial Intelligence, Milan.
Montague, Richard. 1974. The proper treatment of quantification in English. In Rich Thomason (ed.), Formal Philosophy: Selected Papers of Richard Montague. New Haven: Yale, 247-279.
Moravcsik, Edith. 1978. Agreement. In J. H. Greenberg et al. (eds), Universals of Human Language, Vol. 3. Stanford: Stanford University Press.
Pollard, Carl and Ivan Sag. 1987. Head-driven Phrase Structure Grammar. The course reader for the Linguistic Institute at Stanford University.
Schmerling, Susan. 1983. Two theories of syntactic categories. Linguistics and Philosophy, 6, 393-421.
Shieber, Stuart. 1984. The design of a computer language for linguistic information. The Proceedings for the 10th International Conference on Computational Linguistics, 362-366.
-----. 1986. An Introduction to Unification-based Approaches to Grammar. CSLI Lecture Notes 4. Stanford: CSLI. (Available from the University of Chicago Press.)
Slocum, Jonathan. 1988. Morphological processing in the Nabu system. In the Proceedings for the 2nd Conference on Applied Natural Language Processing. ACL.
----- and Carol Justus. 1985. Transportability to other languages: the natural language processing project in the AI program at MCC. ACM Transactions on Office Information Systems, 3(2), 204-230.
Uszkoreit, Hans. 1986a. Constraints on order. Stanford, CA: CSLI Report No. CSLI-86-46.
-----. 1986b. Categorial unification grammars. The Proceedings for the 11th International Conference on Computational Linguistics, 187-194.
Vennemann, Theo. 1974. Topics, subjects and word order: From SXV to SVX via TVX. In J. M. Anderson and C. Jones (eds), Historical Linguistics, I. Amsterdam: North-Holland, 339-376.
-----. 1976. Categorial grammar and the order of meaningful elements. In A. Juilland (ed.), Linguistic studies offered to Joseph Greenberg on the occasion of his sixtieth birthday. California: Saratoga, 615-634.
-----. 1981. Typology, universals and change of language. Paper presented at the International Conference on Historical Syntax, Poznan.
----- and Ray Harlow. 1977. Categorial grammar and consistent basic VX serialization. Theoretical Linguistics, 4(3), 227-254.
Wittenburg, Kent. 1986a. Natural language processing with combinatory categorial grammar in a graph-unification-based formalism. Doctoral Dissertation, University of Texas at Austin.
-----. 1986b. A parser for portable NL interfaces using graph-unification-based grammars. The Proceedings for the 5th National Conference on Artificial Intelligence, 1053-1058.
SYNTACTIC APPROACHES TO AUTOMATIC BOOK INDEXING

Gerard Salton
Department of Computer Science
Cornell University
Ithaca, NY 14853

[This study was supported in part by a grant from OCLC Inc., and in part by the National Science Foundation under grant IRI-87-02735.]

ABSTRACT

Automatic book indexing systems are based on the generation of phrase structures capable of reflecting text content. Some approaches are given for the automatic construction of back-of-book indexes using a syntactic analysis of the available texts, followed by the identification of nominal constructions, the assignment of importance weights to the term phrases, and the choice of phrases as indexing units.

INTRODUCTION

Book indexing is of wide practical interest to authors, publishers, and readers of printed materials. For present purposes, a standard entry in a book index may be assumed to be a nominal construction listed in normal phrase order, or appearing in some permuted form with the principal term as phrase head. Cross-references ("see" or "see also" entries) between index entries are also normally used in the index. Excerpts from two typical book indexes appear in Fig. 1.

Attempts have been made over the years to mechanize the book indexing task, based in part on the occurrence characteristics of certain content words in the document texts [Borko, 1970], and in part on more ambitious syntactic methodologies [Dillon, 1983]. However, as of now, completely viable automatic book indexing methods are not available. Two main research advances may, however, lead to the development of improved automatic book indexing procedures. These include the generation of advanced syntactic analysis procedures, capable of analyzing unrestricted English texts, as well as the construction of powerful automatic indexing systems using sophisticated term weighting systems to assess the importance of the indexing units [Salton 1975a, 1975b]. By joining the available linguistic procedures with the available know-how in automatic indexing, satisfactory book indexing systems may be developed.

AUTOMATIC PHRASE CONSTRUCTION

Book indexing systems differ from standard automatic text indexing systems because complex, multi-word phrases are normally used for indexing purposes rather than the single term entries that are preferred in conventional automatic indexing systems. The phrase generation system described in this note is based on an automatic syntactic analysis of the available texts followed by a noun-phrase identification process using parse trees as input and producing lists of nominal constructions. The parsing system used in this study is based on an augmented phrase structure grammar, and was originally designed for use in the EPISTLE text-critiquing system (Heidorn, 1982; Jensen, 1983).[1] A typical document abstract is shown in Fig. 2, and the output produced by the syntactic analysis program for sentence 2 of the document is shown in Fig. 3. It may be noted that the syntactic output appears in the form of a standard phrase marker, the various levels of the syntax tree being listed in a column format from left to right. During the analysis, a head is identified for each syntactic constituent, identified by an asterisk (*) in the output. Thus in Fig. 3, the VERB is the main head of the sentence; the head of the noun phrase preceding the main verb is the NOUN representing the term "operations", etc.

The phrase formation system used in this study builds two-term phrases by combining the head of a constituent with the head of each constituent that modifies it (Fagan 1987a, 1987b). For the sample sentence of Fig. 3, such a strategy produces the phrases

    development - exception
    dictionary - development
    negative - dictionary
    system - operations

In the phrase output, the dependent term is listed first in each case, followed by the governing term. Note that the phrase generation system identifies apparently reasonable constructions such as "dictionary development" and "system operations", but not the unwanted phrases "exception operations" or "exception systems".

[1] The writer is indebted to the IBM Corporation and to Dr. George Heidorn for making available the PLNLP parsing system for use at Cornell University.
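A sketch of this head-modifier pairing strategy follows. The tree encoding, each constituent reduced to its head word plus the constituents that modify it, is our simplification of the parse trees of Fig. 3, not the EPISTLE format itself.

    # A sketch (ours): pair the head of each constituent with the head of
    # every constituent that modifies it, dependent term first.

    def constituent(head, modifiers=()):
        return {"head": head, "modifiers": list(modifiers)}

    def two_term_phrases(node):
        pairs = []
        for mod in node["modifiers"]:
            pairs.append((mod["head"], node["head"]))
            pairs.extend(two_term_phrases(mod))   # recurse into the modifier
        return pairs

    # "With the exception of the development of a negative dictionary,
    #  all system operations are completely automatic."
    exception = constituent("exception",
                    [constituent("development",
                        [constituent("dictionary",
                            [constituent("negative")])])])
    operations = constituent("operations", [constituent("system")])
    for dep, gov in two_term_phrases(exception) + two_term_phrases(operations):
        print("%s - %s" % (dep, gov))
    # development - exception
    # dictionary - development
    # negative - dictionary
    # system - operations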
AUTOMATIC PHRASE ASSIGNMENT

An automatic phrase construction system generates a large number of phrases for a given text item. Fig. 4 lists all the phrases produced for the abstract of Fig. 2. Phrases occurring in the document title are identified by the letter T, and phrases obtained more than once for a given document are identified by a frequency marker (2) in Fig. 4. The output of Fig. 4 could be used directly in a semi-automatic indexing environment by letting the user choose appropriate index entries from the available list. The standard entries from the figure might then be manually chosen for indexing purposes by the document author, or by a trained indexer.

In a fully automatic indexing system, additional criteria must be used, leading to the choice of some of the proposed phrase constructions, and the rejection of some others. The following criteria, among others, may be useful:

- For sentences that produce more than one acceptable syntactic analysis output, all analyses except the first one may be eliminated (in the Heidorn-Jensen analyzer, multiple analyses are arranged in decreasing order of presumed correctness).
- Phrases consisting of identical juxtaposed words ("computations-computation" in Fig. 4) may be eliminated.
- Phrases consisting of more than two words (e.g. "document-retrieval-system") may be given preference in the phrase assignment process.
- Phrases occurring in document titles and/or section headings may be given preference.
- Noun-noun constructions might be given preference over adjective-noun constructions.

A further choice of phrases, as well as a phrase ordering system in decreasing order of apparent desirability, can be implemented by assigning a phrase weight to each phrase and listing the phrases in decreasing weight order. Two different frequency criteria are important in phrase weighting:

- The frequency of occurrence of a construct in a given document, or document section, known as the term frequency (tf).
- The number of documents, or document sections, in which a given construct occurs, known as the document frequency (df).[2]

The best constructs for indexing purposes are those exhibiting a high term frequency, and a relatively low overall document frequency. Such constructs will distinguish the documents, or document sections, to which they are assigned from the remainder of the collection.

[2] For book indexing purposes, a book can be broken down into sections, or paragraphs; the term frequency and document frequency factors are then computed for the individual book components.
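The two frequency factors are easy to compute over a sectioned book, as the following sketch (ours) shows; the phrase-list-per-section input format is an assumption.

    # A sketch (ours) of the two frequency factors: tf counts occurrences
    # of a phrase within one book section, df counts the sections in
    # which the phrase occurs at all (cf. footnote 2).

    from collections import Counter

    def tf_df(sections):
        """sections is a list of phrase lists, one per book section;
        returns (per-section tf Counters, collection-wide df Counter)."""
        tfs = [Counter(phrases) for phrases in sections]
        df = Counter()
        for tf in tfs:
            df.update(tf.keys())          # each section counted at most once
        return tfs, df

    sections = [["retrieval system", "document retrieval"],
                ["retrieval system", "retrieval system"]]
    tfs, df = tf_df(sections)
    assert tfs[1]["retrieval system"] == 2 and df["retrieval system"] == 2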
The corresponding term weighting system, known as tf.idf, is computed by multiplying the term frequency factor by an inverse document frequency factor.

Fig. 5 shows selected phrase output based in part on the use of automatically derived term weights. The top part of the figure contains the automatically derived constructs containing more than two terms. These might be used for indexing purposes regardless of term weight. In addition, the two-term phrases whose term frequency exceeds 1 in the document might also be used for indexing purposes. This would add the 9 phrases listed in the center portion of Fig. 5.

Some of the phrases with tf > 1 have either a very high document frequency (125 for "retrieval system") or a very low document frequency of 1, meaning that the phrase occurs only in the single document 659. In practice, a reasonable indexing policy consists in choosing phrases for which tf > k1 and k2 < df < k3 for suitable parameters k1, k2, and k3. When these parameters are set equal to 1, 1, and 100, respectively, the 5 phrases identified by asterisks in Fig. 5 are chosen as indexing units.

The bottom part of Fig. 5 shows a ranked phrase list in decreasing order according to a composite (tf x idf) phrase weight. Using such an ordered list, a typical indexing policy consists in choosing the top n entries from the list, or choosing entries whose weight exceeds a given threshold T. When T is chosen as 0.1, the 12 phrases listed at the bottom of Fig. 5 are produced. It may be noted that most of the terms listed in Fig. 5 appear to be reasonable indexing units.
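The selection policies just described can be sketched as follows. This is our illustration: the paper does not specify the exact normalization behind the weights of Fig. 5, so the weight formula below is an assumption, and only the multi-word, tf/df-window, and top-n policies come from the text.

    # A sketch (ours) of the phrase selection policies described above.

    import math

    def phrase_weight(tf, df, n_docs):
        return tf * math.log(n_docs / df)         # unnormalized tf x idf

    def select_phrases(phrases, n_docs, k1=1, k2=1, k3=100, top_n=12):
        """phrases maps a phrase string to its (tf, df) pair."""
        chosen = [p for p, (tf, df) in phrases.items()
                  if len(p.split()) > 2            # multi-word constructs
                  or (tf > k1 and k2 < df < k3)]   # tf/df window policy
        ranked = sorted(phrases, reverse=True,
                        key=lambda p: phrase_weight(*phrases[p], n_docs))
        return chosen, ranked[:top_n]              # plus top-n by weight

    phrases = {"document retrieval system": (1, 1),
               "retrieval system": (2, 125),       # too common a pair
               "term-term association": (2, 2)}
    print(select_phrases(phrases, n_docs=1460))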
Jensen, K., Heidorn, G.E., Miller, L.A., and Ravin, Y., 1983, Parse Fitting and Prose Fixing: Getting Hold on Ill-Formedness, American Journal of Computational 206 Linguistics, 9:3-4, 147-160. Kucera, H., 1985, Uses of On-Line Lexicons, Proceedings First Conference of the U.W. Centre for the New Oxford English Diction- ary: Information in Data, University of Waterloo, 7-10. Salton, G., 1975a, A Theory of Indexing, Regional Conference Series in Applied Mathematics, No. 18, Society of Industrial and Applied Mathematics, Philadelphia, PA. Salton, G., Yang, C.S., and Yu, C., 1975b, A Theory of Term Importance in Automatic Text Analysis, Journal of the ASIS, 26:1, 33-44. Wa!}:er, D.E., 1987, Knowledge Resource Tools for Analyzing Large Text Files, in Machine Translation: Theoretical and Methodological Issues, Sorgei Nirenburg, editor, Cambridge University Press, Cam- bridge, England, 247-261. 207 Game tree, 259-270 Garbage collection, 169-178 Go to statement, 11 Graphs, 282-334 activity networks, 310-324 adjacency matrix, 287-288 adjacency lists, 288-290 adjacency multi lists, 290-292 bipartite, 329 bridge, 334 definitions, 283-287 Eulerian walk, 282 incidence matrix, 331 inverse adjacency lists, 290 orthogonal lists, 291 representations, 287-292 shortest paths, 301-308 spanning trees, 292-301 transitive closure, 296, 308-309 Data security, 360, 390-394 DBTG (Data Base Task Group), 377-380 Deadlock prevention, 395-396 Decision support system, 7, 9, 358-359 Decomposition of relations, 394 Deductive system, 259, 356, 420 Deep indexing, 55 Deep structure of language, 275 Default exit, 343 Delay cost (see Cost analysis) Density(see Document space density) Dependency (see Functional dependency; Term dependency model) Depth-first search, 223 Descriptive cataloging, 53 Deterioration, 225-226, 233 DIALOG system, 30-34, 38, 46-48 Dice coefficient, 203 Dictionary, 56-57,101-103, 259-263, 285-286 Dictionary format, 57 in STAIRS, 36 Figure 1. Typical Book Index Entries Document 659 .T A Highly Associative Document Retrieval System .W This paper describes a document retrieval system implemented with a subset of the medi- cal literature. With the exception of the development of a negative dictionary, all system operations are completely automatic. Introduced are methods for computation of term-term association factors, indexing, assignment of term-document relevance values, and computa- tions for recall and relevance. High weights are provided for low-frequency terms, and retrieval is performed directly from highly connected term-document files without elaboration. Recall and relevance are based on quantitative internal system computations, and results are compared with user evaluations. Figure 2. Typical Document Abstract 208 DECL PP PREP DET NOUN* PP "with" AI~* "exception" PREP DET NOUN* PP NP QUANT ADJ* NP NOUN* NOUN* "operations" VERB* "are" AJP AVP ADV* ADJ* "automatic" PUNC "" "the" "or' ADJ* "the" "development" PREP "of' DET ADJ* AJP ADJ* NOUN* "dictionary" PUNC " " "all" "system" "completely" "a" "negative" Figure 3. 
    assignment computation          low-frequency terms
    association assignment          medical literature
    association computations        negative dictionary
    association factors             quantitative computations
    association indexing            recall computations*
    associative retrieval (T)*      relevance values*
    associative system (T)          retrieval system (T)
    computations computation        subset implemented
    computation methods             system computations
    connected file                  system implemented
    development exception           system operations
    dictionary development          term-document files
    document retrieval (T,2)*       term-document relevance
    document retrieval system (2)   term-document relevance values
    document system (T,2)           term-document values*
    elaboration files               term-term assignment
    factors computation             term-term association*
    indexing computation            term-term association factors
    internal computation            term-term computation
    literature subset               term-term factors
                                    term-term indexing
                                    user evaluation*
                                    values assignment

    Figure 4. Phrases generated for Document 659 (T = title;
    2 = occurrence frequency of 2; * = manually selected)

    1. Three-Term Phrases

    document retrieval system
    term-term association factors
    term-document relevance values

    2. Two-Term Phrases (with Term Frequency greater than 1)

    Phrase                   Frequency in     Number of Documents
                             Document (tf)    for Phrase, out of 1460 (df)
    retrieval system         2                125
    *document system         2                25
    term-term computation    2                1
    term-document            2                1
    term-term factors        2                1
    *term-term indexing      2                5
    *document retrieval      2                28
    *term-term association   2                2
    *term-term assignment    2                2

    3. Two-Term Phrases in Normalized (tf x idf) Weight Order (df > 1)

    Phrase                  Weight    Phrase                  Weight
    term-term assignment    .2128     association factors     .1064
    term-term association   .2128     associative system      .1064
    term-term indexing      .1832     low frequency terms     .1064
    document system         .1313     associative retrieval   .1064
    document retrieval      .1276     literature subset       .1064
    indexing computation    .1064     term-document files     .1064

    Figure 5. Automatic Phrase Indexing for Document 659
Lexicon and grammar in probabilistic tagging of written English

Andrew David Beale
Unit for Computer Research on the English Language
University of Lancaster, Bailrigg, Lancaster, England LA1 4YT

Abstract

The paper describes the development of software for automatic grammatical analysis of unrestricted, unedited English text at the Unit for Computer Research on the English Language (UCREL) at the University of Lancaster. The work is currently funded by IBM and carried out in collaboration with colleagues at IBM UK (Winchester) and IBM Yorktown Heights. The paper will focus on the lexicon component of the word tagging system, the UCREL grammar, the databanks of parsed sentences, and the tools that have been written to support development of these components. The work has applications to speech technology, spelling correction, and other areas of natural language processing. Specifically, our goal is to provide a language model that uses transition statistics to disambiguate alternative parses for a speech recognition device.

1. Text Corpora

Historically, the use of text corpora to provide empirical data for testing grammatical theories has been regarded as important, to varying degrees, by philologists and linguists of differing persuasions. The use of corpus citations in grammars and dictionaries predates electronic data processing (Brown, 1984: 34). While most of the generative grammarians of the 60s and 70s ignored corpus data, the increased power of the new technology eventually points the way to new applications of computerized text corpora in dictionary making, style checking and speech recognition. Computer corpora present the computational linguist with the diversity and complexity of real language, which is more challenging for testing language models than intuitively derived examples. Ultimately, grammars must be judged by their ability to contend with the real facts of language, not just basic constructs extrapolated by grammarians.

2. Word Tagging

The system devised for automatic word tagging, or part-of-speech selection, for processing running English text, known as the Constituent-Likelihood Automatic Word-tagging System (CLAWS) (Garside et al., 1987), serves as the basis for the current work. The word tagging system is an automated component of the probabilistic parsing system we are currently working on. In word tagging, each of the running words in the corpus text to be processed is associated with a pre-terminal symbol denoting word class. In essence, the CLAWS suite can be conceptually divided into two phases: tag assignment and tag selection.

constable NNS1 NNS1: NP1:
constant JJ NN1
constituent NN1
constitutional JJ NN1@
construction NN1
consultant NN1
consummate JJ VV0
contact NN1 VV0
contained VVD VVN JJ@
containing VVG NN1%
contemporary JJ NN1@
content NN1 JJ VV0@
contessa NNS1 NNS1:
contest NN1 VV0@
contestant NN1
continue VV0
continued VVD VVN JB@
contraband NN1 JJ
contract NN1 VV0@
contradictory JJ
contrary JJ NN1
contrast NN1 VV0@

Figure 1: Section of the CLAWS Lexicon. JB = attributive adjective; JJ = general adjective; NN1 = singular common noun; NNS1 = noun of style or title; NP1 = singular proper noun; VV0 = base form of lexical verb; VVD = past tense of lexical verb; VVG = -ing form of lexical verb; VVN = past participle of lexical verb; %, @ = probability markers; : = word-initial capital marker.

Tag assignment involves, for each input running word or punctuation mark, lexicon look-up, which provides one or more potential word tags for each input word or punctuation mark.
The lexicon is a list of about 8,000 records containing fields for (1) the word form and (2) the set of one or more candidate tags denoting the word's word class(es), with probability markers attached indicating three different levels of probability. Words not in the CLAWS lexicon are assigned potential tags either by suffixlist look-up, which attempts to match the end characters of the input word with a suffix in the suffixlist, or, if the input word does not have a word ending that matches one of these entries, by default tag assignment. These procedures ensure that rare words and neologisms not in the lexicon are given an analysis.

de NN1
ade NN1 VV0 NP1:
made JJ
ede VV0 NP1:
ide NN1 VV0
side NN1
wide JJ
oxide NN1
ode NN1 VV0
ude VV0
rude NN1
ee NN1
free JJ
fe NN1 NP1:
ge NN1 VV0 NP1:
dge NN1 VV0
ridge NN1 NP1:

Figure 2: Section of the Suffixlist

Tag selection disambiguates the alternative tags that are assigned to some of the running words. Disambiguation is achieved by invoking one-step probabilities of tag-pair likelihoods extracted from a previously tagged training corpus, and by upgrading or downgrading likelihoods according to the probability markers against word tags in the lexicon or suffixlist. In the majority of cases, this first-order Markov model is sufficient to correctly select the most likely of the tags associated with the input running text. (Over 90 per cent of running words are correctly disambiguated in this way.) Exceptions are dealt with by invoking a look-up procedure that searches through a limited list of groups of two or more words, or by automatically adjusting the probabilities of sequences of three tags in cases where the intermediate tag is misleading. The current version of the CLAWS system requires no pre-editing and attributes the correct word tag to over 96 per cent of the input running words, leaving 3 to 4 per cent to be corrected by human post-editors.

3. Error Analysis

Error analysis of CLAWS output has resulted, and continues to result, in diverse improvements to the system, from the simple adjustment of probability weightings against tags in the lexicon to the inclusion of additional procedures, for instance to deal with the distinction involving proper names. Parts of the system can also be used to develop new parts, to extend existing parts, or to interface with other systems. For instance, in order to produce a lexicon sufficiently large and detailed for parsing, we needed to extend the original list of about 8,000 entries to over 20,000 (the new CLAWS lexicon contains about 26,500 entries). In order to do this, a list of 15,000 words not already in the CLAWS lexicon was tagged using the CLAWS tag assignment program. (Since they were not already in the lexicon, the candidate tags for each new entry were assigned by suffixlist look-up or default tag assignment.) The new list was then post-edited by interactive screen editing and merged with the old lexicon. Another example of 'self improvement' is in the production of a better set of one-step transition probabilities. The first CLAWS system used a matrix of tag transition probabilities derived from the tagged Brown corpus (Francis and Kučera, 1982). Some cells of this matrix were inaccurate because of incompatibility between the Brown tagset and the CLAWS tagset. To remedy this, a new matrix was created by a statistics-gathering program that processed the post-edited version of a corpus of one million words tagged by the original CLAWS suite of programs.
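To make the two-phase procedure concrete, here is a minimal sketch of tag assignment and first-order tag selection. It is an illustration only, assuming a toy lexicon and suffixlist in the style of Figures 1 and 2 and invented bigram probabilities; CLAWS's actual constituent-likelihood computation, probability markers, and idiom look-up are considerably richer.

```python
import math

# Toy resources in the spirit of Figures 1 and 2 (entries invented).
LEXICON = {"the": ["AT"], "contact": ["NN1", "VV0"],
           "continued": ["VVD", "VVN", "JB"]}
SUFFIXLIST = [("ing", ["VVG", "NN1"]), ("ed", ["VVD", "VVN"])]
DEFAULT_TAGS = ["NN1", "VV0", "JJ"]

def assign_tags(word):
    """Phase 1: lexicon look-up, then suffixlist, then default tags."""
    if word in LEXICON:
        return LEXICON[word]
    for suffix, tags in SUFFIXLIST:
        if word.endswith(suffix):
            return tags
    return DEFAULT_TAGS

def select_tags(words, bigram_prob):
    """Phase 2: pick the tag path maximizing one-step (tag-pair)
    likelihoods -- a first-order Markov model, via a Viterbi pass."""
    best = {"^": (0.0, [])}                      # tag -> (log prob, path)
    for w in words:
        nxt = {}
        for t in assign_tags(w):
            lp, path = max(
                (lp + math.log(bigram_prob.get((prev, t), 1e-6)), path)
                for prev, (lp, path) in best.items())
            nxt[t] = (lp, path + [t])
        best = nxt
    return max(best.values())[1]

BIGRAMS = {("^", "AT"): 0.5, ("AT", "NN1"): 0.6, ("AT", "VV0"): 0.01,
           ("NN1", "VVD"): 0.3, ("NN1", "VVN"): 0.05, ("NN1", "JB"): 0.01}
print(select_tags(["the", "contact", "continued"], BIGRAMS))
# -> ['AT', 'NN1', 'VVD']
```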
4. Subcategorization

Apart from extending the vocabulary coverage of the CLAWS lexicon, we are also subcategorizing words belonging to the major word classes in order to reduce the over-generation of alternative parses of sentences of greater than trivial length. The task of subcategorization involves: (1) a linguist's specification of a schema or typology of lexical subcategories based on distributional and functional criteria; (2) a lexicographer's judgement in assigning one or more of the subcategory codes in the linguist's schema to the major lexical word forms (verbs, nouns, adjectives). The amount of detail demarcated by the subcategorization typology is dependent, in part, on the practical requirements of the system. Existing subcategorization systems, such as the one provided in the Longman Dictionary of Contemporary English (1978) or Sager's (1981) subcategories, need to be taken into account. But these are assessed critically rather than adopted wholesale (see for instance Akkerman et al., 1985 and Boguraev et al., 1987, for a discussion of the strengths and weaknesses of the LDOCE grammar codes).

[1] intransitive verb : ache, age, allow, care, conflict, escape, occur, reply, snow, stay, sun-bathe, swoon, talk, vanish.
[2] transitive verb : abandon, abhor, allow, build, complete, contain, demand, exchange, get, give, house, keep, mail, master, oppose, pardon, spend, surrender, warn.
[3] copular verb : appear, become, feel, grow, remain, seem.
[4] prepositional verb : abstain, aim, ask, belong, cater, consist, prey, pry, search, vote.
[5] phrasal verb : blow, build, cry, dress, ease, farm, fill, hand, jazz, look, open, pop, sham, work.
[6] verb followed by that-clause : accept, believe, demand, doubt, feel, guess, know, reckon, request, think.
[7] verb followed by to-infinitive : ask, come, dare, demand, fail, hope, intend, need, prefer, propose, refuse, seem, try, wish.
[8] verb followed by -ing construction : abhor, begin, continue, deny, dislike, enjoy, keep, recall, remember, risk, suggest.
[9] ambitransitive verb : accept, answer, close, compile, cook, develop, feed, fly, move, obey, print, quit, sing, stop, teach, try.
[A] verb habitually followed by an adverbial : appear, come, go, keep, lie, live, move, put, sit, stand, swim, veer.
[W] verb followed by a wh-clause : ask, choose, doubt, imagine, know, matter, mind, wonder.

Figure 3: The initial schema of eleven verb subcategories

We began subcategorization of the CLAWS lexicon by word-tagging the 3,000 most frequent words in the Brown corpus (Kučera and Francis, 1967). An initial system of eleven verb subcategories was proposed, and judgements about which subcategory(ies) each verb belonged to were empirically tested by looking up entries in the microfiche concordance of the tagged Lancaster/Oslo-Bergen corpus (Hofland and Johansson, 1982; Johansson et al., 1986), which shows every occurrence of a tagged word in the corpus together with its context. About 2,500 verbs have been coded in this way, and we are now working on a more detailed system of about 80 different verb subcategories using the Lexicon Development Environment of Boguraev et al. (1987).
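The coding task itself reduces to attaching sets of code letters to lexical entries. Below is a toy rendering of the Figure 3 schema with invented storage details; the actual UCREL lexicon format is not shown in this paper.

```python
VERB_SUBCATS = {"1": "intransitive", "2": "transitive", "3": "copular",
                "4": "prepositional", "5": "phrasal", "6": "that-clause",
                "7": "to-infinitive", "8": "-ing construction",
                "9": "ambitransitive", "A": "habitual adverbial",
                "W": "wh-clause"}

# A verb may carry several codes: per Figure 3, 'demand' is transitive
# and takes that-clause and to-infinitive complements.
VERB_CODES = {"demand": {"2", "6", "7"}, "ask": {"4", "7", "W"}}

def subcats(verb):
    return sorted(VERB_SUBCATS[c] for c in VERB_CODES.get(verb, ()))

print(subcats("demand"))  # ['that-clause', 'to-infinitive', 'transitive']
```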
5. Constituent Analysis

The task of implementing a probabilistic parsing algorithm to provide a disambiguated constituent analysis of unrestricted English is more demanding than implementing the word tagging suite, not least because, in order to operate in a manner similar to the word-tagging model, the system requires (1) specification of an appropriate grammar of rules and symbols and (2) the construction of a sufficiently large databank of parsed sentences conforming to the (optimal) grammar specified in (1), to provide statistics of the relative likelihoods of constituent tag transitions for constituent tag disambiguation. In order to meet these prior requirements, researchers have been employed on a full-time basis to assemble a corpus of parsed sentences.

6. Grammar Development and Parsed Subcorpora

The databank of approximately 45,000 words of manually parsed sentences of the Lancaster/Oslo-Bergen corpus (Sampson, 1987: 83ff) was processed to show the distinct types of production rules and their frequencies of occurrence in the grammar associated with the Sampson treebank. Experience with the UCREL probabilistic system (Garside and Leech, 1987: 66ff) and suggestions from other researchers prompting new rules resulted in a new context-free grammar of about 6,000 productions, creating more steeply nested structures than those of the Sampson grammar. (It was anticipated that steeper nesting would reduce the size of the databank required to obtain adequate frequency statistics.) The new grammar is defined descriptively in a Parser's Manual (Leech, 1987) and formalized as a set of context-free phrase-structure productions. Development of the grammar then proceeded in tandem with the construction of a second databank of parsed sentences, fitting as closely as possible the rules expressed by the grammar. The new databank comprises extracts from newspaper reports dating from 1979-80 in the Associated Press (AP) corpus. Any difficulties the grammarians had in parsing were resolved, where appropriate, by amending or adding rules to the grammar. This methodology resulted in the grammar being modified and extended to nearly 10,000 context-free productions by December 1987.

V' -> V
      Od (I) (V)
      Oh (I) (Vn)
      Ob (I) {(Vg)/(Vn)}

Figure 4: Fragment of the Grammar from the Parser's Manual. Ob = operator consisting of, or ending with, a form of be; Od = operator consisting of, or ending with, a form of do; Oh = operator consisting of, or ending with, a form of the verb have; V = main verb with complementation; V' = predicate; Vg = an -ing verb phrase; Vn = a past participle phrase; ( ) = optional constituent; {/} = alternative constituents.

7. Constructing the Parsed Databank

For convenience of screen editing and computer processing, the constituent structures are represented in a linear form, as strings of grammatical words with labelled bracketing. The grammarians are given print-outs of post-edited output from the CLAWS suite. They then construct a constituent analysis for each sentence on the print-out, either in detail or in outline, according to the rules described in the Parser's Manual, and key in their structures using an input program that checks for well-formedness. The well-formedness checks imposed by the program are: (1) that labels are legal non-terminal symbols; (2) that labelled brackets balance; (3) that the productions obtained by the constituent analysis are contained in the existing grammar. One sentence is presented at a time. Any errors found by the program are reported back to the screen once the grammarian has sent what s/he considers to be the completed parse. Sentences which are not well formed can be re-edited or abandoned.
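The three well-formedness conditions lend themselves to a simple mechanical check. The following sketch assumes a simplified bracket notation and a toy label set and grammar; the actual input program's format is not given in the paper.

```python
LEGAL_LABELS = {"S", "N'", "N", "V'", "V", "D"}
GRAMMAR = {("N'", ("D", "N")), ("S", ("N'", "V'")), ("V'", ("V",)),
           ("V'", ("V", "N'"))}

def check(tokens):
    stack, productions = [], []
    for tok in tokens:
        if tok.startswith("["):                        # opening bracket
            label = tok[1:]
            if label not in LEGAL_LABELS:              # check (1)
                return f"illegal label {label}"
            stack.append((label, []))
        elif tok.endswith("]"):                        # closing bracket
            if not stack or stack[-1][0] != tok[:-1]:  # check (2)
                return f"unbalanced bracket {tok}"
            label, kids = stack.pop()
            productions.append((label, tuple(kids)))
            if stack:
                stack[-1][1].append(label)
        # tagged words (e.g. "the_AT") need no action here
    if stack:
        return "unclosed brackets"
    bad = [p for p in productions if p[1] and p not in GRAMMAR]
    return bad if bad else "well formed"               # check (3)

print(check("[S [N' [D the_AT D] [N dog_NN1 N] N'] "
            "[V' [V barks_VV0 V] V'] S]".split()))     # -> well formed
```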
A validity marker is appended for each sentence, indicating whether the sentence has been abandoned with errors contained in it.

Shortages_NN2 of_IO gasoline_NN1 and_CC rapidly_RR rising_VVG prices_NN2 for_IF the_AT fuel_NN1 are_VBR given_VVN as_II the_AT reasons_NN2 for_IF a_AT1 6.7_MC percent_NNU reduction_NN1 in_II traffic_NN1 deaths_NN2 on_II New_NP1 York_NP1 state_NN1 's_$ roads_NNL2 last_MD year_NNT1 ._.

Figure 5: A word-tagged sentence from the AP corpus. AT = article; AT1 = singular article; CC = coordinating conjunction; IF = for as preposition; II = preposition; IO = of as preposition; MC = cardinal number; MD = ordinal number; NN2 = plural common noun; NNL2 = plural locative noun; NNT1 = temporal noun; NNU = unit of measurement; RR = general adverb; VBR = are; $ = germanic genitive marker.

8. Assessing the Parsed Databank and the Grammar

We have written ancillary programs to help in the development of the grammar and to check the validity of the parses in the databank. One program searches through the parsed databank for every occurrence of a constituent matching a specified constituent tag. Output is a list of all occurrences of the specified constituent together with frequencies. This facility allows selective searching through the databank, which is a tool for revising parts of the grammar.

9. Skeleton Parsing

We are aiming to produce a million word corpus of parsed sentences by December 1988 so that we can implement a variant of the CYK algorithm (Hopcroft and Ullman, 1979: 140) to obtain a set of parses for each sentence. Viterbi labelling (Bahl et al., 1983; Forney, 1973) could be used to select the most probable parse from the output parse set. But the problems associated with assembling a fully parsed databank are (1) consistency of parsing and (2) matching the parsed databank to an evolving grammar. In order to circumvent these problems, a strategy of skeleton parsing has been introduced. In skeleton parsing, grammarians create minimal labelled bracketing by inserting only those labelled brackets that are uncontroversial and, in some cases, by inserting brackets with no labels. The grammar validation routine is de-coupled from the input program so changes to the grammar can be made without disrupting the input parsing. The strategy also prevents extensive retrospective editing whenever the grammar is modified. Grammar development and parsed databank construction are not entirely independent, however. A subset (10 per cent) of the skeleton parses are checked for comparison with the current grammar, while another subset (1 per cent) is checked by independent grammarians. Skeleton parsing will give us a partially parsed databank which should limit the alternative parses compatible with the final grammar. We can either assume each parse is equally likely and use the frequency-weighted productions generated by the partially parsed databank to upgrade or downgrade alternative parses, or we can use a 'restrained' outside/inside algorithm (Baker, 1979) to find the optimal parse.

A010 1 v
[S' [Sd [N' [N'& [N Shortages_NN2 [Po of_IO [N' [N gasoline_NN1 N] N'] Po] N] N'&] and_CC [N'+ [Jm rapidly_RR rising_VVG Jm] [N prices_NN2 [P for_IF [N' [Da the_AT Da] [N fuel_NN1 N] N'] P] N] N'+] N'] [V' [Ob are_VBR Ob] [Vn given_VVN [P as_II [N' [Da the_AT Da] [N reasons_NN2 N] N'] P] [P for_IF [N' [D a_AT1 [M 6.7_MC M] D] [N percent_NNU reduction_NN1 [P in_II [N' [N traffic_NN1 deaths_NN2 [P on_II [N' [D [G [N New_NP1 York_NP1 state_NN1 N] 's_$ G] D] [N roads_NNL2 N] [Q [Nr' [D [M last_MD M] D] year_NNT1 Nr'] Q] N'] P] N] N'] P] N] N'] P] Vn] V'] Sd] ._. S']

Figure 6: A Fully Parsed Version of the Sentence in Figure 5. D = general determinative element; Da = determinative element containing an article as the last or only word; G = genitive construction; Jm = adjective phrase; M = numeral phrase; N = nominal; N' = noun phrase; N'& = first conjunct of a co-ordinated noun phrase; N'+ = non-initial conjunct following a conjunction; Nr' = temporal noun phrase; P = prepositional phrase; Po = prepositional phrase; Q = qualifier; S' = sentence; Sd = declarative sentence.
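As a concrete illustration of the CYK-plus-Viterbi idea mentioned in Section 9, here is a toy most-probable-parse recognizer for a small probabilistic grammar in Chomsky normal form; the rules and probabilities are invented, and the UCREL grammar is of course far larger and not in CNF.

```python
from collections import defaultdict

RULES = {  # binary pair or terminal -> [(parent, probability), ...]
    ("N'", "V'"): [("S", 1.0)],
    ("D", "N"): [("N'", 0.8)],
    ("V", "N'"): [("V'", 0.7)],
    "the": [("D", 1.0)], "dog": [("N", 0.5)],
    "bit": [("V", 0.4)], "man": [("N", 0.5)],
}

def cky_best(words):
    n = len(words)
    chart = defaultdict(dict)          # (i, j) -> {label: (prob, backptr)}
    for i, w in enumerate(words):
        for a, p in RULES.get(w, []):
            chart[i, i + 1][a] = (p, w)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for b, (pb, _) in chart[i, k].items():
                    for c, (pc, _) in chart[k, j].items():
                        for a, p in RULES.get((b, c), []):
                            q = p * pb * pc
                            if q > chart[i, j].get(a, (0.0, None))[0]:
                                chart[i, j][a] = (q, (k, b, c))
    return chart[0, n].get("S")        # best S analysis, or None

print(cky_best("the dog bit the man".split()))
# -> (0.0448..., (2, "N'", "V'"))
```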
A062 96 v
"_" [S Now_RT ,_, "_" [Si [N he_PPHS1 N] [V said_VVD V] Si] ,_, "_" [S& [N we_PPIS2 N] [V are_VBR negotiating_VVG [P under_II [N duress_NN1 N] P] V] S&] ,_, and_CC [S+ [N they_PPHS2 N] [V can_VM play_VV0 [P with_IW [N us_PPIO2 N] P] [P like_ICS [N a_AT1 cat_NN1 [P with_IW [N a_AT1 mouse_NN1 N] P] N] P] V] S+] S] ._.

Figure 7: A Skeleton Parsed Sentence. Word tags: ICS = preposition-conjunction; IW = with, without as prepositions; PPHS1 = he, she; PPHS2 = they; PPIO2 = us; PPIS2 = we; RT = nominal adverb of time; VM = modal auxiliary verb. Labels: S = included sentence; S& = first coordinated main clause; S+ = non-initial coordinated main clause following a conjunction; Si = interpolated or appended sentence.

10. Featurisation

The development of the CLAWS tagset and UCREL grammar owes much to the work of Quirk et al. (1985), while the tags themselves have evolved from the Brown tagset (Francis and Kučera, 1982). However, the rules and symbols chosen have been translated into a notation compatible with other theories of grammar. For instance, tags from the extended version of the CLAWS lexicon have been translated into a formalism compatible with the Winchester parser (Sharman, 1988). A program has also been written to map all of the ten thousand productions of the current UCREL grammar into the notation used by the Grammar Development Environment (GDE) (Briscoe et al., 1987; Grover et al., 1988; Carroll et al., 1988). This is a preliminary step in the task of recasting the grammar into a feature-based unification formalism, which will allow us to radically reduce the size of the rule set while preventing the grammar from overgenerating.

V1
[ VV0* ]      50   85
[ VV0* N' ]   800  86
[ VV0* J ]    80   87
[ VV0* P ]    400  88
[ VV0* R ]    80   89
[ VV0* Fn ]   100  90

Figure 8: A Fragment of the UCREL grammar

PSRULE V85 : V1 --> V.
PSRULE V86 : V1 --> V NP.
PSRULE V87 : V1 --> V AP.
PSRULE V88 : V1 --> V PP.
PSRULE V89 : V1 --> V ADVP.
PSRULE V90 : V1 --> V V2 [FIN].

Figure 9: Translation of the Rules in Figure 8 into GDE Notation
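The Figure 8 to Figure 9 mapping suggests a straightforward mechanical translation. The sketch below infers the symbol correspondences from the two figures (VV0* to V, N' to NP, and so on); the actual mapping program surely handles much more.

```python
SYMBOL_MAP = {"VV0*": "V", "N'": "NP", "J": "AP", "P": "PP",
              "R": "ADVP", "Fn": "V2 [FIN]"}

def to_psrule(production, lhs="V1"):
    """'[ VV0* N' ] 800 86' -> 'PSRULE V86 : V1 --> V NP.'"""
    body, _frequency, number = production.rsplit(None, 2)
    symbols = body.strip("[] ").split()
    rhs = " ".join(SYMBOL_MAP.get(s, s) for s in symbols)
    return f"PSRULE V{number} : {lhs} --> {rhs}."

for prod in ["[ VV0* ] 50 85", "[ VV0* N' ] 800 86", "[ VV0* Fn ] 100 90"]:
    print(to_psrule(prod))
# PSRULE V85 : V1 --> V.
# PSRULE V86 : V1 --> V NP.
# PSRULE V90 : V1 --> V V2 [FIN].
```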
11. Summary

In summary, we have a word tagging system requiring minimal post-editing, a steadily accumulating corpus of parsed sentences, and a context-free grammar of about ten thousand productions which is currently being recast into a unification formalism. Additionally, we have programs for extracting statistical and collocational data from both word-tagged and parsed text corpora.

12. Acknowledgements

The author is a member of a group of researchers working at the Unit for Computer Research on the English Language at Lancaster University. The other members of UCREL are Geoffrey Leech, Roger Garside (UCREL directors), Beale, Louise Denmark, Steve Elliott, Jean Forrest, Fanny Leech and Lita Taylor. The work is currently funded by IBM UK (research grant 8231053) and carried out in collaboration with Claire Grover, Richard Sharman, Peter Alderson, Ezra Black and Frederick Jelinek of IBM.

13. References

Erik Akkerman, Pieter Masereeuw and Willem Meijs (1985). 'Designing a Computerized Lexicon for Linguistic Purposes'. ASCOT Report No. 1, CIP-Gegevens Koninklijke Bibliotheek, Den Haag, Netherlands.

Lalit R. Bahl, Frederick Jelinek and Robert L. Mercer (1983). 'A Maximum Likelihood Approach to Continuous Speech Recognition', IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-5, No. 2, March 1983.

J. K. Baker (1979). 'Trainable Grammars for Speech Recognition', Proceedings of the Spring Conference of the Acoustical Society of America.

Bran Boguraev, Ted Briscoe, John Carroll, David Carter and Claire Grover (1987). 'The Derivation of a Grammatically Indexed Lexicon from the Longman Dictionary of Contemporary English', Proceedings of ACL-87, Stanford, California.

Ted Briscoe, Claire Grover, Bran Boguraev, John Carroll (1987). 'A Formalism and Environment for the Development of a Large Grammar of English', Proceedings of IJCAI, Milan.

Keith Brown (1984). Linguistics Today, Fontana, UK.

John Carroll, Bran Boguraev, Claire Grover, Ted Briscoe (1988). 'The Grammar Development Environment User Manual', Cambridge Computer Laboratory Technical Report 127, Cambridge, England.

Roger Garside, Geoffrey Leech and Geoffrey Sampson (1987). The Computational Analysis of English: A Corpus-Based Approach, Longman, London and New York.

Claire Grover, Ted Briscoe, John Carroll, Bran Boguraev (1988). 'The Alvey Natural Language Tools Project Grammar: A Wide-Coverage Computational Grammar of English', Lancaster Papers in Linguistics 47, Department of Linguistics, University of Lancaster, March 1988.

G. Forney, Jr. (1973). 'The Viterbi Algorithm', Proc. IEEE, Vol. 61, March 1973, pp. 268-278.

W. Nelson Francis and Henry Kučera (1982). Frequency Analysis of English Usage: Lexicon and Grammar, Houghton Mifflin, Boston.

Knut Hofland and Stig Johansson (1982). Word Frequencies in British and American English, Norwegian Computing Centre for the Humanities, Bergen; Longman, London.

John E. Hopcroft and Jeffrey D. Ullman (1979). Introduction to Automata Theory, Languages, and Computation, Addison-Wesley, Reading, Mass.

Stig Johansson, Eric Atwell, Roger Garside and Geoffrey Leech (1986). 'The Tagged LOB Corpus Users' Manual', Norwegian Computing Centre for the Humanities, Bergen.

Henry Kučera and W. Nelson Francis (1967). Computational Analysis of Present-day American English, Brown University Press, Providence, Rhode Island.

Geoffrey Leech (1987). 'Parsers' Manual', Department of Linguistics, University of Lancaster.

Longman Dictionary of Contemporary English (1978), second edition (1987), Longman Group Limited, Harlow and London.

Randolph Quirk, Sidney Greenbaum, Geoffrey Leech and Jan Svartvik (1985). A Comprehensive Grammar of the English Language, Longman Inc., New York.

Naomi Sager (1981). Natural Language Information Processing, Addison-Wesley, Reading, Mass.

Geoffrey Sampson (1987). 'The grammatical database and parsing scheme' in Garside, Leech and Sampson, pp. 82-96.

Richard A. Sharman (1988). 'The Winchester Unification Parsing System', IBM UKSC Report 999, April 1988.
PARSING VS. TEXT PROCESSING IN THE ANALYSIS OF DICTIONARY DEFINITIONS

Thomas Ahlswede and Martha Evens
Computer Science Dept., Illinois Institute of Technology
Chicago, IL 60616
312-567-5153

ABSTRACT

We have analyzed definitions from Webster's Seventh New Collegiate Dictionary using Sager's Linguistic String Parser and again using basic UNIX text processing utilities such as grep and awk. This paper evaluates both procedures, compares their results, and discusses possible future lines of research exploiting and combining their respective strengths.

Introduction

As natural language systems grow more sophisticated, they need larger and more detailed lexicons. Efforts to automate the process of generating lexicons have been going on for years, and have often been combined with the analysis of machine-readable dictionaries. Since 1979, a group at IIT under the leadership of Martha Evens has been using the machine-readable version of Webster's Seventh New Collegiate Dictionary (W7) in text generation, information retrieval, and the theory of lexical-semantic relations. This paper describes some of our recent work in extracting semantic information from W7, primarily in the form of word pairs linked by lexical-semantic relations. We have used two methods: parsing definitions with Sager's Linguistic String Parser (LSP) and text processing with a combination of UNIX utilities and interactive editing.

We will use the terms "parsing" and "text processing" here primarily with reference to our own use of the LSP and UNIX utilities respectively, but will also use them more broadly. "Parsing" in this more general sense will mean a computational technique of text analysis drawing on an extensive database of linguistic knowledge, e.g., the lexicon, syntax and/or semantics of English; "text processing" will refer to any computational technique that involves little or no such knowledge.

(This research is supported by National Science Foundation grant IST 87-03580. Our thanks also to the G & C Merriam Company for permission to use the dictionary tapes.)

Our model of the lexicon emphasizes lexical and semantic relations between words. Some of these relationships are familiar. Anyone who has used a dictionary or thesaurus has encountered synonymy, and perhaps also antonymy. W7 abounds in synonyms (the capitalized words in the examples below):

(1) funny 1 1a aj affording light mirth and laughter : AMUSING
(2) funny 1 1b aj seeking or intended to amuse : FACETIOUS

Our notation for dictionary definitions consists of: (1) the entry (word or phrase being defined); (2) the homograph number (multiple homographs are given separate entries in W7); (3) the sense number, which may include a subsense letter and even a sub-subsense number (e.g., 2b3); (4) the text of the definition. We commonly express a relation between words through a triple consisting of Word1, Relation, Word2:

(3) funny SYN amusing
(4) funny SYN facetious

A third relation, particularly important in W7 and in dictionaries generally, is taxonomy, the species-genus relation or (in artificial intelligence) the IS-A relation. Consider the entries:

(5) dodecahedron 0 0 n a solid having 12 plane faces
(6) build 1 1 vt to form by ordering and uniting materials...

These definitions yield the taxonomy triples

(7) dodecahedron TAX solid
(8) build TAX form

Taxonomy is not explicit in definitions, as is synonymy, but is implied in their very structure.
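Extraction of these two relations is almost mechanical given the notation above. The following sketch is ours, not the authors': it treats capitalized words as synonym cross-references and the first word after an initial "a/an/the/to" as the genus term, which only approximates the procedures described later in the paper.

```python
import re

def syn_tax_triples(entry, pos, text):
    out = [(entry, "SYN", w.lower())
           for w in re.findall(r"\b[A-Z]{2,}\b", text)]
    if pos in ("n", "vt", "vi"):       # genus heuristic for nouns/verbs
        m = re.match(r"(?:to\s+)?(?:an?\s+|the\s+)?(\w+)", text)
        if m:
            out.append((entry, "TAX", m.group(1)))
    return out

print(syn_tax_triples("funny", "aj",
                      "seeking or intended to amuse : FACETIOUS"))
# [('funny', 'SYN', 'facetious')]
print(syn_tax_triples("build", "vt",
                      "to form by ordering and uniting materials"))
# [('build', 'TAX', 'form')]
```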
Some other relations have been frequently observed, e.g.:

(9) driveshaft PART engine
(10) wood COMES-FROM tree

The usefulness of relations in information retrieval is demonstrated in Wang et al. [1985] as well as in Fox [1980]. Relations are also important in giving coherence to text, as shown by Halliday and Hasan [1977]. They are abundant in a typical English language dictionary, as we will see later. We have recognized, however, that word-relation-word triples are not adequate, or at least not optimal, for expressing all the useful information associated with words. Some information is best expressed as unary attributes or features. We have also recognized that phrases and even larger structures may on one hand be in some ways equivalent to single words, as pointed out by Becker [1975], or may on the other hand express complex facts that cannot be reduced to any combination of word-to-word links.

Parsing

Recognizing the vastness of the task of parsing a whole dictionary, most computational lexicologists have preferred approaches less computationally intensive and more specifically suited to their immediate goals. A partial exception is Amsler [1980], who proposed a simple ATN grammar for some definitions in the Merriam-Webster Pocket Dictionary. More recently, Jensen and her coworkers at IBM have also parsed definitions. But the record shows that dictionary researchers have avoided parsing. One of our questions was, how justified is this avoidance? How much harder is parsing, and what rewards, if any, will the effort yield?

We used Sager's Linguistic String Parser, as we have done for several years. It has been continuously developed since the 1970s and by now has a very extensive and powerful user interface as well as a large English grammar and a vocabulary (the LSP Dictionary) of over 10,000 words. It is not exceptionally fast -- a fact which should be taken into account in evaluating the performance of parsers generally in dictionary analysis.

Our efforts to parse W7 definitions began with simple LSP grammars for small sets of adjective [Ahlswede, 1985] and adverb [Klick, 1981] definitions. These led eventually to a large grammar of noun, verb and adjective definitions [Ahlswede, 1988], based on the Linguistic String Project's full English grammar [Sager, 1981], and using the LSP's full set of resources, including restrictions, transformations, and special output generation routines. All of these grammars have been used not only to create parse trees but also (and primarily) to generate relational triples linking defined words to the major words used in their definitions.

The large definition grammar is described more fully in Ahlswede [1988]. We are concerned here with its performance: its success in parsing definitions with a minimum of incorrect or improbable parses, its success in identifying relational triples, and its speed. Input to the parser was a set of 8,832 definition texts from the machine-readable W7, chosen because their vocabulary permitted them to be parsed without enlarging the LSP's vocabulary. For parsing, the 8,832-definition subset was sorted by part of speech and broken into 100-definition blocks of nouns, transitive verbs, intransitive verbs, and adjectives. Limiting the selection to nouns, verbs and adjectives reduced the subset to 8,211, including 2,949 nouns, 1,451 adjectives, 1,272 intransitive verbs, and 2,549 transitive verbs.
We were able to speed up the parsing process considerably by automatically extracting subvocabularies from the LSP vocabulary, so that for a 100-definition input sample, for instance, the parser would only have to search through about 300 words instead of 10,000. Parsing the subset eventually required a little under 180 hours of CPU time on two machines, a Vax 8300 and a Vax 750. Total clock time required was very little more than this, however, since almost all the parsing was done at night when the systems were otherwise idle. Table 1 compares the LSP's performance in the four part of speech categories.

Part of speech   Pct. of defs.   Avg. no. of parses   Time (sec.)   Triples generated
of defd. word    parsed          per success          per parse     per success
nouns            77.63           1.70                 11.05         11.46
adjectives       68.15           1.85                 10.59          5.45
int. verbs       64.62           1.59                 11.96          6.62
tr. verbs        60.29           1.50                 43.33          9.15
average          68.65           1.66                 18.89          9.06

Table 1. Performance time and parsing efficiency of LSP by part of speech of words defined (adapted from Fox et al., 1988)

In most cases, there is little variation among the parts of speech. The most obvious discrepancy is the slow parsing time for transitive verbs. We are not yet sure why this is, but we suspect it has to do with W7's practice of representing the defined verb's direct object by an empty slot in the definition:

(11) madden 0 2 vt to make intensely angry
(12) magnetize 0 2 vt to communicate magnetic properties to

The total number of triples generated was 51,115 and the number of unique triples was 25,178. The most common triples were 5,086 taxonomies and 7,971 modification relations. (Modification involved any word or phrase in the definition that modified the headword; thus a definition such as "cube: a regular solid ..." would yield the modification triple (cube MOD regular).) We also identified 125 other relations, in three categories: (1) "traditional" relations, identified by previous researchers, which we hope to associate with axioms for making inferences; (2) syntactic relations between the defined word and various defining words, such as (in a verb definition) the direct object of the head verb, which we will investigate for possible consistent semantic significance; and (3) syntactic relations within the body of the definition, such as modifier-head, verb-object, etc. The relations in this last category were built into our grammar; we were simply collecting statistics on their occurrence, which we hope eventually to test for the existence of dictionary-specific selectional categories above and beyond the general English selectional categories already present in the LSP grammar. Figure 1 shows a sample definition and the triples the parser found in it.

ABDOMEN 0 1 N THE PART OF THE BODY BETWEEN THE THORAX AND THE PELVIS

(THE) pmod (PART)
(ABDOMEN 0 1 N) lm (THE)
(ABDOMEN 0 1 N) t (PART)
(ABDOMEN 0 1 N) rm (OF THE BODY BETWEEN THE THORAX AND THE PELVIS)
(THE) pmod (BODY)
(THE) pmod (PELVIS)
(THE) pmod (THORAX)
(BETWEEN) pobj (THORAX)
(BETWEEN) pobj (PELVIS)
(ABDOMEN 0 1 N) part (BODY)

Figure 1. A definition and its relational triples

In this definition, "part" is a typical category 1 relation, recognized by virtually all students of relations, though they may disagree about its exact nature. "lm" and "rm" are left and right modification. As can be seen, "rm" does not involve analysis of the long postnominal modifier phrase. "pmod" and "pobj" are permissible modifier and permissible object, respectively; these are among the most common category 3 relations.
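Bookkeeping over such parser output (the 51,115 total versus 25,178 unique triples above) amounts to parsing and tallying lines in the Figure 1 format. A sketch, with the line format inferred from the figure:

```python
import re
from collections import Counter

TRIPLE = re.compile(r"\((.+?)\)\s+(\S+)\s+\((.+?)\)")

def tally(lines):
    triples = [m.groups() for m in map(TRIPLE.match, lines) if m]
    return len(triples), Counter(triples)

lines = ["(ABDOMEN 0 1 N) part (BODY)",
         "(THE) pmod (PART)",
         "(THE) pmod (PART)"]      # duplicate, e.g. from a second parse
total, counts = tally(lines)
print(total, len(counts))          # 3 2
```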
We began with a list of about fifty relations, intending to generate plain parse trees and then examine them for relational triples in a separate step. It soon became clear, however, that the LSP itself was the best tool available for extracting information from parse trees, especially its own parse trees. Therefore we added a section to the grammar consisting of routines for identifying relations and printing out triples. The LSP's Restriction Language permitted us to keep this section physically separate from the rest of the grammar and thus to treat it as an independent piece of code. Having done this, we were able to add new relations in the course of developing the grammar.

Approximately a third of the definitions in the sample could not be parsed with this grammar. During development of the grammar, we uncovered a great many reasons why definitions failed to parse; there remains no one fix which will add more than a few definitions to the success list. However, some general problem areas can be identified. One common cause of failure is the inability of the grammar to deal with all the nuances of adjective comparison:

(13) accelerate 0 1 vt to bring about at an earlier point of time

Idiomatic uses of common words are a frequent source of failure:

(14) accommodate 0 3c vt to make room for

There are some errors in the input, for example an intransitive verb definition labeled as transitive:

(15) ache 1 2 vt to become filled with painful yearning

As column 3 of Table 1 indicates, many definitions yielded multiple parses. Multiple parses were responsible for most of the duplicate relational triples.

Finding relational triples by text processing

As the performance statistics above show, parsing is painfully slow. For the simple business of finding and writing relational triples, it turns out to be much less efficient than a combination of text processing with interactive editing. We first used straight text processing to identify synonym references in definitions and reduce them to triples. Our next essay in the text processing/editing method began as a casual experiment. We extracted the set of intransitive verb definitions, suspecting that these would be the easiest to work with. This is the smallest of the four major W7 part of speech categories (the others being nouns, adjectives, and transitive verbs), with 8,883 texts. Virtually all verb definition texts begin with to followed by a head verb, or a set of conjoined head verbs. The most common words in the second position in intransitive verb definitions, along with their typical complements, were:

become + noun or adj. phrase (774 occurrences in 8,482 definitions)
make + noun phrase [+ adj. phrase] (526 occurrences)
be + various (408 occurrences)
move + adverbial phrase (388 occurrences)

Definitions in become, make and move had such consistent forms that the core word or words in the object or complement phrase were easy to identify. Occasional prepositional phrases or other postnominal constructions were easy to edit out by hand. From these, and from some definitions in serve as, we were able to generate triples representing five relations, illustrated in examples (16)-(27) below.
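A sketch of what such a pattern extractor might look like; the regular expressions are guesses modeled on the examples that follow, and the actual work was done with UNIX utilities plus interactive hand editing.

```python
import re

PATTERNS = [(re.compile(r"^to become (.+)"), "va-incep"),
            (re.compile(r"^to make (?:an? |or furnish )?(.+)"), "vn-cause"),
            (re.compile(r"^to serve as an? (.+)"), "vn-be"),
            (re.compile(r"^to move (.+)"), "move")]

def vi_triples(entry, homograph, sense, text):
    head = f"({entry} {homograph} {sense} vi)"
    for pattern, relation in PATTERNS:
        m = pattern.match(text)
        if m:   # keep core words; leftovers were edited out by hand
            return [(head, relation, w)
                    for w in re.split(r"\s+or\s+", m.group(1))]
    return []

print(vi_triples("age", 2, "2b", "to become mellow or mature"))
# [('(age 2 2b vi)', 'va-incep', 'mellow'),
#  ('(age 2 2b vi)', 'va-incep', 'mature')]
```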
(16) age 2 2b vi to become mellow or mature
(17) (age 2 2b vi) va-incep (mature)
(18) (age 2 2b vi) va-incep (mellow)
(19) add 0 2b vi to make an addition
(20) (add 0 2b vi) vn-cause (addition)
(21) accelerate 0 1 vi to move faster
(22) (accelerate 0 1 vi) move (faster)
(23) add 0 2a vi to serve as an addition
(24) (add 0 2a vi) vn-be (addition)
(25) annotate 0 0 vi to make or furnish critical or explanatory notes
(26) (annotate 0 0 vi) va-cause (critical)
(27) (annotate 0 0 vi) va-cause (explanatory)

We also attempted to generate taxonomic triples for intransitive verbs. In verb definitions, we identified conjoined headwords, and otherwise deleted everything to the right of the last headword. This was straightforward and gave us almost 10,000 triples. These triples are of mixed quality, however. Those representing very common headwords such as be or become are vacuous; worse, our lexically dumb algorithm could not recognize phrasal verbs, so that a phrasal head term such as take place appears as take, with misleading results. The vacuous triples can easily be removed from the total, however, and the incorrect triples resulting from broken phrasal head terms are relatively few. We therefore felt we had been highly successful, and were inspired to proceed with nouns.

As with verbs, we are primarily interested in relations other than taxonomy, and these are most commonly found in the often lengthy post-headword part of the definitions. The problems we encountered with nouns were generally the same as with intransitive verbs, but accentuated by the much larger number (80,022) of noun definition texts. Also, as Chodorow et al. [1985] have noted, the boundary between the headword and the postnominal part of the definition is much harder to identify in noun definitions than in verb definitions. Our first algorithm, which had no lexical knowledge except of prepositions, was about 88% correct in finding the boundary. In order to get better results, we needed an algorithm comparable to Chodorow's Head Finder, which uses part of speech information. Our strategy is first to tag each word in each definition with all its possible parts of speech, then to step through the definitions, using Chodorow's heuristics (plus any others we can find or invent) to mark prenoun-noun and noun-postnoun boundaries.

The first step in tagging is to generate a tagged vocabulary. We used an awk program to step through the entries and run-ons, appending to each one its part or parts of speech. (A run-on is a subentry, giving information about a word or phrase derived from the entry word or phrase; for instance, the verb run has the run-ons run across, run after, and run a temperature among others; the noun rune has the run-on adjective runic.) Archaic, obsolete, or dialect forms were marked as such by W7 and could be excluded. Turning to W7's defining vocabulary, the words (and/or phrases) actually employed in definitions, we used Mayer's morphological analyzer [1988] to identify regular noun plurals, adjective comparatives and superlatives, and verb tense forms. Following suggestions by Peterson [1982], we assumed that words ending in -ia and -ae (virtually all appearing in scientific names) were nouns. We then added to our tagged vocabulary those irregular noun plurals and verb tense forms expressly given in W7.
Unfortunately, neither W7 nor Mayer's program provides for derived compounds with irregular plurals; for instance, W7 indicates men as the plural of man, but there are over 300 nouns ending in -man for which no plural is shown. Most of these (e.g., salesman, trencherman) take plurals in -men, but others (German, shaman) do not. These had to be identified by hand. Another group of nouns, whose plurals we found convenient rather than absolutely necessary to treat by hand, is the 200 or so ending in -ch. (Those with a hard -ch (patriarch, loch) take plurals in -chs; the rest take plurals in -ches.) We could have exploited W7's pronunciation information to distinguish these, but the work would have been well out of proportion to the scale of the task. After some more of this kind of work, we had a tagged vocabulary of 46,566 words used in W7 definitions.

For the next step, we chose to generate tagged blocks of definitions (rather than perform tagging on the fly). We wrote a C program to read a text file and replace each word with its tagged counterpart. (We are not yet attempting to deal with phrases.) Head finding on noun definitions was done with an awk program which examines consecutive pairs of words (working from right to left) and marks prenoun-noun and noun-postnoun boundaries. It recognizes certain kinds of word sequences as beyond its ability to disambiguate, e.g.:

(28) alarm 1 2a n a { signal }? { warning } of danger
(29) afflatus 0 0 n a { divine }? { imparting } of knowledge or power

The result of all this effort is a rudimentary parsing system, in which the tagged vocabulary is the lexicon, the tagging program is the lexical analyzer, and the head finder is a syntax analyzer using a very simple finite state grammar of about ten rules. Despite its lack of linguistic sophistication, this is a clear step in the direction of parsing. And the effort seems to be justified. Development took about four weeks, most of it spent on the lexicon. (And, to be sure, more work is still needed.) This is more than we expected, but considerably less than the eight man-months spent developing and testing the LSP definition grammar.

Tagging and head finding were performed on a sample of 2,157 noun definition texts, covering the nouns from a through anode. 170 were flagged as ambiguous; of the remaining 1,987, all but 58 were correct, for a success rate of 97.1 percent. In 37 of the 58 failures, the head finder mistakenly identified a noun (or polysemous adjective/noun) modifying the head as an independent noun:

(30) agiotage 0 1 n { exchange } business
(31) alpha 1 3 n the { chief } or brightest star of a constellation

There were 5 cases of misidentification of a following adjective (parsable as a noun) as the head noun:

(32) air mile 0 0 n a unit { equal } to 6076.1154 feet

The remaining failures resulted from errors in the creation of the tagged vocabulary (5), non-definition dictionary lines incorrectly labeled as definition texts (5), and non-noun definitions incorrectly labeled as noun definitions (6). The last two categories arose from errors in our original W7 tape. Among the 170 definitions flagged as ambiguous, there were two mislabeled definitions and one vocabulary error. There were 128 cases of a noun followed by an -ing form; in 116 of these the -ing form was a participle, otherwise it was the head noun. (The other case flagged as ambiguous was of a possible head followed by a preposition also parsable as an adjective. This flag turned out to be unnecessary.)
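Before returning to the error tally, here is a sketch of the right-to-left pairwise head-finding pass just described; the tag-pair rules below are simplified guesses at the ten-rule finite state grammar, and the tagged-input format is invented.

```python
def find_head(tagged):        # tagged: [(word, {possible tags}), ...]
    """Return (index of head noun, flagged-as-ambiguous?)."""
    for i in range(len(tagged) - 1, 0, -1):
        word, tags = tagged[i]
        if "noun" not in tags:
            continue
        prev_tags = tagged[i - 1][1]
        if "noun" in prev_tags and "adj" in prev_tags:
            return i, True    # e.g. "{ signal }? { warning }" in (28)
        if word.endswith("ing") and "noun" in prev_tags:
            return i, True    # noun + -ing: usually a participle
        return i, False       # rightmost unambiguous noun
    return None, True

tagged = [("a", {"det"}), ("signal", {"noun", "adj"}),
          ("warning", {"noun"})]
print(find_head(tagged))      # (2, True): flagged, as in example (28)
```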
There were also seven instances of miscellaneous misidentification of a modifying noun as the head. Thus the "success rate" among these definitions was 148/170 or 87.1 percent. We are still working on improving the head finder, as well as developing similar "grammars" for postnominal phrases and for the major phrase structures of other definition types. In the course of this work we expect to solve the major problem in this particular grammar, that of prenominal modifiers identified as heads.

Parsing, again

Simple text processing, even without such lexical knowledge as parts of speech, is about as accurate as parsing in terms of correct vs. incorrect relational triples identified. (It should be noted that both methods require hand checking of the output, and it seems unlikely that we will ever completely eliminate this step.) The text processing strategy can be applied to the entire corpus of definitions, without the labor of enlarging a parser lexicon such as the LSP Dictionary. And it is much faster.

This way of looking at our results may make it appear that parsing was a waste of time and effort, of value only as a lesson in how not to go about dictionary analysis. Before coming to any such conclusion, however, we should consider some other factors. It has been suggested that a more "modern" parser than the LSP could give much faster parsing times. At least part of the slowness of the LSP is due to the completeness of its associated English grammar, perhaps the most detailed grammar associated with any natural language parser. Thus a probable tradeoff for greater speed would be a lower percentage of definitions successfully parsed. Nonetheless, it appears that the immediate future of parsing in the analysis of dictionary definitions, or of any other large text corpus, lies in a simpler, less computationally intensive parsing technique. In addition, a parser for definition analysis needs to be able to return partial parses of difficult definitions. As we have seen, even the LSP's detailed grammar failed to parse about a third of the definitions it was given. A partial parse capability would facilitate the use of simpler grammars.

For further work with the machine-readable W7, another valuable feature would be the ability to handle ill-formed input. This is perhaps startling, since a dictionary is supposed to be the epitome of well-formedness, by definition as it were. However, Peterson [1982] counted 903 typographical and spelling errors in the machine-readable W7 (including ten errors carried over from the printed W7), and my experience suggests that his count was conservative. Such errors are probably little or no problem in more recent MRDs, which are used as typesetter input and are therefore exactly as correct as the printed dictionary; errors creep into these dictionaries in other places, as Boguraev [1988] discovered in his study of the grammar codes in the Longman Dictionary of Contemporary English.

Before choosing or designing the best parser for the task, it is worthwhile to define an appropriate task: to determine what sort of information one can get from parsing that is impossible or impractical to get by easier means. One obvious approach is to use parsing as a backup. For instance, one category of definitions that has steadfastly resisted our text processing analysis is that of verb definitions whose headword is a verb plus separable particle, e.g. give up.
A text processing program using part-of-speech tagged input can, however, flag these and other troublesome definitions for further analysis. It still seems, though, that we should be able to use parsing more ambitiously than this. It is intrinsically more powerful; the techniques we refer to here as "text processing" mostly only extract single, stereotyped fragments of information. The most powerful of them, the head finder, still performs only one simple grammatical operation: finding the nuclei of noun phrases. In contrast, a "real" parser generates a parse tree containing a wealth of structural and relational information that cannot be adequately represented by a formalism such as word-relation-word triples, feature lists, etc.

Only in the simplest definitions does our present set of relations give us a complete analysis. In most definitions, we are forced to throw away essential information. The definition

(33) dodecahedron 0 0 n a solid having 12 plane faces

gives us two relational triples:

(34) (dodecahedron 0 0 n) t (solid)
(35) (dodecahedron 0 0 n) nn-attr (face)

The first triple is straightforward. The second triple tells us that the noun dodecahedron has the (noun) attribute face, i.e. that a dodecahedron has faces. But the relational triple structure, by itself, cannot capture the information that the dodecahedron has specifically 12 faces. We could add another triple

(36) (face) nn-attr (12)

i.e., saying that faces have the attribute of (a cardinality of) 12, but this triple is correct only in the context of the definition of a dodecahedron. It is not permanently or generically true, as are (34) and (35). The information is present, however, in the parse tree we get from the LSP. It can be made somewhat more accessible by putting it into a dependency form such as

(37) (solid (a) (having (face (plural) (12) (plane))))

which indicates not only that face is an attribute of that solid which is a dodecahedron, but that the cardinality 12 is an attribute of face in this particular case, as is also plane. In order to be really useful, a structure such as this must have conjunction phrases expanded, passives inverted, inflected forms analyzed, and other modifications of the kind often brought under the rubric of "transformations." The LSP can do this sort of thing very well.

The defining words also need to be disambiguated. We do not hope for any fully automatic way to do this, but co-occurrence of defining words, perhaps weighted according to their position in the dependency structure, would reduce the human disambiguator's task to one of post-editing. This might perhaps be further simplified by a customized interactive editing facility. We do not need to set up an elaborate network data structure, though; the Lisp-like tree structure, once it is transformed and its elements disambiguated, constitutes a set of implicit pointers to the definitions of the various words. Even with all this work done, however, a big gap remains between words and ideal semantic concepts.

Let us consider the ways in which W7 has defined all five basic polyhedrons:

(38) dodecahedron 0 0 n a solid having 12 plane faces
(39) cube 1 1 n the regular solid of six equal square sides
(40) icosahedron 0 0 n a polyhedron having 20 faces
(41) octahedron 0 0 n a solid bounded by eight plane faces
(42) tetrahedron 0 0 n a polyhedron of four faces
(43) polyhedron 0 0 n a solid formed by plane faces

The five polyhedrons differ only in their number of faces, apart from the cube's additional attribute of being regular.
There is no reason why a single syntactic/semantic structure could not be used to define all five polyhedrons. Despite this, no two of the definitions have the same structure. These definitions illustrate that, even though W7 is fairly stereotyped in its language, it is not nearly as stereotyped as it needs to be for large scale, automatic semantic analysis. We are going to need a great deal of sophistication in synonymy and moving around the taxonomic hierarchy. (It is worth repeating, however, that in building our lexicon, we have no intention of relying exclusively on the information contained in W7.)

Figure 2 shows a small part of a possible network. In this sample, the definitions have been parsed into a Lisp-like dependency structure, with some transformations such as inversion of passives, but no attempt to fit the polyhedron definitions into a single semantic format.

(cube 1 1) T (solid 3 1 (the) (regular) (of (side 1 6b (PL) (six) (equal) (square))))
(dodecahedron 0 0) T (solid 3 1 (a) (have (OBJ (face 1 5a5 (PL) (12) (plane)))))
(icosahedron 0 0) T (polyhedron (a) (have (OBJ (face 1 5a5 (PL) (20)))))
(octahedron 0 0) T (solid 3 1 (a) (bound (SUBJ (face 1 5a5 (PL) (eight) (plane)))))
(tetrahedron 0 0) T (polyhedron (a) (of (face 1 5a5 (PL) (four))))
(polyhedron 0 0) T (solid 3 1 (a) (form (SUBJ (face 1 5a5 (PL) (plane)))))
(solid 3 1) T (figure (a) (geometrical) (have (OBJ (dimension (PL) (three)))))
(face 1 5a5) T (surface 1 2 (plane) (bound (OBJ (solid 3 1 (a) (geometric)))))
(side 1 6a) T (line (a) (bound (OBJ (NULL))) (of (figure (a) (geometrical))))
(side 1 6b) T (surface 1 2 (delimit (OBJ (solid (a)))))
(surface 1 2) T (locus (a) (or (plane) (curved)) (two-dimensional) (of (point (PL)) ...))

Figure 2. Part of a "network" of parsed definitions

If this formalism does not look much like a network, imagine each word in each definition (the part of the node to the right of the taxonomy marker "T") serving as a pointer to its own defining node. The resulting network is quite dense. We simplify by leaving out other parts of the lexical entry, and by including only a few disambiguations, just to give the flavor of their presence. Disambiguation of a word is indicated by the inclusion of its homograph and sense numbers (see examples 1 and 2, above).

Summary

In the process of developing techniques of dictionary analysis, we have learned a variety of lessons. In particular, we have learned (as many dictionary researchers had suspected but none had attempted to establish) that full natural-language parsing is not an efficient procedure for gathering lexical information in a simple form such as relational triples.

This realization stimulated us to do two things. First, we needed to develop faster and more reliable techniques for extracting triples. We found that many triples could be found using UNIX text processing utilities combined with the recognition of a few structural patterns in definitions. These procedures are subject to further development and refinement, but have already yielded thousands of triples. Second, we were inspired to look for a form of data representation that would allow our lexical database to exploit the power of full natural-language parsing more effectively than it can through triples. We are now in the early stages of investigating such a representation.

REFERENCES

Ahlswede, Thomas E., 1985. "A Linguistic String Grammar for Adjective Definitions." In S. Williams, ed., Humans and Machines: the Interface through Language.
Ablex, Norwood, NJ, pp. 101-127.

Ahlswede, Thomas E., 1988. "Syntactic and Semantic Analysis of Definitions in a Machine-Readable Dictionary." Ph.D. Thesis, Illinois Institute of Technology.

Amsler, Robert A., 1980. "The Structure of The Merriam-Webster Pocket Dictionary." Ph.D. Dissertation, Computer Science, University of Texas, Austin.

Amsler, Robert A., 1981. "A Taxonomy for English Nouns and Verbs." Proceedings of the 19th Annual Meeting of the ACL, pp. 133-138.

Apresyan, Yu. D., I. A. Mel'čuk and A. K. Žolkovsky, 1970. "Semantics and Lexicography: Towards a New Type of Unilingual Dictionary." In Kiefer, F., ed., Studies in Syntax. Reidel, Dordrecht, Holland, pp. 1-33.

Becker, Joseph D., 1975. "The Phrasal Lexicon." In Schank, R. C. and B. Nash-Webber, eds., Theoretical Issues in Natural Language Processing, ACL Annual Meeting, Cambridge, MA, June, 1975, pp. 38-41.

Boguraev, Branimir, 1987. "Experiences with a Machine-Readable Dictionary." Proceedings of the Third Annual Conference of the UW Centre for the New OED, University of Waterloo, Waterloo, Ontario, November 1987, pp. 37-50.

Chodorow, Martin S., Roy J. Byrd, and George E. Heidorn, 1985. "Extracting Semantic Hierarchies from a Large On-line Dictionary." Proceedings of the 23rd Annual Meeting of the ACL, pp. 299-304.

Evens, Martha W., Bonnie C. Litowitz, Judith A. Markowitz, Raoul N. Smith, and Oswald Werner, 1980. Lexical-Semantic Relations: A Comparative Survey. Linguistic Research, Inc., Edmonton, Alberta.

Fox, Edward A., 1980. "Lexical Relations: Enhancing Effectiveness of Information Retrieval Systems." ACM SIGIR Forum, Vol. 15, No. 3, pp. 5-36.

Fox, Edward A., J. Terry Nutter, Thomas Ahlswede, Martha Evens, and Judith Markowitz, forthcoming. "Building a Large Thesaurus for Information Retrieval." To be presented at the ACL Conference on Applied Natural Language Processing, February, 1988.

Halliday, Michael A. K. and Ruqaiya Hasan, 1976. Cohesion in English. Longman, London.

Klick, Vicki, 1981. LSP grammar of adverb definitions. Illinois Institute of Technology, unpublished.

Mayer, Glenn, 1988. Program for morphological analysis. IIT, unpublished.

Peterson, James L., 1982. Webster's Seventh New Collegiate Dictionary: A Computer-Readable File Format. Technical Report TR-196, University of Texas, Austin, TX, May, 1982.

Sager, Naomi, 1981. Natural Language Information Processing. Addison-Wesley, New York.

Wang, Yih-Chen, James Vandendorpe, and Martha Evens, 1985. "Relational Thesauri in Information Retrieval." Journal of the American Society for Information Science, Vol. 36, No. 1, pp. 15-27.
Polynomial Learnability and Locality of Formal Grammars

Naoki Abe*
Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104.

ABSTRACT

We apply a complexity theoretic notion of feasible learnability called "polynomial learnability" to the evaluation of grammatical formalisms for linguistic description. We show that a novel, nontrivial constraint on the degree of "locality" of grammars allows not only context free languages but also a rich class of mildly context sensitive languages to be polynomially learnable. We discuss possible implications of this result to the theory of natural language acquisition.

1 Introduction

Much of the formal modeling of natural language acquisition has been within the classic paradigm of "identification in the limit from positive examples" proposed by Gold [7]. A relatively restricted class of formal languages has been shown to be unlearnable in this sense, and the problem of learning formal grammars has long been considered intractable.¹ The following two controversial aspects of this paradigm, however, leave the implications of these negative results to the computational theory of language acquisition inconclusive. First, it places a very high demand on the accuracy of the learning that takes place - the hypothesized language must be exactly equal to the target language for it to be considered "correct". Second, it places a very permissive demand on the time and amount of data that may be required for the learning - all that is required of the learner is that it converge to the correct language in the limit.²

*Supported by an IBM graduate fellowship. The author gratefully acknowledges his advisor, Scott Weinstein, for his guidance and encouragement throughout this research.
¹Some interesting learnable subclasses of regular languages have been discovered and studied by Angluin [3].
²For a comprehensive survey of various paradigms related to "identification in the limit" that have been proposed to address the first issue, see Osherson, Stob and Weinstein [12]. As for the latter issue, Angluin ([5], [4]) investigates the feasible learnability of formal languages with the use of powerful oracles such as "MEMBERSHIP" and "EQUIVALENCE".

Of the many alternative paradigms of learning proposed, the notion of "polynomial learnability" recently formulated by Blumer et al. [6] is of particular interest because it addresses both of these problems in a unified way. This paradigm relaxes the criterion for learning by ruling a class of languages to be learnable if each language in the class can be approximated, given only positive and negative examples,³ with a desired degree of accuracy and with a desired degree of robustness (probability), but puts a higher demand on the complexity by requiring that the learner converge in time polynomial in these parameters (of accuracy and robustness) as well as the size (complexity) of the language being learned.

In this paper, we apply the criterion of polynomial learnability to subclasses of formal grammars that are of considerable linguistic interest. Specifically, we present a novel, nontrivial constraint on grammars called "k-locality", which enables context free grammars and indeed a rich class of mildly context sensitive grammars to be feasibly learnable. Importantly, the constraint of k-locality is a nontrivial one because each k-local subclass is an exponential class⁴ containing infinitely many infinite languages.
To the best of the author's knowledge, "k-locality" is the first nontrivial constraint on grammars which has been shown to allow a rich class of grammars of considerable linguistic interest to be polynomially learnable. We finally mention some recent negative results in this paradigm, and discuss possible implications of their contrast with the learnability of k-local classes.

2 Polynomial Learnability

"Polynomial learnability" is a complexity theoretic notion of feasible learnability recently formulated by Blumer et al. ([6]). This notion generalizes Valiant's theory of learnable boolean concepts [15], [14] to infinite objects such as formal languages. In this paradigm, the languages are presented via infinite sequences of positive and negative examples⁵ drawn with an arbitrary but time invariant distribution over the entire space, that is, in our case, Σ_T*. Learners are to hypothesize a grammar at each finite initial segment of such a sequence; in other words, they are functions from finite sequences of members of Σ_T* × {0, 1} to grammars.⁶ The criterion for learning is a complexity theoretic, approximate, and probabilistic one. A learner is said to learn if it can, with an arbitrarily high probability (1 − δ), converge to an arbitrarily accurate (within ε) grammar in a feasible number of examples. "A feasible number of examples" means, more precisely, polynomial in the size of the grammar it is learning and the degrees of probability and accuracy that it achieves - δ⁻¹ and ε⁻¹. "Accurate within ε" means, more precisely, that the output grammar can predict, with error probability ε, future events (examples) drawn from the same distribution on which it has been presented examples for learning. We now formally state this criterion.⁷

³We hold no particular stance on the validity of the claim that children make no use of negative examples. We do, however, maintain that the investigation of learnability of grammars from both positive and negative examples is a worthwhile endeavour for at least two reasons: First, it has a potential application for the design of natural language systems that learn. Second, it is possible that children do make use of indirect negative information.
⁴A class of grammars G is an exponential class if each subclass of G with bounded size contains exponentially (in that size) many grammars.

Definition 2.1 (Polynomial Learnability) A collection of languages £ with an associated "size" function with respect to some fixed representation mechanism is polynomially learnable if and only if:⁸

    ∃f ∈ F, ∃q: a polynomial function,
    ∀L_i ∈ £, ∀P: a probability measure on Σ_T*,
    ∀ε, δ > 0, ∀m ≥ q(ε⁻¹, δ⁻¹, size(L_i)):
    [P^∞({t ∈ EX(L_i) | P(L(f(t_m)) Δ L_i) < ε}) ≥ 1 − δ,
     and f is computable in time polynomial in the length of its input].

If in addition all of f's output grammars on example sequences for languages in £ belong to G, then we say that £ is polynomially learnable by G.

⁵We let EX(L) denote the set of infinite sequences which contain only positive and negative examples for L, so indicated.
⁶We let F denote the set of all such functions.
⁷The following presentation uses concepts and notation of formal learning theory, cf. [12].
⁸Note the following notation. The initial segment of a sequence t up to the n-th element is denoted by t_n. L denotes some fixed mapping from grammars to languages: if G is a grammar, L(G) denotes the language generated by it. If L_1 is a language, size(L_1) denotes the size of a minimal grammar for L_1. A Δ B denotes the symmetric difference, i.e. (A − B) ∪ (B − A). Finally, if P is a probability measure on Σ_T*, then P^∞ is the canonical product extension of P.

Suppose we take the sequence of the hypotheses (grammars) made by a learner on successive initial finite sequences of examples, and plot the "errors" of those grammars with respect to the language being learned. The two learnability criteria, "identification in the limit" and "polynomial learnability", require different kinds of convergence behavior of such a sequence, as is illustrated in Figure 1.

[Figure 1: Convergence behaviour - plots of hypothesis error over time under "identification in the limit" and under "polynomial learnability".]

Blumer et al. ([6]) show an interesting connection between polynomial learnability and data compression. The connection is one way: if there exists a polynomial time algorithm which reliably "compresses" any sample of any language in a given collection to a provably small consistent grammar for it, then such an algorithm polynomially learns that collection. We state this theorem in a slightly weaker form.
Definition 2.2 Let £ be a language collection with an associated size function "size", and for each n let £_n = {L ∈ £ | size(L) ≤ n}. Then A is an Occam algorithm for £ with range size⁹ f(m, n) if and only if:

    ∀n ∈ N, ∀L ∈ £_n, ∀t ∈ EX(L), ∀m ∈ N:
    [A(t_m) is consistent with rng(t_m)¹⁰,
     and A(t_m) ∈ £_f(m,n),
     and A runs in time polynomial in |t_m|].

Theorem 2.1 (Blumer et al.) If A is an Occam algorithm for £ with range size f(m, n) = O(n^k m^α) for some k ≥ 1, 0 < α < 1 (i.e. less than linear in sample size and polynomial in complexity of language), then A polynomially learns £.

⁹In [6] the notion of "range dimension" is used in place of "range size", which is the Vapnik-Chervonenkis dimension of the hypothesis class. Here, we use the fact that the dimension of a hypothesis class with a size bound is at most equal to that size bound.
¹⁰Grammar G is consistent with a sample S if {x | (x, 0) ∈ S} ⊆ L(G) and L(G) ∩ {x | (x, 1) ∈ S} = ∅.
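Since this section is otherwise abstract, a minimal sketch may help fix the two notions just defined: an Occam algorithm learns simply by compressing the sample, and accuracy is error mass under the example distribution. The Python fragment below is not from the paper; `compress`, `accepts` and `draw_example` are hypothetical stand-ins supplied by the caller.

    def occam_learn(labeled_sample, compress):
        # Occam-style learner (sketch): output a small grammar consistent
        # with the sample.  `compress` stands for a hypothetical subroutine
        # meeting the range-size bound of Theorem 2.1.
        return compress(labeled_sample)

    def empirical_error(accepts, draw_example, trials=10000):
        # Monte-Carlo estimate of the error P(L(f(t_m)) Δ L_i) in
        # Definition 2.1: the probability that the hypothesis disagrees
        # with the label of an example drawn from the target distribution.
        # Following footnote 10, a label of 0 marks a positive example.
        wrong = 0
        for _ in range(trials):
            x, label = draw_example()
            if accepts(x) != (label == 0):
                wrong += 1
        return wrong / trials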
3 K-Local Context Free Grammars

The notion of "k-locality" of a context free grammar is defined with respect to a formulation of derivations defined originally for TAG's by Vijay-Shanker, Weir, and Joshi [16], [17], which is a generalization of the notion of a parse tree. In their formulation, a derivation is a tree recording the history of rewritings. Each node of a derivation tree is labeled by a rewriting rule, and in particular, the root must be labeled with a rule with the starting symbol as its left hand side. Each edge corresponds to the application of a rewriting; the edge from a rule (host rule) to another rule (applied rule) is labeled with the "position" of the nonterminal in the right hand side of the host rule at which the rewriting takes place.

The degree of locality of a derivation is the number of distinct kinds of rewritings in it - including the immediate context in which rewritings take place. In terms of a derivation tree, the degree of locality is the number of different kinds of edges in it, where two edges are equivalent just in case the two end nodes are labeled by the same rules, and the edges themselves are labeled by the same node address.

Definition 3.1 Let D(G) denote the set of all derivation trees of G, and let τ ∈ D(G). Then, the degree of locality of τ, written locality(τ), is defined as follows:

    locality(τ) = card{(p, q, η) | there is an edge in τ from a node labeled with p to another labeled with q, and the edge is itself labeled with η}

The degree of locality of a grammar is the maximum of those of all its derivations.

Definition 3.2 A CFG G is called k-local if max{locality(τ) | τ ∈ D(G)} ≤ k. We write k-Local-CFG = {G | G ∈ CFG and G is k-local} and k-Local-CFL = {L(G) | G ∈ k-Local-CFG}.

Example 3.1 L1 = {a^n b^n a^m b^m | n, m ∈ N} ∈ 4-Local-CFL, since all the derivations of G1 = ({S, S1}, {a, b}, S, {S → S1 S1, S1 → a S1 b, S1 → λ}) generating L1 have degree of locality at most 4. For example, the derivation for the string a³b³ab has degree of locality 4, as shown in Figure 2.

[Figure 2: The derivation tree for a³b³ab by G1, with locality(τ) = 4.]

A crucial property of k-local grammars, which we will utilize in proving the learnability result, is that for each k-local grammar, there exists another k-local grammar in a specific normal form, whose size is only polynomially larger than the original grammar. The normal form in effect puts the grammar into a disjoint union of small grammars, each with at most k rules and k nonterminal occurrences. By "the disjoint union" of an arbitrary set of n grammars, g1, ..., gn, we mean the grammar obtained by first renaming nonterminals in each g_i so that the nonterminal set of each one is disjoint from that of any other, then taking the union of the rules in all those grammars, and finally adding the rule S → S_i for each starting symbol S_i of g_i, and making a brand new symbol S the starting symbol of the grammar so obtained.

Lemma 3.1 (K-Local Normal Form) For every k-local-CFG H, if n = size(H), then there is a k-local-CFG G such that:

1. L(G) = L(H).
2. G is in k-local normal form, i.e. there is an index set I such that G = (Σ_T, ∪_{i∈I} Σ_i, S, {S → S_i | i ∈ I} ∪ (∪_{i∈I} R_i)), and if we let G_i = (Σ_T, Σ_i, S_i, R_i) for each i ∈ I, then:
   (a) Each G_i is "k-simple": ∀i ∈ I, |R_i| ≤ k and NTO(R_i) ≤ k.¹¹
   (b) Each G_i has size bounded by size(G): ∀i ∈ I, size(G_i) = O(n).
   (c) All G_i's have disjoint nonterminal sets: ∀i, j ∈ I (i ≠ j), Σ_i ∩ Σ_j = ∅.
3. size(G) = O(n^(k+1)).

¹¹If R is a set of production rules, NTO(R) denotes the number of nonterminal occurrences in R.
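The degree of locality of Definition 3.1 is easy to compute mechanically. The following Python sketch is not from the paper, and its tuple encoding of derivations is an assumption made here; it counts the distinct (host rule, applied rule, position) edge kinds and reproduces the value 4 for the derivation of a³b³ab in Figure 2.

    def locality(derivation):
        # Degree of locality (Definition 3.1): the number of distinct
        # (host rule, applied rule, position) triples among the edges.
        # A derivation is a pair (rule, children), where `children` maps a
        # nonterminal position (an int) to a subderivation.
        kinds = set()
        stack = [derivation]
        while stack:
            rule, children = stack.pop()
            for pos, child in children.items():
                kinds.add((rule, child[0], pos))
                stack.append(child)
        return len(kinds)

    # The derivation of a^3 b^3 a b by G1 in Example 3.1 / Figure 2:
    d = ('S->S1 S1',
         {1: ('S1->a S1 b', {2: ('S1->a S1 b', {2: ('S1->a S1 b',
              {2: ('S1->', {})})})}),
          2: ('S1->a S1 b', {2: ('S1->', {})})})
    assert locality(d) == 4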
Definition 3.3 We let φ and ψ be any maps that satisfy the following: if G is any k-local-CFG in k-local normal form, then φ(G) is the set of all of its k-local components (the G_i above); if Ĝ = {G_i | i ∈ I} is a set of k-simple grammars, then ψ(Ĝ) is a single grammar that is a "disjoint union" of all of the k-simple grammars in Ĝ.

4 K-Local Context Free Languages Are Polynomially Learnable

In this section, we present a sketch of the proof of our main learnability result.

Theorem 4.1 For each k ∈ N, k-local-CFL is polynomially learnable.¹²

Proof:
We prove this by exhibiting an Occam algorithm A for k-local-CFL with some fixed k, with range size polynomial in the size of a minimal grammar and less than linear in the sample size. We assume that A is given a labeled m-sample¹³ S_L for some L ∈ k-local-CFL with size(H) = n, where H is its minimal k-local-CFG. We let length(S_L) = Σ_{s∈S_L} length(s) = l.¹⁴ We let S_L+ and S_L− denote the positive and negative portions of S_L respectively, i.e., S_L+ = {x | ∃s ∈ S_L such that s = (x, 0)} and S_L− = {x | ∃s ∈ S_L such that s = (x, 1)}. We fix a minimal grammar G in k-local normal form that is consistent with S_L, with size(G) ≤ p(n) for some fixed polynomial p, by Lemma 3.1 and the fact that a minimal consistent k-local-CFG is not larger than H. Further, we let Ĝ be the set of all of the "k-simple components" of G and define L(Ĝ) = ∪_{G_i∈Ĝ} L(G_i). Then note L(Ĝ) = L(G). Since each k-simple component has at most k nonterminals, we assume without loss of generality that each G_i in Ĝ has the same nonterminal set of size k, say Σ_k = {A1, ..., Ak}.

¹²We use the size of a minimal k-local CFG as the size of a k-local-CFL, i.e., ∀L ∈ k-local-CFL, size(L) = min{size(G) | G ∈ k-local-CFG and L(G) = L}.
¹³S_L is a labeled m-sample for L if S_L ⊆ graph(char(L)) and card(S_L) = m. graph(char(L)) is the graph of the characteristic function of L, i.e. the set {(x, 0) | x ∈ L} ∪ {(x, 1) | x ∉ L}.
¹⁴In the sequel, we refer to the number of strings in a sample as the sample size, and the total length of the strings in a sample as the sample length.

The idea for constructing A is straightforward. Step 1: We generate all possible rules that may be in the portion of G that is relevant to S_L+. That is, if we fix a set of derivations D, one for each string in S_L+ from G, then the set of rules that we generate will contain all the rules that participate in any derivation in D. (We let Rel(G, S_L+) denote the restriction of G to S_L+ with respect to some D in this fashion.) We use the k-locality of G to show that such a set will be polynomially bounded in the length of S_L+. Step 2: We then generate the set of all possible grammars having at most k of these rules. Since each k-simple component of G has at most k rules, the generated set of grammars will include all of the k-simple components of G. Step 3: We then use the negative portion of the sample, S_L−, to filter out the "inconsistent" ones. What we have at this stage is a polynomially bounded set of k-simple grammars with varying sizes, which do not generate any of S_L−, and contain all the k-simple grammars of G. Associated with each k-simple grammar is the portion of S_L+ that it "covers" and its size. Step 4: What an Occam algorithm needs to do, then, is to find some subset of these k-simple grammars that "covers" S_L+, and has a total size that is provably only polynomially larger than a minimal total size of a subset that covers S_L+, and is less than linear in the sample size, m. We formalize this as a variant of the "Set Cover" problem, which we call "Weighted Set Cover" (WSC), and prove the existence of an approximation algorithm with a performance guarantee which suffices to ensure that the output of A will be a grammar that is provably only polynomially larger than the minimal one, and less than linear in the sample size. The algorithm runs in time polynomial in the size of the grammar being learned and the sample length.
Step 1. A crucial consequence of the way k-locality is defined is that the "terminal yield" of any rule body that is used to derive any string in the language can be split into at most k + 1 intervals. (We define the "terminal yield" of a rule body R to be h(R), where h is a homomorphism that preserves terminal symbols and deletes nonterminal symbols.)

Definition 4.1 (Subyields) For an arbitrary i ∈ N, an i-tuple of members of Σ_T*, w = (v1, v2, ..., vi), is said to be a subyield of s if there are some u1, ..., ui, u_{i+1} ∈ Σ_T* such that s = u1 v1 u2 v2 ... ui vi u_{i+1}. We let SubYields(i, s) = {w ∈ (Σ_T*)^j | j ≤ i and w is a subyield of s}.

We then let SubYields_k(S_L+) denote the set of all subyields of strings in S_L+ that may have come from a rule body in a k-local-CFG, i.e. subyields that are tuples of at most k + 1 strings.

Definition 4.2 SubYields_k(S_L+) = ∪_{s∈S_L+} SubYields(k + 1, s).

Claim 4.1 card(SubYields_k(S_L+)) = O(l^(2k+3)).

Proof: This is obvious, since given a string s of length a, there are only O(a^(2(k+1))) ways of choosing 2(k + 1) different positions in the string. This completely specifies all the elements of SubYields(k + 1, s). Since the number of strings (m) in S_L+ and the length of each string in S_L+ are each bounded by the sample length (l), we have at most O(l) × O(l^(2(k+1))) strings in SubYields_k(S_L+). □
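The enumeration behind Claim 4.1 can be made concrete. The Python sketch below is not part of the original proof; it generates the subyields of a string by choosing the 2i interval endpoints directly, which is exactly where the O(l^(2(k+1))) count per string comes from. It enumerates subyields with nonempty components only.

    from itertools import combinations

    def subyields(s, k):
        # SubYields(k+1, s) of Definition 4.1: tuples of at most k+1
        # disjoint, in-order substrings (v1, ..., vi) of s, obtained by
        # choosing 2i endpoint positions 0 <= c0 < c1 < ... <= len(s).
        result = set()
        for i in range(1, k + 2):                       # i = 1 .. k+1 intervals
            for cuts in combinations(range(len(s) + 1), 2 * i):
                pairs = zip(cuts[0::2], cuts[1::2])     # (start, end) pairs
                result.add(tuple(s[a:b] for a, b in pairs))
        return result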
Thus we now have a polynomially generable set of possible yields of rule bodies in G. The next step is to generate the set of all possible rules having these yields. Now, by k-locality, in any derivation of G we have at most k distinct "kinds" of rewritings present. So each rule has at most k useful nonterminal occurrences, and since G is minimal, it is free of useless nonterminals. We generate all possible rules with at most k nonterminal occurrences from some fixed set of k nonterminals (Σ_k), having as terminal subyields one of SubYields_k(S_L+). We will then have generated all possible rules of Rel(G, S_L+). In other words, such a set will provably contain all the rules of Rel(G, S_L+). We let TFRules(Σ_k) denote the set of "terminal free rules" {A_i0 → x1 A_i1 x2 ... xn A_in x_{n+1} | n ≤ k and ∀j ≤ n, A_ij ∈ Σ_k}. We note that the cardinality of such a set is a function only of k. We then "assign" members of SubYields_k(S_L+) to TFRules(Σ_k) wherever it is possible (i.e., where the arities agree). We let CRules(k, S_L+) denote the set of "candidate rules" so obtained.

Definition 4.3 CRules(k, S_L+) = {R(w1/x1, ..., wn/xn) | R ∈ TFRules(Σ_k) and w ∈ SubYields_k(S_L+) and arity(w) = arity(R) = n}

It is easy to see that the number of rules in such a set is also polynomially bounded.

Claim 4.2 card(CRules(k, S_L+)) = O(l^(2k+3)).

Step 2. Recall that we have assumed that each k-simple component has a nonterminal set contained in some fixed set of k nonterminals, Σ_k. So if we generate all subsets of CRules(k, S_L+) with at most k rules, then these will include all the k-simple grammars in Ĝ.

Definition 4.4 CGrams(k, S_L) = Π_k(CRules(k, S_L+)).¹⁵

¹⁵Π_k(X) in general denotes the set of all subsets of X with cardinality at most k.

Step 3. Now we finally make use of the negative portion of the sample, S_L−, to ensure that we do not include any inconsistent grammars in our candidates.

Definition 4.5 FGrams(k, S_L) = {H | H ∈ CGrams(k, S_L) and L(H) ∩ S_L− = ∅}

This filtering can be computed in time polynomial in the length of S_L, because for testing the consistency of each grammar in CGrams(k, S_L), all that is involved is the membership question for strings in S_L− with that grammar.

Step 4. What we have at this stage is a set of "subcovers" of S_L+, each with a size (or "weight") associated with it, and we wish to find a subset of these subcovers that covers the entire S_L+ but has a provably small total weight. We abstract this as the following problem.

WEIGHTED-SET-COVER (WSC)
INSTANCE: (X, Y, w), where X is a finite set, Y is a subset of Π(X), and w is a function from Y to N+. Intuitively, Y is a set of subcovers of the set X, each associated with its weight.
NOTATION: For every subset Z of Y, we let cover(Z) = ∪{z | z ∈ Z} and totalweight(Z) = Σ_{z∈Z} w(z).
QUESTION: What subset of Y is a set-cover of X with a minimal total weight? I.e., find Z ⊆ Y with the following properties:
(i) cover(Z) = X;
(ii) ∀Z' ⊆ Y, if cover(Z') = X then totalweight(Z') ≥ totalweight(Z).

We now prove the existence of an approximation algorithm for this problem with the desired performance guarantee.

Lemma 4.1 There is an algorithm B and a polynomial p such that, given an arbitrary instance (X, Y, w) of WEIGHTED-SET-COVER with |X| = n, B always outputs Z such that:

1. Z ⊆ Y;
2. Z is a cover for X, i.e. ∪Z = X;
3. if Z' is a minimal weight set cover for (X, Y, w), then Σ_{y∈Z} w(y) ≤ p(Σ_{y∈Z'} w(y)) × log n;
4. B runs in time polynomial in the size of the instance.

Proof: To exhibit an algorithm with this property, we make use of the greedy algorithm C for the standard set-cover problem due to Johnson ([8]), with a performance guarantee. SET-COVER can be thought of as a special case of WEIGHTED-SET-COVER with the weight function being the constant function 1.

Theorem 4.2 (David S. Johnson) There is a greedy algorithm C for SET-COVER such that, given an arbitrary instance (X, Y) with an optimal solution Z*, C outputs a solution Z such that card(Z) = O(log |X| × card(Z*)), and runs in time polynomial in the instance size.

Now we present the algorithm for WSC. The idea of the algorithm is simple. It applies C on X and successive subclasses of Y with bounded weights, up to the maximum weight there is, but using only powers of 2 as the bounds. It then outputs one with a minimal total weight among those.

    Algorithm B: ((X, Y, w))
      maxweight := max{w(y) | y ∈ Y}
      m := ⌈log maxweight⌉
      /* this loop gets an approximate solution using C for subsets of Y,
         each defined by putting an upper bound on the weights */
      For i = 1 to m do:
        Y[i] := {y | y ∈ Y and w(y) ≤ 2^i}
        s[i] := C((X, Y[i]))
      End /* For */
      /* this loop replaces all 'bad' (i.e. not covering X) solutions
         with Y - the solution with the maximum total weight */
      For i = 1 to m do:
        s[i] := s[i]  if cover(s[i]) = X
             := Y     otherwise
      End /* For */
      mintotalweight := min{totalweight(s[j]) | j ∈ [m]}
      Return s[min{i | totalweight(s[i]) = mintotalweight}]
    End /* Algorithm B */

Time Analysis: Clearly, Algorithm B runs in time polynomial in the instance size, since Algorithm C runs in time polynomial in the instance size and there are only m = ⌈log maxweight⌉ calls to it, which certainly does not exceed the instance size.

Performance Guarantee: Let (X, Y, w) be a given instance with card(X) = n. Then let Z* be an optimal solution of that instance, i.e., a minimal total weight set cover. Let totalweight(Z*) = w*. Now let m* = ⌈log max{w(z) | z ∈ Z*}⌉. Then m* ≤ min(n, ⌈log maxweight⌉). So when C is called with an instance (X, Y[m*]) in the m*-th iteration of the first 'For'-loop in the algorithm, every member of Z* is in Y[m*]. Hence, the optimal solution of this instance equals Z*. Thus, by the performance guarantee of C, s[m*] will be a cover of X with cardinality at most card(Z*) × log n. Thus, we have card(s[m*]) ≤ card(Z*) × log n. Now, for every member t of s[m*], w(t) ≤ 2^(m*) ≤ 2^⌈log w*⌉ ≤ 2w*. Therefore, totalweight(s[m*]) = card(Z*) × log n × O(2w*) = O(w*) × log n × O(2w*), since w* certainly is at least as large as card(Z*). Hence, we have totalweight(s[m*]) = O(w*² × log n). Now it is clear that the output of B will be a cover, and its total weight will not exceed the total weight of s[m*]. We conclude therefore that B((X, Y, w)) will be a set-cover for X, with total weight bounded above by O(w*² × log n), where w* is the total weight of a minimal weight cover and n = |X|. □
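A compact executable rendition of Algorithm B may be useful. The Python sketch below follows the same scheme (greedy set cover applied to the weight-bounded subclasses Y[i], then the cheapest covering solution is returned); it simplifies the original by discarding non-covering solutions instead of replacing them with Y, which does not affect the returned cover. Here w is assumed to be a dict mapping each candidate set (a frozenset) to a positive weight.

    import math

    def greedy_set_cover(X, candidates):
        # Johnson's greedy algorithm C (Theorem 4.2): repeatedly pick the
        # set covering the most still-uncovered elements.
        uncovered, cover = set(X), []
        while uncovered:
            best = max(candidates, key=lambda s: len(uncovered & s), default=None)
            if best is None or not (uncovered & best):
                return None                    # X cannot be covered
            cover.append(best)
            uncovered -= best
        return cover

    def weighted_set_cover(X, Y, w):
        # Algorithm B (sketch): run C on Y[i] = {y : w(y) <= 2^i} for
        # i = 1 .. ceil(log2 maxweight); return the cheapest cover found.
        Y = [frozenset(y) for y in Y]
        m = max(1, math.ceil(math.log2(max(w[y] for y in Y))))
        solutions = []
        for i in range(1, m + 1):
            sol = greedy_set_cover(X, [y for y in Y if w[y] <= 2 ** i])
            if sol is not None:
                solutions.append(sol)
        return min(solutions, key=lambda z: sum(w[y] for y in z), default=None)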
Now, to apply algorithm B to our learning problem, we let Y = {S_L+ ∩ L(H) | H ∈ FGrams(k, S_L)} and define the weight function w: Y → N+ by ∀y ∈ Y, w(y) = min{size(H) | H ∈ FGrams(k, S_L) and y = L(H) ∩ S_L+}, and call B on (S_L+, Y, w). We then output the grammar "corresponding" to B((S_L+, Y, w)). In other words, we let Ĥ = {mingrammar(y) | y ∈ B((S_L+, Y, w))}, where mingrammar(y) is a minimal-size grammar H in FGrams(k, S_L) such that L(H) ∩ S_L+ = y. The final output grammar H will be the "disjoint union" of all the grammars in Ĥ, i.e. H = ψ(Ĥ). H is clearly consistent with S_L, and since the minimal total weight solution of this instance of WSC is no larger than Rel(G, S_L+), by the performance guarantee on the algorithm B, size(H) ≤ p(size(Rel(G, S_L+))) × O(log m) for some polynomial p, where m is the sample size. size(Rel(G, S_L+)) is in turn bounded by a polynomial in the size of a minimal grammar consistent with S_L. We therefore have shown the existence of an Occam algorithm with range size polynomial in the size of a minimal consistent grammar and less than linear in the sample size. Hence, Theorem 4.1 has been proved. Q.E.D.

5 Extension to Mildly Context Sensitive Languages

The learnability of k-local subclasses of CFG may appear to be quite restricted. It turns out, however, that the learnability of k-local subclasses extends to a rich class of mildly context sensitive grammars which we call "Ranked Node Rewriting Grammars" (RNRG's). RNRG's are based on the underlying ideas of Tree Adjoining Grammars (TAG's),¹⁶ and are also a special case of context free tree grammars [13] in which unrestricted use of variables for moving, copying and deleting is not permitted. In other words, each rewriting in this system replaces a "ranked" nonterminal node of, say, rank j with an "incomplete" tree containing exactly j edges that have no descendants. If we define a hierarchy of languages generated by subclasses of RNRG's having nodes and rules with bounded rank j (RNRL_j), then RNRL_0 = CFL and RNRL_1 = TAL.¹⁷ It turns out that each k-local subclass of each RNRL_j is polynomially learnable. Further, the constraint of k-locality on RNRG's is an interesting one because not only is each k-local subclass an exponential class containing infinitely many infinite languages, but the k-local subclasses of the RNRG hierarchy also become progressively more complex as we go higher in the hierarchy. In particular, for each j, RNRG_j can "count up to" 2(j + 1), and for each k ≥ 2, k-local-RNRG_j can also count up to 2(j + 1).¹⁸ We will omit a detailed definition of RNRG's (see [2]), and informally illustrate them by some examples.¹⁹

Example 5.1 L1 = {a^n b^n | n ∈ N} ∈ CFL is generated by the following RNRG_0 grammar, where α is shown in Figure 3: G1 = ({S}, {s, a, b}, ∅, {S}, {S → α, S → s(λ)}).

Example 5.2 L2 = {a^n b^n c^n d^n | n ∈ N} ∈ TAL is generated by the following RNRG_1 grammar, where β is shown in Figure 3: G2 = ({S}, {s, a, b, c, d}, ∅, {(S(λ))}, {S → β, S → s(λ)}).

Example 5.3 L3 = {a^n b^n c^n d^n e^n f^n | n ∈ N} ∉ TAL is generated by the following RNRG_2 grammar, where γ is shown in Figure 3: G3 = ({S}, {s, a, b, c, d, e, f}, ∅, {(S(λ, λ))}, {S → γ, S → s(λ, λ)}).

An example of a tree in the tree language of G3 having as its yield 'aabbccddeeff' is also shown in Figure 3.
¹⁶Tree adjoining grammars were introduced as a formalism for linguistic description by Joshi et al. [10], [9]. Various formal and computational properties of TAG's were studied in [16]. Its linguistic relevance was demonstrated in [11].
¹⁷This hierarchy is different from the hierarchy of "meta-TAL's" invented and studied extensively by Weir in [18].
¹⁸A class of grammars G is said to be able to "count up to" j just in case {a1^n a2^n ... aj^n | n ∈ N} ∈ {L(G) | G ∈ G} but {a1^n a2^n ... a(j+1)^n | n ∈ N} ∉ {L(G) | G ∈ G}.
¹⁹Simpler trees are represented as term structures, whereas more involved trees are shown in the figure. Also note that we use uppercase letters for nonterminals and lowercase letters for terminals, and the special symbol λ to indicate an edge with no descendant.

[Figure 3: The trees α, β, γ of Examples 5.1-5.3, and the derivation of 'aabbccddeeff' by G3.]

We state the learnability result for the RNRL_j's below as a theorem, and again refer the reader to [2] for details. Note that this theorem subsumes Theorem 4.1 as the case j = 0.

Theorem 5.1 ∀j, k ∈ N, k-local-RNRL_j is polynomially learnable.²⁰

6 Some Negative Results

The reader's reaction to the result described above may be an illusion that the learnability of k-local grammars follows from "bounding by k". On the contrary, we present a case where "bounding by k" not only does not help feasible learning, but in some sense makes it harder to learn. Let us consider Tree Adjoining Grammars without local constraints, TAG(wolc), for the sake of comparison.²¹ Then an analogous argument to the one for the learnability of k-local-CFL shows that k-local-TAL(wolc) is polynomially learnable for any k.

Theorem 6.1 ∀k ∈ N+, k-local-TAL(wolc) is polynomially learnable.

Now let us define subclasses of TAG(wolc) with a bounded number of initial trees; k-initial-tree-TAG(wolc) is the class of TAG(wolc) with at most k initial trees. Then, surprisingly, for the case of a single letter alphabet, we already have the following striking result. (For full detail, see [1].)

Theorem 6.2 (i) TAL(wolc) on a 1-letter alphabet is polynomially learnable.
(ii) ∀k ≥ 3, k-initial-tree-TAL(wolc) on a 1-letter alphabet is not polynomially learnable by k-initial-tree-TAG(wolc).

²⁰We use the size of a minimal k-local RNRG_j as the size of a k-local-RNRL_j, i.e., ∀j ∈ N, ∀L ∈ k-local-RNRL_j, size(L) = min{size(G) | G ∈ k-local-RNRG_j and L(G) = L}.
²¹The Tree Adjoining Grammar formalism was never defined without local constraints.

As a corollary to the second part of the above theorem, we have that k-initial-tree-TAL(wolc) on an arbitrary alphabet is not polynomially learnable (by k-initial-tree-TAG(wolc)). This is because we would be able to use a learning algorithm for an arbitrary alphabet to construct one for the single letter alphabet case.

Corollary 6.1 k-initial-tree-TAL(wolc) is not polynomially learnable by k-initial-tree-TAG(wolc).

The learnability of k-local-TAL(wolc) and the non-learnability of k-initial-tree-TAL(wolc) is an interesting contrast. Intuitively, in the former case, the "k-bound" is placed so that the grammar is forced to be an arbitrarily "wide" union of boundedly small grammars, whereas, in the latter, the grammar is forced to be a boundedly "narrow" union of arbitrarily large grammars.
It is suggestive of the possibility that in fact human infants, when acquiring their native tongue, may start by developing small special purpose grammars for different uses and contexts, and slowly start to generalize and compress the large set of similar grammars into a smaller set.

7 Conclusions

We have investigated the use of complexity theory for the evaluation of grammatical systems as linguistic formalisms from the point of view of feasible learnability. In particular, we have demonstrated that a single, natural and non-trivial constraint of "locality" on the grammars allows a rich class of mildly context sensitive languages to be feasibly learnable, in a well-defined complexity theoretic sense. Our work differs from recent works on efficient learning of formal languages, for example by Angluin ([4]), in that it uses only examples and no other powerful oracles. We hope to have demonstrated that learning formal grammars need not be doomed to be necessarily computationally intractable, and that the investigation of alternative formulations of this problem is a worthwhile endeavour.

References

[1] Naoki Abe. Polynomial learnability of semilinear sets. 1988. Unpublished manuscript.
[2] Naoki Abe. Polynomially learnable subclasses of mildly context sensitive languages. In Proceedings of COLING, August 1988.
[3] Dana Angluin. Inference of reversible languages. Journal of the ACM, 29:741-785, 1982.
[4] Dana Angluin. Learning k-bounded context-free grammars. Technical Report YALEU/DCS/TR-557, Yale University, August 1987.
[5] Dana Angluin. Learning Regular Sets from Queries and Counter-examples. Technical Report YALEU/DCS/TR-464, Yale University, March 1986.
[6] A. Blumer, A. Ehrenfeucht, D. Haussler, and M. Warmuth. Classifying Learnable Geometric Concepts with the Vapnik-Chervonenkis Dimension. Technical Report UCSC CRL-86-5, University of California at Santa Cruz, March 1986.
[7] E. Mark Gold. Language identification in the limit. Information and Control, 10:447-474, 1967.
[8] David S. Johnson. Approximation algorithms for combinatorial problems. Journal of Computer and System Sciences, 9:256-278, 1974.
[9] A. K. Joshi. How much context-sensitivity is necessary for characterizing structural description - tree adjoining grammars. In D. Dowty, L. Karttunen, and A. Zwicky, editors, Natural Language Processing - Theoretical, Computational, and Psychological Perspectives, Cambridge University Press, 1983.
[10] Aravind K. Joshi, Leon Levy, and Masako Takahashi. Tree adjunct grammars. Journal of Computer and System Sciences, 10:136-163, 1975.
[11] A. Kroch and A. K. Joshi. Linguistic relevance of tree adjoining grammars. 1989. To appear in Linguistics and Philosophy.
[12] Daniel N. Osherson, Michael Stob, and Scott Weinstein. Systems That Learn. The MIT Press, 1986.
[13] William C. Rounds. Context-free grammars on trees. In ACM Symposium on Theory of Computing, pages 143-148, 1969.
[14] Leslie G. Valiant. Learning disjunctions of conjunctions. In The 9th IJCAI, 1985.
[15] Leslie G. Valiant. A theory of the learnable. Communications of the ACM, 27:1134-1142, 1984.
[16] K. Vijay-Shanker and A. K. Joshi. Some computational properties of tree adjoining grammars. In 23rd Meeting of the ACL, 1985.
[17] K. Vijay-Shanker, D. J. Weir, and A. K. Joshi. Characterizing structural descriptions produced by various grammatical formalisms. In 25th Meeting of the ACL, 1987.
[18] David J. Weir. From Context-Free Grammars to Tree Adjoining Grammars and Beyond - A dissertation proposal.
Technical Report MS-CIS-87-42, University of Pennsylvania, 1987.
Conditional Descriptions in Functional Unification Grammar

Robert T. Kasper
USC/Information Sciences Institute
4676 Admiralty Way, Suite 1001
Marina del Rey, CA 90292 U.S.A.

Abstract

A grammatical description often applies to a linguistic object only when that object has certain features. Such conditional descriptions can be indirectly modeled in Kay's Functional Unification Grammar (FUG) using functional descriptions that are embedded within disjunctive alternatives. An extension to FUG is proposed that allows for a direct representation of conditional descriptions. This extension has been used to model the input conditions on the systems of systemic grammar. Conditional descriptions are formally defined in terms of logical implication and negation. This formal definition enables the use of conditional descriptions as a general notational extension to any of the unification-based grammar representation systems currently used in computational linguistics.

1 Introduction

Functional Unification Grammar [Kay79] (FUG) and other grammatical formalisms that use feature structures and unification provide a general basis for the declarative representation of natural language grammars. In order to utilize some of the computational tools available with unification grammars, we have developed a mapping from systemic grammars [Hall76] into FUG notation. This mapping has been used as the first step in creating a general parsing method for systemic grammars [Kas87a]. The experience of translating systemic grammars into FUG has shown several ways in which the notational resources of FUG may be improved. In particular, FUG has limited notational resources for expressing conditional information. In this paper we describe how FUG has been enhanced by the addition of conditional descriptions, building on research that has already been reported [Kas87a,Kas86,Kas87b].

Conditional information is stated explicitly in systemic grammars by the input conditions of systems that specify when a system must be used. Consider, for example, the two systems (MoodType and IndicativeType)¹ shown in Figure 1. The input condition for the MoodType system is the feature Clause, and the input condition for the IndicativeType system is the feature Indicative. Because the features of a systemic grammar are normally introduced by a unique system, these input conditions actually express a bidirectional type of logical implication:

1. If a constituent has the feature(s) specified by a system's input condition, then exactly one of the alternatives described by that system must also be valid for the constituent;
2. If a constituent has one of the feature alternatives described by a system, then it must also have the feature(s) specified by that system's input condition.

Thus the input condition of the IndicativeType system expresses the following implications:

1. If a clause has the feature Indicative, then it must also have exactly one of the alternatives from the IndicativeType system (either Declarative or Interrogative).
2. If a clause has one of the feature alternatives described by the IndicativeType system (either Declarative or Interrogative), then it must also have the feature Indicative.

¹This example is extracted from Nigel [Mann83], a large systemic grammar of English that has been developed in text generation research at USC/ISI.
While it is theoretically correct to regard the two directions of implication as exact converses of each other, there is a subtle difference between them. The consequent of the first type of implication is the description of the entire system, including systemic features and their realizations.² The antecedent of the second type of implication can be safely abbreviated by the systemic features without their realizations, because the presence of a systemic feature implies that its realizations also hold. We will return to this distinction when we provide a formal definition of conditional descriptions in Section 2.

²A realization is a statement of structural properties that are required by a feature, such as the statement that SUBJECT precedes FINITE for the feature declarative.

[Figure 1: The MoodType and IndicativeType systems in systemic notation. MoodType, entered from Clause, offers Imperative (with NONFINITIVE: Stem) or Indicative (with SUBJECT: Nominative); IndicativeType, entered from Indicative, offers Declarative (with SUBJECT ^ FINITE) or Interrogative.]

For simple input conditions, the first type of implication can be expressed in FUG, as it was originally formulated by Kay [Kay79], by embedding the description of one system inside the description of another. For example, we can capture this implication for the IndicativeType system by embedding it within the description of the Indicative alternative of the MoodType system, as shown in Figure 2.

[Figure 2: The MoodType and IndicativeType systems in FUG, with the IndicativeType alternatives (Declarative, with pattern (... SUBJECT FINITE ...), or Interrogative) embedded within the Indicative alternative, together with the feature existence condition ∃IndicativeType → [MoodType = Indicative].]

Note that the second type of implication expressed by systemic input conditions has not been expressed by embedding one functional description inside another. To express the second type of implication, we have used a different notational device, called a feature existence condition; it will be defined in Section 2.4.

Not all systems have simple input conditions consisting of single features. Those input conditions which are complex boolean expressions over features cannot be expressed directly by embedding. Consider the BenefactiveVoice³ system shown in Figure 3 as an example. Its input condition is the conjunction of two features, Agentive and Benefactive.

One way to express a system with a complex input condition in FUG is to use a disjunction with two alternatives, as shown in Figure 4. The first alternative corresponds to what happens when the BenefactiveVoice system is entered; the second alternative corresponds to what happens when the BenefactiveVoice system is not entered. The first alternative also includes the features of the input condition. The second alternative includes the features of the negated input condition. Notice that the input condition and its negation must both be stated explicitly, unlike in systemic notation. If the negation of the input condition was not included in the second alternative, it would be possible to use this alternative

³The BenefactiveVoice system is also extracted from the Nigel grammar [Mann83]. It describes the active and passive voice options that are possible in clauses that have both an agent and a beneficiary. The active/passive distinction is not primitive in systemic grammars of English. Instead, it is decomposed into several cases depending on which participant roles are present in the clause.
In this case the subject of a passive clause may be conflated with either beneficiary or medium. Even when the input condition for the system holds, this alternative could be chosen, and thus the description of the system would not always be used when it should be. Note that this method of encoding systemic input conditions presupposes an adequate treatment of negated features.⁴ A formal definition of negation will be developed in Section 2.3.

⁴Some negations of atomic features can be replaced by a finite disjunction of other possible values for that feature, but this technique only works effectively when the set of possible values is small and can be enumerated.

While it is formally possible to encode complex input conditions by disjunction and negation, such encoding is not altogether satisfactory: it should not be necessary to state the negated input condition explicitly, since it can always be derived automatically from the unnegated condition. It is also rather inefficient to mix the features of the input condition with the other features of the system. The features of the input condition contain exactly the information that is needed to choose between the two alternatives of the disjunction (i.e., to choose whether the system is entered or not). It would be more efficient and less verbose to have a notation in which the features of the input condition are distinguished from the other features of the system, and in which the negation of the input condition does not need to be stated explicitly. Therefore, we have developed an extension to FUG that uses a conditional operator (→), as illustrated by the encoding of the BenefactiveVoice system shown in Figure 5. A description corresponding to the input condition appears to the left of the → symbol, and the description to be included when the input condition is satisfied appears to its right. A formal definition of what it means for a description to be satisfied will be given in Section 2.1.

[Figure 3: The BenefactiveVoice system in systemic notation: entered when both Agentive and Benefactive hold; alternatives BenefactiveActive (AGENT/SUBJECT, MEDIUM/DIRECTCOMP), MedioPassive (MEDIUM/SUBJECT), and BenePassive (BENEFICIARY/SUBJECT, MEDIUM/DIRECTCOMP).]

[Figure 4: The BenefactiveVoice system in FUG, using disjunction and negation: one alternative with Agentivity = Agentive, Benefaction = Benefactive and the three voice options; the other with Agentivity = NOT Agentive, Benefaction = NOT Benefactive, BenefactiveVoice = NONE.]

[Figure 5: The BenefactiveVoice system in extended FUG, using two conditional descriptions: a conditional whose antecedent is the input condition [Agentivity = Agentive, Benefaction = Benefactive], and the feature existence condition ∃BenefactiveVoice → [Agentivity = Agentive, Benefaction = Benefactive].]

Note: in systemic notation curly braces represent conjunction and square braces represent disjunction, while in FUG curly braces represent disjunction and square braces represent conjunction.
[Figure 6: Syntax of FDL formulas. A and L are sets of symbols used to denote atomic values and feature labels, respectively:
  NIL - denoting no information;
  a - where a ∈ A, to describe atomic values;
  l : φ - where l ∈ L and φ ∈ FDL, to describe structures in which the feature labeled by l has a value described by φ;
  [<p1>, ..., <pn>] - where each p_i ∈ L*, to describe an equivalence class of paths sharing a common value in a feature structure;
  φ ∧ ψ or [φ1 ... φn] - where each φ_i ∈ FDL, denoting conjunction;
  φ ∨ ψ or {φ1 ... φn} - where each φ_i ∈ FDL, denoting disjunction.]

2 Definitions

The feature description logic (FDL) of Kasper and Rounds [Kas86] provides a coherent framework to give a precise interpretation for conditional descriptions. As in previous work, we carefully observe the distinction between feature structures and their descriptions. Feature structures are represented by directed graphs (DGs), and descriptions of feature structures are represented by logical formulas. The syntax for formulas of FDL is given in Figure 6. We define several new types of formulas for conditional descriptions and negations, but the domain of feature structures remains DGs, as before.

2.1 Satisfaction and Compatibility

In order to understand how conditional descriptions are used, it is important to recognize two relations that may hold between a particular feature structure and a description: satisfaction and compatibility. Satisfaction implies compatibility, so there are three possible states that a particular structure may have with respect to a description: the structure may fully satisfy the description, the structure may be incompatible with the description, or the structure may be compatible with (but not satisfy) the description. To define these terms more precisely, consider the state of an arbitrary structure, A, with respect to an atomic feature description, f : v:

  A satisfies f : v if f occurs in A with value v;
  A is incompatible with f : v if f occurs in A with value x, for some x ≠ v;
  A is (merely) compatible with f : v if f does not occur in A.

Because feature structures are used to represent partial information, it is possible for a structure that is merely compatible with a description to be extended (i.e., by adding a value for some previously nonexistent feature) so that it either satisfies or becomes incompatible with the description. Consider, for example, the structure (A1) shown in Figure 7, and the three descriptions:

  subj : (person : 3 ∧ number : sing)   (1)
  subj : (person : 1 ∧ number : sing)   (2)
  subj : (case : nom ∧ number : sing)   (3)

[Figure 7: Example feature structure (A1): a DG whose subj has person 3, number sing, and gender neut.]

Description (1) is satisfied by A1, because A1 is fully instantiated with all the required feature values. Description (2) is incompatible with A1, because A1 has a different value for the feature subj : person. Description (3) is merely compatible with A1 (but not satisfied by A1), because A1 has no value for the feature subj : case.

In the following definitions, the notation A ⊨ φ means that the structure A satisfies the description φ, and the notation A ~ φ means that the structure A is compatible with φ. Logical combinations of feature descriptions are evaluated with their usual semantics to determine whether they are satisfied by a structure. Thus, a conjunction is satisfied only when every conjunct is satisfied, and a disjunction is satisfied if any disjunct is satisfied. The formal semantics of the satisfaction relation has been specified in our previous work describing FDL [Kas86].
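The three-way distinction just illustrated can be made concrete with a small sketch. The Python fragment below is an illustration assumed here, not the paper's implementation; DGs are modeled as nested dictionaries with atomic leaves, and the asserts reproduce the status of A1 with respect to the atomic parts of descriptions (1)-(3).

    def status(dg, path, atom):
        # Classify a structure against an atomic description path : atom.
        for label in path:
            if not isinstance(dg, dict):
                return 'incompatible'   # an atom where structure is required
            if label not in dg:
                return 'compatible'     # feature absent: extension possible
            dg = dg[label]
        if isinstance(dg, dict):
            return 'incompatible'       # complex value where atom is required
        return 'satisfies' if dg == atom else 'incompatible'

    A1 = {'subj': {'person': 3, 'number': 'sing', 'gender': 'neut'}}
    assert status(A1, ('subj', 'person'), 3) == 'satisfies'      # part of (1)
    assert status(A1, ('subj', 'person'), 1) == 'incompatible'   # part of (2)
    assert status(A1, ('subj', 'case'), 'nom') == 'compatible'   # part of (3)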
The semantics of the compatibility relation is given by the following conditions:

1. A ~ NIL always;
2. A ~ a ⟺ A is the atomic structure a;
3. A ~ [<p1>, ..., <pn>] ⟺ all DGs in the set {A/<p1>, ..., A/<pn>} can be unified (any member of this set may be undefined; such members are equivalent to null DGs);
4. A ~ l : φ ⟺ A/l is undefined or A/l ~ φ;
5. A ~ φ ∨ ψ ⟺ A ~ φ or A ~ ψ;
6. A ~ φ ∧ ψ ⟺ A ~ the canonical form of φ ∧ ψ, as described below.

Unlike satisfaction, the semantics of compatibility cannot be defined by simple induction over conjunctive formulas, because of a subtle interaction between path equivalences and nonexistent features. For example, consider whether A1, shown in Figure 7, is compatible with the description:

  number : pl ∧ [<number>, <subj number>].

A1 is compatible with number : pl, and A1 is also compatible with [<number>, <subj number>], but A1 is not compatible with the conjunction of these two descriptions, because it requires subj : number : pl and A1 has sing as the value of that feature.

Thus, in order to determine whether a structure is compatible with a conjunctive description, it is generally necessary to unify all conjuncts, putting the description into the canonical form described in [Kas87c]. This canonical form (i.e. the feature-description data structure) contains definite and indefinite components. The definite component contains no disjunction, and is represented by a DG structure that satisfies all non-disjunctive parts of a description. The indefinite component is a list of disjunctions. A structure is compatible with a description in canonical form if and only if it is unifiable with the definite component and it is compatible with each disjunction of the indefinite component.

2.2 Conditional Description

We augment FDL with a new type of formula to represent conditional descriptions, using the notation α → β and the standard interpretation given for material implication:

  A ⊨ α → β ⟺ A ⊨ ¬α ∨ β.   (4)

This interpretation of conditionals presupposes an interpretation of negation over feature descriptions, which is given below. To simplify the interpretation of negations, we exclude formulas containing path equivalences and path values from the antecedents of conditionals.

2.3 Negation

We use the classical interpretation of negation, where A ⊨ ¬φ ⟺ A ⊭ φ. Negated descriptions are defined for the following types of formulas:

1. A ⊨ ¬a ⟺ A is not the atom a;
2. A ⊨ ¬(l : φ) ⟺ A ⊨ l : ¬φ or A/l is not defined;
3. A ⊨ ¬(φ ∨ ψ) ⟺ A ⊨ ¬φ ∧ ¬ψ;
4. A ⊨ ¬(φ ∧ ψ) ⟺ A ⊨ ¬φ ∨ ¬ψ.

Note that we have not defined negation for formulas containing path equivalences or path values. This restriction makes it possible to reduce all occurrences of negation to a boolean combination of a finite number of negative constraints on atomic values. While the classical interpretation of negation is not strictly monotonic with respect to the normal subsumption ordering on feature structures, the restricted type of negation proposed here does not suffer from the inefficiencies and order-dependent unification properties of general negation or intuitionistic negation [Mosh87,Per87]. The reason for this is that we have restricted negation so that all negative information can be specified as local constraints on single atomic feature values. Thus, these constraints only come into play when specific atomic values are proposed for a feature, and they can be checked as efficiently as positive atomic value constraints.
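Because negation is defined only over atoms, feature labels, conjunction and disjunction, every negation can be pushed inward to a boolean combination of negative atomic constraints. A minimal Python sketch of this reduction follows, using a hypothetical tagged-tuple encoding of FDL formulas (not the paper's data structures).

    # Formula encoding assumed here: ('atom', a) | ('feat', l, phi)
    # | ('and', [phis]) | ('or', [phis]) | ('not', phi),
    # plus result tags ('not-atom', a) and ('no-feat', l).

    def push_neg(phi):
        # Reduce negation per the four rules of Section 2.3.
        if phi[0] != 'not':
            return phi
        inner = phi[1]
        t = inner[0]
        if t == 'atom':                       # ~a
            return ('not-atom', inner[1])
        if t == 'feat':                       # ~(l:phi): l:~phi or l undefined
            return ('or', [('feat', inner[1], push_neg(('not', inner[2]))),
                           ('no-feat', inner[1])])
        if t == 'or':                         # De Morgan
            return ('and', [push_neg(('not', p)) for p in inner[1]])
        if t == 'and':                        # De Morgan
            return ('or', [push_neg(('not', p)) for p in inner[1]])
        if t == 'not':                        # double negation
            return push_neg(inner[1])
        raise ValueError('negation undefined for ' + t)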
2.4 Feature Existence Conditions

A special type of conditional description is needed when the antecedent of a conditional is an existence predicate for a particular feature, and not a regular feature description. We call this type of conditional a feature existence condition, and use the notation:

  ∃f → φ, where A ⊨ ∃f ⟺ A/f is defined.

This use of ∃f is essentially equivalent to the use of f = ANY in Kay's FUG, where ANY is a place-holder for any substantive (i.e., non-NIL) value. The primary effect of a feature existence condition, such as ∃f → φ, is that the consequent is asserted whenever a substantive value is introduced for a feature labeled by f. The treatment of feature existence conditions differs slightly from other conditional descriptions in the way that an unsatisfiable consequent is handled. In order to negate the antecedent of ∃f → φ, we need to state that f may never have any substantive value. This is accomplished by unifying a special atomic value, such as NONE, with the value of f. This special atomic value is incompatible with any other real value that might be proposed as a value for f.

Feature existence conditions are needed to model the second type of implication expressed by systemic input conditions - namely, when a constituent has one of the feature alternatives described by a system, it must also have the feature(s) specified by that system's input condition. Generally, a system named f with input condition α and alternatives described by β can be represented by two conditional descriptions:

1. α → β;
2. ∃f → α.

For example, recall the BenefactiveVoice system, which is represented by the two conditionals shown in Figure 5. It is important to note that feature existence conditions are used for systems with simple input conditions as well as for those with complex input conditions. The use of feature existence conditions is essential in both cases to encode the bidirectional dependency between systems that is implicit in a systemic network.
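Using the same tagged-tuple encoding of formulas as in the previous sketch, the two-conditional representation of a system can be written down directly. The encoding below of the BenefactiveVoice system is illustrative only, with the alternative descriptions abbreviated to their feature values.

    def encode_system(name, alpha, beta):
        # A system named `name` with input condition `alpha` and
        # alternatives `beta`, as the two conditionals of Section 2.4.
        return [('cond', alpha, beta),               # 1. alpha -> beta
                ('cond', ('exists', name), alpha)]   # 2. Ef -> alpha

    benefactive_voice = encode_system(
        'BenefactiveVoice',
        ('and', [('feat', 'Agentivity', ('atom', 'Agentive')),
                 ('feat', 'Benefaction', ('atom', 'Benefactive'))]),
        ('or', [('feat', 'BenefactiveVoice', ('atom', 'BenefactiveActive')),
                ('feat', 'BenefactiveVoice', ('atom', 'MedioPassive')),
                ('feat', 'BenefactiveVoice', ('atom', 'BenePassive'))]))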
3 Unification with Conditional Descriptions

The unification operation, which is commonly used to combine feature structures (i.e., non-disjunctive, non-conditional DGs), can be generalized to define an operation for combining the information of two feature descriptions (i.e., formulas of FDL). In FDL, the unification of two descriptions is equivalent to their logical conjunction, as discussed in [Kas87b]. We have shown in previous work [Kas87c] how unification can be accomplished for disjunctive descriptions without expanding to disjunctive normal form.

This unification method factors descriptions into a canonical form consisting of definite and indefinite components. The definite component contains no disjunction, and is represented by a DG structure that satisfies all non-disjunctive parts of a description. The indefinite component of a description is a list of disjunctions. When two descriptions are unified, the first step is to unify their definite components. Then the indefinite components of each description are checked for compatibility with the resulting definite component. Disjuncts are eliminated from the description when they are inconsistent with definite information. When only one alternative of a disjunction remains, it is unified with the definite component of the description. This section details how this unification method can be extended to handle conditional descriptions.

Conditionals may be regarded as another type of indefinite information in the description of a feature structure. They are indefinite in the sense that they impose constraints that can be satisfied by several alternatives, depending on the values of features already present in a structure.

3.1 How to Satisfy a Conditional Description

The constraints imposed on a feature structure by a conditional description can usually be determined most efficiently by first examining the antecedent of the conditional, because it generally contains a smaller amount of information than the consequent. Examining the antecedent is often sufficient to determine whether the consequent is to be included or discarded. Given a conditional description, C = α → β, we can define the constraints that it imposes on a feature structure (A) as follows. When:

  A ⊨ α: then ensure A ⊨ β;⁶
  A ⊨ ¬α: then C imposes no further constraint on A, and can therefore be eliminated;
  A ~ α (A is merely compatible with α): then check whether β is compatible with A. If compatible, then C must be retained in the description of A. If incompatible, then ensure A ⊨ ¬α (and C can be eliminated).

⁶Read this constraint as: "make sure that A satisfies β."

These constraints follow directly from the interpretation (4) that we have given for conditional descriptions. They are logically equivalent to those that would be imposed on A by the disjunction ¬α ∨ β, as required. However, the constraints of the conditional can often be imposed more efficiently than those of the equivalent disjunction, because examining the antecedent of the conditional carries the same cost as examining only one of the disjuncts. When the constraints of a disjunction are imposed, both of the disjuncts must be examined in all cases.

3.2 Extending the Unification Algorithm

The unification algorithm for disjunctive feature descriptions [Kas87c] can be extended to handle conditionals by recognizing two types of indefinite information in a description: disjunctions and conditionals. The extended feature-description data structure has the components:

  definite: a DG structure;
  disjunctions: a list of disjunctions;
  conditionals: a list of conditional descriptions.

The part of the unification algorithm that checks the compatibility of the indefinite components of a description with its definite component is defined by the function CHECK-INDEF, shown in Figure 8. This algorithm checks the disjunctions of a description before conditionals, but an equally correct version of the algorithm might check conditionals before disjunctions. In our application of parsing with a systemic grammar it is generally more efficient to check disjunctions first, but other applications might be made more efficient by varying this order.

    Function CHECK-INDEF (desc) Returns feature-description:
      where desc is a feature-description.
      Let D = desc.definite (a DG).
      Let disjunctions = desc.disjunctions.
      Let conditionals = desc.conditionals.
      Let unchecked-parts = true.
      While unchecked-parts, do:
        unchecked-parts := false.
        Check compatibility of disjunctions with D (omitted, see [Kas87c]).
        Check compatibility of conditionals with D:
          Let new-conditionals = {}.
          For each α → β in conditionals,
            test whether D satisfies or is compatible with α:
            SATISFIES: D := UNIFY-DGS (D, β.definite),
              disjunctions := disjunctions ∪ β.disjunctions,
              unchecked-parts := true;
            COMPATIBLE: If D is compatible with β,
              then new-conditionals := new-conditionals ∪ {α → β},
              else let neg-ante = ¬α,
                D := UNIFY-DGS (D, neg-ante.definite),
                disjunctions := disjunctions ∪ neg-ante.disjunctions,
                unchecked-parts := true;
            INCOMPATIBLE: the conditional imposes no further constraint.
          end (for loop).
          conditionals := new-conditionals.
      end (while loop).
      Let nd = make feature-description with:
        nd.definite = D,
        nd.disjunctions = disjunctions,
        nd.conditionals = conditionals.
      Return (nd).

Figure 8: CHECK-INDEF: Algorithm for checking compatibility of the indefinite parts of a feature-description with its definite component.
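The case analysis of Section 3.1, which drives the conditional-handling step of CHECK-INDEF, can also be rendered compactly as follows. This is a Python sketch under the assumption that satisfaction, compatibility, unification and negation are available as primitives; they are passed in as parameters here, not defined functions.

    def impose_conditional(A, cond, satisfies, compatible, unify, negate):
        # Returns (structure, pending conditional or None) for C = alpha -> beta.
        alpha, beta = cond
        if satisfies(A, alpha):
            return unify(A, beta), None        # ensure A satisfies beta
        if not compatible(A, alpha):
            return A, None                     # C imposes no constraint
        if compatible(A, beta):
            return A, cond                     # keep C pending
        return unify(A, negate(alpha)), None   # beta impossible: assert ~alpha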
4 Potential Refinements

Several topics merit further investigation regarding conditional descriptions. The implementation we describe has the constraints of conditionals and disjunctions imposed in an arbitrary order. Changing the order has no effect on the final result, but it is likely that the efficiency of unification could be improved by ordering the conditionals of a grammar in a deliberate way. Another way to improve the efficiency of unification with conditionals would involve indexing them by the features that they contain. Then a conditional would not need to be checked against a structure until some feature value of the structure might determine the manner in which it is satisfied. The amount of efficiency gained by such techniques clearly depends largely on the nature of the particular grammar being used in an application.

A slightly different type of conditional might be used as a way to speed up unification with binary disjunctive descriptions. If it is known that the values of a relatively small number of features can be used to discriminate between two alternative descriptions, then those features can be factored into a separate condition in a description such as

IF condition THEN alt1 ELSE alt2.

When the condition is satisfied by a structure, then alt1 is selected. When the condition is incompatible with a structure, then alt2 is selected. Otherwise both alternatives must remain under consideration. As it often requires a considerable amount of time to check which alternatives of a disjunction are applicable, this technique might offer a significant improvement in an application where large disjunctive descriptions are used.

Remember that we have restricted conditionals by requiring that their antecedents do not contain path equivalences.

Function CHECK-INDEF (desc) Returns feature-description:
  where desc is a feature-description.
  Let D = desc.definite (a DG).
  Let disjunctions = desc.disjunctions.
  Let conditionals = desc.conditionals.
  Let unchecked-parts = true.
  While unchecked-parts, do:
    unchecked-parts := false.
    Check compatibility of disjunctions with D (omitted, see [Kas87c]).
    Check compatibility of conditionals with D:
      Let new-conditionals = {}.
      For each α → β in conditionals:
        test whether D satisfies or is compatible with α:
        SATISFIES: D := UNIFY-DGS(D, β.definite),
          disjunctions := disjunctions ∪ β.disjunctions,
          unchecked-parts := true;
        COMPATIBLE: if D is compatible with β,
          then new-conditionals := new-conditionals ∪ {α → β},
          else let neg-ante = ¬α,
            D := UNIFY-DGS(D, neg-ante.definite),
            disjunctions := disjunctions ∪ neg-ante.disjunctions,
            unchecked-parts := true;
        INCOMPATIBLE: the conditional imposes no further constraint.
      end (for loop).
      conditionals := new-conditionals.
  end (while loop).
  Let nd = make feature-description with:
    nd.definite = D,
    nd.disjunctions = disjunctions,
    nd.conditionals = conditionals.
  Return (nd).

Figure 8: CHECK-INDEF: Algorithm for checking compatibility of indefinite parts of a feature-description with its definite component.

This restriction has been acceptable in our use of conditional descriptions to model systemic grammars. It is unclear whether a treatment of conditional descriptions without this restriction will be needed in other applications. If this restriction is lifted, then further work will be necessary to define the behavior of negation over path equivalences, and to handle such negations in a reasonably efficient manner.

5 Summary

We have shown how the notational resources of FUG can be extended to include descriptions of conditional information about feature structures. Conditional descriptions have been given a precise logical definition in terms of the feature description logic of Kasper and Rounds, and we have shown how a unification method for feature descriptions can be extended to use conditional descriptions. We have implemented this unification method and tested it in a parser for systemic grammars, using several hundred conditional descriptions. The definition of conditional descriptions and the unification method should be generally applicable as an extension to other unification-based grammar frameworks, as well as to FUG and the modeling of systemic grammars.
In fact, the implementation described has been carried out by extending PATR-II [Shie84], a general representational framework for unification-based grammars.

While it is theoretically possible to represent the information of conditional descriptions indirectly using notational devices already present in Kay's FUG, there are practical advantages to representing conditional descriptions directly. The indirect encoding of conditional descriptions by disjunctions and negations entails approximately doubling the size of a description, adding many explicit nonexistence constraints on features (NONE values), and slowing the unification process. In our experiments, unification with conditional descriptions requires approximately 50% of the time required by unification with an indirect encoding of the same descriptions. By adding conditional descriptions as a notational resource to FUG, we have not changed the theoretical limits of what FUG can do, but we have developed a representation that is more perspicuous, less verbose, and computationally more efficient.

Acknowledgements

I would like to thank Bill Rounds for suggesting that it might be worthwhile to clarify ideas about conditional descriptions that were only partially formulated in my dissertation at the University of Michigan. Helpful comments on earlier versions of this paper were provided by Bill Mann, Ed Hovy and John Bateman.

This research was sponsored by the United States Air Force Office of Scientific Research under contract F49620-87-C-0005; the opinions expressed here are solely those of the author.

References

[Hall76] Gunther R. Kress, editor. Halliday: System and Function in Language. Oxford University Press, London, England, 1976.

[Kas87a] Robert T. Kasper. Systemic Grammar and Functional Unification Grammar. In J. Benson and W. Greaves, editors, Systemic Functional Approaches to Discourse, Norwood, New Jersey: Ablex (in press). Also available as USC/Information Sciences Institute, Technical Report RS-87-179, May 1987.

[Kas86] Robert T. Kasper and William C. Rounds. A Logical Semantics for Feature Structures. In Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics, Columbia University, New York, NY, June 10-13, 1986.

[Kas87b] Robert T. Kasper. Feature Structures: A Logical Theory with Application to Language Analysis. PhD dissertation, University of Michigan, 1987.

[Kas87c] Robert T. Kasper. A Unification Method for Disjunctive Feature Descriptions. In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, Stanford University, Stanford, CA, July 6-9, 1987.

[Kay79] Martin Kay. Functional Grammar. In Proceedings of the Fifth Annual Meeting of the Berkeley Linguistics Society, Berkeley Linguistics Society, Berkeley, California, February 17-19, 1979.

[Mann83] William C. Mann and Christian Matthiessen. Nigel: A Systemic Grammar for Text Generation. USC/Information Sciences Institute, RR-83-105. Also appears in J. Benson and W. Greaves, editors, Systemic Perspectives on Discourse: Selected Papers from the Ninth International Systemics Workshop, Ablex, London, England, 1985.

[Mosh87] M. Drew Moshier and William C. Rounds. A Logic for Partially Specified Data Structures. In Proceedings of the ACM Symposium on Principles of Programming Languages, 1987.

[Per87] Fernando C.N. Pereira. Grammars and Logics of Partial Information.
In Proceedings of the International Conference on Logic Programming, Melbourne, Australia, May 1987.

[Shie84] Stuart M. Shieber. The design of a computer language for linguistic information. In Proceedings of the Tenth International Conference on Computational Linguistics: COLING 84, Stanford University, Stanford, California, July 2-7, 1984.
MULTI-LEVEL PLURALS AND DISTRIBUTIVITY

Remko Scha and David Stallard
BBN Laboratories Inc.
10 Moulton St.
Cambridge, MA 02238
U.S.A.

ABSTRACT

We present a computational treatment of the semantics of plural Noun Phrases which extends an earlier approach presented by Scha [7] to be able to deal with multiple-level plurals ("the boys and the girls", "the juries and the committees", etc.)¹ We argue that the arbitrary depth to which such plural structures can be nested creates a correspondingly arbitrary ambiguity in the possibilities for the distribution of verbs over such NPs. We present a recursive translation rule scheme which accounts for this ambiguity, and in particular show how it allows for the option of "partial distributivity" that collective verbs have when applied to such plural Noun Phrases.

1 INTRODUCTION

Syntactically parallel utterances which contain plural noun phrases often require entirely different semantic treatments depending upon the particular verbs (or adjectives or prepositions) that these plural NPs are combined with. For example, while the sentence "The boys walk" would have truth-conditions expressed by:²

∀x ∈ BOYS: WALK[x]

the very similar sentence "The boys gather" could not be translated this way. Its truth-conditions would instead have to be expressed by something like:

GATHER[BOYS]

since only a group can "gather", not one person by himself. It is common to call a verb such as "walk" a "distributive" verb, while a verb such as "gather" (or "disperse" or intransitive "meet") is called a "collective" verb.

¹The work presented here was supported under DARPA contracts #N00014-85-C-0016 and #N00014-87-C-0085. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States Government.

²We ignore here the discourse issues that bear on the interpretation of definite NPs.

The collective/distributive distinction raises an important issue: how to treat the semantics of plural NPs uniformly. An earlier paper by Scha ("Distributive, Collective and Cumulative Quantification" [7], hereinafter "DCC") presented a formal treatment of this issue which exploits an idea about the semantics of plural NP's which is due to Bartsch [1]: plural NP's are always interpreted as quantifying over sets rather than individuals; verbs are correspondingly always treated as collective predicates applying to sets. Distributive verbs are provided with meaning postulates which relate such collective applications to applications on the constituent individuals.

The present paper describes an improved and extended version of this approach. Two important problems are addressed. First, there is the problem of ambiguity: the need to allow for more than one distribution pattern for the same verb. Second, there is the problem of "multi-level plurality": the consequences which arise for the distributive/collective distinction when one considers conjoined plural NPs such as "The boys and the girls". Both issues are addressed by a two-level system of semantic interpretation where the first level deals with the semantic consequences of syntactic structure and the second with the lexically specific details of distribution.

The treatment of plural NPs described in this paper has been implemented in the Spoken Language System which is being developed at BBN.
The system provides a natural language interface to a database/graphic display system which is used to access information about the capabilities and readiness conditions of the ships in the Pacific Fleet of the US Navy.

The remainder of the paper is organized as follows:

Section 2 discusses previous methods of handling the distributive/collective distinction, and shows their limitations in dealing with the problems mentioned above.

Section 3 presents our two-level semantics approach, and shows how it handles the problem of ambiguity.

Section 4 shows how a further addition to the two-level system - recursive enumeration of lexical meanings - handles the multi-level plural problem.

Section 5 presents the algorithm that is used and Section 6 presents conclusions.

2 BACKGROUND

2.1 An Approach to Distributivity

One possible way to generate the correct readings for "The boys walk" vs. "The boys gather" is due to Bennett [2]. Verbs are sub-categorized as either collective or distributive. Noun phrases consisting of "the" + plural then have two readings; a "set" reading if they are combined with a collective verb and a universal quantification reading if they are combined with a distributive verb.

Scha's "Distributive, Collective, and Cumulative Quantification" ("DCC") showed that this approach, while plausible for intransitive verbs, breaks down for the two-argument case of transitive verbs [7]. Consider the example below:

"The squares contain the circles"

[Figure omitted: a configuration of squares, each containing one or more circles.]

This sentence has a reading which can be approximately paraphrased as "Every circle is contained in some square", so that in the world depicted above the sentence would be considered true. The truth-conditions which Bennett's approach would predict, however, are expressed by the formula:

∀x ∈ SQUARES: ∀y ∈ CIRCLES: CONTAIN[x,y]

which obviously does not correspond to the state of affairs pictured above.

"DCC" avoids this problem by not generating a distributive translation directly. Noun phrases, regardless of number, quantify over sets of individuals: a singular noun phrase simply quantifies over a singleton set. Nouns by themselves denote sets of such singleton sets. Thus, both "square" and "squares" are translated as:

SQUARES*

in which the asterisk operator "*" creates the set of singleton subsets of "SQUARES". Verbs can now be uniformly typed to accept sets of individuals as their arguments. The collective/distributive distinction consists solely in whether a verb is applied to a large set or to a singleton set. Determiner translations are either distributive or collective depending upon whether they apply the predicate to the constituent singletons or to their union. Some determiners are unambiguously distributive, for example the translation for "each":

(λX: (λP: ∀x ∈ X: P(x)))

Other determiners - "all", "some" and "three" - are ambiguous between translations which are distributive and translations which are collective. Plural "the", on the other hand, is unambiguously collective, and has the translation:

(λX: (λP: P(∪(X))))

where "∪" takes a set of sets and delivers the set which is their union. The following is a list of sentences paired with their translations under this scheme:

The boys walk
  WALK(BOYS)
Each boy walks
  ∀x ∈ BOYS*: WALK(x)
The boys gather
  GATHER(BOYS)
The squares contain the circles
  CONTAIN(SQUARES,CIRCLES)

For "the" + plural NP's we thus obtain analyses which are, though not incorrect, perhaps more vague than one would desire.
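The core of the DCC scheme is compact enough to sketch directly. The following Python toy model (not the DCC implementation; individuals are strings and predicates apply to frozensets of individuals) renders the * operator and the two determiner translations:

    BOYS = frozenset({'john', 'peter', 'bill'})

    def star(s):
        """The * operator: the set of singleton subsets of s."""
        return frozenset(frozenset({x}) for x in s)

    def each(noun):           # distributive: apply P to every singleton
        return lambda p: all(p(x) for x in star(noun))

    def the_plural(noun):     # collective plural "the": apply P to the union
        return lambda p: p(frozenset().union(*star(noun)))

    gather = lambda s: len(s) >= 2     # a toy collective predicate on sets

    print(the_plural(BOYS)(gather))    # True:  GATHER(BOYS)
    print(each(BOYS)(gather))          # False: no singleton set can "gather"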
These analyses can be further spelled out by providing distributive predicates, such as "WALK" and "CONTAIN", with meaning postulates which control how that predicate is distributed over the constituents of its argument. For example, the meaning postulate associated with "WALK" could be:

WALK[x] ≡ [#(x) > 0] ∧ [∀y ∈ x*: WALK[y]]

which, when applied to the above translation "WALK[BOYS]", gives the result:

[#(BOYS) > 0] ∧ [∀y ∈ BOYS*: WALK[y]]

which represents the desired distributive truth-conditions. The meaning postulate for "CONTAIN" could be:

CONTAIN[u,v] ≡ ∀y ∈ v*: ∃x ∈ u*: CONTAIN[x,y]

This meaning postulate may be thought of as expressing a basic fact about the notion of containment; namely that one composite object is "contained" by another if every part of the first is contained in some part of the second. Application of this meaning postulate to the translation CONTAIN[SQUARES,CIRCLES] gives the final result:

∀y ∈ CIRCLES*: ∃x ∈ SQUARES*: CONTAIN[x,y]

which expresses the truth-conditions we originally desired; namely those paraphrasable by "Every circle is contained by some square". In general, it is expected that different verbs will have different meaning postulates, corresponding to the different facts and beliefs about the world that pertain to them.

2.2 Problems

Conjunctive Noun Phrases

"DCC" only treated plural Noun Phrases (such as "the boys" and "some girls"), but did not deal with conjunctive Noun Phrases ("John, Peter and Bill", "the boys and the girls", or "the committees and the juries"). It is not immediately clear how a treatment of them would be added. Note that a PTQ-style³ treatment of the NP "John and Peter":

λP: P(John′) ∧ P(Peter′)

would encounter serious difficulties with a sentence like "John and Peter carried a piano upstairs". Here it would predict only the distributed reading, yet a collective reading is the desired one.

It would be more in the spirit of the treatment in "DCC" to combine the denotations of the NPs that are conjoined by some form of union. For example, "John and Peter" and "The boys and the girls" might be translated as:

λP: P({John′, Peter′})
λP: P(BOYS ∪ GIRLS)

For a sentence like "The boys and the girls gather" this prevents what we call the "partially distributive" reading - namely the reading in which the boys gather in one place and the girls in another. For this reason, it seems incorrect to assimilate all NP denotations to the type of sets of individuals. Noun phrases like "The boys and the girls" or "The juries and the committees" are what we call "multi-level plurals": they have internal structure which cannot be abolished by assimilation to a single set.

Note that the plural NP "the committees" is a multi-level plural as well, even though it is not a conjunction. The sentence "The committees gather" has a partially distributive reading (each committee gathers separately) analogous with the partially distributive reading for "The boys and girls gather" above.

Ambiguity and Discourse Effects

The final problem for the treatment in "DCC" has to do with the meaning postulates themselves. These always dictate the same distribution pattern for any verb, yet it does not seem plausible that one could finally decide what this should be, since the beliefs and knowledge about the world from which they are derived are subject to variation from speaker to speaker. Variability in distribution might also be imposed by context.
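The behaviour of such a fixed meaning postulate can be checked mechanically. In the toy Python model below (the shapes and the CONTAINS relation are invented for illustration), the expanded truth-conditions for "The squares contain the circles" come out true while the converse sentence comes out false; note, however, that the expansion pattern is frozen into the code, which is precisely the rigidity at issue:

    SQUARES = frozenset({'s1', 's2'})
    CIRCLES = frozenset({'c1', 'c2', 'c3'})
    CONTAINS = {('s1', 'c1'), ('s1', 'c2'), ('s2', 'c3')}   # (container, content)

    def star(s):
        return frozenset(frozenset({x}) for x in s)

    def contain(u, v):
        """CONTAIN[u,v] expanded: every y in v* is contained in some x in u*."""
        return all(any((next(iter(x)), next(iter(y))) in CONTAINS
                       for x in star(u))
                   for y in star(v))

    print(contain(SQUARES, CIRCLES))   # True: every circle is in some square
    print(contain(CIRCLES, SQUARES))   # False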
Consider the sentence "The children ate the pizzas" and a world depicted by the figure in 2.1 where the squares represent children, and the circles, pizzas. Now there will be different quantificational readings of the sentence. The question "What did the children eat?" might be reasonably answered by "The pizzas". If one were to instead ask "Who ate the pizzas?" (with a view, perhaps, to establishing individual guilt) the answer "The children" would not be as felicitous, since the picture includes one square (child) not containing anything.

It is to these problems with meaning postulates that we now turn in Section 3. The solution presented there is then used in Section 4, where we present our solution to the NP-conjunction/multi-level plural problem.

3 THE AMBIGUITY PROBLEM

3.1 The Problem with Meaning Postulates

That certain predicates may have different distributive expansions in different contexts cannot be captured by meaning postulates: since meaning postulates are stipulated to be true in all models it is logically incoherent to have several, mutually incompatible meaning postulates for the same constant.⁴

An alternative might be to retreat from the notion of meaning postulates per se, and view them instead as some form of conventional implicatures which are "usually" or "often" true. While it is impossible to have alternative meaning postulates, it is easier to imagine having alternative implicatures. For a semantics which aspires to state specific truth-conditions this is not a very attractive position.

We prefer to view these as alternative readings of the sentence, stemming from an open-ended ambiguity of the lexical items in question - an ambiguity which has to do with the specific details of distributions. Since this ambiguity is not one of syntactic type it does not make sense (in either explanatory or computational terms) to multiply lexical entries on its behalf. Rather, one wants a level of representation in which these distributional issues are left open, to be resolved by a later stage of processing.

³We use the word "style" because Montague's original paper [6] only conjoined term phrases with "or". The extension to "and", however, is straightforward.

⁴One might try to combine them into a single meaning postulate by logical disjunction. We have indicated before [9] why this approach is not satisfactory.

3.2 Two Levels of Semantic Interpretation

To accommodate this our system employs two stages of semantic interpretation, using a technique for coping with lexical ambiguity which was originally developed for the Question-Answering System PHLIQA [3] [8].

The first stage uses a context-free grammar with associated semantic rules to produce an expression of the logical language EFL (for English-oriented Formal Language). EFL includes a descriptive constant for each word in the lexicon, however many senses that word may have. Hence EFL is an ambiguous logical language; in technical terms this means either that the language has a model-theory that assigns multiple denotations to a single expression [5], or that its expressions are viewed as schemata which abbreviate sets of possible instance-expressions [9].

The second stage translates the EFL expression into one or more expressions of WML (for World Model Language). WML, while differing syntactically from EFL only in its descriptive constants, is unambiguous, and includes a descriptive constant for each primitive concept of the application domain in question.
A set of translation rules relates each ambiguous constant of EFL to a set of WML expressions representing its possible meanings. Translation of EFL expressions to WML expressions is effected by producing all possible combinations of constant substitutions and removing those which are "semantically anomalous", in a sense which we will shortly define.

EFL and WML are instantiations of a higher-order logic with a recursive type system. In particular, if α and β are types, then:

sets(α)  sets(sets(α))  sets(sets(sets(α)))  ...
fun(α,β)  fun(sets(α),β)  fun(sets(α),sets(β))  ...

are all types. The type "sets(α)" is the type of sets whose elements are of type α. The type "fun(α,β)" is the type of functions from type α to type β.

Every expression has a type, which is computed from the type of its sub-expressions. Types have domains which are sets; whatever denotation an expression can take on must be an element of the domain of its type. Some expressions, being constructed from combinations of sub-expressions of inappropriate types, are not meaningful and are said to be "semantically anomalous". These are assigned a special type, called NULL-SET, whose domain is the empty set. For example, if "F" is an expression of type fun(α,β) and "a" is an expression of type γ, whose domain is disjoint from the domain of α, then the expression "F(a)" representing the application of "F" to "a" is anomalous and has the type NULL-SET. For more details on these formal languages and their associated type system, see the paper by Landsbergen and Scha [5].

3.3 Translation Rules Instead of Meaning Postulates

We are now in a position to replace the meaning postulates of the "DCC" system with their equivalent EFL to WML translation rules. For example, the original treatment of "contain" would now be represented by the translation rule:

CONTAIN ⇒ λu,v: ∀y ∈ v*: ∃x ∈ u*: CONTAIN′[x,y]

Note that the constant "CONTAIN′" on the right-hand side is a constant of WML, and is notationally separated from its EFL counterpart by a prime-mark.

The device of translation rules can now be brought to bear on the problem mentioned in section 2.2, namely the distributional ambiguity (in context) of the transitive verb "eat". The reading which allows an exception in the first argument would be generated by the translation rule:

EAT ⇒ λu,v: ∀y ∈ v*: ∃x ∈ u*: EAT′[x,y]

while the reading which allows no such exception would be:

EAT ⇒ λu,v: [∀x ∈ v*: ∃y ∈ u*: EAT′[y,x]] ∧ [∀x ∈ u*: ∃y ∈ v*: EAT′[x,y]]

We call this a "leave none out" translation. When applied to the sentence "The children ate the pizzas" this generates the reading where all children are guilty. By using this device of translation rules a verb may be provided with any desired (finite) number of alternative distribution patterns.

The next section, which presents this paper's treatment of the multiple plurals problem, will make use of a slight modification of the foregoing in which the translation rules are allowed to contain EFL constants on their right-hand sides as well as their left, thus making the process recursive.

4 MULTIPLE LEVELS OF PLURALITY

4.1 Overview

As we have seen in Section 2.2, utterances which contain multi-level plurals sometimes give rise to mixed collective/distributive readings which cannot be accounted for without retaining the separate semantic identity of the constituents. Consider, for instance, the sentence "The juries and the committees gather".
This has three readings: one in which each of the juries gathers alone and each of the committees gathers alone as well (distribution over two levels), another in which all persons who are committee members gather in one place and all persons who are jurors gather in another place (distribution over one level), and finally a third in which all jurors and committee members unite in one large convention (completely collective). It seems inescapable, therefore, that the internal multi-level structure of NPs has to be preserved.

Indeed, it can be argued that the number of levels necessary is not two or three but arbitrary. As Landman [4] has pointed out, conjunctions can be arbitrarily nested (consider all the groupings that are possible in the NP "Bob and Carol and Ted and Alice"!). Therefore, the sets which represent collective entities must, in principle, be allowed to be of arbitrary complexity. This is the view we adopt.

Allowing arbitrary complexity in the structure of collective entities creates a problem for specifying the distributive interpretations of collective predicates: they can no longer be enumerated by finite lists of translation rules. An arbitrary number of levels of structure means an arbitrary number of ways to distribute, and these cannot be finitely enumerated.

In order to handle these issues it is necessary to extend the ambiguity treatment of the previous sub-section so that, as is advocated in [9], it recursively enumerates this infinite set of alternatives. In order to do this we must allow EFL constants to also appear on the right-hand side of translation rules as well as on the left.

In the next sub-section we present such a recursive EFL constant. Its role in the system is to deal with distributions over arbitrarily complex plural structures.

4.2 The PARTS Function

For any complex structure there is generally more than one way to decompose it into parts. For example, the structure

{ {John,Peter,Bill}, {Mary,Jane,Lucy} }

can be viewed as either having two parts - the sets '{John,Peter,Bill}' and '{Mary,Jane,Lucy}' - or six - the six people John, Peter, Bill, Mary, Jane, and Lucy. These multiple perspectives on a complex entity are accommodated in our system by the EFL function PARTS. This function takes a term, simple or complex, and returns the set of "parts" (that is, mathematical "parts") making it up. Because there is in general more than one way to decompose a composite entity into parts, this is an ambiguous term which can be expanded in more than one way. In addition, because the set-theoretic structures corresponding to plural entities can be arbitrarily complex, some expansions must be recursive, containing PARTS itself on the right-hand side. The expansions of PARTS are:

1. PARTS[x] ⇒ x (where x an individual)
2. PARTS[s] ⇒ (for: s, collect: PARTS) (where s a set)
3. PARTS[s] ⇒ ∪(for: s, collect: PARTS) (where s a set)
4. PARTS[x] ⇒ F[x]

Rule (1) asserts that any atomic entity is indivisible, that is, is its own sole part (remember, we are talking about mathematical, not physical parts here). Rules (2) and (3) range over sets and collect together the set of values of PARTS for each member; rule (3) differs in that it merges these into a single set with the operator '∪'. '∪' takes a set of sets and returns their union. In rule (4) "F" is a descriptive function. This rule is included to handle notions like membership of a committee, etc.
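A recursive enumerator for these four expansions can be sketched directly. In the Python toy model below (not the implemented system), nested frozensets stand in for plural structures, individuals are strings, and MEMBERS_OF is an invented stand-in for the descriptive function F of rule (4):

    from itertools import product

    MEMBERS_OF = {}    # e.g. {'j1': frozenset({'a', 'b', 'c'}), ...}

    def parts(x):
        """Yield every set of parts licensed by expansion rules (1)-(4)."""
        if isinstance(x, frozenset):
            # choose an expansion for each member, in all combinations
            for choice in product(*(tuple(parts(e)) for e in x)):
                yield frozenset(choice)                  # rule (2): collect
                if all(isinstance(c, frozenset) for c in choice):
                    yield frozenset().union(*choice)     # rule (3): collect, then union
        else:
            yield x                                      # rule (1): an individual
            if x in MEMBERS_OF:                          # rule (4): PARTS[x] => F[x]
                yield from parts(MEMBERS_OF[x])

    boys_and_girls = frozenset({frozenset({'John', 'Peter', 'Bill'}),
                                frozenset({'Mary', 'Jane', 'Lucy'})})
    for p in set(parts(boys_and_girls)):
        print(p)    # the two-part and six-part decompositions derived below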
Suppose PARTS is applied to the structure:

{ {John,Peter,Bill}, {Mary,Jane,Lucy} }

corresponding, perhaps, to the denotation of the NP "The boys and the girls". The alternative sets of parts of this structure are:

(1) {John,Peter,Bill,Mary,Jane,Lucy}
(2) { {John,Peter,Bill}, {Mary,Jane,Lucy} }

Let us see how these are produced by recursively expanding the function PARTS. Suppose we invoke rule (3) to begin with. This produces:

∪(for: { {John,Peter,Bill}, {Mary,Jane,Lucy} }, collect: PARTS)

Now suppose we invoke rule (2) on this, resulting in:

∪(for: { {John,Peter,Bill}, {Mary,Jane,Lucy} }, collect: λx: (for: x, collect: PARTS))

In the final step, we invoke rule (1) to produce:

∪(for: { {John,Peter,Bill}, {Mary,Jane,Lucy} }, collect: λx: (for: x, collect: λx: x))

This expression simplifies to:

{John,Peter,Bill,Mary,Jane,Lucy}

which is just the expansion (1) above.

Now suppose we had invoked rule (2) to start with, instead of rule (3). This would produce the expansion:

for: { {John,Peter,Bill}, {Mary,Jane,Lucy} }, collect: PARTS

The rest of the derivation is the same as in the first example. We invoke rule (2) to produce the expansion:

for: { {John,Peter,Bill}, {Mary,Jane,Lucy} }, collect: λx: (for: x, collect: PARTS)

Rule (1) is then invoked:

for: { {John,Peter,Bill}, {Mary,Jane,Lucy} }, collect: λx: (for: x, collect: λx: x)

There are now no more occurrences of PARTS left. This expression reduces by logical equivalence to:

{ {John,Peter,Bill}, {Mary,Jane,Lucy} }

which is just the expansion (2).

We now proceed to the distributing translation rules for verbs, which make use of the PARTS function in order to account for the multiple distributional readings economically.

4.3 The Distributing Translation Rules

The form below is an example of the new scheme for the translation rules, a translation which can cope with the problem originally posed in section 2.1, "The squares contain the circles":⁵

CONTAIN ⇒ λu,v: ∀x ∈ PARTS[{v}]: ∃y ∈ PARTS[{u}]: CONTAIN′[y,x]

This revised system can now cope with multi-level plural arguments to the verb "contain". Suppose we are given "The squares contain the circles and triangles". The initial translation is then:

∀x ∈ PARTS[{{CIRCLES,TRIANGLES}}]: ∃y ∈ PARTS[{SQUARES}]: CONTAIN′[y,x]

The ranges of the quantifiers each contain an occurrence of the PARTS function, so it is ambiguous as to what they quantify over. Note, however, that the WML predicate CONTAIN′ is typed as being applicable to individuals only. Inappropriate expansions for the quantifier ranges therefore result in anomalous expressions which the translation algorithm filters out.

The first range restriction:

PARTS[{{CIRCLES,TRIANGLES}}]

is expanded to:

∪(for: {{CIRCLES,TRIANGLES}}, collect: λx: ∪(for: x, collect: λx: (for: x, collect: λx: x)))

by a sequence of expansion rule applications (3),(3),(2),(2), and (1). This final form is equivalent to:

∪(CIRCLES,TRIANGLES)

The other restriction, 'PARTS[{SQUARES}]', is reduced by similar means to just 'SQUARES'. We have, finally:

∀x ∈ ∪(CIRCLES,TRIANGLES): ∃y ∈ SQUARES: CONTAIN′[y,x]

which expresses the desired truth-conditions.

⁵Note one other modification with respect to the treatment presented in section 2.1: predicates translating verbs are now allowed to operate on individuals instead of sets only.
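The filtering step can be simulated on this example. In the self-contained Python toy below (the shapes and CONTAINS relation are invented), the candidate range that still contains sets is discarded as anomalous, and the surviving range makes the formula true:

    SQUARES = {'s1', 's2'}
    CIRCLES, TRIANGLES = frozenset({'c1', 'c2'}), frozenset({'t1'})
    CONTAINS = {('s1', 'c1'), ('s1', 't1'), ('s2', 'c2')}   # (container, content)

    # two candidate expansions of PARTS[{{CIRCLES,TRIANGLES}}]
    candidates = [frozenset({CIRCLES, TRIANGLES}),   # one level of structure kept
                  CIRCLES | TRIANGLES]               # fully unioned, as derived above

    # CONTAIN' is typed to apply to individuals, so ranges whose members
    # are sets are semantically anomalous and are filtered out
    wellformed = [r for r in candidates
                  if all(not isinstance(e, frozenset) for e in r)]

    for rng in wellformed:
        print(all(any((y, x) in CONTAINS for y in SQUARES) for x in rng))   # True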
4.4 Partial Distribution of Collective Verbs

Let us take up again the example "The juries and the committees gather". Recall that this has three readings: one in which each deliberative body gathers apart, another in which the various jurors combine in a gathering and the various committee members combine separately in another gathering, and finally, one in which all persons concerned, be they jurors or committee members, come together to form a single gathering. These readings are accounted for by the following translation rule for GATHER:

GATHER ⇒ λx: ∀y ∈ PARTS[{x}]: GATHER′[y]

Applying this rule to the initial translation:

GATHER[{{JURIES,COMMITTEES}}]

produces the expression:

∀y ∈ PARTS[{{JURIES,COMMITTEES}}]: GATHER′[y]

The various readings of this now depend upon what the range of quantification is expanded to. This must be a set of sets of persons in order to fit the type of GATHER′, which is a predicate on sets of persons.

We will now show how the PARTS function derives the decompositions that allow each of these readings. Because of the collective nature of the terms "jury" and "committee", we will use rule (4), which uses an arbitrary descriptive function to decompose an element.

Suppose that 'JURIES' has the extension '{j1,j2,j3}' and 'COMMITTEES' has the extension '{c1,c2,c3}'. Suppose also that the descriptive function 'MEMBERS-OF' is available, taking an organization such as a jury or committee onto the set of people who are its members. Let it have an extension corresponding to:

j1 → {a,b,c}    j2 → {d,e,f}    j3 → {g,h,i}
c1 → {j,k,l}    c2 → {m,n,o}    c3 → {p,q,r}

where the letters a, b, c, etc. represent persons.

The derivation (3),(3),(2),(4) yields the first of the readings above, in which the verb is partially distributed over two levels. The range of quantification has the extension:

{ {a,b,c},{d,e,f},{g,h,i},{j,k,l},{m,n,o},{p,q,r} }

This is the reading in which each jury and committee gathers by itself.

The derivation (3),(2),(3),(4) yields the second reading, in which the verb is partially distributed over the outer level. The derivation produces a range of quantification whose extension is:

{ {a,b,c,d,e,f,g,h,i},{j,k,l,m,n,o,p,q,r} }

This is the reading in which the jurors gather in one place and the committee members in another.

Finally, the derivation (2),(3),(3),(4) yields the third reading, which is completely collective. This derivation produces a range of quantification whose extension is:

{ {a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r} }

This is the reading in which all persons who are either jurors or committee members gather.

5 OTHER PARTS OF SPEECH

In this section we discuss distributional considerations for argument-taking parts of speech other than verbs - specifically prepositions and adjectives. Prepositions in our system are translated as two-place predicates, adjectives as one-place predicates. The distributional issues they raise are therefore treatable by the same machinery we have developed for transitive and intransitive verbs.

5.1 Prepositions

Prepositions are subject to distributional considerations that are analogous to those of transitive verbs. Consider:

The books are on the shelves

Given facts about physical objects and spatial location, the most plausible meaning for this sentence is that every book is on some shelf or other. This would be expressed by the translation rule:

λu,v: ∀x ∈ PARTS(u): ∃y ∈ PARTS(v): ON′(x,y)

Note the similarity with the translation rule for "CONTAIN",
from which it differs in that the roles of the first and the second argument in the quantificational structure are reversed.

5.2 Adjectives

The treatment of adjectives in regular form is exactly analogous with that given intransitive verbs such as "walk". Thus, for the adjective "red", we may have the translation rule:

RED ⇒ λu: ∀x ∈ PARTS(u): RED′(x)

A more interesting problem is seen in sentences containing the comparative form of an adjective, as in:

The frigates are faster than the carriers

What are the truth-conditions of this sentence? One might construe it to mean that every frigate is faster than every carrier, but this seems unnecessarily strong. Intuitively, it seems to mean something a little weaker than that, allowing perhaps for a few exceptions in which a particular carrier is not faster than a particular frigate.

On the other hand, another requirement eliminates truth-conditions which are too weak. For if "The frigates are faster than the carriers" is true, it must surely be the case that "The carriers are faster than the frigates" is false. This requirement holds not only for "faster", but for the comparative form of any adjective.

The treatment of comparative forms in the Spoken Language System can be illustrated by the following schema:

(λx,y: larger(<uf>(x),<uf>(y)))

in which '<uf>' is filled in by an "underlying function" particular to the adjective in question. For the adjective "fast", this underlying function is "speed".

The requirement of anti-symmetry for the distributions of comparatives is now reduced to a requirement of anti-symmetry for the distributional translation of the EFL constant "larger". In this way, the anti-symmetry requirement is expressed for all comparatives at once. Obviously anti-symmetry is fulfilled for the universal-universal translation, but, as we have pointed out, this is a very strong condition. There is another, weaker condition which fulfills anti-symmetry:

larger ⇒ λu,v: [∀x ∈ PARTS[u]: ∃y ∈ PARTS[v]: larger′[x,y]] ∧ [∀x ∈ PARTS[v]: ∃y ∈ PARTS[u]: larger′[y,x]]

When applied to the sentence above, this condition simply states that for every frigate there exists a carrier that is slower than it, and conversely, for every carrier there exists a frigate that is faster than it. This is anti-symmetric as required. For if there is some frigate that is faster than every carrier, there cannot be some carrier that is faster than every frigate.

6 THE ALGORITHM

The algorithm which applies this method is an extension of the previously-mentioned procedure of generating all possible WML expansions from an EFL expression and weeding out semantically anomalous ones. The two steps of generate and test are now embedded in a loop that simply iterates until all EFL-level constants, including 'PARTS', are expanded away. This gives us a breadth-first search of the possible recursive expansions of 'PARTS', one which nevertheless does not fail to halt because semantically anomalous versions, such as those attempting to quantify over expressions which are not sets, or those applying descriptive relations to arguments of the wrong type, are weeded out and are not pursued any further in the next iteration.

We can now define the function TO-WML, which takes an EFL expression and produces a set of WML expressions without EFL constants.
It is:

TO-WML(exp) =def
  expansions ← {exp}
  until ¬(∃e ∈ expansions: EFL?(e)) do
  begin
    expansions ← ∪(for: expansions, collect: λe: (for: AMBIG-TRANS(e), collect: SIMPLIFY))
    expansions ← {e ∈ expansions: TYPEOF(e) ≠ NULL-SET}
  end

The function AMBIG-TRANS expands the EFL-level constants in its input, and returns a set of expressions. The function EFL? returns true if any EFL constants are present in its argument. The function TYPEOF takes an expression and returns its type; it returns the symbol NULL-SET if the expression is semantically anomalous. Note that if a particular expansion is found to be semantically anomalous it is removed from consideration. If no non-anomalous expansion can be found the procedure halts and the empty set of expansions {} is returned. In this case the entire EFL expression is viewed as anomalous and the interpretation which gave rise to it can be rejected.

7 CONCLUSIONS

We have shown how treatments of the collective/distributive distinction must take into account the phenomenon of "partial distributivity", in which a collective verb optionally distributes over the outer levels of structure in what we call a "multi-level" plural. Multiple levels of structure must be allowed in the semantics of such plural NPs as "the boys and the girls", "the committees", etc.

We have presented a computational mechanism which accounts for these phenomena through a framework of recursive translation rules. This framework generates quantifications over alternative levels of plural structure in an NP, and can handle NPs of arbitrarily complex plural structure. It is economical in its means of producing arbitrary numbers of readings: the multiple readings of a sentence such as "The juries and the committees gathered" are expressed with just one translation rule.

References

[1] Bartsch, R. The Semantics and Syntax of Number and Numbers. In Kimball, J.P. (editor), Syntax and Semantics, Vol. 2. Seminar Press, New York, 1973.

[2] Bennett, M.R. Some Extensions of a Montague Fragment of English. Indiana University Linguistics Club, 1975.

[3] W.J.H.J. Bronnenberg, H.C. Bunt, S.P.J. Landsbergen, R.J.H. Scha, W.J. Schoenmakers and E.P.C. van Utteren. The Question Answering System PHLIQA1. In L. Bolc (editor), Natural Language Question Answering Systems. Macmillan, 1980.

[4] Landman, Fred. Groups. 1987. University of Massachusetts, Amherst.

[5] Landsbergen, S.P.J. and Scha, R.J.H. Formal Languages for Semantic Representation. In Allén and Petöfi (editors), Aspects of Automatized Text Processing: Papers in Textlinguistics. Hamburg: Buske, 1979.

[6] Montague, R. The Proper Treatment of Quantification in Ordinary English. In J. Hintikka, J. Moravcsik and P. Suppes (editors), Approaches to Natural Language: Proceedings of the 1970 Stanford Workshop on Grammar and Semantics, pages 221-242. Dordrecht: D. Reidel, 1973.

[7] Scha, Remko J.H. Distributive, Collective and Cumulative Quantification. In Jeroen Groenendijk, Theo M.V. Janssen, Martin Stokhof (editors), Formal Methods in the Study of Language, Part 2, pages 483-512. Mathematisch Centrum, Amsterdam, 1981.

[8] Scha, Remko J.H. Logical Foundations for Question-Answering. Philips Research Laboratories, Eindhoven, The Netherlands, 1983. M.S. 12.331.

[9] Stallard, David G. The Logical Analysis of Lexical Ambiguity. In Proceedings of the ACL. Association for Computational Linguistics, July 1987.
DEDUCTIVE PARSING WITH MULTIPLE LEVELS OF REPRESENTATION*

Mark Johnson, Brain and Cognitive Sciences, M.I.T.

ABSTRACT

This paper discusses a sequence of deductive parsers, called PAD1 - PAD5, that utilize an axiomatization of the principles and parameters of GB theory, including a restricted transformational component (Move-α). PAD2 uses an inference control strategy based on the "freeze" predicate of Prolog-II, while PAD3 - PAD5 utilize the Unfold-Fold transformation to transform the original axiomatization into a form that functions as a recursive descent Prolog parser for the fragment.

INTRODUCTION

This paper reports on several deductive parsers for a fragment of Chomsky's Government and Binding theory (Chomsky 1981, 1986; Van Riemsdijk and Williams 1984). These parsers were constructed to illustrate the 'Parsing as Deduction' approach, which views a parser as a specialized theorem-prover which uses knowledge of a language (i.e. its grammar) as a set of axioms from which information about the utterances of that language (e.g. their structural descriptions) can be deduced. This approach is directly inspired by the seminal paper by Pereira and Warren (1983). Johnson (1988a) motivates the Parsing as Deduction approach in more detail than is possible here, and Johnson (1988b) extends the techniques presented in this paper to deal with a more complex fragment.

*Steven Abney, Bob Berwick, Nelson Correa, Tim Hickey, Elizabeth Highleyman, Ewan Klein, Peter Ludlow, Martin Kay, Fernando Pereira and Whitman Richards all made helpful suggestions regarding this work, although all responsibility for errors remains my own. The research reported here was supported by a grant by the Systems Development Foundation to the Center for the Study of Language and Information at Stanford University and a Postdoctoral Fellowship awarded by the Fairchild Foundation through the Brain and Cognitive Sciences Department at MIT.

In this paper I describe a sequence of model deductive parsers, called PAD1 - PAD5, for a fragment of GB theory. These parsers are not designed for practical application, but simply to show that GB deductive parsers can actually be built. These parsers take PF representations as their input and produce LF representations as their output. They differ from most extant GB parsers in that they make explicit use of the four levels of representation that GB attributes to an utterance - namely D-structure, S-structure, PF and LF - and the transformational relationship that holds between them. A "grammar" for these parsers consists entirely of a set of parameter values that parameterize the principles of GB theory - thus the parsers described here can be regarded as 'principle-based' (Berwick 1987) - and the parsers' top-level internal structure transparently reflects (some of) the principles of that theory; X' and Θ theory apply at D-structure, Case theory applies at S-structure, Move-α is stated as a relation between D- and S-structure, and LF-movement relates S-structure and LF. In particular, the constraints on S-structures that result from the interaction of Move-α with principles constraining D-structure (i.e. X' and Θ theories) are used constructively throughout the parsing process.

The PAD parsers are designed to directly mirror the deductive structure of GB theory. Intuitively, it seems that deductive parsers should be able to mirror theories with a rich internal deductive structure; these parsers show that to a first approximation this is in fact the case.
For example, the PAD parsers have no direct specification of a 'rule' of Passive; rather they deduce the relevant properties of the Passive construction from the interaction of Θ theory, Move-α, and Case theory. It must be stressed that the PAD parsers are only 'model' parsers. The fragment of English they accept could only be called 'restricted'. They have no account of WH-movement, and Move-α is restricted to apply to lexical categories, for example, and they incorporate none of the principles of Bounding Theory. However, the techniques used to construct these parsers are general, and they should extend to a more substantial fragment.

A SKETCH OF GB THEORY

In the remainder of this section I sketch the aspects of GB theory relevant to the discussion below; for more detail the reader should consult one of the standard texts (e.g. Van Riemsdijk and Williams 1986). GB theory posits four distinct representations of an utterance: D-structure, S-structure, PF and LF. To a first approximation, D-structure represents configurationally the thematic or predicate-argument structure of the utterance, S-structure represents the utterance's surface constituent structure, PF represents its phonetic form, and LF ("Logical Form") is a configurational representation of the scopal relationships between the quantificational elements present in the utterance. The PF and LF representations constitute the interface between language and other cognitive systems external to the language module (Chomsky 1986, p. 68). For example, the PF representation "Everybody is loved" together with the D-structure, S-structure and LF representations shown in Figure 1 might constitute a well-formed quadruple for English.

[Tree diagrams omitted: the D-structure, S-structure and LF of "Everybody is loved".]

Figure 1: Representations of GB Theory.

In order for such a quadruple to be well-formed it must satisfy all of the principles of grammar; e.g. the D-structure and S-structure must be related by Move-α, the D-structure must satisfy X'-theory and Θ-theory, etc. This is shown schematically in Figure 2, where the shaded rounded boxes indicate the four levels of representation, the boxes indicate relations that must hold simultaneously between pairs of structures, and the ellipses designate properties that must hold of a single structure. This diagram is based on the organization of GB theory sketched by Van Riemsdijk and Williams (1986, p. 310), and represents the organization of principles and structures incorporated in the parsers discussed below.

[Diagram omitted.]

Figure 2: (Some of) The Principles of GB Theory.

The principles of grammar are parameterized; the set of structures they admit depends on the value of these parameters. These principles are hypothesised to be innate (and hence universally true of all human languages; thus they are often called 'Universal Grammar'), so the extra knowledge that a human requires in order to know a language consists entirely of the values (or settings) of the parameters plus the lexicon for the language concerned. The syntax of the English fragment accepted by the parsers discussed below is completely specified by the following list of parameters. The first two parameters determine the X' component, the third parameter determines the Move-α relation, and the fourth parameter identifies the direction of Case assignment.

(1) headFirst.
    specFirst.
    movesInSyntax(np).
    rightwardCaseAssignment.
I conclude this section with some brief remarks on the computational problems involved in constructing a GB parser. It seems that one can only construct a practical GB parser by simultaneously using constraints from all of the principles of grammar mentioned above (excepting LF-Movement), but this involves being able to "invert" Move-α 'on the fly'. Because of the difficulty of doing this, most implementations of GB parsers ignore Move-α entirely and reformulate X' and Θ theories so that they apply at S-structure instead of D-structure, even though this weakens the explanatory power of the theory and complicates the resulting grammar, as Chomsky (1981) points out. The work reported here shows that it is possible to invert a simple formulation of Move-α 'on the fly', suggesting that it is possible to build parsers that take advantage of the D-structure/S-structure distinction offered by GB theory.

PARSING AS DEDUCTION

As just outlined, GB theory decomposes the knowledge of a language possessed by a competent user into two components: (i) the universal component (Universal Grammar), and (ii) a set of parameter values and a lexicon, which together constitute the knowledge of that particular language above and beyond the universal component.

The relationship between these two components of a human's knowledge of a language and the knowledge of the utterances of that language that they induce can be formally described as follows: we regard Universal Grammar as a logical theory, i.e. a deductively closed set of statements expressed in a specialized logical language, and the lexicon and parameter values that constitute the specific knowledge of a human language beyond Universal Grammar as a set of formulae in that logical language. In the theory of Universal Grammar, these formulae imply statements describing the linguistic properties of utterances of that human language; these statements constitute the knowledge of utterances that the parser computes.

The parsers presented below compute instances of the 'parse' relation, which is true of a PF-LF pair if and only if there is a D-structure and an S-structure such that the D-structure, S-structure, PF, LF quadruple is well-formed with respect to all of the (parameterized) principles of grammar. For simplicity, the 'phonology' relation is approximated here by the S-structure 'yield' function. Specifically, the inputs to the language processor are PF representations, and the processor produces the corresponding LF representations as output. The relationship of the parameter settings and lexicon to the 'parse' relation is sketched in Figure 3.

Knowledge of the Language

  Parameter Settings:
    headFirst.  specFirst.  movesInSyntax(np).  rightwardCaseAssignment.
  Lexicon:
    thetaAssigner(love).  thetaAssigner(loved).  nonThetaAssigner(sleep).  ...

    | imply in the theory of Universal Grammar
    v

Knowledge of Utterances of the Language

  parse([everybody,-s,love,somebody],
        [everybody_i [somebody_j [I'' [NP e_i] [I' [I -s] [V'' [V' [V love] [NP e_j]]]]]]])
  parse([everybody,-s,love,somebody],
        [somebody_j [everybody_i [I'' [NP e_i] [I' [I -s] [V'' [V' [V love] [NP e_j]]]]]]])
  ...

Figure 3: Knowledge of a Language and its Utterances.

It is important to emphasise that the choice of logical language and the properties of utterances computed by the parser are made here simply on the basis of their familiarity and simplicity: no theoretical significance should be attached to them.
I do not claim that first-order logic is the 'language of the mind', nor that the knowledge of utterances computed by the human language processor consists of instances of the 'parse' relation (see Berwick and Weinberg 1984 for further discussion of this last point).

To construct a deductive parser for GB one builds a specialized theorem-prover for Universal Grammar that relates the parameter values and lexicon to the 'parse' relation, provides it with parameter settings and a lexicon as hypotheses, and uses it to derive the consequences of these hypotheses that describe the utterance of interest. The Universal Grammar inference engine used in the PAD parsers is constructed using a Horn-clause theorem-prover (a Prolog interpreter). The Horn-clause theorem-prover is provided with an axiomatization U of the theory of Universal Grammar as well as the hypotheses H that represent the parameter settings and lexicon. Since a set of hypotheses H implies a consequence F in the theory of Universal Grammar if and only if H ∪ U implies F in first-order logic, a Horn-clause theorem-prover using axiomatization U is capable of deriving the consequences of H that follow in the theory of Universal Grammar. Thus the PAD parsers have the logical structure diagrammed in Figure 4.

Knowledge of Language

  Axiomatization of Universal Grammar:
    parse(String, LF) :-
        xBar(infl2, DS),
        theta(infl2, 0, DS),
        moveAlpha(DS, [], SS, []),
        caseFilter(infl2, 0, SS),
        phonology(String/[], SS),
        lfMovement(SS, LF).
  Parameter Settings + Lexicon:
    headFirst.  ...  thetaAssigner(love).  ...

    | imply in First-order Logic
    v

Knowledge of Utterances of the Language

  parse([everybody,-s,love,somebody],
        [everybody_i [somebody_j [I'' [NP e_i] [I' [I -s] [V'' [V' [V love] [NP e_j]]]]]]])
  ...

Figure 4: The Structure of the PAD Parsers.

The clause defining the 'parse' relation given in Figure 4 as part of the axiomatization of GB theory is the actual Prolog definition of 'parse' used in the PAD1 and PAD2 parsers. Thus the top-level structure of the knowledge of language employed by the PAD parsers mirrors the top-level structure of GB theory. Ideally the internal structure of the various principles of grammar should reflect the internal organization of the principles of GB (e.g. Case assignment should be defined in terms of Government), but for simplicity the principles are axiomatized directly here. For reasons of space a complete description of all of the principles is not given here; however a sketch of one of the principles, the Case Filter, is given in the remainder of this section. The other principles are implemented in a similar fashion.

The Case Filter as formulated in PAD applies recursively throughout the S-structure, associating each node with one of the three atomic values ass, rec or 0. These values represent the Case properties of the node they are associated with; a node associated with the property ass must be a Case assigner, a node associated with the property rec must be capable of being assigned Case, and a node associated with the property 0 must be neutral with respect to Case. The Case Filter determines if there is an assignment of these values to nodes in the tree consistent with the principles of Case assignment. A typical assignment of Case properties to the nodes of an S-structure in English is shown in Figure 5, where the Case properties of a node are depicted by the boldface annotations on that node.¹
[Tree diagram omitted: the S-structure of "everybody is loved", annotated INFL'':0, NP:rec (everybody), INFL':ass, INFL:ass (be), VP:0, V':0, V:0 (loved), NP:0 (e).]

Figure 5: Case Properties.

The Case Filter is parameterized with respect to the predicates 'rightwardCaseAssignment' and 'leftwardCaseAssignment'; if these are specified as parameter settings of the language concerned, the Case Filter permits Case assigners and receivers to appear in the relevant linear order. The lexicon contains definitions of the one-place predicates 'noCase', 'assignsCase' and 'needsCase' which hold of lexical items with the relevant property; these predicates are used by the Case Filter to ensure the associations of Case properties with lexical items are valid. Specifically, the Case Filter licences the following structures:

(2a) a constituent with no Case properties may have a Case assigner and a Case receiver as daughters iff they are in the appropriate order for the language concerned,
(2b) a constituent with no Case properties may have any number of daughters with no Case properties,
(2c) a constituent with Case property C may be realized as a lexical item W if W is permitted by the lexicon to have Case property C, and
(2d) INFL' assigns Case to its left if its INFL daughter is a Case assigner.

¹These annotations are reminiscent of the complex feature bundles associated with categories in GPSG (Gazdar et al. 1986). The formulation here differs from the complex feature bundle approach in that the values associated with nodes by the Case Filter are not components of that node's category label, and hence are invisible to other principles of grammar. Thus this formulation imposes an informational encapsulation of the principles of grammar that the complex feature approach does not.

This axiomatization of Universal Grammar together with the parameter values and lexicon for English is used as the axiom set of a Prolog interpreter to produce the parser called PAD1. Its typical behaviour is shown below.²

: parse([everybody, -s, love, somebody], LF)
LF = everybody::i^somebody::j^infl2:[np:i, infl1:[infl: #(-s), vp:[v1:[v: #love, np:j]]]]
LF = somebody::j^everybody::i^infl2:[np:i, infl1:[infl: #(-s), vp:[v1:[v: #love, np:j]]]]
No (more) solutions

: parse([harry, be, loved], LF)
LF = infl2:[np: #harry, infl1:[infl: #be, vp:[v1:[v: #loved, np:[]]]]]
No (more) solutions

²For the reasons explained below, the X' principle used in this run of the parser was restricted to allow only finitely many D-structures.

AN ALTERNATIVE CONTROL STRUCTURE

Because it uses the SLD inference control strategy of Prolog with the axiomatization of Universal Grammar shown above, PAD1 functions as a 'generate and test' parser. Specifically, it enumerates all D-structures that satisfy X'-theory, filters those that fail to satisfy Θ-theory, computes the corresponding S-structures using Move-α, removes all S-structures that fail to satisfy the Case Filter, and only then determines if the terminal string of the S-structure is the string it was given to parse. Since the X' principle admits infinitely many D-structures the resulting procedure is only a semi-decision procedure, i.e. the parser is not guaranteed to terminate on ungrammatical input.

Clearly the PAD1 parser does not use its knowledge of language in an efficient manner. It would be more efficient to co-routine between the principles of grammar, checking each existing node for well-formedness with respect to these principles and ensuring that the terminal string of the partially constructed S-structure matches the string to be parsed before creating any additional nodes.
Because the Parsing as Deduction framework conceptually separates the knowledge used by the processor from the manner in which that knowledge is used, we can use an inference control strategy that applies the principles of grammar in the manner just described. The PAD2 parser incorporates the same knowledge of language as PAD1 (in fact textually identical), but it uses an inference control strategy inspired by the 'freeze' predicate of Prolog-II (Cohen 1985, Giannesini et al. 1986) to achieve this goal. The control strategy used in PAD2 allows inferences using specified predicates to be delayed until specified arguments to these predicates are at least partially instantiated. When some other application of an inference rule instantiates such an argument the current sequence of inferences is suspended and the delayed inference performed immediately. Figure 6 lists the predicates that are delayed in this manner, and the argument that they require to be at least partially instantiated before inferences using them will proceed.

Predicate      Delayed on
X' theory      D-structure
Θ theory       D-structure
Move-α         S-structure
Case Filter    S-structure
Phonology      not delayed
LF-Movement    S-structure

Figure 6: The Control Strategy of PAD2.

With this control strategy the parsing process proceeds as follows. Inferences using the X', Θ, Case, Move-α and LF-movement principles are immediately delayed since the relevant structures are uninstantiated. The 'phonology' principle (a simple recursive tree-walking predicate that collects terminal items) is not delayed, so the parser begins performing inferences associated with it. These instantiate the top node of the S-structure, so the delayed inferences resulting from the Case Filter, Move-α and LF-movement are performed. The inferences associated with Move-α result in the instantiation of the top node(s) of the D-structure, and hence the delayed inferences associated with the X' and Θ principles are also performed. Only after all of the principles have applied to the S-structure node instantiated by the 'phonology' relation and the corresponding D-structure node(s) instantiated by Move-α are any further inferences associated with the 'phonology' relation performed, causing the instantiation of further S-structure nodes and the repetition of the cycle of activation and delaying. Thus the PAD2 parser simultaneously constructs D-structure, S-structure and LF representations in a top-down left-to-right fashion, functioning in effect as a recursive descent parser. This top-down behaviour is not an essential property of a parser such as PAD2; using techniques based on those described by Pereira and Shieber (1987) and Cohen and Hickey (1987) it should be possible to construct parsers that use the same knowledge of language in a bottom-up fashion.

TRANSFORMING THE AXIOMATIZATION

In this section I sketch a program transformation which transforms the original axiomatization of the grammar to an equivalent axiomatization that in effect exhibits this 'co-routining' behaviour when executed using Prolog's SLD inference control strategy. Interestingly, a data-flow analysis of this transformed axiomatization (viewed as a Prolog program) justifies a further transformation that yields an equivalent program that avoids the construction of D-structure trees altogether. The resulting parsers, PAD3 - PAD5, use the same parameter settings and lexicon as PAD1 and PAD2, and they provably compute the same PF-LF relationship as PAD2 does.
The particular techniques used to construct these parsers depend on the internal details of the formulation of the principles of grammar adopted here - specifically on their simple recursive structure - and I do not claim that they will generalize to more extensive formulations of these principles.

Recall that the knowledge of a language incorporated in PAD1 and PAD2 consists of two separate components, (i) parameter values and a lexicon, and (ii) an axiomatization U of the theory of Universal Grammar. The axiomatization U specifies the deductively closed set of statements that constitute the theory of Universal Grammar, and clearly any axiomatization U' equivalent to U (i.e. one which defines the same set of statements) defines exactly the same theory of Universal Grammar. Thus the original axiomatization U of Universal Grammar used in the PAD parsers can be replaced with any equivalent axiomatization U' and the system will entail exactly the same knowledge of the utterances of the language. A deductive parser using U' in place of U may perform a different sequence of inference steps but ultimately it will infer an identical set of consequences (ignoring non-termination).

The PAD3 parser uses the same parameter values and lexicon as PAD1 and PAD2, but it uses a reaxiomatization of Universal Grammar obtained by applying the Unfold/Fold transformation described and proven correct by Tamaki and Sato (1984) and Kanamori and Horiuchi (1988). Essentially, the Unfold/Fold transformation is used here to replace a sequence of predicates each of which recursively traverses the same structure by a single predicate recursive on that structure that requires every node in that structure to meet all of the constraints imposed by the original sequence of predicates. In the PAD3 parser the X', Θ, Move-α, Case and Phonology principles used in PAD1 and PAD2 are folded and replaced by the single predicate 'p' that holds of exactly the D-structure, S-structure, PF triples admitted by the conjunction of the original principles. Because the reaxiomatization technique used here replaces the original axiomatization of PAD1 and PAD2 with an equivalent one (in the sense of the minimum Herbrand model semantics), the PAD3 parser provably infers exactly the same knowledge of language as PAD1 and PAD2. Because PAD3's knowledge of the principles of grammar that relate D-structure, S-structure and PF is now represented by the single recursive predicate 'p' that checks the well-formedness of a node with respect to all of the relevant principles, PAD3 exhibits the 'co-routining' behaviour of PAD2 rather than the 'generate and test' behaviour of PAD1, even when used with the standard SLD inference control strategy of Prolog.³

PAD3 constructs D-structures, just as PAD1 and PAD2 do. However, a simple analysis of the data dependencies in the PAD3 program shows that in this particular case no predicate uses the D-structure value returned by a call to predicate 'p' (even when 'p' calls itself recursively, the D-structure value returned is ignored). Therefore replacing the predicate 'p' with a predicate 'p1' exactly equivalent to 'p' except that it avoids construction of any D-structures does not affect the set of consequences of these axioms.⁴ The PAD4 parser is exactly the same as the PAD3 parser, except that it uses the predicate 'p1' instead of 'p', so it therefore computes exactly the same PF-LF relationship as all of the other PAD parsers, but it avoids the construction of any D-structure nodes; a toy illustration of this fusion-then-deletion step is sketched below.

³ Although in terms of control strategy PAD3 is very similar to PAD2, it is computationally much more efficient than PAD2, because it is executed directly, whereas PAD2 is interpreted by the meta-interpreter with the 'delay' control structure.

⁴ The generation of the predicate 'p1' from the predicate 'p' can be regarded as an example of static garbage-collection (I thank T. Hickey for this observation). Clearly, a corresponding run-time garbage collection operation could be performed on the nodes of the partially constructed D-structures in PAD2.
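As a toy Python analogue of this two-step reaxiomatization (the names and the mini-principles are invented; the real transformation operates on Prolog clauses):

    # Step 0: separate principles, each traversing the same tree.
    # A tree is a word (str) or a pair (label, [children]).
    def xbar_ok(t):      # stand-in for X'-theory: at most binary branching
        return isinstance(t, str) or (len(t[1]) <= 2 and all(map(xbar_ok, t[1])))

    def theta_ok(t):     # stand-in for Theta-theory: no "?" labels
        return isinstance(t, str) or (t[0] != "?" and all(map(theta_ok, t[1])))

    def wellformed(t):   # 'generate and test': two full traversals
        return xbar_ok(t) and theta_ok(t)

    # Step 1 (Unfold/Fold): one recursion checks all constraints at each
    # node, and also returns a D-structure-like value for each subtree.
    def p(t):
        if isinstance(t, str):
            return (t, True)                     # (D-structure, well-formed?)
        ds_kids, oks = zip(*map(p, t[1])) if t[1] else ((), ())
        ok = len(t[1]) <= 2 and t[0] != "?" and all(oks)
        return ((t[0], list(ds_kids)), ok)

    # Step 2 (dataflow): no caller ever uses the first component of p's
    # result, so p1 simply stops building it - the PAD3 -> PAD4 step.
    def p1(t):
        if isinstance(t, str):
            return True
        return len(t[1]) <= 2 and t[0] != "?" and all(map(p1, t[1]))

The point is only that p1 accepts exactly the trees that the pair (xbar_ok, theta_ok) accepts, while traversing once and allocating no intermediate structure.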
That is, the PAD4 parser makes use of exactly the same parameter settings and lexicon as the other PAD parsers, and it uses this knowledge to compute exactly the same knowledge of utterances. It differs from the other PAD parsers in that it does not use this knowledge to explicitly construct a D-structure representation of the utterance it is parsing.

This same combination of the Unfold/Fold transformation followed by data dependency analysis can also be performed on all of the principles of grammar simultaneously. The Unfold/Fold transformation produces a predicate in which a data-dependency analysis identifies both D-structure and S-structure values as ignored. The PAD5 parser uses the resulting predicate as its axiomatization of Universal Grammar; thus PAD5 is a parser which uses exactly the same parameter values and lexicon as the earlier parsers to compute exactly the same PF-LF relationship as these parsers, but it does so without explicitly constructing either D-structures or S-structures.

To summarize, this section presents three new parsers. The first, PAD3, utilized a reaxiomatization of Universal Grammar, which when coupled with the SLD inference control strategy of Prolog resulted in a parser that constructs D-structures and S-structures 'in parallel', much like PAD2. A data dependency analysis of the PAD3 program revealed that the D-structures computed were never used, and PAD4 exploits this fact to avoid the construction of D-structures entirely. The techniques used to generate PAD4 were also used to generate PAD5, which avoids the explicit construction of both D-structures and S-structures.

CONCLUSION

In this paper I described several deductive parsers for GB theory. The knowledge of language that they used incorporated the top-level structure of GB theory, thus demonstrating that parsers can actually be built that directly reflect the structure of this theory.

This work might be extended in several ways. First, the fragment of English covered by the parser could be extended to include a wider range of linguistic phenomena. It would be interesting to determine if the techniques described here to axiomatize the principles of grammar and to reaxiomatize Universal Grammar to avoid the construction of D-structures could be used on this enlarged fragment - a program transformation for reaxiomatizing a more general formulation of Move-α is given in Johnson (1988b). Second, the axiomatization of the principles of Universal Grammar could be reformulated to incorporate the 'internal' deductive structure of the components of GB theory. For example, one might define c-command or government as primitives, and define the principles in terms of these. It would be interesting to determine if a deductive parser can take advantage of this internal deductive structure in the same way that the PAD parsers utilized the deductive relationships between the various principles of grammar.
Third, it would be interesting to investigate the performance of parsers using various inference control strategies. The co-routining strategy employed by PAD2 is of obvious interest, as are its deterministic and non-deterministic bottom-up and left-corner variants. These only scratch the surface of possibilities, since the Parsing as Deduction framework allows one to straightforwardly formulate control strategies sensitive to the various principles of grammar. For example, it is easy to specify inference control strategies that delay all computations concerning particular principles (e.g. binding theory) until the end of the parsing process. Fourth, one might attempt to develop specialized logical languages that are capable of expressing knowledge of languages and knowledge of utterances in a more succinct and computationally useful fashion than the first-order languages.

BIBLIOGRAPHY

Berwick, R. (1987) Principle-based Parsing. MIT Artificial Intelligence Laboratory Technical Report No. 972. Also to appear in The Processing of Linguistic Structure, The MIT Press, Cambridge, Mass.
Berwick, R. and A. Weinberg. (1984) The Grammatical Basis of Linguistic Performance. The MIT Press, Cambridge, Mass.
Chomsky, N. (1981) Lectures on Government and Binding. Foris, Dordrecht.
Chomsky, N. (1986) Knowledge of Language, Its Nature, Origin and Use. Praeger, New York.
Cohen, J. (1985) Describing Prolog by its Interpretation and Compilation. C. ACM 28:12, p. 1311-1324.
Cohen, J. and T. Hickey. (1987) Parsing and Compiling Using Prolog. ACM Trans. Programming Languages and Systems 9:2, p. 125-163.
Gazdar, G., E. Klein, G. Pullum and I. Sag. (1985) Generalized Phrase Structure Grammar. Basil Blackwell, Oxford.
Giannesini, F., H. Kanoui, R. Pasero, and M. van Caneghem. (1986) Prolog. Addison-Wesley, Reading, Mass.
Johnson, M. (1988a) Parsing as Deduction, the Use of Knowledge of Language, ms.
Johnson, M. (1988b) Computing with Move-α using the Unfold-Fold Transformation, ms.
Kanamori, T. and K. Horiuchi. (1988) Construction of Logic Programs Based on Generalized Unfold/Fold Rules, in Lassez, ed., Proceedings of the Fourth International Conference of Logic Programming, p. 744-768, The MIT Press, Cambridge, Mass.
Pereira, F. and S. Shieber. (1987) Prolog and Natural Language Processing. CSLI Lecture Notes Series, distributed by Chicago University Press, Chicago.
Pereira, F. and D. Warren. (1983) Parsing as Deduction. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, MIT, Cambridge, Mass.
Tamaki, H. and T. Sato. (1984) Unfold/Fold Transformation of Logic Programs. In Proceedings of the Second International Logic Programming Conference, p. 127-138, Uppsala University, Uppsala, Sweden.
Van Riemsdijk, H. and E. Williams. (1986) Introduction to the Theory of Grammar. The MIT Press, Cambridge, Mass.
Graph-structured Stack and Natural Language Parsing

Masaru Tomita
Center for Machine Translation and Computer Science Department
Carnegie-Mellon University
Pittsburgh, PA 15213

Abstract

A general device for handling nondeterminism in stack operations is described. The device, called a Graph-structured Stack, can eliminate duplication of operations throughout the nondeterministic processes. This paper then applies the graph-structured stack to various natural language parsing methods, including ATN, LR parsing, categorial grammar and principle-based parsing. The relationship between the graph-structured stack and a chart in chart parsing is also discussed.

1. Introduction

A stack plays an important role in natural language parsing. It is the stack which gives a parser context-free (rather than regular) power by permitting recursions. Most parsing systems make explicit use of the stack. Augmented Transition Network (ATN) [10] employs a stack for keeping track of return addresses when it visits a sub-network. Shift-reduce parsing uses a stack as a primary device; sentences are parsed only by pushing an element onto the stack or by reducing the stack in accordance with grammatical rules. Implementation of principle-based parsing [9, 1, 4] and categorial grammar [2] also often requires a stack for storing partial parses already built. Those parsing systems usually introduce backtracking or pseudo parallelism to handle nondeterminism, taking exponential time in the worst case. This paper describes a general device, a graph-structured stack. The graph-structured stack was originally introduced in Tomita's generalized LR parsing algorithm [7, 8]. This paper applies the graph-structured stack to various other parsing methods. Using the graph-structured stack, a system is guaranteed not to replicate the same work and can run in polynomial time. This is true for all of the parsing systems mentioned above: ATN, shift-reduce parsing, principle-based parsing, and perhaps any other parsing systems which employ a stack. The next section describes the graph-structured stack itself. Sections 3, 4, 5 and 6 then describe the use of the graph-structured stack in shift-reduce LR parsing, ATN, Categorial Grammars, and principle-based parsing, respectively. Section 7 discusses the relationship between the graph-structured stack and chart [5], demonstrating that chart parsing may be viewed as a special case of shift-reduce parsing with a graph-structured stack.

2. The Graph-structured Stack

In this section, we describe three key notions of the graph-structured stack: splitting, combining and local ambiguity packing.

2.1. Splitting

When a stack must be reduced (or popped) in more than one way, the top of the stack is split. Suppose that the stack is in the following state. The left-most element, A, is the bottom of the stack, and the right-most element, E, is the top of the stack. In a graph-structured stack, there can be more than one top, whereas there can be only one bottom.

A --- B --- C --- D --- E

Suppose that the stack must be reduced in the following three different ways.

F <-- D E
G <-- D E
H <-- C D E

Then after the three reduce actions, the stack looks like:

              /--- F
             /
A --- B --- C --- G
        \
         \--- H

2.2. Combining

When an element needs to be shifted (pushed) onto two or more tops of the stack, it is done only once by combining the tops of the stack. For example, if "I" is to be shifted onto F, G and H in the above example, then the stack will look like:

              /--- F ---\
             /           \
A --- B --- C --- G ------ I
        \                /
         \--- H --------/
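A minimal Python sketch of such a graph-structured stack may help fix the idea. The representation (nodes holding a symbol plus a set of predecessor nodes) is my own simplification, not Tomita's implementation:

    class Node:
        """One stack element; `preds` are the elements beneath it."""
        def __init__(self, sym, preds=()):
            self.sym, self.preds = sym, set(preds)

    def shift(tops, sym):
        """Combining: one new node is pushed onto all current tops."""
        return [Node(sym, preds=tops)]

    def reduce(top, length):
        """Splitting: popping `length` symbols can expose several different
        stack tails; return every (popped_symbols, exposed_base) pair."""
        if length == 0:
            return [([], top)]
        return [([top.sym] + popped, base)
                for p in top.preds
                for popped, base in reduce(p, length - 1)]

    # build A-B-C-D-E, then apply the three reductions from the text
    a = Node("A"); b = Node("B", [a]); c = Node("C", [b])
    d = Node("D", [c]); e = Node("E", [d])
    tops = [Node("F", [c]), Node("G", [c]),      # F <-- D E, G <-- D E
            Node("H", [b])]                      # H <-- C D E
    tops = shift(tops, "I")                      # combining: a single "I" node

Local ambiguity packing (Section 2.3 below) amounts to merging two freshly created nodes when they carry the same symbol and sit on the same predecessors.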
2.3. Local Ambiguity Packing

If two or more branches of the stack turn out to be identical, then they represent local ambiguity; the identical state of stack has been obtained in two or more different ways. They are merged and treated as a single branch. Suppose we have two rules:

J <-- F I
J <-- G I

After applying these two rules to the example above, the stack will look like:

A --- B --- C --- J
        \
         \--- H --- I

The branch of the stack, "A-B-C-J", has been obtained in two ways, but they are merged and only one is shown in the stack.

3. Graph-structured Stack and Shift-reduce LR Parsing

In shift-reduce parsing, an input sentence is parsed from left to right. The parser has a stack, and there are two basic operations (actions) on the stack: shift and reduce. The shift action pushes the next word in the input sentence onto the top of the stack. The reduce action reduces top elements of the stack according to a context-free phrase structure rule in the grammar. One of the most efficient shift-reduce parsing algorithms is LR parsing. The LR parsing algorithm pre-compiles a grammar into a parsing table; at run time, shift and reduce actions operating on the stack are deterministically guided by the parsing table. No backtracking or search is involved, and the algorithm runs in linear time. This standard LR parsing algorithm, however, can deal with only a small subset of context-free grammars called LR grammars, which are often sufficient for programming languages but clearly not for natural languages. If, for example, a grammar is ambiguous, then its LR table would have multiple entries, and hence deterministic parsing would no longer be possible. Figures 3-1 and 3-2 show an example of a non-LR grammar and its LR table. Grammar symbols starting with "*" represent pre-terminals. Entries "sh n" in the action table (the left part of the table) indicate that the action is to "shift one word from the input buffer onto the stack, and go to state n". Entries "re n" indicate that the action is to "reduce constituents on the stack using rule n". The entry "acc" stands for the action "accept", and blank spaces represent "error". The goto table (the right part of the table) decides to which state the parser should go after a reduce action. The LR parsing algorithm pushes state numbers (as well as constituents) onto the stack; the state number on the top of the stack indicates the current state. The exact definition and operation of the LR parser can be found in Aho and Ullman [3].

(1) S  --> NP VP
(2) S  --> S PP
(3) NP --> *n
(4) NP --> *det *n
(5) NP --> NP PP
(6) PP --> *prep NP
(7) VP --> *v NP

Figure 3-1: An Example Ambiguous Grammar

State  *det  *n    *v    *prep     $    | NP  PP  VP  S
0      sh3   sh4                        |  2          1
1                        sh6       acc  |      5
2                  sh7   sh6            |      9   8
3            sh10                       |
4                  re3   re3       re3  |
5                        re2       re2  |
6      sh3   sh4                        | 11
7      sh3   sh4                        | 12
8                        re1       re1  |
9                  re5   re5       re5  |
10                 re4   re4       re4  |
11                 re6   re6,sh6   re6  |      9
12                 re7   re7,sh6   re7  |      9

Figure 3-2: LR Parsing Table with Multiple Entries (derived from the grammar in Figure 3-1)

We can see that there are two multiple entries in the action table, on the rows of state 11 and 12 at the column labeled *prep. Roughly speaking, this is the situation where the parser encounters a preposition of a PP right after a NP. If this PP does not modify the NP, then the parser can go ahead to reduce the NP to a higher nonterminal such as PP or VP, using rule 6 or 7, respectively (re6 and re7 in the multiple entries). If, on the other hand, the PP does modify the NP, the parser must wait (sh6) until the PP is completed so it can build a higher NP using rule 5.
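Encoded as data, the two conflict cells of Figure 3-2 are exactly what force the stack to split. A hedged sketch of the lookup (the fragment below transcribes only states 11 and 12, and the dict encoding is mine, not Tomita's):

    # ACTION[state][lookahead] -> list of actions; more than one action
    # in a cell means nondeterminism at that point.
    ACTION = {
        11: {"*v": ["re6"], "*prep": ["re6", "sh6"], "$": ["re6"]},
        12: {"*prep": ["re7", "sh6"], "$": ["re7"]},
    }

    def next_actions(state, lookahead):
        acts = ACTION.get(state, {}).get(lookahead, [])
        # A deterministic LR parser requires len(acts) == 1; with a
        # graph-structured stack every listed action is applied to the
        # same top node, splitting the stack instead of backtracking.
        return acts

    print(next_actions(11, "*prep"))   # ['re6', 'sh6'] - a conflict cell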
With a graph-structured stack, these non-deterministic phenomena can be handled efficiently in polynomial time. Figure 3-3 shows the graph-structured stack right after shifting the word "with" in the sentence "I saw a man on the bed in the apartment with a telescope." Further description of the generalized LR parsing algorithm may be found in Tomita [7, 8].

[Figure 3-3: A Graph-structured Stack - the stack of grammar symbols and state numbers right after shifting "with" in "I saw a man on the bed in the apartment with a telescope"; the diagram is not reproduced here.]

4. Graph-structured Stack and ATN

An ATN parser employs a stack for saving local registers and a state number when it visits a subnetwork recursively. In general, an ATN is nondeterministic, and the graph-structured stack is viable as may be seen in the following example. Consider the simple ATN, shown in Figure 4-1, for the sentence "I saw a man with a telescope." After parsing "I saw", the parser is in state S3 and about to visit the NP subnetwork, pushing the current environment (the current state symbol and all registers) onto the stack. After parsing "a man", the stack is as shown in Figure 4-2 (the top of the stack represents the current environment). Now, we are faced with a nondeterministic choice: whether to return from the NP network (as state NP3 is final), or to continue to stay in the NP network, expecting PP post nominals. In the case of returning from NP, the top element (the current environment) is popped from the stack and the second element of the stack is reactivated as the current environment. The DO register is assigned with the result from the NP network, and the current state becomes S4. At this moment, two processes (one in state NP3 and the other in state S4) are alive nondeterministically, and both of them are looking for a PP. When "with" is parsed, both processes visit the PP network, pushing the current environment onto the stack. Since both processes are to visit the same network PP, the current environment is pushed only once onto both NP3 and S4, and the rest of the PP is parsed only once, as shown in Figure 4-3. Eventually, both processes get to the final state S4, and two sets of registers are produced as its final results (Figure 4-4).

5. Graph-structured Stack and Categorial Grammar

Parsers based on categorial grammar can be implemented as shift-reduce parsers with a stack. Unlike phrase-structure rule based parsers, information about how to reduce constituents is encoded in the complex category symbol of each constituent with functor and argument features. Basically, the parser parses a sentence strictly from left to right, shifting words one-by-one onto the stack. In doing so, two elements from the top of the stack are inspected to see whether they can be reduced. The two elements can be reduced in the following cases:

• X/Y Y --> X (Forward Functional Application)
• Y X\Y --> X (Backward Functional Application)
• X/Y Y/Z --> X/Z (Forward Functional Composition)
• Y\Z X\Y --> X\Z (Backward Functional Composition)

When it reduces a stack, it does so non-destructively; that is, the original stack is kept alive even after the reduce action. A small sketch of these reduction cases is given below.
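As an illustration only (the category syntax and function names are mine), the four reduction cases can be phrased over categories represented as nested tuples:

    # A category is a str ("NP") or a triple (result, slash, argument),
    # e.g. (("S", "\\", "NP"), "/", "NP") for (S\NP)/NP.
    def combine(left, right):
        """Return the categories derivable from two adjacent stack elements."""
        out = []
        if isinstance(left, tuple):
            x, s, y = left
            if s == "/" and right == y:                   # X/Y  Y    --> X
                out.append(x)
            if s == "/" and isinstance(right, tuple):
                x2, s2, z = right
                if s2 == "/" and x2 == y:                 # X/Y  Y/Z  --> X/Z
                    out.append((x, "/", z))
        if isinstance(right, tuple):
            x, s, y = right
            if s == "\\" and left == y:                   # Y    X\Y  --> X
                out.append(x)
            if s == "\\" and isinstance(left, tuple):
                y2, s2, z = left
                if s2 == "\\" and y2 == y:                # Y\Z  X\Y  --> X\Z
                    out.append((x, "\\", z))
        return out

    saw = (("S", "\\", "NP"), "/", "NP")   # lexical entries from Figure 5-1
    a = ("NP", "/", "N")
    print(combine(saw, a))                 # [(('S','\\','NP'), '/', 'N')]

The printed result is the forward composition (S\NP)/NP NP/N --> (S\NP)/N used in the walk-through below.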
An example categorial grammar is presented in Figure 5-1.

I          NP
saw        (S\NP)/NP
a          NP/N
man        N
with       (NP\NP)/NP, ((S\NP)\(S\NP))/NP
telescope  N

Figure 5-1: An Example Categorial Grammar

The category (S\NP) represents a verb phrase, as it becomes S if there is an NP on its left. The categories (NP\NP) and (S\NP)\(S\NP) represent a prepositional phrase, as it becomes a noun phrase or a verb phrase if there is a noun phrase or a verb phrase on its left, respectively. Thus, a preposition such as "with" has two complex categories, as in the example above.

[Figure 4-1: A Simple ATN for "I saw a man with a telescope" - states S1-S4, NP1-NP4 and PP1-PP3 with arcs S1-NP-S2, S2-v-S3, S3-NP-S4, S4-PP-S4, NP1-det-NP2, NP2-n-NP3, NP3-PP-NP3, NP1-pron-NP4, PP1-p-PP2 and PP2-NP-PP3; the arcs carry register actions such as Subj <-- *, MV <-- *, DO <-- *, Det <-- *, Head <-- *, Qual <-- *, Prep <-- *, PrepObj <-- * and a condition (subj-verb-agreement); square brackets mark final states, parentheses non-final states.]

[Figure 4-2: Graph-structured Stack in ATN Parsing "I saw a man" - the saved S3 environment (Subj: I; MV: [root: see, tense: past]) with the NP3 environment (Det: a; Head: man; Num: singular) on top.]

[Figure 4-3: Graph-structured Stack in ATN Parsing "I saw a man with a" - the NP3 and S4 environments share a single pushed PP2 environment (Prep: with) and a single NP2 environment (Det: a).]

[Figure 4-4: Graph-structured Stack in ATN Parsing "I saw a man with a telescope" - two final S4 register sets, one with [Prep: with, PrepObj: [Det: a, Head: telescope]] in the Mods register of the sentence and one with it in the Qual register of the object NP.]

[Figure 5-1: Graph-structured Stack in CG parsing "I saw a" - the stack after composing (S\NP)/NP and NP/N into (S\NP)/N.]

[Figure 5-2: Graph-structured Stack in CG parsing "I saw a man" - S\NP is obtained in two ways and packed into one node, with S on top.]

[Figure 5-3: Graph-structured Stack in CG parsing "I saw a man with" - both categories of "with", (NP\NP)/NP and ((S\NP)/(S\NP))/NP, are pushed onto the tops of the stack.]
These parsers may be seen as shift-reduce parsers, as follows. Basically, the parser parses a sentence strictly from left to dght, shifting a word onto the stack one-by-one. In doing so, two elements from the top of the stack are always inspected to see whether there are any ways to combine them with one of the pdnciplas, such as augment attachment, specifier attachment and pre- and post-head adjunct attachment (remember, there are no outside phrase structure rules in principle-based parsing). Sometimes these principles conflict and there is more than one way to combine constituents. In that case, the graph-structure stack is viable to handle nondeterminism without repetition of work. Although we do not present an example, the implementation of pdnciple-based parsing with a graph-structured stack is very similar to the Implementation of Categodal Grammars with a graph-structured stack. Only the difference is that, in categodal grammars, Information about when and how to reduce two constItuents on the top of the graph-structured stack is explicitely encoded in category symbols, while in principle-based parsing, it is defined implicitely as a set of pdnciplas. 7. Graph-structured Stack and Chart Some parsing methods, such as chart parsing, do not explicitly use a stack. It Is Interesting to investigate the relationship between such parsing methods and the graph-structured stack, and this section discusses the correlation of the chart and the graph-structured stack. We show that chad parsing may be simulated as an exhaustive version of shift- reduce parsing with the graph-structured stack, as described Informally below. 1. Push the next word onto the graph- structured stack. 2. Non-destructively reduce the graph- structured stack in all possible ways with all applicable grammar rules; repeat until no further reduce action is applicable. 3. Go to 1. A snapshot of the graph-structured stack in the exhaustive shift-reduce parsers after parsing "1 saw a man on the bed in the apartment with" is presented in figure 7-1 (slightly simplified, ignodng determiners, for example). A snapshot of a chart parser alter parsing the same fragment of the sentence is also shown in figure 7-2 (again, slightly simplified). It is clear that the graph-structured stack in figure 7-1 and the chart in figure 7-2 are essentially the same; in fact they are topologically Identical if we ignore the word boundary symbols, "*', in figure 7-2. It is also easy to observe that the exhaustive version of shitt-reduce parsing is essentially a version of chart parsing which parses a sentence from left to dght. 255 / . . . . . s ........ \ / \ / ............. s . . . . . . . . . . . \ \ / \ \ I I ........ ~ ............... \ \ / I \ \ bott~ ..... ~ ..... v ...... ~ ...... p ...... ~ ...... p ...... ~ ...... p \ \ I\ I \ ..... s ...... \, I \ ......... ~ I \ / \ ......... ~ ......................... I Figure 7.1: A Graph-structured Stack in an Exhaustive Shift-Reduce Parser "1 saw a man on the bed in the apartment with" /IIIIIIIIIlllIl~' .IIIIIIIIIIIIIIl II~ I \ I ............. s .................... \ \ I \ \ I I ........ m, ........... \ \ I I \ \ ----~---*---.---'---IqP---*---p---'---NP---*---p---*---We---*---p---* \ \ I \ I \ ..... s .......... \ .... I \ ,we .......... I \ I \ m~ ........... I "Z" "laW" "a I " "On" "thl ~d" "4n" "the apt" "w4th" Figure 7.2: Chart in Chart Parsing "1 saw a man on the bed in the apartment with" 256 8. 
8. Summary

The graph-structured stack was introduced in the Generalized LR parsing algorithm [7, 8] to handle nondeterminism in LR parsing. This paper extended the general idea to several other parsing methods: ATN, principle-based parsing and categorial grammar. We suggest considering the graph-structured stack for any problems which employ a stack nondeterministically. It would be interesting to see whether such problems are found outside the area of natural language parsing.

9. Bibliography

[1] Abney, S. and J. Cole. A Government-Binding Parser. In Proceedings of the North Eastern Linguistic Society, XVI, 1985.
[2] Ades, A. E. and Steedman, M. J. On the Order of Words. Linguistics and Philosophy 4(4):517-558, 1982.
[3] Aho, A. V. and Ullman, J. D. Principles of Compiler Design. Addison Wesley, 1977.
[4] Barton, G. E. Jr. Toward a Principle-Based Parser. A.I. Memo 788, MIT AI Lab, 1984.
[5] Kay, M. The MIND System. In Natural Language Processing, Algorithmics Press, New York, 1973, pages 155-188.
[6] Pareschi, R. and Steedman, M. A Lazy Way to Chart-Parse with Categorial Grammars. 25th Annual Meeting of the Association for Computational Linguistics: 81-88, 1987.
[7] Tomita, M. Efficient Parsing for Natural Language. Kluwer Academic Publishers, Boston, MA, 1985.
[8] Tomita, M. An Efficient Augmented-Context-Free Parsing Algorithm. Computational Linguistics 13(1-2):31-46, January-June, 1987.
[9] Wehrli, E. A Government-Binding Parser for French. Working Paper 48, Institut pour les Etudes Semantiques et Cognitives, Universite de Geneve, 1984.
[10] Woods, W. A. Transition Network Grammars for Natural Language Analysis. CACM 13:591-606, 1970.
AN EARLEY-TYPE PARSING ALGORITHM FOR TREE ADJOINING GRAMMARS*

Yves Schabes and Aravind K. Joshi
Department of Computer and Information Science
University of Pennsylvania
Philadelphia PA 19104-6389 USA
schabes@linc.cis.upenn.edu  joshi@cis.upenn.edu

ABSTRACT

We will describe an Earley-type parser for Tree Adjoining Grammars (TAGs). Although a CKY-type parser for TAGs has been developed earlier (Vijay-Shanker and Joshi, 1985), this is the first practical parser for TAGs because, as is well known for CFGs, the average behavior of Earley-type parsers is superior to that of CKY-type parsers. The core of the algorithm is described. Then we discuss modifications of the parsing algorithm that can parse extensions of TAGs such as constraints on adjunction, substitution, and feature structures for TAGs. We show how with the use of substitution in TAGs the system is able to parse directly CFGs and TAGs. The system parses unification formalisms that have a CFG skeleton and also those with a TAG skeleton. Thus it also allows us to embed the essential aspects of PATR-II.

* This work is partially supported by ARO grant DAA29-84-9-007, DARPA grant N0014-85-K0018, NSF grants MCS-82-191169 and DCR-84-10413. The authors would like to express their gratitude to Vijay-Shanker for his helpful comments relating to the core of the algorithm, Richard Billington and Andrew Chalnick for their graphical TAG editor which we integrated in our system and for their programming advice. Thanks are also due to Anne Abeillé and Ellen Hays.

1 Introduction

Although formal properties of Tree Adjoining Grammars (TAGs) have been investigated (Vijay-Shanker, 1987) - for example, there is an O(n⁶)-time CKY-like algorithm for TAGs (Vijay-Shanker and Joshi, 1985) - so far there has been no attempt to develop an Earley-type parser for TAGs. This paper presents an Earley parser for TAGs and discusses modifications to the parsing algorithm that make it possible to handle extensions of TAGs such as constraints on adjunction, substitution, and feature structure representation for TAGs. TAGs were first introduced by Joshi, Levy and Takahashi (1975) and Joshi (1983). We describe very briefly the Tree Adjoining Grammar formalism. For more details we refer the reader to Joshi (1983), Kroch and Joshi (1985) or Vijay-Shanker (1987).

Definition 1 (Tree Adjoining Grammar): A TAG is a 5-tuple G = (V_N, V_T, S, I, A) where V_N is a finite set of non-terminal symbols, V_T is a finite set of terminals, S is a distinguished non-terminal, I is a finite set of trees called initial trees and A is a finite set of trees called auxiliary trees. The trees in I ∪ A are called elementary trees.

Initial trees (see left tree in Figure 1) are characterized as follows: internal nodes are labeled by non-terminals; leaf nodes are labeled by either terminal symbols or the empty string.

[Figure 1: Schematic initial and auxiliary trees - an initial tree rooted in S with terminals on its frontier, and an auxiliary tree rooted in X whose frontier contains a foot node labeled X among terminals.]

Auxiliary trees (see right tree in Figure 1) are characterized as follows: internal nodes are labeled by non-terminals; leaf nodes are labeled by a terminal or by the empty string except for exactly one node (called the foot node) labeled by a non-terminal; furthermore the label of the foot node is the same as the label of the root node.

We now define a composition operation called adjoining or adjunction which builds a new tree from an auxiliary tree β and a tree α (α is any tree,
The resulting tree is called a derived tree. Let c~ be a tree containing a node n labeled by X and let fl be an auxiliary tree whose root node is also labeled by X. Then the adjunction of fl to a at node n will be the tree 7 shown in Figure 2. The resulting tree, 7, is built as follows: * The sub-tree of a dominated by n, call it t, is excised, leaving a copy of n behind. • The auxiliary tree fl is attached at n and its root node is identified with n. • The sub-tree t is attached to the foot node of # and the root node n of t is identified with the foot node of ft. $ %, (ct} (1~) $ Figure 2: The mechanism of adjunction Then define the tree set of a TAG G, T(G) to be the set of all derived trees starting from initial trees in I. Furthermore, the string language generated by a TAG, L(G), is defined to be the set of all terminal strings of the trees in T(G). TAGs factor recursion and dependencies by ex- tending the domain of locality. They offer novel ways to encode the syntax of natural language grammars as discussed in Kroch and Joshi (1985) and Abeill~ (1988). In 1985, Vijay-Shanker and Joshi introduced a CKY-like algorithm for TAGs. They therefore es- tablished O(n 6) time as an upper bound for pars- ing TAGs. The algorithm was implemented, but in our opinion the result was more theoretical than practical for several reasons. First the algorithm assumes that elementary trees are binary branch- ing and that there are no empty categories on the frontiers of the elementary trees. Second, since it works on nodes that have been isolated from the tree they belong to, it isolates them from their domain of locality. However all important linguis- tic and computational properties of TAGs follow from this extended domain of locality. And most importantly, although it runs in O(n 6) worst time, it also runs in O(n s) best time. As a consequence, the CKY algorithm is in practice very slow. Since the average time complexity of Earley's parser depends on the grammar and in practice runs much better than its worst time complex- ity, we decided to try to adapt Earley's parser for CFGs to TAGs. Earley's algorithm for CFGs (Earley, 1970, Aho and Ullman, 1973) is a bottom- up parser which uses top-down information. It manipulates states of the form A -* a.fl[i] while using three processors: the predictor, the comple- tot and the scanner. The algorithm for CFGs runs in O(IGl2n s) time and in O(IGI n2) space in all cases, and parses unambiguous grammars in O(n 2) time (n being the length of the input, IGI the size of the grammar). Given a context-free grammar in any form and an input string al "'an, Earley's parser for CFGs maintains the following invariant: The state A --* a./3[i] is in states set Skiff S ::b 6A'r, 6 :bal " "ai and a ~ ai+l ""ak The correctness of the algorithm is a corollary of this invariant. Finding a Earley-type parser for TAGs was a difficult task because it was not clear how to parse TAGs bottom up using top-down informa- tion while scanning the input string from left to right. In order to construct an Earley-type parser for TAGs, we will extend the notions of dotted rules and states to trees. Anticipating the proof of correctness and soundness of our algorithm, we will state an invariant similar to Earley's original invariant. Then we present the algorithm and its main extensions. 2 Dotted symbols, dotted trees, tree traversal The full algorithm is explained in the next section. This section introduces preliminary concepts that will be used by the algorithm. 
We first show how dotted rules can be extended to trees. Then we introduce a tree traversal that the algorithm will mimic in order to scan the input from left to right. We define a dotted symbol as a symbol asso- ciated with a dot above or below and either to the left or to the right of it. The four positions of the dot are annotated by In, lb, ra, rb (resp. left above, left below, right above, right below): laura lb ~rb • Then we define a dotted tree as a tree with exactly one dotted symbol. Given a dotted tree with the dot above and to the left of the root, we define a tree traversal of a dotted tree as follows (see Figure 3): 259 START "'~ f END i'A,; o E F G H I 2.1 2.2 2.3 &1 3.2 Figure 3: Example of a tree traversal • if the dot is at position la of an internal node, we move the dot down to position lb, • if the dot is at position lb of an internal node, we move to position la of its leftmost child, • if the dot is at position la of a leaf, we move the dot to the right to position ra of the leaf, • if the dot is at position rb of a node, we move the dot up to position ra of the same node, • if the dot is at position ra of a node, there are two cases: - if the node has a right sibling, then move the dot to the right sibling at position la. - if the node does not have a right sibling, then move the dot to its parent at position rb. This traversal will enable us to scan the frontier of an elementary tree from left to right while try- ing to recognize possible adjunctions between the above and below positions of the dot. 3 The algorithm We define an appropriate data structure for the algorithm. We explain how to interpret the struc- tures that the parser produces. Then we describe the algorithm itself. 3.1 Data structures The algorithm uses two basic data structures: state and states set. A states set S is defined as a set of states. The states sets will be indexed by an integer: Si with i E N. The presence of any state in states set i will mean that the input string al...al has been recognized. Any tree ~ will be considered as a function from tree addresses to symbols of the grammar (termi- nal and non-terminal symbols): if z is a valid ad- dress in a, then a(z) is the symbol at address z in the tree a. Definition 2 A state s is defined as a 10-tuple, [a, dot, side,pos, l, ft, fr, star, t~, b~] where: • a: is the name of the dotted tree. • dot: is the address of the dot in the tree a. • side: is the side of the symbol the dot is on; side E {left, right}. • pos: is the position of the dot; pos E {above, below}. • star. is an address in a. The corresponding node in a is called the starred node. • ! (left), ft (foot left), fr (foot right), t~ (top left of starred node), b~ (bottom left of starred node) are indices of positions in the input string ranging over [O,n], n being the length of the input string. They will be explained further below. 3.2 Invariant of the algorithm The states s in a states set Si have a common prop- erty. The following section describes this invariant in order to give an intuitive interpretation of what the algorithm does. This invariant is similar to Earley's invariant. Before explaining the main characterization of the algorithm, we need to define the set of nodes on which an adjunction is allowed for a given state. 
Definition 3 The set of nodes 7~(s) on which an adjunction is possible for a given state s - [a, dot, side, pos, l, fhfi,star, t~,b~], is de- fined as the union of the following sets of nodes in a: • the set of nodes that have been traversed on the left and right sides, i.e., the four positions of the dot have been traversed; • the set of nodes on the path from the root node to the starred node, root node and starred node included. Note that if there is no star this set is empty. Definition 4 (Left part of a dotted tree) The left part of a dotted tree is the union of the set of nodes in the tree that have been traversed on the left and right sides and the set of nodes that have been traversed on the left side only. We will first give an intuitive interpretation of the ten components of a state, and then give the necessary and sufficient conditions for membership of a state in a states set. We interpret informally a state s = [~, dot, side, pos, l, f~, fi, star, t~, b~] in the fol- lowing way (see Figure 4): 260 "' 7 C ~ ^" Tit!, al ... all atl+l .... ah' Figure 4: Meaning of s E Si • l is an index in the input string indicating where the tree derived from a begins. • ft is an index in the input string corresponding to the point just before the foot node (if any) in the tree derived from a. • fi is an index in the input string corresponding to the point just after the foot node (if any) in the tree derived from a.The pair fi and fi will mean that the foot node subsumes the string al,+,...ay,. • star:, is the address in a of the deepest node that subsumes the dot on which an adjunction has been partially recognized. If there is no adjunction in the tree a along the path from the root to the dot- ted node, star is unbound. • t~ is an index in the input string corresponding to the point in the tree where the adjunction on the starred node was made. If star is unbound, then t~ is also unbound. • b~ is an index in the input string corresponding to the point in the tree just before the foot node of the tree adjoined at the starred node. The pair t~ and b~ will mean that the string as far as the foot node of the auxiliary tree adjoined at the starred node matches the substring alT+l...ab7 of the in- put string. If star is unbound, then b~ is also unbound. • s E Si means that the recognized part of the dot- ted tree a, which is the left part of it, is consistent with the input string from al to aa and from at to aI, and from ay. to ai, or from a I to al and from az to al when the foot node is not in the recognized part of the tree. We are now ready to characterize the member- ship of s in S~: Invariant 1 A state s = [a, dot, side,pos, l, fh fr, star, t~, b~] is in Si if and only if there is a derived tree from an initial tree such that (see Figure 4): 1. The tree a is part of the derivation. 2. The tree derived from a in the derivation tree, ~, has adjunctions only on nodes in 7~(s). 3. The part of the tree to the left of the dot in the tree derived spans the string al ... ai. 4. The tree derived from a, E, has a yield that starts just after ah ends at ay, before the foot node (if ay, is defined), and starts after the foot node just after ay, (if aI, is defined). 5. If there are adjunctions on the path from the dotted node to the root of a, then star is the ad- dress of the deepest adjunction on that path and the auxiliary tree adjoined at that node star has a yield that starts just after a,~ and stops at its foot node at ab t. 
3.3 The recognizer

The Earley-type recognizer for TAGs follows. Let G be a TAG. Let a1...an be the input string.

program recognizer
begin
    S_0 = {[α, 0, left, above, 0, -, -, -, -, -] | α is an initial tree}
    For i := 0 to n do
    begin
        Process the states of S_i, performing one of the following seven
        operations on each state s = [α, dot, side, pos, l, fl, fr, star, tl*, bl*]
        until no more states can be added:
            1. Scanner
            2. Move dot down
            3. Move dot up
            4. Left Predictor
            5. Left Completor
            6. Right Predictor
            7. Right Completor
        If S_(i+1) is empty and i < n, return rejection.
    end
    If there is in S_n a state s = [α, 0, right, above, 0, -, -, -, -, -]
    such that α is an initial tree, then return acceptance.
end.

The algorithm is a general recognizer for TAGs. Unlike the CKY algorithm, it requires no condition on the grammar: the trees can be binary or not, and the elementary (initial or auxiliary) trees can have the empty string as frontier. It is an off-line algorithm: it needs to know the length n of the input string. However we will see later that it can very easily be modified to an on-line algorithm by the use of an end-marker in the input string. We now describe one by one the seven processes. The current states set is presumed to be S_i and the state to be processed is s = [α, dot, side, pos, l, fl, fr, star, tl*, bl*]. Only one of the seven processes can be applied to a given state. The side, the position, and the address of the dot determine the unique process that can be applied to the given state.

Definition 5 (Adjunct(α, address)): Given a TAG G, define Adjunct(α, address) as the set of auxiliary trees that can be adjoined in the elementary tree α at the node n which has the given address. In a TAG without any constraints on adjunction, if n is a non-terminal node, this set consists of all auxiliary trees that are rooted by a node with the same label as the label of n.

3.3.1 Scanner

The scanner scans the input string. Suppose that the dot is to the left of and above a terminal symbol (see Figure 5). Then if the terminal symbol matches the next input token, the program should record that a new token has been recognized and try to recognize the rest of the tree. Therefore the scanner applies to s = [α, dot, left, above, l, fl, fr, star, tl*, bl*] such that α(dot) is a terminal symbol and α(dot) = a(i+1), or α(dot) is the empty symbol.

• Case 1: α(dot) = a(i+1). The scanner adds [α, dot, right, above, l, fl, fr, star, tl*, bl*] to S_(i+1).
• Case 2: α(dot) = ε. The scanner adds [α, dot, right, above, l, fl, fr, star, tl*, bl*] to S_i.

[Figure 5: Scanner - the dot moves from the left to the right of the scanned terminal; in Case 1 the new state goes to S_(i+1), in Case 2 (empty symbol) it stays in S_i.]

3.3.2 Move Dot Down

Move dot down (see Figure 6) moves the dot down, from position lb of the dotted node to position la of its leftmost child. It therefore applies to s = [α, dot, left, below, l, fl, fr, star, tl*, bl*] such that the node where the dot is has a leftmost child at address u. It adds [α, u, left, above, l, fl, fr, star, tl*, bl*] to S_i.

[Figure 6: Move dot down - the dot descends from below the parent to above its leftmost child; all indices are unchanged.]

3.3.3 Move Dot Up

Move dot up (see Figure 7) moves the dot up, from position ra of the dotted node to position la of its right sibling if it has one, otherwise to position rb of its parent. It therefore applies to s = [α, dot, right, above, l, fl, fr, star, tl*, bl*] such that the node on which the dot is has a parent node.

• Case 1: the node where the dot is has a right sibling at address r. It adds [α, r, left, above, l, fl, fr, star, tl*, bl*] to S_i.
• Case 2: the node where the dot is is the rightmost child of the parent node p. It adds [α, p, right, below, l, fl, fr, star, tl*, bl*] to S_i.

[Figure 7: Move dot up - Case 1 moves the dot to the right sibling, Case 2 to below-right of the parent; all indices are unchanged.]
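Using the State sketch from the previous section, the scanner and the two dot-moving processors come out as a few lines each. The tree-access functions (label_at, leftmost_child, right_sibling, parent) are hypothetical helpers over the elementary trees, passed in as parameters:

    import dataclasses

    def scanner(s, i, word, label_at):
        """Cases 1 and 2 of 3.3.1; returns (target_set_index, new_state)."""
        sym = label_at(s.tree, s.dot)
        if s.side == "left" and s.pos == "above":
            if sym == word:                        # Case 1: consume a_(i+1)
                return i + 1, dataclasses.replace(s, side="right")
            if sym == "":                          # Case 2: empty symbol
                return i, dataclasses.replace(s, side="right")
        return None

    def move_dot_down(s, leftmost_child):
        if s.side == "left" and s.pos == "below":
            return dataclasses.replace(s, dot=leftmost_child(s.tree, s.dot),
                                       pos="above")
        return None

    def move_dot_up(s, right_sibling, parent):
        if s.side == "right" and s.pos == "above":
            r = right_sibling(s.tree, s.dot)
            if r is not None:                      # Case 1: right sibling
                return dataclasses.replace(s, dot=r, side="left")
            return dataclasses.replace(s, dot=parent(s.tree, s.dot),
                                       pos="below")   # Case 2: parent
        return None

All indices (l, fl, fr, tl*, bl*) are carried over unchanged, exactly as in the three processor definitions above; only the dot coordinates change.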
• Case 1: the node where the dot is has a right sibling at address r. It adds [ct, r, left, above, l, fz, fr, star, t~ , b~] ~o S,. • Case 2: the node where the dot is is ~he rightmost child of the parent node p. It adds [~, p, right, below, l, f,, re, star, t~, bT] to S,. 262 [l~lJr, tl*,bl*] add~mS/ [l,fl,f~',tl *,bl*] Clme 92 X ii thv rlohlrn~ child [l.fl,fi',tl',bl'] [l.fl,fr, tl*.bl'] Figure 7: Move dot up 3.3.4 Left Predictor Suppose that there is a dot to the left of and above a non-terminal symbol A (see Figure 8). Then the algorithm takes two paths in parallel: it makes a prediction of adjunction on the node labeled by A and tries to recognize the adjunction (stepl) and it also considers the case where no adjunction has been done (step2). These operations are per- formed by the Left Predictor. It applies to s = [~, dot, left, above, 1, h, fr, aar, t~, b~] such that ~(dot) is a non-terminal. • Step I. It adds the states (LS,0,1eft, above, i . . . . . -] [B E Adjuna(~, dot) } to Si. • Step 2. -- Case 1: the dot is not on the foot node. It adds the state [~, dot, left, below, 1, ~ , fi , star, t~ , b~ ] to S,. -- Case 2: the dot is on the foot node. Necessarily, since the foot node has not been already traversed, ~ and fr are unspecified. It adds the state [~, dot, left, below, l, i, -, star, t~ , b~ ] to S,. 3.3.5 Left Completer Suppose that the auxiliary that we left-predicted has been recognized as far as its foot (see Fig- ure 9). Then the algorithm should try to recognize [I. n. fr. tl.. bl.] ~, (i.-.-.-.-] J [1, fl, fr, tl" ,bl*] [1, ft. fr, tl", bl*] £---'A [l.-.-.tl-~l.] [ki.-.tt.~l'] Figure 8: Left Predictor [r ,fl',fr',tl*',bl*'] [l.i.-.tl*,bl*] [r,fl',fr',l.i] Figure 9: Left completer what was pushed under the foot node. (A star in the original tree will signal that an adjunction has been made and half recognized.) This operation is performed by the Left Completer. It applies to s = [a, dot, left, below, l, i, -, star, t~, b~] such that the dot is on the foot node. For all I I I t I ,n St s = L 8, dot , left, above, l, f;, f~, star, t t , bt ] in Sz such that a E Adjunct(B, dot') Case I: dot' is on the foot node of B. Then necessary, f[ and f~ are unbound. It adds the state LS, dot',left, below, l',i,-,dot',l,~ to S,. Case 2: dot ~ is not on the foot node of B. It adds the state ~, dot', left, below, l', f[, f:, dot', l, ~ to S,. 263 Case l [tl*,bl*,-,tl*',bl*'] ~*~1"1 /--.--. A .=..=~ [tI* ,bl" ,l,tl*',bl*'] Case 2 aldd to~Z. p.~.tl*.bl*] Figure I0: Right Predictor 3.3.6 Right Predictor Suppose that there is a dot to the right of and be- low a node A (see Figure I0). If there has been an adjunction made on A (case I), the program should try to recognize the right part of the aux- iliary tree adjoined at A. However if there was no adjunction on A (case 2), then the dot should be moved up. Note that the star will tell us if an ad- junction has been made or not. These operations are performed by the Right predictor. The right predictor applies to s = [a, dot, right, below, l, fz, fr, star, tT, bT] • Case 1: dot = star For all states ,t $; s = [/3, dot', left, below, t~, bT, -, star ~-, t t , b t ]. in Sb 7 such that ~ ¢ Adjunct(a, dot), it adds the state L O, dot', right, below,tT, * " *' *' bz ,,,star',t z ,b I ] to s,. • Case 2: dot ~ star It adds the state [a, dot, right, above, l, fl, fr, star, tT , bT ] to S,. 3.3.7 Right Completor Suppose that the dot is to the right ot and above the root of an auxiliary tree (see Figure 11). 
Then the adjunction has been totally recognized and the program should try to recognize the rest of the tree in which the auxiliary tree has been adjoined. This operation is performed by the Right Completor.

[Figure 11: Right Completor - the completed auxiliary tree α is popped, and recognition resumes to the right of the node of β at which α was adjoined.]

It applies to s = [α, 0, right, above, l, fl, fr, -, -, -]. For all states s' = [β, dot', left, above, l', fl', fr', star', tl*', bl*'] in S_l, and for all states [β, dot', right, below, l', gl, gr, dot', l, fl] in S_i, such that α ∈ Adjunct(β, dot'), it adds [β, dot', right, above, l', gl, gr, star', tl*', bl*'] to S_i, where g = f' if f' is bound in state s', and g can have any value if f' is unbound in state s'.

3.4 Handling constraints on adjunction

In a TAG, one can, for each node of an elementary tree, specify one of the following three constraints on adjunction (Joshi, 1987):

• Null adjunction (NA): disallow any adjunction on the given node.
• Obligatory adjunction (OA): an auxiliary tree must be adjoined on the given node.
• Selective adjunction (SA(T)): a set T of auxiliary trees that can be adjoined on the given node is specified.

The algorithm can be very easily modified to handle those constraints. First, the function Adjunct(α, address) must be modified as follows:

• Adjunct(α, address) = ∅, if there is NA on the node.
• Adjunct(α, address) as previously defined, if there is OA on the node.
• Adjunct(α, address) = T, if there is SA(T) on the node.

Second, step 2 of the Left Predictor must be performed only if there is no obligatory adjunction on the node at address dot in the tree α.

3.5 An example

We give one example that illustrates how the recognizer works. The grammar used for the example generates the language L = {aⁿbⁿecⁿdⁿ | n ≥ 0}. The input string given to the recognizer is: aabbeccdd. The grammar is shown in Figure 12. The states sets are shown in Figure 14. Next to each state we have printed in parentheses the name of the processor that was applied to the state. The input is recognized since [α, 0, right, above, 0, -, -, -, -, -] is in states set S_9.

[Figure 12: The grammar for L = {aⁿbⁿecⁿdⁿ | n ≥ 0} - an initial tree α and an auxiliary tree β.]

[Figure 13: Use of an end marker in TAG - the initial trees are augmented with an end marker $, and it is made sure that no adjunction is possible on the root of an initial tree.]

3.6 Remarks

Use of move dot up and move dot down. Move dot down and move dot up can be eliminated in the algorithm by merging the original dot and the position it is moved to. However for explanatory purposes we chose to use these two processors in this paper.

Off-line vs on-line. The algorithm given is an off-line recognizer. It can be very easily modified to work on line by adding an end marker to all initial trees in the grammar (see Figure 13).

Extracting a parse. The algorithm that we describe in section 3.3 is a recognizer. However, if we include pointers from a state to the other states which caused it to be placed in the states set, the recognizer can be modified to produce all parses of the input string.

3.7 Correctness

The correctness of the parser has been proven and is fully reported in Schabes and Joshi (1988). It consists of the proof of the invariant given in section 3.2. Our proof is similar in its concept to the proof of the correctness of Earley's parser given in Aho and Ullman 1973. The "only if" part of the invariant is proved by induction on the number of states that have been added so far to all states sets. The "if" part is proved by induction on a defined rank of a state.
The soundness (the algorithm recognizes only valid strings) and the completeness (if a string is valid, then the algorithm will recognize it) are corollaries of this invariant.

3.8 Implementation

The parser has been implemented on Symbolics Lisp machines in Flavors. More details of the actual implementation can be found in Schabes and Joshi (1988). The current implementation has an O(|G|²n⁹) worst case time complexity and O(|G|n⁶) worst case space complexity. We have not as yet been able to reduce the worst case time complexity to O(|G|²n⁶). We are currently attempting to reduce this bound. However, the main purpose of constructing an Earley-type parser is to improve the average complexity, which is crucial in practice.

4 Extensions

We describe how substitution is defined in a TAG. We discuss the consequences of introducing substitution in TAGs. Then we show how substitution can be parsed. We extend the parser to deal with feature structures for TAGs. Finally, the relationship with PATR-II is discussed.

4.1 Introducing substitution in TAGs

TAGs use adjunction as their basic composition operation. It is well known that Tree Adjoining Languages (TALs) are mildly context-sensitive. TALs properly contain context-free languages. It is also possible to encode a context-free grammar with auxiliary trees using adjunction only. However, although the languages correspond, the possible encoding does not reflect directly the original
context-free grammar since this encoding uses adjunction. Substitution is the basic operation used in CFG's: a CFG can be viewed as a tree rewriting system that uses substitution as its basic operation and consists of a set of one-level trees. Substitution is a less powerful operation than adjunction.

However, recent linguistic work in TAG grammar development (Abeillé, 1988) showed the need for substitution in TAGs as an additional operation for obtaining appropriate structural descriptions in certain cases such as verbs taking two sentential arguments (e.g. "John equates solving this problem with doing the impossible") or compound categories. It has also been shown to be useful for lexical insertion (Schabes, Abeillé and Joshi, 1988). It should be emphasized that the introduction of substitution in TAGs does not increase their generative capacity. Neither is it a step back from the original idea of TAGs.

Definition 6 (Substitution in TAG) We define substitution in TAGs to take place on specified nodes on the frontiers of elementary trees. When a node is marked to be substituted, no adjunction can take place on that node. Furthermore, substitution is always mandatory.
Only trees derived from initial trees rooted by a node of the same label can be substituted on a substitution node. The resulting tree is obtained by replacing the node by the tree derived from the initial tree. Substitution is illustrated in Figure 15. We conventionally mark substitution nodes by a down arrow (↓).

[Figure 15: Mechanism of substitution]

As a consequence, we can now encode directly a CFG in a TAG with substitution. The resulting TAG has only one-level initial trees and uses only substitution. An example is shown in Figure 16.

[Figure 16: Writing a CFG in TAG]

4.2 Parsing substitution

The parser can be extended very easily to handle substitution. We use Earley's original predictor and completor to handle substitution. The Left Predictor is restricted to apply to nodes to which adjunction can be applied. A flag subst? is added to the states. When set, it indicates that the (initial) tree has been predicted for substitution. We use the index l (as in Earley's original parser) to know where it has been predicted for substitution. When the initial tree that has been predicted for substitution has been totally recognized, we complete the state as Earley's original parser does. A state s is now an 11-tuple [α, dot, side, pos, l, fl, fr, star, tl*, bl*, subst?], where subst? is a boolean that indicates whether the tree has been predicted for substitution. The other components have not been changed. We add two more processors to the parser.

Substitution Predictor
Suppose that there is a dot to the left of and above a non-terminal symbol on the frontier that is marked for substitution (see Figure 17). Then the algorithm predicts for substitution all initial trees rooted by that symbol and tries to recognize the initial tree. This operation is performed by the Substitution Predictor. It applies to s = [α, dot, left, above, l, fl, fr, star, tl*, bl*, subst?] such that α(dot) is a non-terminal on the frontier of α which is marked for substitution. It adds the states {[β, 0, left, above, i, -, -, -, -, -, true] | β is an initial tree such that β(0) = α(dot)} to Si.

[Figure 17: Substitution Predictor]

Substitution Completor
Suppose that the initial tree that we predicted for substitution has been recognized (see Figure 18). Then the algorithm should try to recognize the rest of the tree in which we predicted a substitution. This operation is performed by the Substitution Completor. It applies to s = [α, 0, right, above, l, -, -, -, -, -, true]. For all states s' = [β, dot', left, above, l', fl', fr', star', tl*', bl*', subst?'] in Sl such that β(dot') is marked for substitution and β(dot') = α(0), it adds the following state to Si: [β, dot', right, above, l', fl', fr', star', tl*', bl*', subst?'].

[Figure 18: Substitution Completor]

Complexity
The introduction of the Substitution Predictor and the Substitution Completor does not increase the complexity of the overall TAG parser. If we encode a CFG with substitution in TAG, the parser behaves in O(|G|²n³) worst case time and O(|G|n²) worst case space, like Earley's original parser. This comes from the fact that when there are no auxiliary trees and when only substitution is used, the indices fl, fr, tl*, bl* of a state will never be set. The algorithm will use only the Substitution Predictor and the Substitution Completor. Thus, it behaves exactly like Earley's original parser on CFGs.
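The two substitution processors are an equally small addition in code. Continuing the hypothetical Python sketch given for the Left Predictor and Left Completer, and assuming the State tuple has been extended with a boolean field subst (the subst? flag, defaulting to False), with label and subst_node as hypothetical helpers returning a node's label and its substitution marking:

from dataclasses import replace

def substitution_predict(s, i, initial_trees, label, subst_node, chart):
    # The dot is left of and above a frontier non-terminal marked for
    # substitution: predict every initial tree whose root has that label.
    if not subst_node(s.tree, s.dot):
        return
    for beta in initial_trees:
        if label(beta, ROOT) == label(s.tree, s.dot):
            chart[i].add(State(beta, ROOT, 'left', 'above', i,
                               None, None, None, None, None, subst=True))

def substitution_complete(s, i, label, subst_node, chart):
    # A tree predicted for substitution has been fully recognized: resume
    # each predicting state, moving its dot to the right of the node.
    assert s.subst and s.dot == ROOT and (s.side, s.pos) == ('right', 'above')
    for s2 in chart[s.l]:
        if (s2.side, s2.pos) == ('left', 'above') \
                and subst_node(s2.tree, s2.dot) \
                and label(s2.tree, s2.dot) == label(s.tree, ROOT):
            chart[i].add(replace(s2, side='right'))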
4.3 Parsing feature structures for TAGs

The definition of feature structures for TAGs and their semantics was proposed by Vijay-Shanker (1987) and Vijay-Shanker and Joshi (1988). We first explain briefly how they work in TAGs and show how we have implemented them. We introduce in a TAG framework a language similar to PATR-II, which was investigated by Shieber (Shieber, 1984 and 1986). We then show how one can embed the essential aspects of PATR-II in this system.

Feature structures in TAGs
As defined by Vijay-Shanker (1987) and Vijay-Shanker and Joshi (1988), to each adjunction node in an elementary tree two feature structures are attached: a top and a bottom feature structure. The top feature corresponds to a top view in the tree from the node. The bottom feature corresponds to the bottom view. When the derivation is completed, the top and bottom features of all nodes are unified. If the top and bottom features of a node do not unify, then a tree must be adjoined at that node. This definition can be trivially extended to substitution nodes. To each substitution node we attach two identical feature structures (top and bottom). The updating of features in case of adjunction is shown in Figure 19.

[Figure 19: Updating of features]

Unification equations
As in PATR-II, we express with unification equations dependencies between DAGs in an elementary tree. The system therefore consists of a TAG and a set of unification equations on the DAGs associated with nodes in elementary trees. An example of the use of unification equations in TAGs is given in Figure 20.

[Figure 20: Example of unification equations. α: "PRO to go to the movies", with S.top::<tensed> = +, S.bottom::<tensed> = V.bottom::<tensed>, V.bottom::<tensed> = -. β1: "John wants S1", with S.top::<tensed> = +, S.bottom::<tensed> = V.bottom::<tensed>, S1.bottom::<tensed> = V.bottom::<tensed-S1>, V.bottom::<tensed-S1> = -, V.bottom::<tensed> = +. β2: "Bob thinks S1", with the same equations except V.bottom::<tensed-S1> = +]

Note that the top and bottom features of node S in α cannot be unified. This forces an adjunction to be performed on S. Thus, the following sentence is not accepted: *to go to the movies. The auxiliary tree β1 can be adjoined at S in α: John wants to go to the movies. But since the bottom feature of S has tensed value - in α and since the bottom feature of S1 has tensed value + in β2, β2 cannot be adjoined at node S in α: *Bob thinks to go to the movies. But β2 can be adjoined in β1, which itself can be adjoined in α: Bob thinks John wants to go to the movies. We refer the reader to Abeillé (1988) and to Schabes, Abeillé and Joshi (1988) for further explanation of the use of unification equations and substitution in TAGs.

Parsing and the relationship with PATR-II
By adding to each state the set of DAGs corresponding to the top and bottom features of each node, and by making sure that the unification equations are satisfied, we have extended the parser to parse TAGs with feature structures. Since we introduced substitution and since we are able to encode a CFG directly, the system has the main functionalities of PATR-II. The system parses unification formalisms that have either a CFG skeleton or a TAG skeleton.

5 Conclusion

We described an Earley-type parser for TAGs.
We extended it to deal with substitution and feature structures for TAGs. By doing this, we have built a system that parses unification formalisms that have a CFG skeleton and also those that have a TAG skeleton. The system is being used for Tree Adjoining Grammar development (Abeillé, 1988).

This work has led us to a new general parsing strategy (Schabes, Abeillé and Joshi, 1988) which allows us to construct a two-stage parser. In the first stage a subset of the elementary trees is extracted and in the second stage the sentence is parsed with respect to this subset. This strategy significantly improves performance, especially as the grammar size increases.

References

Abeillé, Anne, 1988. A Computational Grammar for French in TAG. In Proceedings of the 12th International Conference on Computational Linguistics.

Aho, A. V. and Ullman, J. D., 1973. Theory of Parsing, Translation and Compiling. Vol I: Parsing. Prentice-Hall, Englewood Cliffs, NJ.

Earley, J., 1970. An Efficient Context-Free Parsing Algorithm. Commun. ACM 13(2):94-102.

Joshi, Aravind K., 1985. How Much Context-Sensitivity is Necessary for Characterizing Structural Descriptions -- Tree Adjoining Grammars. In Dowty, D.; Karttunen, L.; and Zwicky, A. (editors), Natural Language Processing -- Theoretical, Computational and Psychological Perspectives. Cambridge University Press, New York. Originally presented in 1983.

Joshi, Aravind K., 1987. An Introduction to Tree Adjoining Grammars. In Manaster-Ramer, A. (editor), Mathematics of Language. John Benjamins, Amsterdam.

Joshi, A. K.; Levy, L. S.; and Takahashi, M., 1975. Tree Adjunct Grammars. J. Comput. Syst. Sci. 10(1).

Kroch, A. and Joshi, A. K., 1985. Linguistic Relevance of Tree Adjoining Grammars. Technical Report MS-CIS-85-18, Department of Computer and Information Science, University of Pennsylvania.

Schabes, Yves and Joshi, Aravind K., 1988. An Earley-type Parser for Tree Adjoining Grammars. Technical Report, Department of Computer and Information Science, University of Pennsylvania.

Schabes, Yves; Abeillé, Anne; and Joshi, Aravind K., 1988. New Parsing Strategies for Tree Adjoining Grammars. In Proceedings of the 12th International Conference on Computational Linguistics.

Shieber, Stuart M., 1984. The Design of a Computer Language for Linguistic Information. In 22nd Meeting of the Association for Computational Linguistics, pages 362-366.

Shieber, Stuart M., 1986. An Introduction to Unification-Based Approaches to Grammar. Center for the Study of Language and Information, Stanford, CA.

Vijay-Shanker, K., 1987. A Study of Tree Adjoining Grammars. PhD thesis, Department of Computer and Information Science, University of Pennsylvania.

Vijay-Shanker, K. and Joshi, A. K., 1985. Some Computational Properties of Tree Adjoining Grammars. In 23rd Meeting of the Association for Computational Linguistics, pages 82-93.

Vijay-Shanker, K. and Joshi, A. K., 1988. Feature Structure Based Tree Adjoining Grammars. In Proceedings of the 12th International Conference on Computational Linguistics.
A DEFINITE CLAUSE VERSION OF CATEGORIAL GRAMMAR

Remo Pareschi,*
Department of Computer and Information Science, University of Pennsylvania, 200 S. 33rd St., Philadelphia, PA 19104,† and Department of Artificial Intelligence and Centre for Cognitive Science, University of Edinburgh, 2 Buccleuch Place, Edinburgh EH8 9LW, Scotland
remo@linc.cis.upenn.edu

* I am indebted to Dale Miller for help and advice. I am also grateful to Aravind Joshi, Mark Steedman, David Weir, Bob Frank, Mitch Marcus and Yves Schabes for comments and discussions. Thanks are due to Elsa Gunter and Amy Felty for advice on typesetting. Parts of this research were supported by: a Sloan foundation grant to the Cognitive Science Program, Univ. of Pennsylvania; and NSF grants MCS-8219196-CER, IRI-10413 A02, ARO grants DAA29-84-K-0061, DAA29-84-9-0027 and DARPA grant N00014-85-K0018 to CIS, Univ. of Pennsylvania.

† Address for correspondence

ABSTRACT

We introduce a first-order version of Categorial Grammar, based on the idea of encoding syntactic types as definite clauses. Thus, we drop all explicit requirements of adjacency between combinable constituents, and we capture word-order constraints simply by allowing subformulae of complex types to share variables ranging over string positions. We are in this way able to account for constructions involving discontinuous constituents. Such constructions are difficult to handle in the more traditional version of Categorial Grammar, which is based on propositional types and on the requirement of strict string adjacency between combinable constituents. We show then how, for this formalism, parsing can be efficiently implemented as theorem proving. Our approach to encoding types as definite clauses presupposes a modification of standard Horn logic syntax to allow internal implications in definite clauses. This modification is needed to account for the types of higher-order functions and, as a consequence, standard Prolog-like Horn logic theorem proving is not powerful enough. We tackle this problem by adopting an intuitionistic treatment of implication, which has already been proposed elsewhere as an extension of Prolog for implementing hypothetical reasoning and modular logic programming.

1 Introduction

Classical Categorial Grammar (CG) [1] is an approach to natural language syntax where all linguistic information is encoded in the lexicon, via the assignment of syntactic types to lexical items. Such syntactic types can be viewed as expressions of an implicational calculus of propositions, where atomic propositions correspond to atomic types, and implicational propositions account for complex types. A string is grammatical if and only if its syntactic type can be logically derived from the types of its words, assuming certain inference rules. In classical CG, a common way of encoding word-order constraints is by having two symmetric forms of "directional" implication, usually indicated with the forward slash / and the backward slash \, constraining the antecedent of a complex type to be, respectively, right- or left-adjacent. A word, or a string of words, associated with a right- (left-) oriented type can then be thought of as a right- (left-) oriented function looking for an argument of the type specified in the antecedent. A convention more or less generally followed by linguists working in CG is to have the antecedent and the consequent of an implication respectively on
the right and on the left of the connective. Thus, the type-assignment (1) says that the ditransitive verb put is a function taking a right-adjacent argument of type NP, to return a function taking a right-adjacent argument of type PP, to return a function taking a left-adjacent argument of type NP, to finally return an expression of the atomic type S.

(1) put: ((S\NP)/PP)/NP

The Definite Clause Grammar (DCG) framework [14] (see also [13]), where phrase-structure grammars can be encoded as sets of definite clauses (which are themselves a subset of Horn clauses), and the formalization of some aspects of it in [15], suggests a more expressive alternative to encode word-order constraints in CG. Such an alternative eliminates all notions of directionality from the logical connectives, and any explicit requirement of adjacency between functions and arguments, and replaces propositions with first-order formulae. Thus, atomic types are viewed as atomic formulae obtained from two-place predicates over string positions represented as integers, the first and the second argument corresponding, respectively, to the left and right end of a given string. Therefore, the set of all sentences of length j generated from a certain lexicon corresponds to the type S(0, j). Constraints over the order of constituents are enforced by sharing integer indices across subformulae inside complex (functional) types.

This first-order version of CG can be viewed as a logical reconstruction of some of the ideas behind the recent trend of Categorial Unification Grammars [5, 18, 20].¹ A strongly analogous development characterizes the systems of type-assignment for the formal languages of Combinatory Logic and Lambda Calculus, leading from propositional type systems to the "formulae-as-types" slogan which is behind the current research in type theory [2]. In this paper, we show how syntactic types can be encoded using an extended version of standard Horn logic syntax.

¹ Indeed, Uszkoreit [18] mentions the possibility of encoding order constraints among constituents via variables ranging over string positions in the DCG style.

2 Definite Clauses with Internal Implications

Let ∧ and → be logical connectives for conjunction and implication, and let ∀ and ∃ be the universal and existential quantifiers. Let A be a syntactic variable ranging over the set of atoms, i.e. the set of atomic first-order formulae, and let D and G be syntactic variables ranging, respectively, over the set of definite clauses and the set of goal clauses. We introduce the notions of definite clause and of goal clause via the two following mutually recursive definitions for the corresponding syntactic variables D and G:

• D := A | G → A | ∀x D | D1 ∧ D2

• G := A | G1 ∧ G2 | ∃x G | D → G

We call ground a clause not containing variables. We refer to the part of a non-atomic definite clause coming on the left of the implication connective as the body of the clause, and to the one on the right as the head. With respect to standard Horn logic syntax, the main novelty in the definitions above is that we permit implications in goals and in the bodies of definite clauses. Extended Horn logic syntax of this kind has been proposed to implement hypothetical reasoning [3] and modules [7] in logic programming. We shall first make clear the use of this extension for the purpose of linguistic description, and we shall then illustrate its operational meaning.
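For concreteness, this clause language can be written down as a small datatype. The following Python sketch is our own encoding, not the paper's (quantifiers and variables are omitted, so only ground formulae are represented); it is reused by the proof-search sketch accompanying the proof rules in Section 4.

from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Atom:
    # e.g. Atom('NP', (1, 2)) or Atom('CONN', ('Mary', 0, 1))
    pred: str
    args: Tuple

@dataclass(frozen=True)
class And:
    # G1 ∧ G2 (also usable for D1 ∧ D2)
    left: 'Formula'
    right: 'Formula'

@dataclass(frozen=True)
class Implies:
    # in a D: body → head; in a G: D → G (an internal implication)
    antecedent: 'Formula'
    consequent: 'Formula'

Formula = Union[Atom, And, Implies]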
3 First-order Categorial Grammar

3.1 Definite Clauses as Types

We take CONN (for "connects") to be a three-place predicate defined over lexical items and pairs of integers, such that CONN(item, i, j) holds if and only if i = j - 1, with the intuitive meaning that item lies between the two consecutive string positions i and j. Then, a most direct way to translate in first-order logic the type-assignment (1) is by the type-assignment (2), where, in the formula corresponding to the assigned type, the non-directional implication connective → replaces the slashes.

(2) put: ∀x∀y∀z∀w[CONN(put, y-1, y) → (NP(y, z) → (PP(z, w) → (NP(x, y-1) → S(x, w))))]

A definite clause equivalent of the formula in (2) is given by the type-assignment (3).²

(3) put: ∀x∀y∀z∀w[CONN(put, y-1, y) ∧ NP(y, z) ∧ PP(z, w) ∧ NP(x, y-1) → S(x, w)]

Observe that the predicate CONN will need also to be part of types assigned to "non-functional" lexical items. For example, we can have for the noun-phrase Mary the type-assignment (4).

(4) Mary: ∀y[CONN(Mary, y-1, y) → NP(y-1, y)]

² See [2] for a pleasant formal characterization of first-order definite clauses as type declarations.

3.2 Higher-order Types and Internal Implications

Propositional CG makes crucial use of functions of higher-order type. For example, the type-assignment (5) makes the relative pronoun which into a function taking a right-oriented function from noun-phrases to sentences and returning a relative clause.³ This kind of type-assignment has been used by several linguists to provide attractive accounts of certain cases of extraction [16, 17, 10].

(5) which: REL/(S/NP)

In our definite clause version of CG, a similar assignment, exemplified by (6), is possible, since implications are allowed in the body of clauses. Notice that in (6) the noun-phrase needed to fill the extraction site is "virtual", having null length.

(6) which: ∀v∀y[CONN(which, v-1, v) ∧ (NP(y, y) → S(v, y)) → REL(v-1, y)]

³ For simplicity's sake, we treat here relative clauses as constituents of atomic type. But in reality relative clauses are noun modifiers, that is, functions from nouns to nouns. Therefore, the propositional and the first-order atomic type for relative clauses in the examples below should be thought of as shorthands for corresponding complex types.

3.3 Arithmetic Predicates

The fact that we quantify over integers allows us to use arithmetic predicates to determine subsets of indices over which certain variables must range. This use of arithmetic predicates characterizes also Rounds' ILFP notation [15], which appears in many ways interestingly related to the framework proposed here. We show here below how this capability can be exploited to account for a case of extraction which is particularly problematic for bidirectional propositional CG.

3.3.1 Non-peripheral Extraction

Both the propositional type (5) and the first-order type (6) are good enough to describe the kind of constituent needed by a relative pronoun in the following right-oriented case of peripheral extraction, where the extraction site is located at one end of the sentence. (We indicate the extraction site with an upward-looking arrow.)

which [ I shall put a book on ↑ ]

However, a case of non-peripheral extraction, where the extraction site is in the middle, such as

which [ I shall put ↑ on the table ]

is difficult to describe in bidirectional propositional CG, where all functions must take left- or right-adjacent arguments.
For instance, a solution like the one proposed in [17] involves permuting the arguments of a given function. Such an operation needs to be rather cumbersomely constrained in an explicit way to cases of extraction, lest it should wildly overgenerate. Another solution, proposed in [10], is also cumbersome and counterintuitive, in that it involves the assignment of multiple types to wh-expressions, one for each site where extraction can take place.

On the other hand, the greater expressive power of first-order logic allows us to elegantly generalize the type-assignment (6) to the type-assignment (7). In fact, in (7) the variable identifying the extraction site ranges over the set of integers in between the indices corresponding, respectively, to the left and right end of the sentence on which the relative pronoun operates. Therefore, such a sentence can have an extraction site anywhere between its string boundaries.

(7) which: ∀v∀y∀w[CONN(which, v-1, v) ∧ (NP(y, y) → S(v, w)) ∧ v ≤ y ∧ y ≤ w → REL(v-1, w)]

Non-peripheral extraction is but one example of a class of discontinuous constituents, that is, constituents where the function-argument relation is not determined in terms of left- or right-adjacency, since they have two or more parts disconnected by intervening lexical material, or by internal extraction sites. Extraposition phenomena, gapping constructions in coordinate structures, and the distribution of adverbials offer other problematic examples of English discontinuous constructions for which this first-order framework seems to promise well. A much larger batch of similar phenomena is offered by languages with freer word order than English, for which, as pointed out in [5, 18], classical CG suffers from an even clearer lack of expressive power. Indeed, Joshi [4] proposes within the TAG framework an attractive general solution to word-order variation phenomena in terms of linear precedence relations among constituents. Such a solution suggests a similar approach for further work to be pursued within the framework presented here.

4 Theorem Proving

In propositional CG, the problem of determining the type of a string from the types of its words has been addressed either by defining certain "combinatory" rules which then determine a rewrite relation between sequences of types, or by viewing the type of a string as a logical consequence of the types of its words. The first alternative has been explored mainly in Combinatory Grammar [16, 17], where, beside the rewrite rule of functional application, which was already in the initial formulation of CG in [1], there are also the rules of functional composition and type raising, which are used to account for extraction and coordination phenomena. This approach offers a psychologically attractive model of parsing, based on the idea of incremental processing, but causes "spurious ambiguity", that is, an almost exponential proliferation of the possible derivation paths for identical analyses of a given string. In fact, although a rule like functional composition is specifically needed for cases of extraction and coordination, in principle nothing prevents its use to analyze strings not characterized by such phenomena, which would be analyzable in terms of functional application alone. Tentative solutions of this problem have been recently discussed in [12, 19].

The second alternative has been undertaken in the late fifties by Lambek [6], who defined a decision procedure for bidirectional propositional CG in terms of a Gentzen-style sequent system. Lambek's implicational calculus of syntactic types has recently enjoyed renewed interest in the works of van Benthem, Moortgat and other scholars. This approach can account for a range of syntactic phenomena similar to that of Combinatory Grammar, and in fact many of the rewrite rules of Combinatory Grammar can be derived as theorems in the calculus. However, analyses of cases of extraction and coordination are here obtained via inferences over the internal implications in the types of higher-order functions. Thus, extraction and coordination can be handled in an expectation-driven fashion, and, as a consequence, there is no problem of spuriously ambiguous derivations.

Our approach here is close in spirit to Lambek's enterprise, since we also make use of a Gentzen system capable of handling the internal implications in the types of higher-order functions, but at the same time differs radically from it, since we do not need to have a "specialized" propositional logic, with directional connectives and adjacency requirements. Indeed, the expressive power of standard first-order logic completely eliminates the need for this kind of specialization, and at the same time provides the ability to account for constructions which, as shown in section 3.3.1, are problematic for an (albeit specialized) propositional framework.

4.1 An Intuitionistic Extension of Prolog

The inference system we are going to introduce below has been proposed in [7] as an extension of Prolog suitable for modular logic programming. A similar extension has been proposed in [3] to implement hypothetical reasoning in logic programming. We are thus dealing with what can be considered the specification of a general purpose logic programming language. The encoding of a particular linguistic formalism is but one other application of such a language, which Miller [7] shows to be sound and complete for intuitionistic logic, and to have a well defined semantics in terms of Kripke models.
The second alternative has been undertaken in the late fifties by Lambek [6] who defined a deci- sion procedure for bidirectional propositional CG in terms of a Gentzen-style sequent system. Lam- bek's implicational calculus of syntactic types has recently enjoyed renewed interest in the works of van Benthem, Moortgat and other scholars. This approach can account for a range of syntactic phe- nomena similar to that of Combinatory Grammar, and in fact many of the rewrite rules of Combi- natory Grammar can be derived as theorems in the calculus, tIowever, analyses of cases of extrac- tion and coordination are here obtained via infer- ences over the internal implications in the types of higher-order functio~ls. Thus, extraction and coor- dination can be handled in an expectation-driven fashion, and, as a consequence, there is no problem of spuriously ambiguous derivations. Our approach here is close in spirit to Lambek's enterprise, since we also make use of a Gentzen system capable of handling the internal implica- tions in the types of higher-order functions, but at the same time differs radically from it, since we do not need to have a "specialized" proposi- tional logic, with directional connectives and adja- cency requirements. Indeed, the expressive power of standard first-order logic completely eliminates the need for this kind of specialization, and at the same time provides the ability to account for con- structions which, as shown in section 3.3.1, are problematic for an (albeit specialized) proposi- tional framework. 4.1 An Intuitionistic Exterision of Prolog The inference system we are going to introduce below has been proposed in [7] as an extension of Prolog suitable for modular logic programming. A similar extension has been proposed in [3] to im- plement hypotethical reasoning in logic program- ming. We are thus dealing with what can be con- sidered the specification of a general purpose logic programming language. The encoding of a par- ticular linguistic formalism is but one other appli- cation of such a language, which Miller [7] shows to be sound and complete for intuitionistic logic, and to have a well defined semantics in terms of 273 Kripke models. 4.1.1 Logic Programs We take a logic program or, simply, a program 79 to be any set of definite clauses. We formally represent the fact that a goal clause G is logically derivable from a program P with a sequent of the form 79 =~ G, where 79 and G are, respectively, the antecedent and the succedent of the sequent. If 7 ~ is a program then we take its substitution closure [79] to be the smallest set such that • 79 c_ [79] • if O1 A D2 E [7 ~] then D1 E [79] and D2 E [7 ~] • ifVzD E [P] then [z/t]D E [7 ~] for all terms t, where [z/t] denotes the result of substituting t for free occurrences of t in D 4.1.2 Proof Rules We introduce now the following proof rules, which define the notion of proof for our logic pro- gramrning language: (I) 79=G ifaE[7 )] (ii) 79 =~ G if G ---, A e [7)] 7)=~A (III) ~P =~ G~ A G2 (IV) 79 = [=/t]c 7~ =~ BzG 7~U {O} =~ G (V) P ~ D--. G In the inference figures for rules (II) - (V), the sequent(s) appearing above the horizontal line are the upper sequent(s), while the sequent appearing below is the lower sequent. A proof for a sequent 7 ) =~ G is a tree whose nodes are labeled with sequents such that (i) the root node is labeled with 7 9 ~ G, (ii) the internal nodes are instances of one of proof rules (II) - (V) and (iii) the leaf nodes are labeled with sequents representing proof rule (I). 
The height of a proof is the length of the longest path from the root to some leaf. The size of a proof is the number of nodes in it.

Thus, proof rules (I)-(V) provide the abstract specification of a first-order theorem prover which can then be implemented in terms of depth-first search, backtracking and unification like a Prolog interpreter. (An example of such an implementation, as a metainterpreter on top of Lambda-Prolog, is given in [9].) Observe however that an important difference of such a theorem prover from a standard Prolog interpreter is in the wider distribution of "logical" variables, which, in the logic programming tradition, stand for existentially quantified variables within goals. Such variables can get instantiated in the course of a Prolog proof, thus providing the procedural ability to return specific values as output of the computation. Logical variables play the same role in the programming language we are considering here; moreover, they can also occur in program clauses, since subformulae of goal clauses can be added to programs via proof rule (V).

4.2 How Strings Define Programs

Let a be a string a1 … an of words from a lexicon L. Then a defines a program Pa = Γa ∪ Δa such that

• Γa = {CONN(ai, i-1, i) | 1 ≤ i ≤ n}

• Δa = {D | ai : D ∈ L and 1 ≤ i ≤ n}

Thus, Γa just contains ground atoms encoding the position of words in a. Δa contains instead all the types assigned in the lexicon to words in a. We assume arithmetic operators for addition, subtraction, multiplication and integer division, and we assume that any program Pa works together with an infinite set of axioms A defining the comparison predicates <, ≤, >, ≥ over ground arithmetic expressions. (Prolog's evaluation mechanism treats arithmetic expressions in a similar way.) Then, under this approach a string a is of type Ga if and only if there is a proof for the sequent Pa ∪ A ⇒ Ga according to rules (I)-(V).

4.3 An Example

We give here an example of a proof which determines a corresponding type-assignment. Consider the string

whom John loves

Such a sentence determines a program P with the following set Γ of ground atoms:

{CONN(whom, 0, 1), CONN(John, 1, 2), CONN(loves, 2, 3)}
By contrast, Lambek's propositional calcu- lus does not have any of the structural rules; for instance, Interchange is not admitted, since the hypotheses deriving the type of a given string must also account for the positions of the words to which they have been assigned as types, and must obey the strict string adjacency requirement between functions and arguments of classical CG. Thus, Lambek's calculus must assume ordered lists of hypotheses, so as to account for word-order con- straints. Under our approach, word-order con- straints are obtained declaratively, via sharing of string positions, and there is no strict adjacency requirement. In proof-theoretical terms, this di- rectly translates in viewing programs as unordered sets of hypotheses. 5.2 Trading Contraction against Decidability The logic defined by rules (I)-(V) is in general undecidable, but it becomes decidable as soon as Contraction is disallowed. In fact, if a given hy- pothesis can be used at most once, then clearly the number of internal nodes in a proof tree for a se- quent 7 ~ =~ G is at most equal to the total number of occurrences of--*, A and 3 in 7 ~ =~ G, since these are the logical constants for which proof rules with corresponding inference figures have been defined. Hence, no proof tree can contain infinite branches and decidability follows. Now, it seems a plausible conjecture that the programs directly defined by input strings as in Section 4.2 never need Contraction. In fact, each time we use a hypothesis in the proof, either we consume a corresponding word in the input string, or we consume a "virtual" constituent correspond- ing to a step of hypothesis introduction deter- mined by rule (V) for implications. (Construc- tions like parasitic gaps can be accounted for by as- sociating specific lexical items with clauses which determine the simultaneous introduction of gaps of the same type.) If this conjecture can be formally confirmed, then we could automate our formalism via a metalnterpreter based on rules (I)-(V), but implemented in such a way that clauses are re- moved from programs as soon as they are used. Being based on a decidable fragment of logic, such a metainterpreter would not be affected by the kind of infinite loops normally characterizing DCG parsing. 5.3 Thinning and Vacuous Abstrac- tion Thinning can cause problems of overgeneratiou, as hypotheses introduced via rule (V) may end up as being never used, since other hypotheses can be used instead. For instance, the type assignment (7) which : VvVyVw[CONN(which, v - 1, v) A (gP(y, y) ~ S(v, w)) A v<_yAy<_w--. 275 U {NP(3,3)} ~ CONN(John, ],2) (If) T'U {NP(3,3)} = NP(I,2) PU {NP(3,3)} = NP(3,3) (III) P U {NP(3, 3)} ~ CONN(Ioves, 2, 3) 7 ) U {NP(3, 3)) =~ NP(1, 2) A NP(3, 3) (III) 7 ) U {NP(3,3)} =# CONN(loves, 2,3) A NP(I,2) A NP(3, 3) (II) 7)U {NP(3,3)} => S(1,3) 7 ) => CONN(whom, O,1) P =~ NP(3,3) --* S(1,3) (V) , (ziz) 7) =# CONN(whom, O, I) A (NP(3, 3) -- S(I, 3)) (II) 7) ~ REL(O, 3) Figure h Type derivation for whom John loves REL(v- 1, w) ] can be used to account for tile well-formedness of both which [Ishallput a book on r ] and which [ I shall put : on the table ] but will also accept the ungrammatical which [ I shall put a bookon the table ] In fact, as we do not have to use all the hy- potheses, in this last case the virtual noun-phrase corresponding to the extraction site is added to the program but is never used. 
Notice that our conjecture in section 4.4.2 was that Contraction is not needed to prove the theorems correspond- ing to the types of grammatical strings; by con- trast, Thinning gives us more theorems than we want. As a consequence, eliminating Thinning would compromise the proof-theoretic properties of (1)-(V) with respect to intuitionistic logic, and the corresponding Kripke models semantics of our programming language. There is however a formally well defined way to account for the ungrammaticaiity of the example above without changing the logical properties of our inference system. We can encode proofs as terms of Lambda Calculus and then filter certain kinds of proof terms. In particular, a hypothesis introduction, determined by rule (V), corresponds to a step of A-abstraction, wllile a hypothesis elim- ination, determined by one of rules (I)-(II), cor- responds to a step of functional application and A-contraction. Hypotheses which are introduced but never eliminated result in corresponding cases of vacuous abstraction. Thus, the three examples above have the three following Lambda encodings of the proof of the sentence for which an extraction site is hypothesized, where the last ungrammatical example corresponds to a case of vacuous abstrac- tion: • Az put([a book], [on x], I) • Az put(x, [on the table], I) • Az put([a book], [on the table], I) Constraints for filtering proof terms character- ized by vacuous abstraction can be defined in a straightforward manner, particularly if we are working with a metainterpreter implemented on top of a language based on Lambda terms, such as Lambda-Prolog [8, 9]. Beside the desire to main- tain certain well defined proof-theoretic and se- mantic properties of our inference system, there are other reasons for using this strategy instead of disallowing Thinning. Indeed, our target here seems specifically to be the elimination of vacuous Lambda abstraction. Absence of vacuous abstrac- tion has been proposed by Steedman [17] as a uni- versal property of human languages. Morrill and Carpenter [11] show that other well-formedness constraints formulated in different grammatical theories such as GPSG, LFG and GB reduce to this same property. Moreover, Thinning gives us a straightforward way to account for situations of lexical ambiguity, where the program defined by a certain input string can in fact contain hypothe- ses which are not needed to derive the type of the string. References [1] Bar-Hillel, Yehoslma. 1953. A Quasi-arithmetical Notation for Syntactic Description. Language. 29. pp47-58. [2] Huet, Gerard 1986. Formal Structures for Computation and Deduction. Unpublished lecture notes. Carnegie-Mellon University. 276 [3] Gabbay, D. M., and U. Reyle. 1984. N-Prolog: An Extension of Prolog with lIypothetical Im- plications. I The Journal of Logic Program- ruing. 1. pp319-355. [4] Joshi, Aravind. 1987. Word.order Variation in Natural Language Generation. In Proceed- ings of the National Conference on Artificial Intelligence (AAAI 87), Seattle. [5] Karttunen, Lauri. 1986. Radical Lexicalism. Report No. CSLI-86-68. CSLI, Stanford Uni- versity. [6] Lambek, Joachim. 1958. The Mathematics of Sentence Structure. American Mathematical Monthly. 65. pp363-386. [7] Miller, Dale. 1987. A Logical Analysis of Mod. ules in Logic Programming. To appear in the Journal of Logic Programming. [8] Miller; Dale and Gopalan Nadathur. 1986. Some Uses of Higher.order Logic in Com- putational Linguistics. 
In Proceedlngs of the 24th Annual Meeting of the Association for Computational Linguistics, Columbia Uni- versity. [9] Miller, Dale and Gopalan Nadathur. 1987. A Logic Programming Approach to Manipulat- ing Formulas and Programs. Paper presented at the IEEE Fourth Symposium on Logic Pro- gramming, San Francisco. [10] Moortgat, Michael. 1987. Lambek Theorem Proving. Paper presented at the ZWO work- shop Categorial Grammar: Its Current State. June 4-5 1987, ITLI Amsterdam. [11] Morrill, Glyn and Bob Carpenter 1987. Compositionality, Implicational Logic and Theories of Grammar. Research Paper EUCCS/RP-11, University of Edinburgh, Centre for Cognitive Science. [12] Pareschi, Remo and Mark J. Steedman. 1987. A Lazy Way to Chart-parse with Categorial Grammars. In Proceedings of the 25th An- nual Meeting of the Association for Compu- tational Linguistics, Stanford University. [13] Pereira, Fernando C. N. and Stuart M. Shieber. 1987. Prolog and Natural Language Analysis. CSLI Lectures Notes No. 10. CSLI, Stanford University. [14] Pereira, Fernando C. N. and David II. D. Warren. 1980. Definite Clauses for Language Analysis. Artificial Intelligence. 13. pp231- 278. [15] Rounds, William C. 1987. LFP: A Logic for Linguistic Descriptions and an Analysis of lts Complexity. Technical Report No. 9. The Uni- versity of Michigan. To appear in Computa- tional Linguistics. [16] Steedman, Mark J. 1985. Dependency and Coordination in the Grammar of Dutch and English. Language, 61, pp523-568 [17] Steedman, Mark J. 1987. Combinatory Gram- mar and Parasitic Gaps. To appear in Natu- • rat Language and Linguistic Theory. [18] Uszkoreit, Hans. 1986. Categorial" Unification Grammar. In Proceedings of the 11th Inter- national Conference of Computational Lin- guistics, Bonn. [19] Wittenburg, Kent. 1987. Predictive Combina- tots for the Efficient Parsing of Combinatory Grammars. In Proceedings of the 25th An- nual Meeting of tile Association for Compu- tational Linguistics, Stanford University. [20] Zeevat, H., Klein, E., and J. Calder. 1987. An Introduction to Unification Categorial Gram- mar. In N. Haddock et al. (eds.), Edinburgh Working Papers in Cognitive Science, 1: Cat- egorial Grammar, Unification Grammar, and Parsing. 277
COMBINATORY CATEGORIAL GRAMMARS: GENERATIVE POWER AND RELATIONSHIP TO LINEAR CONTEXT-FREE REWRITING SYSTEMS*

David J. Weir
Aravind K. Joshi
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104-6389

* This work was partially supported by NSF grants MCS-82-19116-CER, MCS-82-07294, DCR-84-10413, ARO grant DAA29-84-9-0027, and DARPA grant N0014-85-K0018. We are very grateful to Mark Steedman, K. Vijay-Shanker and Remo Pareschi for helpful discussions.

Abstract

Recent results have established that there is a family of languages that is exactly the class of languages generated by three independently developed grammar formalisms: Tree Adjoining Grammars, Head Grammars, and Linear Indexed Grammars. In this paper we show that Combinatory Categorial Grammars also generate the same class of languages. We discuss the structural descriptions produced by Combinatory Categorial Grammars and compare them to those of grammar formalisms in the class of Linear Context-Free Rewriting Systems. We also discuss certain extensions of Combinatory Categorial Grammars and their effect on the weak generative capacity.

1 Introduction

There have been a number of results concerning the relationship between the weak generative capacity (family of string languages) associated with different grammar formalisms; for example, the theorem of Gaifman, et al. [3] that Classical Categorial Grammars are weakly equivalent to Context-Free Grammars (CFG's). More recently it has been found that there is a class of languages slightly larger than the class of Context-Free languages that is generated by several different formalisms. In particular, Tree Adjoining Grammars (TAG's) and Head Grammars (HG's) have been shown to be weakly equivalent [15], and these formalisms are also equivalent to a restriction of Indexed Grammars considered by Gazdar [6] called Linear Indexed Grammars (LIG's) [13].

In this paper, we examine Combinatory Categorial Grammars (CCG's), an extension of Classical Categorial Grammars developed by Steedman and his collaborators [1,12,9,10,11]. The main result in this paper is that CCG's are weakly equivalent to TAG's, HG's, and LIG's. We prove this by showing in Section 3 that Combinatory Categorial Languages (CCL's) are included in Linear Indexed Languages (LIL's), and that Tree Adjoining Languages (TAL's) are included in CCL's.

After considering their weak generative capacity, we investigate the relationship between the structural descriptions produced by CCG's and those of other grammar formalisms. In [14] a number of grammar formalisms were compared and it was suggested that an important aspect of their descriptive capacity was reflected by the derivation structures that they produced. Several formalisms that had previously been described as mildly context-sensitive were found to share a number of properties. In particular, the derivations of a grammar could be represented with trees that always formed the tree set of a context-free grammar. Formalisms that share these properties were called Linear Context-Free Rewriting Systems (LCFRS's) [14].

On the basis of their weak generative capacity, it appears that CCG's should be classified as mildly context-sensitive. In Section 4 we consider whether CCG's should be included in the class of LCFRS's. The derivation tree sets traditionally associated with CCG's have context-free path sets, and are similar to those of LIG's, and therefore differ from those of LCFRS's. This does not, however, rule out the possibility that there may be alternative ways of representing the derivations of CCG's that will allow for their classification as LCFRS's.

Extensions to CCG's have been considered that enable them to compare two unbounded structures (for example, in [12]). It has been argued that this may be needed in the analysis of certain coordination phenomena in Dutch. In Section 5 we discuss how these additional features increase the power of the formalism. In so doing, we also give an example demonstrating that the Parenthesis-free Categorial Grammar formalism [5,4] is more powerful than CCG's as defined here. Extensions to TAG's (Multicomponent TAG) have been considered for similar
This does not, however, nile out the possibility that there may be alternative ways of representing the derivation of CCG's that will allow for their classification as LCP'RS's. Extensions to CCG's have been considered that enable them to compare two unbounded sU'uctures (for example, in [12]). It has been argued that this may be needed in the analysis of certain coordination phenomena in Dutch. In Section 5 we discuss how these additional features increase the power of the formalism. In so doing, we also give an example demonstrating that the Parenthesis- free Categorial Grammar formalism [5,4] is moze pow- erful that CCG's as defined here. Extensions to TAG's (Multicomponent TAG) have been considered for similar 278 reasons. However, in this paper, we will not investigate the relationship between the extension of CCG's and Mul- ticomponent TAG. 2 Description of Formalisms In this section we describe Combinatory Categorial Gram- mars, Tree Adjoining Grammars, and Linear Indexed Grammars. 2.1 Combinatory Categoriai Grammars Combinatory Categorial Grammar (CCG), as defined here, is the most recent version of a system that has evolved in a number of papers [1,12,9,10,11]. A CCG, G, is denoted by (VT, VN, S, f, R) where VT is a finite set of terminals (lexical items), VN is a finite set of nonterminals (atomic categories), S is a distinguished member of VN, f is a function that maps elements of VT U {e} to finite subsets of C(VN), the set of categories*, where V N g C(VN) and if CI, C 2 e C(VN) then (el/c2) E C(VN) and (c1\c2) E C(VN). R is a finite set of combinatory rules, described below. We now give the combinatory rules, where z, y, z are variables over categories, and each Ii denotes either \ or /. 1. forward application: 2. backward application: u (z\u) -. z 3. generaliT~d forward composition for some n _> 1: (... I.z.) -. 4. generalized backward composition for some n E 1: (...(yll~x)12... I-=-) (~\~) --' (--. (~11=x)12... I~z.) z Note that f can assign categoric8 to the empty suing, ~, though, to our knowledge, this feature has not been employed in the linguistic applications ¢~ C'CG. Restrictions can be associated with the use of the com- binatory rule in R. These restrictions take the form of conswaints on the instantiations of variables in the rules. These can be constrained in two ways. 1. The initial nonterminal of the category to which z is instantiated can be restricted. 2. The entire category to which y is instantiated can be resuicted. Derivations in a CCG involve the use of the combi- natory rules in R. Let the derives relation be defined as follows. ~c~ F ~clc2~ if R contains a combinawry rule that has czc2 --* c as an instance, and a and ~ are (possibly empty) strings of categories. The string languages, L(G), generated by a CCG, G', is defined as follows. {al... c, ~ f(aO, a, ~ VT U {~}, 1 _< i _< .} Although there is no type-raising rule, its effect can be achieved to a limited extent since f can assign type-raised categories to lexical items, which is the scheme employed in Steedman's recent work. 2.2 Linear Indexed Grammars Linear Indexed Grammars (LIG's) were introduced by Gazdar [6], and are a restriction of Indexed Grammars introduced by Aho [2]. LIG's can be seen as an exten- sion of CFG's in which each nonterrninal is associated with a stack. 
An LIG, G, is denoted by G = ( Vjv , VT , Vs , S, P) where VN iS a finite set of nontenninals, VT is a finite set of terminals, Vs is a finite set of stack symbols, S E VN is the start symbol, and P is a finite set of productions, having the form A[] - A[..1] -* AI[]...Ai["]...A.[] A[..]--a~[]...Ad..t]...A.[] where At .... A. E VN, l E Vs, and a E VT O {~}. The notation for stacks uses [. •/] to denote an arbi- Wary stack whose top symbol is I. This system is called L/near Indexed Grammars because it can be viewed as a 279 restriction of Indexed Grammars in which only one of the non-terminals on the right-hand-side of a production can inherit the stack from the left-hand-side. The derives relation is defined as follows. ~A[Z,, ... ht]~ ~ ~A,[] ... A,[Z,,... t~].., a,[]~ if A[.. l] -. ~,[]...A,[..]...A,[] ~ P otA[lm.., ll]~ o =~ aAl[]... Ai[lm... ill]... An[]/~ if A[..] --. A,[]...A,[-. Z]...A,,[] ~ P : c,a[]a ~ ,ma if A[]--.a~P The language, L(G), generated by G is 2.3 Tree Adjoining Grammars A TAG [8,7] is denoted G = (VN, VT, S, I, A) where VN is a finite set of nontennlnals, VT is a finite set of terminals, S is a distinguished nonterminal, I is a finite set of initial trees and A is a finite set of auxiliary trees. Initial trees are rooted in S with w E V~ on their fron- tier. Each internal node is labeled by a member of VN. Auxiliary trees have tOlAW2 E V'~VNV~ oll their fron- tier. The node on the frontier labeled A is called the foot node, and the root is also labeled A. Each internal node is labeled by a member of VN. Trees are composed by tree adjunction. When a tree 7' is adjoined at a node ~/in a tree .y the tree that results, 7,', is obtained by excising the subtree under t/from and inserting 7' in its place. The excised subtree is then substituted for the foot node of 3 / . This operation is illustrated in the following figure. ~': $ r'." x Y": s Each node in an auxiliary tree labeled by a nonterminal is associated with adjoining constraints. These constraints specify a set of auxiliary trees that can be adjoined at that node, and may specify that the node has obligatory adjunction (OA). When no tree can be adjoined at a node that node has a null adjoining (NA) constraint. The siring language L(G) generated by a TAG, G, is the set of all strings lYing on the frontier of some tree that can be derived from an initial trees with a finite number of adjunctions, where that tree has no OA constraints. 3 Weak Generative Capacity In this section we show that CCO's are weakly equivalent to TAG's, HG's, and LIO's. We do this by showing the Inclusion of CCL's in L1L's, and the inclusion of TAL's in CCL's. It is know that TAG and LIG are equivalent [13], and that TAG and HG are equivalent [15]. Thus, the two inclusions shown here imply the weak equivalence of all four systems. We have not included complete details of the proofs which can be found in [16]. 3.1 CCL's C LIL's We describe how to construct a LIG, G', from an arbi- trary CCG, G such that G and G' are equivalent. Let us assume that categories m-e written without parentheses, tmless they are needed to override the left associativity of the slashes. A category c is minimally parenthesized if and only if one of the following holds. c= A for A E VN c = (*oll*xl2... I,,c,,), for, >_ 1, where Co E VN and each c~ is mini- mally parenthesize~ It will be useful to be able to refer to the components of a category, c. We first define the immediate components of c. 280 when c = A the immediate component is A, when c = (col:xh...I.c.) 
the immediate components are c0, c1, ..., cn.

The components of a category c are its immediate components, as well as the components of its immediate components. Although in CCG's there is no bound on the number of categories that are derivable during a derivation (categories resulting from the use of a combinatory rule), there is a bound on the number of components that derivable categories may have. This would no longer hold if unrestricted type-raising were allowed during a derivation.

Let the set Dc(G) be defined as follows: c ∈ Dc(G) if c is a component of c' where c' ∈ f(a) for some a ∈ VT ∪ {ε}. Clearly for any CCG, G, Dc(G) is a finite set. Dc(G) contains the set of all derivable components because for every category c that can appear in a sentential form of a derivation in some CCG, G, each component of c is in Dc(G). This can be shown, since, for each combinatory rule, if it holds of the categories on the left of the rule then it will hold of the category on the right.

Each of the combinatory rules in a CCG can be viewed as a statement about how a pair of categories can be combined. For the sake of this discussion, let us name the members of the pair according to their role in the rule. The first of the pair in forward rules and the second of the pair in backward rules will be named the primary category. The second of the pair in forward rules and the first of the pair in backward rules will be named the secondary category.

As a result of the form that combinatory rules can take in a CCG, they have the following property. When a combinatory rule is used, there is a bound on the number of immediate components that the secondary categories of that rule may have. Thus, because immediate components must belong to Dc(G) (a finite set), there is a bound on the number of categories that can fill the role of secondary categories in the use of a combinatory rule. Thus, there is a bound on the number of instantiations of the variables y and zi in the combinatory rules in Section 2.1. The only variable that can be instantiated to an unbounded number of categories is x. Thus, by enumerating each of the finite number of variable bindings for y and each zi, the number of combinatory rules in R can be increased in such a way that only x is needed. Notice that x will appear only once on each side of the rules (i.e., they are linear).

We are now in a position to describe how to represent each of the combinatory rules by a production in the LIG, G'. In the combinatory rules, categories can be viewed as stacks since symbols need only be added and removed from the right. The secondary category of each rule will be a ground category: either A, or (A|1c1|2 ... |ncn), for some n ≥ 1. These can be represented in a LIG as A[] or A[|1c1 |2c2 ... |ncn], respectively. The primary category in a combinatory rule will be unspecified except for the identity of its left and rightmost immediate components. Its leftmost component is a nonterminal, A, and its rightmost component is a member of Dc(G), c. This can be represented in a LIG by A[·· c].

In addition to mapping combinatory rules onto productions we must include productions in G' for the mappings from lexical items. If c ∈ f(a) where a ∈ VT ∪ {ε} then

if c = A then A[] → a ∈ P

if c = (A|1c1|2 ... |ncn) then A[|1c1 |2c2 ... |ncn] → a ∈ P

We are assuming an extension of the notation for productions that is given in Section 2.2.
Rather than adding or removing a single symbol from the stack, a fixed number of symbols can be removed and added in one produc- tion. Furthermore, any of the nonterminals on the right of productions can be given stacks of some fixed size. 3.2 TAL's C CCL's We briefly describe the construction of a CCG, G' from a TAG, G, such that G and G' are equivalent. For each nonterminal, A of G there will be two nonter- minals A ° and A c in G'. The nonterminal of G' will also include a nonterminal Ai for each terminal ai of the TAG. The terminal alphabets will be the same. The combinatory rules of G' are as follows. Forward and backward application are restricted to cases where the secondary category is some X ~, and the left immediate component of the primary cate- gory is some Y°. Forward and backward composition are restricted to cases where the secondary category has the form ((XChcl)[2c2), and the left immediate component of the primary category is some Y% An effect of the restrictions on the use of combinatory rules is that only categories that can fill the secondary role during composition are categories assigned to terminals by f. Notice that the combinatory rules of G' depend only 281 on the terminal and nonterminal alphabet of the TAG, and are independent of the elementary trees. f is defined on the basis of the auxiliary trees in G. Without loss of generality we assume that the TAG, G, has trees of the following form. I contains one initial tree: $ OA I Thus, in considering the language derived by G, we need only be concerned with trees derived from auxiliary trees whose root and foot are labeled by S. There are 5 kinds of auxiliary trees in A. 1. For each tree of the following form include A"/Ca/B ~ ~ f(e) and A°/C*/B + ~ f(O A NA B OA C OA I I AI~ e 2. For each tree of the fonowing form include Aa\Ba/C ¢ E f(e) and A¢\Ba/C ¢ E f(e) A NA BOA C OA I I A NA 3. For each tree of the following form Aa/B¢/C e.E f(e) and Ae/Be/C ¢ E f(e) ANA I B OA I COA I A NA include 4. For each tree of the following form include A°\AI E f(e), A*\AI E f(e) and A, E f(a,) ANA al A NA 5. For each tree of the following form include A °/Ai E f(e), AC/Ai E f(e) and Ai E f(al) ANA A NA a i The CCG, G', in deriving a string, can be understood as mimicking a derivation in G of that suing in which trees are adjoined in a particular order, that we now describe. We define this order by describing the set, 2~(G), of all trees produced in i or fewer steps, for i >_ 0. To(G) is the set of auxiliary trees of G. TI(G) is the union of T~_x(G) with the set of all trees 7 produced in one of the following two ways. 1. 2. Let 3 / and 7" be trees in T~-I(G) such that there is a unique lowest OA node, I?, in 7' that does not dominate the foot node, and 3/' has no OA nodes. 7 is produced by adjoining 7" at in 7'. Let 7' be trees in T~-I(G) such that there is OA node, 7, in 7' that dominates the foot node and has no lower OA nodes. 7 is pmduceA by adjoining an auxiliary tree ~ at 17 in 7'- Each tree 7 E 2~(G) with frontier wiAw2 has tbe prop- erty that it has a single spine from the root to a node that dominates the entire string wlAw2. All of the OA nodes remaining in the tree fall on this spine, or hang immedi- ately to its right or left. For each such tree 7 there will be a derivation tree in a', whose root is labeled by a ca~gory c and with frontier to 1W2, wher~ c encodes the remaining obligatory adjunctions on this spine in 7. 
Each OA nodes on the spine is encoded in c by a slash and nonterminal symbol in the appropriate position. Sup- pose the OA node is labeled by some A. When the OA node falls on the spine c will contain /.4 ¢ (in this case the direction of the slash was arbiwarfly chosen to be for- ward). When the OA node faUs to the left of the spine c will contain \A% and when the OA node fall~ to the right of the spine c will contain/A °. For example, the follow- ing tree is encoded by the category A\A~/AI/A~\A ~ 282 A i A I OA A2OA /\ Wl w2 We now give an example of a TAG for the language { a"bn I n >_ 0} with crossing dependencies. We then give the CCG that would be produced according to this construction. S NA S 10A S2OA I I £ SNA S2NA I S OA I $30A I $2 NA S I NA $3 NA a SINA S3NA b NA £ SNA s'\s~/s~ ~ f(O s'\sf/s~ ~ f(O S~\A ~ f(O S~\A ~ f(O A e f(~) B ~ f(b) Sa\S, 6 f(¢) S¢\S, 6 f(¢) S, E f(6) 4 Derivations Trees Vijay-Shanker, Weir and Joshi [14] described several properties that were common to various conswained grammatical systems, and defined a class of such systems called Linear Context-Free Rewriting Systems (LCFRS's). LCFRS's are constrained to have linear non- erasing composition operations and derivation trees that are structurally identical to those of context-free gram- mars. The intuition behind the latter restriction is that the rewriting (whether it be of strings, trees or graphs) be performed in a context-free way; i.e., choices about how to rewrite a structure should not be dependent on an unbounded amount of the previous or future context of the derivation. Several wen-known formalisms fall into this class including Context-Free Grammars, Gener- alized Phrase Structure Grammars (GPSG), Head Gram- mars, Tree Adjoining Grammars, and Multicomponent Tree Adjoining Grammars. In [14] it is shown that each formalism in the class generates scmilinear languages that can be recognized in polynomial time. In this section, we examine derivation trees of CCG's and compare them with respect to those of formalisms that are known to be LCFRS's. In order to compare CCG's with other systems we must choose a suitable method for the representation of derivations in a CCG. In the case of CFG, TAG, HG, for example, it is fairly clear what the elementary structures and composition operations should be, and as a result, in the case of these formalisms, it is apparent how to represent derivations. The traditional way in which derivations of a CCG have been represented has involved a binary tree whose nodes are labeled by categories with annotations indicat- ing which combinatory rule was used at each stage. These derivation trees are different from those systems in the class of LCFRS's in two ways. They have context-free path sets, and the set of categories labeling nodes may be infinite. A property that they share with LCFRS's is that there is no dependence between unbounded paths. In fact, the derivation trees sets produced by CCG's have the same properties as those produced by LIG's (this is apparent from the construction in Section 3A). Although the derivation trees that are traditionally as- sociated with CCG's differ from those of LCFRS's, this does not preclude the possibility that there may be an al- ternative way of representing derivations. What appears to be needed is some characterization of CCG's that iden- tities a finite set of elementary structures and a finite set of composition operations. The equivalence of TAG's and CCG's suggests one way of doing this. 
The construction that we gave from TAG's to CCG's produced CCG's having a specific form, which can be thought of as a normal form for CCG's. We can represent the derivations of grammars in this form with the same tree sets as the derivation tree sets of the TAG from which they were constructed. Hence CCG's in this normal form can be classified as LCFRS's.

TAG derivation trees encode the adjunction of specified elementary trees at specified nodes of other elementary trees. Thus, the nodes of the derivation trees are labeled by the names of elementary trees and tree addresses. In the construction used in Section 3.2, each auxiliary tree produces assignments of elementary categories to lexical items. CCG derivations can be represented with trees whose nodes identify elementary categories and specify which combinatory rule was used to combine them.

For grammars in this normal form, a unique derivation can be recovered from these trees, but this is not true of arbitrary CCG's, where different orders of combination of the elementary categories can result in derivations that must be distinguished. In this normal form, the combinatory rules are so restrictive that there is only one order in which elementary categories can be combined. Without such restrictions, this style of derivation tree must encode the order of derivation.

5 Additions to CCG's

CCG's have not always been defined in the same way. Although TAG's, HG's, and CCG's can produce the crossing dependencies appearing in Dutch, two additions to CCG's have been considered by Steedman in [12] to describe certain coordination phenomena occurring in Dutch. For each addition, we discuss its effect on the power of the system.

5.1 Unbounded Dependent Structures

A characteristic feature of LCFRS's is that they are unable to produce two structures exhibiting an unbounded dependence. It has been suggested that this capability may be needed in the analysis of coordination in Dutch, and an extension of CCG's has been proposed by Steedman [12] in which this is possible. The following schema is included:

  x conj x → x

where, in the analysis given of Dutch, x is allowed to match categories of arbitrary size. Two arbitrarily large structures can be encoded with two arbitrarily large categories. This schema has the effect of checking that the encodings are identical. The addition of rules such as this increases the generative power of CCG's; e.g., the following language can be generated:

  {(wc)ⁿ | w ∈ {a,b}*}

In giving analyses of coordination in languages other than Dutch, only a finite number of instances of this schema are required, since only bounded categories are involved. This form of coordination does not cause problems for LCFRS's.

5.2 Generalized Composition

Steedman [12] considers a CCG in which there are an infinite number of composition rules, for each n ≥ 1 of the form

  (x/y) (…(y|1z1)|2 … |nzn) → (…(x|1z1)|2 … |nzn)
  (…(y|1z1)|2 … |nzn) (x\y) → (…(x|1z1)|2 … |nzn)

This form of composition is permitted in parenthesis-free Categorial Grammars, which have been studied in [5,4], and the results of this section also apply to that system. With this addition, the generative power of CCG's increases. We show this by giving a grammar for a language that is known not to be a Tree Adjoining Language. Consider the following CCG. We allow unrestricted use of arbitrarily many combinatory rules for forward or backward generalized composition and application.
f(e) = {s} /(al) = {At} .~(a2) = {A2} f(Cl) = {S\AI/D1/S\BI} f(c2) {S\A21D21S\B2} f(bx) = {Bx} f(b2)'-{B2} f(dl) = {DI} f(d2)= {D2} When the language, L, generated by this grammar is in- tersected with the regular language we get the following language. nl ~3 ~1 ftl ft2 ft 3 2 1 {a I G 2 b I C 1 b 2 C 2 d~2 d~l I nl,n 2 • 0} The pumping lemma for Tree Adjoining Grammars [13] can be used to show that this is not a Tree Adjoining Language. Since Tree Adjoining Languages are closed under intersection with Regular Languages, L can not be a Tree Adjoining Language either. 6 Conclusions In this paper we have considered the string languages and derivation trees produced by CCL's. We have shown that CCG's generate the same class of string languages 284 as TAG's, HG's, and LIG's. The derivation tree sets nor- mally associated with CCG's are found to be the same as those of LIG's. They have context-free path sets, and nodes labeled by an unbounded alphaboL A consequence of the proof of equivalence with TAG is the existence of a normal form for CCG's having the property that deriva- tion trees can be given for grammars in this normal form that are structurally the same as the derivation trees of CFG's. The question of whether there is a method of representing the derivations of arbitrary CCG's with tree sets similar to those of CFG's remains open. Thus, it is unclear, whether, despite their restricted weak generative power, CCG's can be classified as LCFRS's. References [1] A. E. Ades and M. J. Steedman. On the order of words. Ling. a.nd Philosophy, 3:517-558, 1982. [2] A. V. Aho. Indexed grammars -- An extension to context free grammars. J. ACM, 15:647--671, 1968. [3] Y. Bar-Hillel, C. Gaifman, and E. Shamir. On cate- gorial and phrase structure grammars. In Language and Information, Addison-Wesley, Reading, MA, 1964. [4] J. Friedman, D. Dai, and W. Wang. The weak gen- erative capacity of parenthesis-free categorial gram- mars. In 11 th Intern. Conf. on Comput. Ling., 1986. [5] J. Friedman and R. Venkatesan. Categorial and Non- Categorial languages. In 24 Ch meeting Assoc. Corn- put. Ling., 1986. [6] G. Gazdar. Applicability of Indexed Grammars to Natural Languages. Technical Report CSLI-85- 34, Center for Study of Language and Information, 1985. [7] A. tL Joshi. How much context-sensitivity is nee- essary for characterizing su'ucm.,~ descriptions Tree Adjoining Grammars. In D. Dowry, L. Kart- tunen, and A. Zwieky, editors, Natural Language Processing ~ Theoretical, Computational and Psy- chological Perspective, Cambridge University Press, New York, NY, 1985. Originally presented in 1983. [8] A. K. Joshi, L. S. Levy, and M. Takahashi. Tree ad- junct grammars. J. Comput. Syst. Sci., 10(1), 1975. [9] M. Steedman. Combinators and grammars. In R. Oehrle, E. Bach, and D. Wheeler, editors, Categorial Grammars and Natural Language Structures, Foris, Dordrecht, 1986. [1o] [11] [12] [13] [14] [15] [16] M. Steedman. Combinatory grammars and para- sitic gaps. Natural Language and Linguistic Theory, 1987. M. Steedman. Gapping as constituent coordination. 1987. m.s. University of Edinburgh. M. J. Steexlman. Dependency and coordination in the grammar of Dutch and English. Language, 61:523- 568, 1985. K. Vijay-Shanker. A Study of Tree Adjoining Gram- mars. PhD thesis, University of Pennsylvania, Philadelphia, Pa, 1987. K. Vijay-Shankcr, D. L Weir, and A. K. Joshi. Char- acterizing structural descriptions produced by vari- ons grammatical formalisms. In 25 th meeting Assoc. Comput. Ling., 1987. K. Vijay-Shanker, D. 
J. Weir, and A. K. Joshi. Tree adjoining and head wrapping. In 11th Intern. Conf. on Comput. Ling., 1986.

D. J. Weir. Characterizing Mildly Context-Sensitive Grammar Formalisms. PhD thesis, University of Pennsylvania, Philadelphia, PA, in preparation.
Unification of Disjunctive Feature Descriptions Andreas Eisele, Jochen D6rre Institut f'dr Maschinelle Sprachverarbeitung Universit~t Stuttgart Keplerstr. 17, 7000 Stuttgart 1, West Germany Netmaih [email protected] Abstract The paper describes a new implementation of feature structures containing disjunctive values, which can be characterized by the following main points: Local representation of embedded dis- junctions, avoidance of expansion to disjunctive normal form and of repeated test-unifications for checking consistence. The method is based on a modification of Kasper and Rounds' calculus of feature descriptions and its correctness therefore is easy to see. It can handle cyclic structures and has been incorporated successfully into an envi- ronment for grammar development. 1 Motivation In current research in computational linguistics but also in extralinguistic fields unification has turned out to be a central operation in the mod- elling of data types or knowledge in general. Among linguistic formalisms and theories which are based on the unification paradigm are such different theories as FUG [Kay 79,Kay 85], LFG [Kaplan/Bresnan 82], GSPG [Gazdar et al. 85], CUG [Uszkoreit 86]. However, research in unifi- cationis also relevant for fields like logic program- rning, theorem proving, knowledge representation (see [Smolka/Ait-Kaci 87] for multiple inheritance hierarchies using unification), programming lan- guage design [Ait-Kaci/Nasr 86] and others. The version of unification our work is based on is graph unification, which is an extension of term unification. In graph unification the number of arguments is free and arguments are selected by attribute labels rather than by position. The al- gorithm described here may easily be modified to apply to term unification. The structures we are dealing with are rooted directed graphs where arcs starting in one node must carry distinct labels. Terminal nodes may also be labelled. These structures are referred to by various names in the literature: feature struc- tures, functional structures, functional descrip- tions, types, categories. We will call them feature structures I throughout this paper. In applications, other than toy applications, the efficient processing of indefinite information which is represented by disjenctive specifications be- comes a relevant factor. A strategy of multiplying- out disjunction by exploiting (nearly) any combi- nation of disjuncts through backtracking, as it is done, e.g., in the case of a simple DCG parser, quickly runs into efficiency problems. On the other hand the descriptional power of disjunction often helps to state highly ambiguous linguistic knowl- edge clearly and concisely (see Fig. I for a disjunc- tive description of morphological features for the six readings of the german noun 'Koffer'). Koffer: morph: sem: o o . r sg11 agr: L.pers: 3 J/ gend: masc / case: {nom dat acc}J mum: pill agr: [pers: 3 J| gend: masc / case: {nom gen acc}J arg: [] Figure 1: Using disjunction in the description of linguistic structures Kasper and Rounds [86] motivated the distinc- tion between feature structures and formulae of a logical calculus that are used to describe feature structures. Disjunction can be used within such a formula to describe sets of feature structures. With this separation the underlying mathematical framework which is used to define the semantics of the descriptions can be kept simple. 1We do not, ms is frequently done, restrict ourselves to acydlc structures. 
2 Disjunctive Feature Descriptions

We use a slightly modified version of the formula language FML of Kasper and Rounds [86] to describe our feature structures. Fig. 2 gives the syntax of FML′, where A is the set of atoms and L the set of labels.

  FML′ contains:
    NIL
    TOP
    a         where a ∈ A
    l : Φ     where l ∈ L, Φ ∈ FML′
    Φ ∧ Ψ     where Φ, Ψ ∈ FML′
    Φ ∨ Ψ     where Φ, Ψ ∈ FML′
    ⟨p⟩       where p ∈ L*

  Figure 2: Syntax of FML′

In contrast to Kasper and Rounds [86], we do not use the syntactic construct of path equivalence classes. Instead, path equivalences are expressed using non-local path expressions (called pointers in the sequel). This choice is motivated by the fact that we use these pointers for an efficient representation below, and we want to keep FML′ as simple as possible.

The intuitive semantics of FML′ is as follows (see [Kasper/Rounds 86] for formal definitions):

1. NIL is satisfied by any feature structure.
2. TOP is never satisfied.
3. a is satisfied by the feature structure consisting only of a single node labelled a.
4. l : Φ requires a (sub-)structure under arc l to satisfy Φ.
5. Φ ∧ Ψ is satisfied by a feature structure that satisfies Φ and satisfies Ψ.
6. Φ ∨ Ψ is satisfied by a feature structure that satisfies Φ or satisfies Ψ.
7. ⟨p⟩ requires a path equivalence (two paths leading to the same node) between the path ⟨p⟩ and the actual path, relative to the top-level structure.²

² This construct is context-sensitive in the sense that the denotation of ⟨p⟩ may only be computed with respect to the whole structure that the formula describes.

The denotation of a formula Φ is usually defined as the set of minimal elements of SAT(Φ) with respect to subsumption³, where SAT(Φ) is the set of feature structures which satisfy Φ.

³ The subsumption relation ⊑ is a partial ordering on feature structures inducing a semi-lattice. It may be defined as: FS1 ⊑ FS2 iff the set of formulae satisfied by FS2 includes the set of formulae satisfied by FS1.

Example: The formula Φ = subj:agr:⟨agr⟩ ∧ case:(nom ∨ acc) denotes two graphs: in both, the paths subj agr and agr lead to the same node; in one, case leads to the atom nom, in the other to acc.

3 The Problem

The unification problem for disjunctive feature descriptions can be stated as follows: Given two formulae that describe feature structures, find the set of feature structures that satisfy both formulae, if it is nonempty; else announce 'fail'.

The simplest way to deal with disjunction is to rewrite any description into disjunctive normal form (DNF). This transformation requires time and space exponential in the number of disjuncts in the initial formula in the worst case. Although the problem of unifying disjunctive descriptions is known to be NP-complete (see [Kasper 87a]), methods which avoid this transformation may perform well in most practical cases. The key idea is to keep disjunction local and to consider combinations of disjuncts only when they refer to the very same substructure.

This strategy, however, is complicated by the fact that feature structures may be graphs with path equivalences and not only trees. Fig. 3 shows an example where unifying a disjunction with a structure containing reentrancy causes parts of the disjunction to be linked to other parts of the structure. The disjunction is exported via this reentrancy. Hence, the value of attribute d cannot be represented uniquely. It may be + or -, depending on which disjunct in attribute a is chosen. To represent this information without extra formal devices we have to lift the disjunction one level up.⁴

⁴ In this special case we could still keep the disjunction in the attribute a by inverting the pointer. A pointer ⟨a b⟩ underneath label d would allow us to specify the value of d dependent on the disjunction under a.

[Figure 3: Lifting of disjunction due to reentrancy]
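To make the cost of the naive approach concrete, the following Python sketch (our illustration; the encoding of formulae as nested tuples is ours, not the paper's) enumerates the instances of a formula. With n mutually independent disjunctions, the number of instances, and hence the size of the DNF, grows as 2ⁿ:

    from itertools import product

    # Formulae as nested tuples: ('atom', a), ('label', l, phi),
    # ('and', phi, psi), ('or', phi, psi), ('ptr', path), 'NIL', 'TOP'.

    def instances(phi):
        """Enumerate all instances of a formula (every disjunction resolved)."""
        if isinstance(phi, str) or phi[0] in ('atom', 'ptr'):
            return [phi]
        if phi[0] == 'label':
            return [('label', phi[1], sub) for sub in instances(phi[2])]
        if phi[0] == 'and':
            return [('and', l, r)
                    for l, r in product(instances(phi[1]), instances(phi[2]))]
        if phi[0] == 'or':
            return instances(phi[1]) + instances(phi[2])

    # Two independent disjunctions yield 2**2 = 4 instances -- the DNF blow-up:
    f = ('and', ('label', 'a', ('or', ('atom', '+'), ('atom', '-'))),
                ('label', 'b', ('or', ('atom', '+'), ('atom', '-'))))
    assert len(instances(f)) == 4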
4 From Description to Efficient Representation

It is interesting to investigate whether FML′ is suitable as an encoding of feature structures, i.e., whether it can be used for computational purposes. However, this is clearly not the case for the unrestricted set of formulae of FML′, since a given feature structure can be represented by infinitely many different formulae of arbitrary complexity and -- even worse -- because it is also not possible to ascertain whether a given formula represents any feature structure at all without extensive computation.

On the other hand, the formulae of FML′ have some properties that are quite attractive for representing feature structures, such as embedded and general disjunction and the possibility to make use of the law of distributivity for disjunctions. Therefore we have developed an efficiency-oriented normal form ENF, which is suitable as an efficient representation for sets of feature structures.

The formulae are built according to a restricted syntax (Fig. 4, Part A) and have to satisfy condition C_ENF (Part B). The syntax restricts the use of conjunction and TOP in order to disallow contradictory information in a formula other than TOP. However, even in a formula of the syntax of Part A, inconsistency can be introduced by a pointer to a location that is 'blocked' by an atomic value on a higher level. For example, in the formula a:⟨b c⟩ ∧ b:d the path ⟨b c⟩ is blocked, since it would require the value of attribute b to be complex, in conflict with the atomic value d, thus rendering the formula non-satisfiable. With the additional condition C_ENF such inconsistencies are excluded. Its explanation in the next section is somewhat technical and is not prerequisite for the overall understanding of our method.

  A) Restricted syntax of ENF:
    NIL
    TOP
    a                       where a ∈ A
    l1 : Φ1 ∧ … ∧ ln : Φn   where Φi ∈ ENF∖{TOP}, li ∈ L, li ≠ lj for i ≠ j
    Φ ∨ Ψ                   where Φ, Ψ ∈ ENF∖{TOP}
    ⟨p⟩                     where p ∈ L*

  B) Additional condition C_ENF: if an instance Φ̂ of a formula Φ contains a pointer ⟨p⟩, then the path p must be realized in Φ̂.

  Figure 4: A normal form to describe feature structures efficiently

Condition C_ENF. First we have to introduce some terminology.

Instance: When every disjunction in a formula is replaced by one of its disjuncts, the result is called an instance of that formula.

Realized: A recursive definition of what we call a realized path in an instance Φ̂ is given in Fig. 5. The intuitive idea behind this notion is to restrict pointers in such a way that the path to their destination may not be blocked by the introduction of an atomic value on a prefix of this path.

  ε is realized in Φ̂, if Φ̂ ≠ TOP
  l ∈ L is realized in l1:Φ1 ∧ … ∧ ln:Φn (even if l ∉ {l1 … ln})
  l·p is realized in … ∧ l:Φ̂ ∧ …, if p is realized in Φ̂
  p is realized in ⟨p′⟩, if p′·p is realized in the top-level formula

  Figure 5: Definition of realized paths

Note that by virtue of the second line of the definition, the last label of the path does not have to actually occur in the formula, if there are other labels.

Example: In a:⟨b c⟩ only the path ε and each path of length 1 is realized.
Any longer path may be blocked by the introduction of an atomic value at level 1. Thus, the formula violates C_ENF. The formula a:⟨b d⟩ ∧ b:⟨c⟩ ∧ c:(d:x ∨ b:y), on the other hand, is a well-formed ENF formula, since it contains only pointers with realized destinations in every disjunct.

The easiest way to satisfy the condition is to introduce for each pointer the value NIL at its destination when building up a formula. With this strategy we actually never have to check this condition, since it is maintained by the unification algorithm described below.

Properties of ENF

The most important properties of formulae in ENF are:

• For each formula of FML′ an equivalent formula in ENF can be found.
• Each instance of a formula in ENF (besides TOP) denotes exactly one feature structure.
• This feature structure can be computed in linear time.

The first property can be established by virtue of the unification algorithm given in the next section, which can be used to construct an equivalent ENF formula for an arbitrary formula in FML′.

The next point says: it doesn't matter which disjunct in one disjunction you choose -- you cannot get a contradiction. Disjunctions in ENF are mutually independent. This also implies that TOP is the only formula in ENF that is not satisfiable. To see why this property holds, first consider formulae without pointers. Contradictory information (besides TOP) can only be stated using conjunction. But since we only allow conjunctions of different attributes, inconsistent information cannot be stated in formulae without pointers.

Pointers could introduce two sorts of inconsistencies. Since a pointer links two paths, one might assume that inconsistent information could be specified for them. But since conjunction with a pointer is not allowed, only the destination path can carry additional information, thus excluding this kind of inconsistency. On the other hand, pointers imply the existence of the paths they refer to. The condition C_ENF ensures that no information in the formula contradicts the introduction of these implied paths. We can conclude that even formulae containing pointers are consistent.

The condition C_ENF additionally requires that no extension of a formula, gained by unification with another formula, may contain such contradicting information. A unification algorithm thus can introduce an atomic value into a formula without having to check if it would block the destination path of some pointer.
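As an illustration of how restrictive this normal form is, here is a small Python predicate (ours, not part of the paper) that checks the syntax of Fig. 4, Part A. Conjunctions are encoded here as dictionaries from labels to subformulae, which makes the distinct-label requirement automatic; the condition C_ENF itself is not checked:

    def in_enf_syntax(phi):
        """Check the restricted syntax of Fig. 4, Part A (C_ENF not checked)."""
        if phi in ('NIL', 'TOP'):          # TOP is only banned as a sub-part
            return True
        tag = phi[0]
        if tag in ('atom', 'ptr'):
            return True
        if tag == 'conj':                  # ('conj', {l1: phi1, ..., ln: phin})
            return all(v != 'TOP' and in_enf_syntax(v) for v in phi[1].values())
        if tag == 'or':                    # ('or', phi, psi)
            return (phi[1] != 'TOP' and phi[2] != 'TOP'
                    and in_enf_syntax(phi[1]) and in_enf_syntax(phi[2]))
        return False

    # The well-formed example from the text: a:<b d> ^ b:<c> ^ c:(d:x v b:y)
    ok = ('conj', {'a': ('ptr', ('b', 'd')),
                   'b': ('ptr', ('c',)),
                   'c': ('or', ('conj', {'d': ('atom', 'x')}),
                               ('conj', {'b': ('atom', 'y')}))})
    assert in_enf_syntax(ok)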
The follow- ing cases have to be considered: • If one of th~ input formulae specifies a sub- formula at a location where the other input provides no information or if both inputs con- tain the same subformula at a certain location, this subformula is built into the output with- out modification. • The next statement handles the case where one input contains a pointer whereas the other con- rains a different subformula. Since we regard the destination of the pointer as the represen- tative of the equivalence class of paths, the sub- formula has to be moved to that place. This case requires additional discussion, so we have moved it to the procedure move..Cormula. • In ease of two conjunctions the formulae have to be traversed recursively and all resulting at- tribute - value pairs have to be built into the output structure. For clarity, this part of the algorithm has been moved to the procedure unify_complex. • The case where one of the input formulae is a disjunction is handled in the procedure ua£~y.ztisj that is described in Section 5.2. • If none of the previous cases matches (e.g. if the inputs are different atoms or an atom and a complex formula), a failure of the unification has to be announced which is done in the last 289 unify(X,Y) ~ formula repeat (X,Y) := unify_aux(X,Y,~) until Y = NIL or Y = TOP return(X) unify_aux(Ao,al,Pa) ~-, (formula,formula) if A0 ffi AI then return (LI ,IIL) else if £i -- ~IL then return (al-i ,NIL) else if £~ is the pointer <Pro> then return move_formula(A1_~ ,Pa,Pto) else if both a i are conjunctions then return unify_complex(Ao ,AI ,Pa) else if Ai is the disjunction (B V C) then return unify_disj (Ai-i, B, C. P.) else return (TOP,TOP) unif y-complex (ao ,al ,Pa) ~-* (:formula,formula) L := A l:v, where l:v occurs in one Ai and 1 does not occur in Al-i G := NIL for all i that appear in both ~ do let Vo,Vl be the values of 1 in Ao,at (V,GV) := unify_aux(V0,V1,Pa.1) if V = TOP or GV.= TOP then return (TOP,TOP) else L := L A l:V G := uaifyCG,GV) if G = TOP then return (TOP,TOP) return CL,G) Figure 6: The unification procedure statement. The most interesting case is the treatment of a pointer. The functional organization of the al- gorithm does not allow for side effects on remote parts of the top-level formula (nor would this be good programming style), so we had to find a dif- ferent way to move a suhformula to the destination of the pointer. For that reason, we have defined our procedures so that they return two results: a local result that has to be built into the output for- mula at the current location (i.e. the path both in- put formulae are embedded on) and a global result that is used to express 'side effects' of the uni- fication. This global result represents a formula that has to be unified with the top-level result in order to find a formula covering all information contained in the input. This global result is normally set to NIL, but the procedure move.for,,ula must of course produce something different. For the time being, we can as- sume the preliminary definition of move.formuXa in Figure 7, which will be modified in the next subsection. Here, the local result is the pointer (since we want to keep the information about the path equivalence), whereas the global result is a formula containing the subformula to be moved embedded at its new location. 
move_formula(F, P/tom, Pro) (formula,formula) return (<Pto>,Pto :F) Figure 7: Movement of a Subformula -- Prelimi- nary Version The function tinily_complex unifies conjunc- tions of label-value-pairs by calling tutify_aux re- cursively and placing the local results of these uni- fications at the appropriate locations. Labels that appear only in one argument are built into the out- put without modification. If any of the recursive unifications fail, a failure has to be announced. The global results from recursive unifications are collected by top-level unification 5. The third ar- gument of unify_aux and unify_complex contains the sequence of labels to the actual location. It is not used in this version but is included in prepara- tion of the more sophisticated treatment of point- ers described below. To perform a top-level unification of two formu- lae, the call to unify.aux is repeated in order to unify the local and global results until either the unification fails or the global result is NIL. Before extending the algorithm to handle dis- junction, we will first concentrate on the question how the termination of this repeat-loop can be guaranteed. 5.1 Avoiding Infinite Loops There are cases where the algorithm in Figure 6 will not terminate if the movement of subformulae is defined as in Figure 7. Consider the unification of a:(b) A b:(a) with a:~. Here, the formula sl.f we Allow the global result to be a //~ o].fm'm~do.e, this recursicm could be replaced by list-concatenation. However, this would imply modifications in the top-level loop and would slightly complicate the treatmem of disjunction. 290 will be moved along the pointers infinitely often and the repeat-loop in unify will never terminate. An algorithm that terminates for arbitrary input must include precautions to avoid the introduction of cyclic pointer chains or it has to recognize such cycles and handle them in a special way. When working with pointers, the standard tech- nique to avoid cycles is to follow pointer chains to their end and to install a new pointer only to a location that does not yet contain an outgoing pointer. For different reasons, dereferencing is not the method of choice in the context of our treat- ment of disjunction (see [Eisele 87] for details). However, there are different ways to avoid cyclic movements. A total order '<p' on all possible lo- cations (i.e. all paths) can be defined such that, if we allow movements only from greater to smaller locations, cycles can be avoided. A pointer from a greater to a smaller location in this order will be called a positive pointer, a pointer from a smaller to a greater location will be called negative. But we have to be careful about chosing the right or- der; not any order will prevent the algorithm from an infinite loop. For instance, it would not be adequate to move a formula along a pointer from a location p to its extension p • q, since the pointer itself would block the way to its destination. (The equivalence class contains (p), (p q), (p q q)... and it makes no sense to choose the last one as a representative). Since cyclic feature structures can be introduced inadvertently and should not lead to an infinite loop in the unification, the first condition the order '<p' has to fulfill is: p<ppq if q#~ The order must be defined in a way that positive pointers can not lead to even indirect cycles. This is guaranteed if the condition p <p q =~ rps <p rqs holds for arbitrary paths p, q, r and s. 
We get an order with the required properties if we compare, in the first place, the length of the paths and use a lexicographic order <t for paths of the same length. A formal statement of this definition is given in Figure 8. Note that positive pointers can turn into neg- ative ones when the structure containing them is moved, as the following example shows: a:b:c:d:(a b e) U a:b:c:(f) pos. pos. = a:b:c:(f) A f:d:(a b e) pos. neg. P<p q if IPl < Iql or if Ipl = [q[, P = rils, q = ri2 t, r,s,t EL*, Ii EL, i1 <112 Figure 8: An Order on Locations in a Formula However, we can be pragmatic about this point; the purpose of ordering is the avoidance of cyclic movements. Towards this end, we only have to avoid using negative pointers, not writing them down. To avoid movement along a negative pointer, we now make use of the actual location that is provided by the third argument of unify-aux and unify_complex and as the second argument of move.~ormula. move_formula(F, Pl,om, Pro) ~. (formula, formula)' if Pro <v P/yore then return (<Pto>,Pto :F) else if P,o = P/,om then return (F, MIL) else return (F,Pto:<Plvom>) Figure 9: Movement of a Subformula -- Correct Version The definition of move.~ormula given in Fig- ure 7 has to be replaced by the version given in Figure 9. We distinguish three cases: • If the pointer is positive we proceed as usual. • If it points to the actual location, it can be ignored (i.e. treated as NIL). This case occurs, when the same path equivalence is stated more than once in the input. • If the pointer is negative, it is inverted by in- stalling at its destination a pointer to the ac- tual position. 5.2 Incorporating Disjunction The procedure unify-disj in Figure 10 has four arguments: the formula to unify with the disjunc- tion (which also can be a disjunction), both dis- juncts, and the actual location. In the first two statements, the unifications of the formula A with the disjuncts B and C are performed indepen- dently. We can distinguish three main cases: * If one of the unifications falls, the result of the other is returned without modification. * If both unifications have no global effect or if the global effects happen to result in the same 291 unify_disj(A,B,C,Pa) , ~-~ (formula,formula) (L1,G1) := unify-aux(A,B,P.) (L2,G2) := unify-aux(A,C,P=) if L1 = TOP or G1 = TOP then return (L2,G2) else if L2 = TOP or G2 = TOP then return (LI,GI) else if G1 = G2 then return (LIVL2,GI) else return (WIL,pack(unify(P.:L1,G1)V unify(P~:L~,G2))) Figure 10: Unification with a Disjunction formula, a disjunction is returned as local re- sult and the common global result of both dis- juncts is taken as the global result for the dis- junction. • If both unifications have different global re- sults, we can not return a disjunction as local result, since remote parts of the resulting for- mula depend on the choice of the disjunct at the actual location. This case arrives if one or both disjuncts have outgoing pointers and if one of these pointers has been actually used to move a subformula to its destination. The last point describes exactly the case where the scope of a disjunction has to be extended to a higher level due to the interaction between dis- junction and path equivalence, as was shown in Figure 3. A simple treatment of such effects would be to return a disjunction as global result where the disjuncts are the global results unified with the corresponding local result embedded at the actual position. 
However, it is not always necessary to return a top-level disjunction in such a situation. If the global effect of a disjunction concerns only locations 'close' to the location of the disjunction, we get two global results that differ only in an em- bedded substructure. To minimize the 'lifting' of the disjunction, we can assume a procedure pack that takes two formulae X and Y and returns a formula equivalent to X V Y where the disjunction is embedded at the lowest possible level. Although the procedure pack can be defined in a straightforward manner, we refrain from a formal specification, since the discussion in the next sec- tion will show how the same effect can be achieved in a different way. 6 Implementation We now have given a complete specification of a unification algorithm for formulae in ENF. How- ever, there are a couple of modifications that can be applied to it in order to improve its efficiency. The improvements described in this section are all part of our actual implementation. Unification of Two Pointers If both arguments are pointers, the algorithm in Figure 6 treats one of them in the sarne way as an arbitrary formula and tries to move it to the destination of the other pointer. Although this treatment is correct, some of the necessary com- putations can be avoided if this case is treated in a special way. Both pointer destinations and the actual location should be compared and pointers to the smallest of these three paths should be in- stalled at the other locations. Special Treatment of Atomic Formulae In most applications, we do not care about the equivalence of two paths if they lead to the same atom. Under this assumption, when moving an atomic formula along a pointer, the pointer itself can be replaced by the atom without loss of infor- mation. This helps to reduce the amount of global information that has to be handled. Ordering Labels The unification of conjunctions that contain many labels can be accelerated by keeping the labels sorted according to some order (e.g. <a). This avoids searching one formula for each label that occurs in the other. Organisation of the Global Results on a Stack In the algorithm described so far, the global re- sult of a unification is collected, but is - apart from disjunction - not used before the traversal of the input formulae is finished. When formulae containing many pointers are unified, the repeated traversal of the top-level formula slows down the unification, and may lead to the construction of many intermediate results that are discarded later (after having been copied partially). To improve this aspect of the algorithm, we have chosen a better representation of the global result. Instead of one formula, we represent it as a stack of 292 formulae where the first element holds information for the actual location and the last element holds information for the top-level formula. Each time a formula has to be moved along a pointer, its destination is compared with the actual location and the common prefix of the paths is discarded. From the remaining part of the actual location we can determine the first element on the stack where this information can be stored. The rest of the destination path indicates how the information has to be represented at that location. When returning from the recursion, the first el- ement on the stack can be popped and the infor- mation in it can be used immediately. This does not only improve efficiency, but has also an effect on the treatment of disjunction. 
In- stead of trying to push down a top-level disjunc- tion to the lowest possible level, we climb up the stacks returned by the recursive unifications and collect the subformulae until the rests of the stacks are identical. In this way, 'lifting' disjunctions can be limited to the necessary amount without using a function like pack. Practical Experiences In order to be compatible with existing software, the algorithm has been implemented in PROLOG. It has been extended to the treatment of unifica- tion in an LFG framework where indirectly speci- fied labels (e.g in the equation (1" (lpcase)) -- J. ), set values and various sorts of constraints have to he considered. This version has been incorporated into an existing grammar development facility for LFGs [Eisele/D6rre 86,Eisele/Schimpf 87] and has not only improved efficiency compared to the former treatment of disjunction by backtracking, but also helps to survey a large number of similar results when the grammar being developed contains (too) much disjunction. One version of this system runs on PCs with reasonable performance. 7 Comparison with Other Approaches 7.1 Asymptotical Complexity Candidates for a comparison with our algorithm are the naive multiplying-out to DNF, Kasper's representation of general disjunction [Kasper 87b], and Karttunen's treatment of value disjunction [Karttunen 84], also the improved version in [Bear 87]. Since satisfiability of formulae in FNL is known to be an NP-complete problem, we cannot expect better than exponential time complexity in the worst case. Nevertheless it might be interest- ing to find cases where the asymptotic behaviour of the algorithms differ. The following statements - although somewhat vague - may give an im- pression of strong and weak points of the differ- ent methods. For each given statement we have specific examples, but their presentation or proofs would be beyond the scope of this paper. 7.1.1 Space Complexity (Compactness of the Represeatation) • When many disjunctions concern different substructures and do not depend on each other, our representation uses exponentially less space than expansion to DNF. • There are cases where Kasper's representation uses exponentially less space than our repre- sentation. This happens when disjunctions in- teract strongly, but an exponential amount of consistent combinations remain. • Since Karttunen's method enumerates all con- sistent combinations when several disjunctions concern the same substructure, but allows for local representation in all other cases, his method seems to have a similar space complex- ity than ours. 7.1.2 Time Complexity There are cases where Kasper's method uses exponentially more time than ours. This hap- pens when disjunctions interact so strongly, that only few consistent combinations remain, hut none of the disjunctions can be resolved. When disjunctions interact strongly, hut an ex- ponential amount of consistent combinations remains, our method needs exponential time. An algorithm using Kasper's representation could do better in some of these cases, since it could find out in polynomial time that each of the disjuncts is used in a consistent com- bination. However, the actual organisation of Kasper's full consistency check introduces ex- ponential time complexity for different reasons. 7.2 Average Complexity and Con- clusion It is difficult to find clear results when comparing the average complexity of the different methods, 293 since anything depends on the choice of the exam- pies. 
However, we can make the following general observation: All methods have to multiply out disjunctions that are not mutually independent in order to find inconsistencies. Kasper's and Karttunen's methods discard the results of such computations, whereas our algo- rithm keeps anything that is computed until a con- tradiction appears. Thus, our method tends to use more space than the others. On the other hand, since Kasper's and Karttunen's methods 'forget' intermediate results, they are sometimes forced to perform identical computations repeatedly. As conclusion we can say that our algorithm sacrifies space in order to save time. 8 Further Work The algorithm or the underlying representation can still be improved or extended in various re- spects: General Disjunction For the time being, when a formula is unified with a disjunction, the information contained in it has to be distributed over all disjuncts. This may involve some unnecessary copying of label-value- pairs in cases where the disjunction does not in- teract with the information in the formula. (Note, however, that in such cases only the first level of the formula has to be copied.) It seems worthwhile to define a relazed ElF, where a formula (AVB)AC is allowed under certain circumstances (e.g. when (A V B) and C do not contain common labels) and to investigate whether a unification algorithm based on this relaxed normal form can help to save unnecessary computations. Functional Uncertainty The algorithm for unifying formulae with regular path expressions given by Johnson [Johnson 86] gives as a result of a unification a finite disjunction of cases. The algorithm presented here seems to be a good base for an efficient implementation of Johnson's method. The details still have to be worked out. Acknowledgments The research reported in this paper was supported by the EUROTRA-D accompanying project (BMFT grant No. 101 3207 0), the ESPRIT project ACORD (P393) and the project LILOG (supported by IBM Deutschland). Much of the inspiration for this work originated from a com-se about extensions to unification (including the work of Kasper and Rounds) which Hans Uszkoreit held at the University of Stuttgart in spring 1987. We had fruitful discussions with Lauri Karttnnen about an early version of this algorithm. Thanks also go to Jftrgen Wedekind, Henk Zeevat, Inge Bethke, and Roland Seiffert for hell~ui discussions and im- portant counterexamples, and to Fionn McKinnon, Stefan Momnm, Gert Smolka, and Carin Specht for polild~ing up our m'gumentation. References [A~t-Kacl/Nur 86] AYt-Kaci, H. and R. Nasa- (1986). LO- GIN: A Logic Programming Language with Built-In In- heritance. The Journal of Logic Programming, 1986 (3). [Bear 87] Bear, J. (1987). Feature-Value Unification with Disjunctions. Ms. SRI International, Stanford, CA. [Bisele 87] Eisele, A. (1987). Eine Implementierung rekur- Idve¢ Merkanalstzxtkturma mlt dlsjunktiven Angaben. Diplomarbeit. Institut f. Informatik, Stuttgart. [Bisele/I~rre 86] Eisele, A. and J. DSrre (1986). A Lexlcal Functional Grammar System in Prolog. In: Proceed/~s of COLING 1#86, Bonn. [Eisele/Schimpf 87] Eisele, A. and S. Sddmpf (1987). Eine benutzerfreund~che Softwareumgebttn g zur Entwick- lung yon LFGen. Studlenarbeit. IfI, Stuttprt. [Gazdar et al. 85] Gazdar, G., E. Klein, G. Pullum and I. Sag (1985). Ge~-m//m/Ph~e $~-~z~ G~z~r. Lon- don: Blackwell, [Johnson S6] John~m, M. (19S6), Cm~e~ ~th P~r PcZ/~ Form~ Ms. CSLI, Stanford, California. [Kaplan/Brem~n 82] Kaplan, R. und J. Bresnan (1982). 
Lexical Ftmctional Grin,mr:. A Formal System for Grammatical Pc, presentatlon. In: J. Bresnan (ed.), The MenM/Re~ewtat/o~ o] Gmmn~//r.~ Re/6//o~. MIT Press, Cambridge, Mammdm~tts. [Kartt~men 84] Karttunen, L. (1984). Feattwes and Value~ In: Proeesdi~, o] COLIN G 1#8~, Stanford, CA. [Kasper 87a] Kasper, R.T. (1987). Feature Structures: A Logical Theory with Application to Language Analysia Ph.D. Thesis. University of Michigan. [Kasper 871)] Kasper, R.T. (1987). A Unification Method for Disjunctive Feature Descriptions. In: P~-~b~m oJ the P.Sth Anmtal Mee6~ o] the A CL. Stanford, CA. [Kasper/Ronnds 86] Kasper, R.T. and W. Rounds (1986). A Logic~l Semantics for Feature Structures. In: P~- ee.edi~ o/the ~.4th Annzmi Meetiwj o/ the ACL. Columbia Univenfity, New York, NY. [Kay 79] Kay, M. (1979). Functkmal Grammar. In: C. Chiare]lo et al. (eds.) Pn~dings o/the 5th Ann~l Mee~ of the Be~dq ~g'=~:~c Soci~. [Kay 85] Kay, M. (1985). Parsing in Functional Unification Grammar. In: D. Dowty, L. Karttunen, and A. Zwicky (eds.) N,~t~ml l~n~ge Pardng, Cambridge, England. [Smolks/A~t-Kaci 87] Smolka, G. and H. A~t-Kaci (1987). Inheritance Hierarchies: Semantics and Unification. MCC Tech. Pep. No AI-057-87. To appear in: Journal of Symbolic Logic, Speci~l Issue on Unification, 1988. [Uszkorelt 86] Uszkoreit, H. (1986). Categorial Unification Grammars. In: /xtmze.d/~s of COLJ~G 1#86, Bonn. 294
THE INTERPRETATION OF RELATIONAL NOUNS Joe de Bruin" and Remko Scha BBN Laboratories 10 Mouiton Street Cambridge, MA 02238, USA ABSTRACT This paper 1 decdbes a computational treatment of the semantics of relational nouns. It covers relational nouns such as "sister.and "commander; and focuses especially on a particular subcategory of them, called function nouns ('speed; "distance', "rating'). Rela- tional nouns are usually viewed as either requiring non-compositional semantic interpretation, or causing an undesirable proliferation of syntactic rules. In con- trast to this, we present a treatment which is both syntactically uniform and semantically compositional. The core ideas of this treatment are: (1) The recog- nition of different levels of semantic analysis; in par- ticular, the distinction between an English-oriented and a domain-oriented level of meaning represen- tation. (2) The analysis of relational nouns as denoting relation-extensions. The paper shows how this approach handles a variety of linguistic constructions involving relational nouns. The treatment presented here has been im- plemented in BBN's Spoken Language System, an experimental spoken language interface to a database/graphics system. 1 RELATIONAL NOUNS AND THEIR DENOTATIONS When Jean Piaget faced his nine year old subject Hal with the question ~/Vhat's a brother?; the answer was: "When there's a boy and another boy, when there are two of them." And, with a greater degree of formal precision, ten year old Bern replied to the same question: ",4 brother is a relation, one brother to another. "[2] [8] What these children are beginning to be able to articulate is that there is something wrong with the experimenter's question as it is posed: it talks about "brothers" as if they constituted a natural kin d, as if there is a way of looking at an individual to find out whether he is a brother. But "brother" is normally not used that way - a property which it shares with words like "co-author; "commander', "speed', "distance', and "rating'. Nouns of this sort are called relational nouns. As 1This research was supported by the Advanced Research Projects Agency of the Depmlment of Defense under Contract No, NO0014-87-C-0(~5. "Current address: Cartesian Products BV, WG Plem 316, 1054 SG Amsterdam, The Nathedands. we shall see in a moment, their semantic properties differ significantly from those of other nouns, so that the standard treatments of nominal semantics don't apply to them. The problem of the semantic inter- pretation of relational nouns constitutes the topic of this paper. We shall argue that this problem is indeed a semantic one, and should preferably not be treated in the syntax. The semantic treatment that we present uses a multilevel semantics framework, and is based on the idea of assigning relation extensions as denotations to relational nouns. Relational nouns are semantically unsaturated. They are normally used in combination with an implicit or explicit argument: "John's brother.. The argument of a relational noun, if overtly realized in the sentence, is connected to the noun by means of a relation- denoting lexical element: the verb "have" or one of its semantic equivalents (the geni~ve and the preposi- tions "of" and "with): "John has a sister', "John's sister; "a sister of John's; "a boy with a sister" It has been noted that these lexical items interact differently with relational nouns than they do with other nouns. 
[7] Compare, for instance, the noun "car" in (1)/(labcd) with the relational noun "brother" in the parallel sentences (2)/(2abcd): (1) entails (labcd), but the corresponding (2) does not entail (2abcd).2 (1) "John's cars are wrecks." (la) "Some wrecks of John's are cars." ( l b) "Some wrecks are John's." (1 c) "Some ca~ are John "S. " ( l d) "John has wrecks." (2) "John's brothers are punks." #(2a) "Some punks of John's are brothers." #(2b) "Some punks are John's." #(2c) "Some brothers are John's." #(2d) "John has punks." A particular subcategory of the relational nouns, that we shall consider in some detail, is constituted by the function nouns; they are semantically distinct in that for every argument they refer to exactly one en- tity, which is an element of a linear ordering: a hum- ZWe refrain from saying that (2abod) are ungrammatical. Because of the semantic open-endedness of "have" and the genitivQ, these sentences can certainly be wellformod and meaningful, if uttored in an appropriate context. The issue at stake is that the inteqDreta~on whic~ is the saJient one for the genitive in (2) is not avaUable for the ¢ommponciing elements in (2abcd). Sentences displaying this property have been marked with the #-sign (rathor than the ungrammoticality-aotorisk) in this paper. 25 bet, an amount, or a grade. Examples are "length", "speed', "distance", "rating". Function nouns can be used in constructions which exclude other nouns, relational as well as non-relational. Compare, for in- stance: (3) "The USS Frederick has a speed of 15 knots." #(3a) "John has a car of ~is wreck." #(3b) "John has a brolher of Peter." The examples above show that there are sig- nificant semantic differences between phrases con- necting relational nouns to their functions/values, and the corresponding, similarly structured phrases built around other nouns. This suggests that the standard treatment of ordinary nouns cannot be applied directly to relational nouns and yield correct results. To con- clude this introductory section, we investigate this is- sue in a little more detail. Assume a semantic framework with the following, not very unusual, features. Nouns are analyzed as set-denoting constants; concomitantly, adjectives are analyzed as one-place predicates, prepositions as two-place predicates, verbs as n-place predicates. Plural noun phrases with "the" or a possessive denote sets which have the same semantic type as the noun around which they are built: "John's cars" denotes a particular set of cars. In this approach, the represen- tation of the noun phrase "Peter's s/stern'would be: {x • SISTERS / POSSESS(PETER, x)}, where SISTERS denotes the set of persons who are a sister, and POSSESS represents the possessive rela- tion indicated by the genitive construction. Now this expression does not have the right properties. It lacks necessary information: the predi- cate ~ x: POSSESS(PETER, x)) applies to elements of the extension of SISTERS; it cannot take into ac- count how this extension was defined. For instance, if in a pa~cular world the set of sisters is co-extensional with the set of coauthors, the approach just outlined would incorrectly assign to "Peter's sisters" the same denotation as to "Peter's co-aulhors". It is clear what the source of the problem is: the semantic representations for relational nouns con- sidered above denote simple sets of individuals, and do not contain any information about the relation in- volved. To salvage a uniform compositional treatment, a richer representation is needed. 
One might think of invoking Montague's individual concepts [3] [6], or enriching one's ontology with qua-individuals (distinguishing between Mary qua sister and Mary qua aunt) [4]. In section 4 we will present our solution to this problem. First, we discuss why we did not choose a more syntactically oriented approach.

2 AGAINST SYNTACTIC TREATMENTS

Often, the complexities mentioned above are taken to require a distinction between intransitive common nouns and transitive common nouns in the syntax, with a concomitant proliferation of syntactic rules. Instead, we have chosen to extend a treatment of "ordinary" nouns only at the semantic processing stage. We shall now indicate some of the reasons for this choice.

Relational nouns are semantically dependent on an argument. In this respect, they are more reminiscent of verbs than of standard nouns like "boy" or "chair". Most verbs of English have one or more argument places that must be filled for the verb to be used in a syntactically/semantically felicitous way; this property of verbs is probably an important reason for the persisting tendency to analyze them as n-place predicates rather than sets of situations. The semantic similarity between relational nouns and verbs has given rise to treatments which model the syntactic treatment of nouns on the treatment of verbs: one introduces lexical subcategories for nouns which indicate how many arguments they take and how these arguments are marked; the syntactic rules combine N-bars or noun phrases with genitive phrases and preposition phrases, taking these subcategorizations into account. [15] We will now argue, however, that from a syntactic point of view such a move is unattractive.

Syntactically, relational nouns do not behave very differently from "ordinary" nouns. They combine with adjectives, determiners, preposition phrases and relative clauses to form noun phrases with a standard X-bar structure; and the noun phrases thus constituted can participate in all sentence-level structures that other noun phrases partake in. Also, no nouns have syntactic properties that would be analogous to the sentence-level phenomenon of a verb obligatorily taking one or more arguments. The overt realization of the arguments of a "transitive noun" is always optional.

Finally, we may note that relational nouns can be connected to their arguments/values by a variety of verbs and prepositions, which constitute a semantic complex that is also used, with exactly the same structure but with a different meaning, to operate on non-relational nouns. Compare, for instance:

"The Chevrolet of Dr. Johnson" / "The speed of Frederick"
"Dr. Johnson's Chevrolet" / "Frederick's speed"
"The Chevrolet that Dr. Johnson has" / "The speed that Frederick has"
"Dr. Johnson acquired a rusty Chevrolet" / "Frederick acquired a formidable speed"
"A philosopher with a rusty Chevrolet" / "A ship with a formidable speed"

The same set of terms is used in English for the ownership relation, for the part-whole relation, and for the relation between a function and its argument. These terms (like "of", "have" and "with") are highly polysemous, and any language processing system must encompass mechanisms for disambiguating their intended meaning in any particular utterance. To summarize: relational nouns do not distinguish themselves syntactically from other nouns, and they mark their function-argument structures by means of polysemous descriptive terms.
We therefore conclude that it would be theoretically elegant as well as computationally effective to treat relational and non-relational nouns identically at the syntactic level, and to account for the semantics of relational noun constructions by exploiting independently motivated disambiguation mechanisms. The remainder of this paper describes such a treatment. First, Section 3 discusses the multilevel semantics architecture which constitutes the framework for our approach. Section 4 presents our answer to a basic question about relational nouns: what should their denotations be? This section then goes on to describe the semantic transformations which derive the desired analyses of constructions involving relational nouns. Section 5 briefly discusses the interface with a Discourse Model, which is necessary to recover arguments of a relation that are left implicit in an utterance. Section 6 shows that our treatment is useful for the purpose of response-formulation in question-answering.

3 MULTILEVEL SEMANTICS

Our approach to the problem of relational nouns is based on the idea of multilevel semantics, the distinction between different levels of semantic analysis. [1] [10] In this approach, interpreting a natural language sentence is a multi-stage process, which starts out with a high-level meaning representation which reflects the semantic structure of the English sentence rather directly, and then applies translation rules which specify how the English-oriented semantic primitives relate to the ones that are used at deeper levels of analysis.

At every level of analysis, the meaning of an input utterance is represented as an expression of a logical language.³ The languages used at the various levels of analysis differ in that at every level the descriptive constants are chosen so as to correspond to the semantic primitives which are assumed at that level.

³BBN's Spoken Language System uses a higher-order intensional logic based on Church's lambda-calculus, combining features from PHLIQA's logical language [5] with Montague's Intensional Logic [6].

At the highest semantic level, the meaning of an input utterance is represented as an expression of the English-oriented Formal Language (EFL). The constants of EFL correspond to the descriptive terms of English. A feature of EFL which is both unusual and important is the fact that descriptive constants are allowed to be ambiguous. Within each syntactic category, every word is represented in EFL by a single descriptive constant, no matter how many senses the word may have. An EFL expression can thus be seen as an expression schema, where every constant can be expanded out in a possibly large number of different ways. (See [5] for details on the model theory of such a logic.)

The ambiguity of EFL follows from its domain-independence. All descriptive words of a language are polysemous, and only when used in the context of a particular subject domain do they acquire a single precise meaning - a meaning which cannot be articulated independently of that subject domain. Even within one subject domain, many words have a range of different meanings. Joint representations for such sets of possible expansions are computationally advantageous; and when the range of possibilities is defined in an open-ended way, they are even necessary. Such cases occur when we attempt to account for the interpretation of metonymy, metaphor and nominal compounds [12], or the interpretation of multilevel plural noun phrases [11].
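As a rough illustration of EFL's ambiguous constants, the following Python sketch enumerates the candidate expansions of a formula. Everything here (the nested-tuple encoding, the toy rule table, the constants HAVE.EFL and GUAM.EFL) is an assumption for exposition, not the system's actual representation.

```python
EFL_TO_WML = {              # each ambiguous EFL constant lists its expansions
    "HAVE.EFL": ["OWN", "ARG-OF"],
    "GUAM.EFL": ["GUAM-LOC", "GUAM-SHIP"],
}

def expand(formula):
    """Enumerate all WML readings of an EFL formula (no type filter yet)."""
    if isinstance(formula, str):
        return EFL_TO_WML.get(formula, [formula])
    head, *args = formula
    readings = [(h,) for h in expand(head)]
    for arg in args:
        readings = [r + (a,) for r in readings for a in expand(arg)]
    return readings

# A toy EFL formula with two ambiguous constants:
print(expand(("HAVE.EFL", "FREDERICK", "GUAM.EFL")))
# -> four candidate WML formulas; the type system discussed below
#    would discard the nonsensical combinations.
```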
The logical language used at the domain-dependent level of representation is called the World Model Language (WML). This is an unambiguous language, with an ordinary model-theoretic interpretation. Its constants are chosen to correspond to the concepts which constitute the domain of discourse.⁴

⁴To provide a smooth interface with underlying application systems, there is a third level of semantic interpretation. The language used at this level is called the Data Base Language (DBL). Its constants stand for the files and attributes of the database to be accessed, and the available graphics system operations and their parameters.

We can illustrate the distinction between EFL and WML by means of an example involving relational nouns. Compare (4) and (5) below. Sentence (4) will usually be translated into something like (4a):⁵

(4) "John has a house in Hawaii."
(4a) ∃ h ∈ {h ∈ HOUSES | IN(h, HAWAII)}: HAVE(JOHN, h)

⁵Accommodating discourse anaphora may motivate a different treatment of the indefinite noun phrase, representing its semantics by a Skolem constant or a similar device, rather than by the traditional existential quantifier. For the present discussion we may ignore this issue.

Now consider (5) instead; a single-level architecture would have to analyse this sentence as (5b) rather than (5a), since (5b) is the representation one would prefer to end up with.

(5) "Frederick has a speed of 15 knots."
(5a) ∃ c ∈ {c ∈ SPEEDS | OF(c, amount(15, KNOTS))}: HAVE(FREDERICK, c)
(5b) F-SPEED(FREDERICK) = amount(15, KNOTS)

In a multilevel semantics architecture, however, one would prefer to maintain a completely uniform first stage in the semantic interpretation process, where (5) would be treated exactly as (4), and therefore be analyzed as (5a). By applying appropriate EFL-to-WML translation rules, the EFL expression (5a) would then be transformed into the WML expression (5b). Taking natural language at semantic face value thus simplifies the process of creating an initial meaning representation. The remaining question then is whether one can in fact write EFL-to-WML translation rules which yield the desired results. This is the question we will come back to in section 4. In the remainder of the present section, we first give some more detail on the general properties of the translation rules and the logical languages.

The interpretive rules which map syntactic structures onto EFL expressions are compositional, i.e., they correspond in a direct way to the syntactic rules which define the legal input strings. There is a methodological reason for this emphasis on compositionality: it makes it possible to guarantee that all possible combinations between syntactic rules are in fact covered by the interpretive rules, and to minimize surprises about the way the rules interact. Similar considerations apply when we think about the definition of the EFL-to-WML translation: we wish to guarantee that the WML translations of every EFL expression are defined in an effectively computable way, and that the different rules which together specify the translation interact in a predictable fashion. This is achieved by specifying the EFL-to-WML translation using strictly local rules: rules operating only on constants, which specify for every EFL constant the WML expressions that it translates into. Translation by means of local rules, which expand constants into complex expressions, tends to create fairly large and complicated formulas.
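A strictly local rule never inspects context: it rewrites one constant wherever it occurs. A minimal sketch of that idea follows; the nested-tuple formulas and the stand-in constant VALUE-EQ are my assumptions (the real rules expand constants into lambda-terms that are then beta-reduced).

```python
def substitute(formula, constant, replacement):
    """Replace every occurrence of `constant` in a nested-tuple
    formula by `replacement` (a constant or a sub-formula)."""
    if formula == constant:
        return replacement
    if isinstance(formula, tuple):
        return tuple(substitute(f, constant, replacement) for f in formula)
    return formula

# Toy EFL restriction for "a speed of 15 knots":
efl = ("OF", "s", ("amount", 15, "KNOTS"))

# A local rule such as OF => (lambda u,v: u[2] = v), abbreviated here
# to a named WML constant:
wml = substitute(efl, "OF", "VALUE-EQ")
print(wml)   # ('VALUE-EQ', 's', ('amount', 15, 'KNOTS'))
```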
The result of the EFL-to-WML translation is therefore processed by a logical simplification module; this keeps formulas from becoming too unwieldy to handle and impossible to evaluate.

Local rules by themselves do not specify what combinations between them will lead to legitimate results. Since the rules can be applied independently of each other, we need a separate mechanism for checking the meaningfulness of their combined operation. This mechanism is the semantic type system. EFL, WML and DBL are typed languages. This means that for every expression of these languages, a semantic type is defined. The denotation of an expression is guaranteed to be a member of the set denoted by its type. In WML, for instance, FREDERICK has the type SHIPS, which denotes the set of all ships; GUAM and INDIAN-OCEAN have the type LOCATIONS, which denotes the set of all locations; CARRIERS and SHIPS both have the type SETS(SHIPS), which denotes the powerset of the set of all ships; F-SPEED has the type FUNCTIONS(U(SHIPS, PLANES, LAND-VEHICLES), AMOUNTS(SPEED-UNITS)), which denotes the set of functions whose domain is the union of the sets of ships, of planes and of land vehicles, and whose range is the set of amount-expressions whose units are members of the set of speed-units.

Given the types of the constants occurring in it, the type of a complex expression is determined by formal rules. For instance, the expression F-SPEED(FREDERICK) would have the type AMOUNTS(SPEED-UNITS). The rules which define the types of complex expressions also define when an expression does not have a legitimate type, and is therefore not considered to be a bona fide member of the language. For instance, F-SPEED(GUAM) does not have a legitimate type, because the type-computation rule for function-application expressions requires that the type of the argument not be disjoint with the domain of the function.

The semantic type constraints make it possible to express the possible interpretations of ambiguous EFL constants by means of local translation rules, without running the danger of creating spurious nonsensical combinations. For instance, if "Guam" were the name of a ship as well as the name of a location, there could be one EFL constant GUAM.EFL with two WML expansions: GUAM-LOC with type LOCATIONS and GUAM-SHIP with type SHIPS. Applying the EFL-to-WML rules to F-SPEED(GUAM.EFL) would nevertheless yield only one result, since the other combination would be deemed illegitimate.

In the next section we show how relational noun denotations and EFL-to-WML translations may be chosen in such a way that sentences involving relational nouns, after an initially uniform treatment, end up with plausible truth conditions - so that, for instance, (5) above can be initially analyzed as (5a) and then translated into (5b) in a principled way.
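The GUAM example can be sketched directly. Assuming toy types encoded as Python sets of sorts (the real system's type language is far richer), a function application survives only if the argument type is not disjoint from the function's domain:

```python
SHIPS = {"ship"}
LOCATIONS = {"location"}

TYPES = {
    "F-SPEED": ("function", SHIPS | {"plane", "land-vehicle"},
                "AMOUNTS(SPEED-UNITS)"),
    "FREDERICK": SHIPS,
    "GUAM-LOC": LOCATIONS,
    "GUAM-SHIP": SHIPS,
}

def apply_type(fn, arg):
    """Return the result type of fn(arg), or None if anomalous."""
    kind, domain, result = TYPES[fn]
    assert kind == "function"
    if TYPES[arg] & domain:     # argument type overlaps the domain
        return result
    return None                 # disjoint: expression is filtered out

print(apply_type("F-SPEED", "FREDERICK"))  # AMOUNTS(SPEED-UNITS)
print(apply_type("F-SPEED", "GUAM-LOC"))   # None: anomalous
print(apply_type("F-SPEED", "GUAM-SHIP"))  # the surviving expansion of GUAM.EFL
```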
4 MULTILEVEL SEMANTICS FOR RELATIONAL NOUNS

The treatment we propose is based on a simple, yet powerful idea: analyse a relational noun as denoting the extension of the corresponding relation R (i.e., the set of pairs <x,y> such that R(x,y)), and allow predicates to apply not only to individuals but also to such pairs.⁶ As an example, consider again the phrase "Peter's sisters" that we discussed in section 1 above. In the treatment we propose, this phrase would get the EFL analysis (6a).

⁶Terminology: We assume directed relations. If <x,y> is a pair in a relation-extension, we call x the argument and y the value.

(6) "Peter's sisters"
(6a) {x ∈ R-SISTER | POSSESS(PETER, x)},

where R-SISTER, with the type⁷ U(MALES, FEMALES) X FEMALES, denotes the extension of the sister-relation, and where POSSESS has as one of its types: FUNCTIONS((U(MALES, FEMALES) X FEMALES), TRUTHVALUES).

⁷Notation: A X B denotes the set of pairs <x,y> such that x is in the denotation of A and y is in the denotation of B.

(6a) can be transformed into a plausible expression for (6) by applying the translation rule:

POSSESS => (λ u,v: u = v[1])

where u has type THINGS and v has type THINGS X THINGS. Applying this rule to (6a) yields, after β-reduction:

(6b) {x ∈ R-SISTER | PETER = x[1]},

which is equivalent to:

(6c) {<u,v> | u = PETER & R-SISTER(u,v)}

Thus, we see that by allowing the semantic translation of "Peter's" to select over pairs consisting of a person and the sister of that person, we can end up with a representation of "Peter's sisters" which comes close to having the right denotation: it denotes the correct set of persons, but they are still paired up with Peter. This "extra information" is of course a problem. For instance, "Peter's sisters are Mary's aunts." asserts the equality of two sets of persons, not two sets of pairs of persons.

It turns out that we have two distinct cases to deal with: to account for the interaction between a relational noun and the phrases which indicate its arguments and values, we would like to treat it as denoting a relation-extension; but to account for its interaction with everything else, we would like to treat it as denoting a set of individuals. In order to make the relational treatment yield the right results, we must assume that part of the meaning of ordinary descriptive predicates is an implicit projection-operator, which projects tuples onto their value-elements. This is the solution we adopt. We formalize it by means of a meaning-postulate schema which applies to every function F which is not among a small number of explicitly noted exceptions:

∀ x,y: F(x) ≡ F(<y,x>)

The copula "be" is not an exception to this meaning-postulate schema: it operates on values rather than relation-elements. This is the reason why "John" is not available as an argument for "brother" in (2a) and (2c) above ("Some punks of John's are brothers." / "Some brothers are John's.").

We shall now consider the actual EFL-to-WML translation rules which handle the relational nouns in a little more detail. The EFL relations have many different translations into WML; which ones are relevant in a given case is decided by considering the semantic types of the arguments to which they are applied. Consider again, for example, the part of the EFL-to-WML translation rules that deals with the interpretation of the possessive relation as specifying a relational argument, as in "Peter's sister", "Frederick's speed":

POSSESS => (λ u,v: u = v[1])

where u has type THINGS and v has type THINGS X THINGS. Being a local translation rule, this rule could be applied to any occurrence of POSSESS in an EFL formula. However, many such applications would give rise to semantically anomalous WML formulas (with necessarily denotationless sub-expressions), which are filtered out if there are any other non-anomalous interpretations.
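A sketch of the two roles just described, with a toy extension R_SISTER (all data is hypothetical; Python is 0-indexed, so the paper's x[1] becomes x[0]):

```python
R_SISTER = {("Peter", "Mary"), ("Peter", "Sue"), ("John", "Ann")}  # (argument, value)

# POSSESS => (lambda u,v: u = v[1]): the possessor selects pairs by
# their argument element (the paper's v[1]; index 0 here).
peters_sisters = {x for x in R_SISTER if "Peter" == x[0]}
print(peters_sisters)          # {('Peter', 'Mary'), ('Peter', 'Sue')}

# Meaning-postulate schema: an ordinary predicate applied to a pair
# implicitly projects onto the value element.
def lift(predicate):
    def lifted(x):
        return predicate(x[1]) if isinstance(x, tuple) else predicate(x)
    return lifted

is_punk = lift(lambda person: person == "Mary")   # toy predicate
print({x for x in peters_sisters if is_punk(x)})  # {('Peter', 'Mary')}
```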
For instance, the above rule for POSSESS would yield an anomalous expression if applied to the representation of "Peter's cars", because the EFL constant CARS does not denote a set of pairs but a set of individual entities. It would also yield an anomalous expression if applied to "The USS Frederick's sisters", because the type of the EFL constant FREDERICK, which is SHIPS, is disjoint with the argument type of R-SISTER, which is U(MALES, FEMALES).

To avoid spurious generation of anomalous expressions, the semantic types of the arguments of an EFL function or EFL relation are checked before the EFL-to-WML rule for that function or relation is applied. For instance, the above rule for POSSESS will only be applied to an expression of the form POSSESS(A,B) if A and B have types α and β such that ∃P,Q: β = P X Q & NON-EMPTY(α ∩ P).

As noted above, the interdefinability which exists between "have", "of", the genitive, and "with", when they are used, for instance, in reference to ownership, carries over to their use for indicating the relation between a relational noun and its argument. Thus, the EFL representations of "of", "have", and "with" have WML translations which, modulo the order of their arguments, are all identical to the rule for POSSESS discussed above.

Function nouns, like "speed" and "length", induce a special interpretation on preposition phrases with "of". Such phrases can be used to connect the function noun with its value. The treatment of relational nouns sketched in the previous section can also accommodate this phenomenon, as we shall show now. Consider example (7) below, which is identical to (5) above. It gets, in the treatment we propose, the EFL analysis (7a); this analysis is exactly analogous to the one that a syntactically similar sentence involving a non-relational noun would get. (Cf. (4) and (4a).)

(7) "Frederick has a speed of 15 knots."
(7a) ∃ s ∈ {s ∈ F-SPEED | OF(s, amount(15, KNOTS))}: HAVE(FREDERICK, s)

It is the task of the EFL-to-WML translation rules to define a transformation on EFL expressions which would turn (7a) into (7b) or a logically equivalent formula.

(7b) F-SPEED(FREDERICK) = amount(15, KNOTS)

To achieve the desired result, we need a rule for HAVE which is precisely analogous to the rule for POSSESS above; and we need a rule for OF which is not analogous to the rule for POSSESS above: "a speed of 15 knots" is unlike "the speed of the USS Frederick" in that in the former case we must connect the relation with its value rather than its argument. The rule for OF that we need here is as follows:

OF => (λ u,v: u[2] = v)

Note that different rules for one EFL constant can coexist without conflict, because of the assumption of lexical ambiguity in EFL. (In the general case, an EFL expression will have several WML expansions for this reason; often, many rule applications will be blocked by semantic type-checking.)

This basic approach makes it possible to transform the EFL representation of any of the constructions shown in the examples in section 1 into reasonable World Model Language and Data Base Language formulations of the intended query. We shall illustrate the process of applying the EFL-to-WML translations and logical simplifications in a little more detail while showing how to extend this treatment to function nouns which can take more than one argument. Such nouns interact with specific kinds of preposition phrases to pick up their arguments. For instance: "Frederick's distance to Hawaii", "the distance from Hawaii to Guam". As an example, we will now discuss the noun "readiness" as used in the U.S. Navy, which designates a two-argument function.
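Before turning to "readiness", here is a sketch of the type gate on POSSESS introduced above (∃P,Q: β = P X Q & NON-EMPTY(α ∩ P)). The encoding of pair types as tagged tuples and all the particular types are my own simplifications:

```python
PERSONS = frozenset({"male", "female"})

TYPES = {
    "PETER": PERSONS,
    "FREDERICK": frozenset({"ship"}),
    "R-SISTER": ("pair", PERSONS, frozenset({"female"})),  # P X Q
    "CARS": frozenset({"car"}),                            # plain set type
}

def possess_rule_applies(a, b):
    """True iff the relational-argument reading of POSSESS(A, B)
    yields a non-anomalous expression."""
    b_type = TYPES[b]
    if not (isinstance(b_type, tuple) and b_type[0] == "pair"):
        return False                      # beta must be a pair type P X Q
    _, p, _q = b_type
    return bool(TYPES[a] & p)             # alpha must overlap the argument sort P

print(possess_rule_applies("PETER", "R-SISTER"))      # True
print(possess_rule_applies("PETER", "CARS"))          # False: not a pair type
print(possess_rule_applies("FREDERICK", "R-SISTER"))  # False: disjoint sorts
```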
"Readiness; as used in the Navy baffle manage- merit domain, refers to the degree to which a vessel - to be more precise, a unit - is prepared for combat or for a specific mission. This degree is indicated on a five-point scale, using either c-codes (C1 to C5), if referring to combat readiness, or m-codes (M1 to M5), if referring to mission readiness. The readiness for combat can furthermore be the overall readiness (the default case) or the readiness with respect to one of the four resource readiness areas: personnel, train- ing, equipment or supplies. Therefore, READINESS-OF is a function which maps two ar- guments, an element of SHIPS and an element of READINESS-AREAS, into READINESS-VALUES. Consider as an example the noun phrase "/he readiness of Frederick: If we ignore for the moment the effect of the "singular the" operator (see section 5), its initial translation is: {x • READINESS-OF I OF(x, FREDERICK)} The parts of this expression are translated as follows. A logical transformation translates the function- constant READINESS-OF into the following equiv- alent expression, which will be convenient for sub- sequent processing: {x • domain (READINESS-OF) X range(READINESS-OF) / READINESS-OF(x[1]), x[2]} which in its turn is equivalent to {x ~ (SHIPS X READINESS-AREAS) X READINESS.VALUES / READINESS-OF(x[ 1]) = x[2]} The relation OF is eliminated in the EFL-to-WML transformation by a variant ~ of the translation rule mentioned above. It transforms OF(x, FREDERICK) into x[1][1], FREDERICK The net result of these logical and descriptive trans- formations is the following expression: {x ~ {z • (SHIPS X READINESS-AREAS) X READINESS-VALUES / READINESS-OF(# 1]) ,, z[2]} / #1][1] ,, FREDERICK} This expression is then simplified to: {z G ({FREDERICK~ X READINESS-AREAS) X READINESS-VALUES / READINESS-OF(z[1]), z[2]} which in its turn can be transformed into a logically equivalent but more optimally evaluable expressions: (for: {FREDERICK} X READINESF-AREAS, apply: ~ x: <x, READINESS-OF(x)>)) (The actual system may apply further transformations (from WML into DBL), if it has to account for dis- crepancles between the database structure and the canonical domain model, possibly followed by further optJmizations at the DBL leveL) Other restrictions on "readiness; as in "the readi- ness o.n.n personnel', "the personnel readiness, or "a c l readiness', are handled in an analogous manner: ON -> ~u,v: u[l][2],,v) PREMOD ,,> (~ u,v: u[l][2] ,, v) PREMOD ,,> ~ u,v: u[2] - v) where PREMOD is the EFL translation of the elided relation in a noun-noun compound. (Note that if the same preposition is used with different nouns to mark different argument places, we need a more elaborate notation which identifies the arguments of a function by labels rather than by position.) *MuIti-an:jument func~ns are viewed as functions on n-tuplas. OF specifies, in this case, the first element of the argument-n-tuple. °Notation: (for: A. Iplldy: F) denotas the beg contmning the results of all applications of the function F to elements of the set A. 30 Because of the essentially local character of the descriptive transformations on HAVE, OF, ON, PREMOD, etc., and the completely general character of the simplifications dealing with intersections of sets and tuples, a small number of transformations (a few for each EFL relation) covers a wide variety of expres- sions. 5 IMPLICIT ARGUMENTS. 
5 IMPLICIT ARGUMENTS

One or more of the arguments of a relation may be unspecified in the input sentence, while the intent of the utterance is nevertheless that a particular argument should be filled in. The present section discusses briefly how this issue can be dealt with during a phase of semantic processing which follows the EFL-to-WML translation.

The most important case arises from the use of definite descriptions in the English input sentence. The phrase "the readiness of Frederick", for instance, leads to an expression which has the operator "the" wrapped around the expression which represents "readiness(es) of Frederick". "the" is a pragmatic operator, which selects the single most salient element out of the set that it operates on.

Where the expression representing "readiness of Frederick on personnel" would denote a set containing exactly one tuple, the expression representing "readiness of Frederick" denotes a set containing a number of different tuples: ones with EQUIPMENT, PERSONNEL, OVERALL, etc., filled in as the second argument. Eliminating the "the" operator consists in accessing a Discourse Model to find out which of the fillers of the second argument place is contextually most accessible. (We assume that available discourse referents are stored at every level of embedding in a recursive model of discourse surface structure, such as [9].) If none of the readiness areas were mentioned in an accessible discourse constituent, the system defaults to the "unmarked" readiness area, i.e., OVERALL.

Plural definite noun phrases are treated in a similar fashion. For instance, "the readinesses of Frederick" leads to an expression in which a pragmatic operator selects the contextually salient multiple-element subset of the tuples in the extension of READINESS-OF which have FREDERICK as a first argument. In this case, if no particular subset of the readiness areas can be construed as a discourse referent, the system defaults to the assumption that the overall readiness plus the four resource readinesses are intended. (Another possibility being the reference to the ship's readiness history: a sequence of past, current and projected future readinesses.)

6 RELATION EXTENSIONS AS ANSWERS

The decision to treat relational nouns as denoting relation extensions has an immediate consequence, of some practical importance for question-answering systems, concerning the way in which wh-questions involving relational nouns are answered. For example, the request "List the speeds of the ships in the Indian Ocean." could be answered in three ways, of ascending informativeness: 1) with a set of speed values (possibly of smaller cardinality than the set of ships in the Indian Ocean); 2) with a bag of speed values (of the same cardinality as the set of ships); and 3) with a set of <ship, speed> ordered pairs, such that each ship is paired off with its speed.

Clearly, 3) is most likely to be the desired response (although it is possible to envision situations where responses 1) and 2) are desired). One cannot obtain this response, however, if the semantic translation of the noun phrase "the speeds of the ships in the Indian Ocean" does not retain the information of which speed goes with which ship. An important advantage of our approach to the relational noun problem is that it preserves this information, making 3) the normal response and 1) and 2) derivable from it.
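A sketch of how responses 1) and 2) fall out of 3) by projection (toy data; ship names invented):

```python
# 3) the tuple set: the most informative response, produced by default
speeds = {("Kennedy", 30), ("Frederick", 15), ("Nimitz", 30)}

bag_response = [speed for _, speed in speeds]   # 2) one value per ship
set_response = set(bag_response)                # 1) duplicates collapse

print(speeds, bag_response, set_response)
# Note the set response may be smaller than the number of ships.
```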
This may be compared to the "procedural semantics" approach to this same problem, as found in the work on LUNAR [14]. In this work, meaning is regarded as procedural in nature, and quantifications are represented in terms of nested iterations. The request "List the speeds of the ships in the Indian Ocean" would be represented as:

(FOR ALL X / SHIPS : (IN X INDIAN-OCEAN) ; (PRINT (SPEED X)))

where the action of this representation would be to iterate over the class SHIPS, for each member checking to see if it is IN the INDIAN-OCEAN, and if so, printing its speed. The PRINT operator is made "smart" enough to detect the occurrence of the free variable in its argument and to add in a printout the value of this variable for each iteration. Note that while this representation provides for the tuple response (3), and perhaps, if the "smartness" is made optional, for the bag response (2), the set response (1) would seem out of reach. In contrast, the approach this paper presents allows for all three, by generating as a default response the tuple set, and then optionally "projecting" on its second column.

7 CONCLUSION

Relational nouns are of primary importance for natural language interfaces to databases and expert systems, since they are commonly used to refer to database relations and to arithmetical functions. This paper has presented a treatment of relational nouns which manages to maintain uniformity and generality at the level of syntactic analysis and initial semantic interpretation. This treatment has been incorporated into the semantic framework of BBN's Spoken Language System without writing additional LISP code. The semantic transformations necessary for the treatment are all carried out by general algorithms which were part of the pre-existing semantic framework. Implementing the treatment consisted in writing descriptive (EFL to WML) translation specifications for the EFL relations involved with function nouns, and a few dozen logical transformations to supplement the existing set of simplifications.

Further work on this topic should investigate how our perspective on relational nouns carries over to an account of the temporal and spatial modifiers that can be used with any noun. This will then make it possible to explore its connections with the work on the semantics of time-dependent nouns that has been done in the Montague tradition. [3] [13]

ACKNOWLEDGMENTS

We thank David Stallard for important contributions to the ideas presented here; Jan Landsbergen for his share in the development of the conceptual framework that inspired this research; Damaris Ayuso and Sean Boisen for their assistance in applying our results to BBN's Spoken Language System.

REFERENCES

[1] Bronnenberg, W.J.H.J., H.C. Bunt, S.P.J. Landsbergen, R.J.H. Scha, W.J. Schoenmakers and E.P.C. van Utteren. The Question Answering System PHLIQA1. In L. Bolc (editor), Natural Language Question Answering Systems, pages 217-305. London: MacMillan, 1980.

[2] Clark, Eve V. What's in a Word? On the Child's Acquisition of Semantics in his First Language. In Cognitive Development and the Acquisition of Language. Academic Press, New York, 1973.

[3] Janssen, Theo. Individual Concepts are Useful. In Fred Landman and Frank Veltman (editors), Varieties of Formal Semantics, pages 171-192. Foris Publications, Dordrecht, 1984.

[4] Landman, Fred. Groups. Technical Report, Un. of Massachusetts, Amherst, MA, 1987.

[5] Landsbergen, S.P.J. and R.J.H. Scha. Formal Languages for Semantic Representation. In S. Allen and J. Petofi (editors), Aspects of Automatized Text Processing.
Buske, Hamburg, 1979.

[6] Montague, Richard. The Proper Treatment of Quantification in Ordinary English. In J. Hintikka, J. Moravcsik and P. Suppes (editors), Approaches to Natural Language. Reidel, Dordrecht, 1973.

[7] Partee, Barbara H. Compositionality. In Fred Landman and Frank Veltman (editors), Varieties of Formal Semantics. Foris Publications, Dordrecht, 1984.

[8] Piaget, Jean. Judgment and Reasoning in the Child. Humanities Press, New York, 1928.

[9] Polanyi, Livia and Remko Scha. A Syntactic Approach to Discourse Semantics. In Proceedings of Coling 84, pages 413-419. Stanford University, Stanford, CA, July, 1984.

[10] Scha, Remko J.H. Logical Foundations for Question Answering. Technical Report M.S. 12.331, Philips Research Labs, Eindhoven, 1983.

[11] Scha, Remko and David Stallard. Multi-Level Plurals and Distributivity. In Proceedings of the 26th Annual Meeting of the ACL. Buffalo, NY, 1988.

[12] Stallard, David. The Logical Analysis of Lexical Ambiguity. In Proceedings of the 25th Annual Meeting of the ACL, Stanford University, Stanford, CA, July, 1987.

[13] Thomason, Richmond H. Home is Where the Heart is. In Contemporary Perspectives in the Philosophy of Language. Un. of Minnesota Press, Minneapolis, 1979.

[14] Woods, William A. Semantics and Quantification in Natural Language Question Answering. In M. Yovits (editor), Advances in Computers, pages 1-87. Academic Press, 1978.

[15] Zoeppritz, Magdalena. The Meaning of 'of' and 'have' in the USL System. AJCL 7(2):109-119, 1981.
QUANTIFIER SCOPING IN THE SRI CORE LANGUAGE ENGINE

Douglas B. Moran
Artificial Intelligence Center
SRI International
333 Ravenswood Avenue
Menlo Park, California 94025, USA

ABSTRACT

An algorithm for generating the possible quantifier scopings for a sentence, in order of preference, is outlined. The scoping assigned to a quantifier is determined by its interactions with other quantifiers, modals, negation, and certain syntactic-constituent boundaries. When a potential scoping is logically equivalent to another, the less preferred one is discarded.

The relative scoping preferences of the individual quantifiers are not embedded in the algorithm, but are specified by a set of rules. Many of the rules presented here have appeared in the linguistics literature and have been used in various natural language processing systems. However, the coordination of these rules and the resulting coverage represents a significant contribution. Because experimental data on human quantifier-scoping preferences are still fragmentary, we chose to design a system in which the set of preference rules could be easily modified and expanded.

The algorithm described has been implemented in Prolog as part of a larger natural language processing system. Extensions of this algorithm are in progress.

INTRODUCTION

One of the major sources of ambiguity in sentences results from the different scopes that can be assigned to the various quantified noun phrases in the sentence. Part of the problem in determining the preferred scopings of quantifiers is the number of factors involved. For example, consider these three sentences:

John visited every house on a street. (1)
John visited every house on a square. (2)
John visited every patient in a private room. (3)

Each of these sentences has two quantifier scopings: in one, "every" has wider scope over "a," while in the other, "a" has the wider scope. However, the readings that most people obtain for these sentences are quite different. In (1), the reading in which "a" has wider scope is highly preferred; in (3), the reading in which "every" has wider scope is highly preferred; in (2), the reading with wide-scope "every" is preferred, but wide-scope "a" is also acceptable. A plausible explanation for the difference between (1) and (2) is that, since the typical house is located on a street but not on a square, the default preference represented by (2) is overridden by a conversational maxim of quantity--if "a street" has narrow scope, "on a street" would contribute too little information to justify its presence. A plausible explanation for the difference between (2) and (3) is based on the relationship among the components. The reading of (3) in which "a" is given wider scope is improbable because the domain of quantification for "every" would then be the single patient in the selected room--an infelicitous use of "every," whereas there is no similar problem in (2) because there are normally multiple houses on a square. Similarly, in

John visited a person on every committee. (4)
John visited a house on every street. (5)

the reading in which "a" has wider scope is reasonable for (4) but not for (5)--in a normal domain of discourse, it is conceivable that there could be a person who is on all of the committees, but it is highly improbable that the geometry of the streets is such that a single house could be located on all of them.
In (1), (3), and (5), discourse criteria and domain information seem to be the primary factors in determining the preferred quantifier scopings, whereas in (2) and (4), linguistic criteria seem to be the determining factors.

Our approach presumes that the determination of a sentence's preferred scoping can be divided into two phases, the first of which is the subject of the algorithm described here. In this initial phase, linguistic information is used to generate the possible quantifier scopings in order of preference. The relevant linguistic information consists of surface position, syntactic structure, and the relationship among the function words (determiners, modals, and negation). In the second phase (future work), domain and discourse information is applied successively to these scopings, modifying the scores produced by the first phase. We expect that the modifications will be only penalties, thus making it possible to identify the best choice when it is encountered (cutting off the processing of remaining scopings generated by the first phase).

The primary study of quantifier scoping preferences was done by VanLehn (1978). The experimental data reported therein was of limited usefulness in developing the algorithm described here--it was gathered and evaluated under assumptions arising from a different linguistic theory. We shall first present the rules that governed the structure of our design, then outline the algorithm.

This scoping algorithm has been implemented as a component of a larger system that is under continuing development. In this system, called the Core Language Engine or CLE (Alshawi et al., 1987), the semantic interpretation phase produces unscoped logical forms in which quantifier expressions are represented by quantifier terms (qterms). For example, the sentence "John saw a student" has the unscoped logical form¹

see'(john',qterm(a',X,student'(X)))

¹The logical form's syntax in the implementation is actually [see1,john1,qterm(a1,X,[student1,X])], but the more conventional notation will be used here for perspicuity.

Since the only permissible scope for this quantifier is the whole sentence, the qterm is raised to produce the scoped logical form

quant(∃,X,student'(X),see'(john',X))

The qterm expression can best be thought of as a quant expression before its scope has been established. In the above qterm and quant expressions, student'(X) is the restriction of the quantified variable X; that is, it specifies a set of the possible values of X over which the quantifier ranges. In the above quant expression, see'(john',X) is referred to as either the body or the scope of the quantifier. This treatment of the logical form of quantifiers follows that employed in many previous systems (e.g., LUNAR (Woods, 1977), Moore (1981), Barwise and Cooper (1981), and Hobbs and Shieber (1987)).

RULES AND PREFERENCES

Many of the following rules have appeared in various forms in multiple places in the literature, and most natural language processing systems include some mechanism for selecting a preferred quantifier scoping. However, the published descriptions of many of those systems' capabilities tend to be cursory, with the scoping rules utilized in the LUNAR system still among the best described in the NLP literature. Because of space limitations, it is not possible to cite much of this discussion, nor to compare this system to others.
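Before turning to the rules, the qterm-to-quant move above can be sketched outside Prolog as follows; the nested-tuple encoding and the helper pull_qterm are illustrative assumptions, not the CLE's representation:

```python
unscoped = ("see", "john", ("qterm", "a", "X", ("student", "X")))

def pull_qterm(form):
    """Find one qterm, replace it by its variable, return (qterm, body)."""
    if isinstance(form, tuple) and form[0] == "qterm":
        return form, form[2]
    if isinstance(form, tuple):
        for i, sub in enumerate(form):
            found = pull_qterm(sub)
            if found:
                qterm, new_sub = found
                return qterm, form[:i] + (new_sub,) + form[i + 1:]
    return None

qterm, body = pull_qterm(unscoped)
_, det, var, restriction = qterm
scoped = ("quant", {"a": "exists"}[det], var, restriction, body)
print(scoped)
# ('quant', 'exists', 'X', ('student', 'X'), ('see', 'john', 'X'))
```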
Rule 1 A quantifier A that is not in the restriction of quantifier B and that occurs within the scope of B cannot outscope any of the quantifiers in the restriction of B.

Rule 2 If a quantifier is raised past an operator, then any quantifier that occurs within its restriction must also be raised past that operator.

These rules, presented by Hobbs and Shieber (1987), can best be explained with examples.

A bishop visits every chapel by a river. (6)

has an unscoped logical form of

visit'(qterm(a',B,bishop'(B)), qterm(every',C,and(chapel'(C), by'(C,qterm(a',R,river'(R))))))

The following is one of the possible permutations of the quantifiers, but is not a valid scoping because the restriction of "every" ("chapel by a river") has been fragmented:

*quant(∀,C,chapel'(C), quant(∃,B,bishop'(B), quant(∃,R,and(river'(R),by'(C,R)), visit'(B,C))))

Similarly, for the sentence

John did not visit a chapel by a river. (7)

the quantifier permutation

*quant(∃,C,chapel'(C), not(quant(∃,R,and(river'(R),by'(C,R)), visit'(john',C))))

is not a possible scoping of the unscoped logical form

not(visit'(john',qterm(a',C,and(chapel'(C), by'(C,qterm(a',R,river'(R)))))))

Rule 3 For a set of quantifiers, which quantifier receives wide-scope preference can be determined by a pairwise comparison of the determiners. This comparison is based upon a combination of factors that include their relative strengths and surface positions, and whether or not either has been raised.

In many systems, determiners are assigned numerical strengths and these values are compared to determine what scope should be assigned to each quantifier. Such a ranking is implicit in our preference rules and can be viewed as a first approximation of the relationships represented by our rules. Our algorithm permits a set of properties to be associated with determiners and for these to be used in ascertaining which determiner has wide-scope preference. The properties currently employed are surface position (the integer index of the determiner) and a Boolean value indicating when a quantifier has already been raised.

Preference 3.1 There is a strong preference for "each" to outscope other determiners.

That "each" is the strongest determiner is a common feature of most quantifier-scoping treatments. However, the evidence for the relative strengths of the remaining quantifiers is much less clear---our current ranking of them is an ad hoc blending of those in TEAM (Grosz et al., 1987) and VanLehn (1978).

Preference 3.2 There is a strong preference for WH-terms to outscope all determiners except "each," which outscopes WH-terms.

In the unscoped logical forms currently produced, WH-words ("which," "who," "what") and phrases are represented as qterms. Our scoping-preference rules assign wide scope to "each" in

Which exams did each student pass? (8)

There is a reported dialect in which sentences of the above form are judged to be malformed, but that dialect was not found among our informants.
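Returning to Rules 1 and 2, the fragmentation constraint on the bishop/chapel example can be sketched as a check over a proposed scope order. The parent-map encoding of restrictions is my own simplification of what the qtree structure described later provides:

```python
restriction_of = {"R": "C"}   # R ("a river") lies in the restriction of C

def violates_rule1(order):
    """order lists quantified variables from widest to narrowest scope."""
    pos = {v: i for i, v in enumerate(order)}
    for r, parent in restriction_of.items():
        if pos[parent] < pos[r]:                     # r not raised out of parent
            for q in order[pos[parent] + 1 : pos[r]]:
                if restriction_of.get(q) != parent:  # an outside intervener
                    return True
    return False

print(violates_rule1(["C", "B", "R"]))   # True: the starred scoping above
print(violates_rule1(["C", "R", "B"]))   # False
print(violates_rule1(["R", "C", "B"]))   # False: R raised out of C entirely
```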
Al- though both scopings of this sentence are logically equivalent (as are those for (9) and (12)), wide- scope "the" seems to he the preferred reading. Our algorithm does not distinguish between spe- cific and nonspecific use of indefinite articles. It is debatable whether this belongs in quantifier scolP ing or in another part of the system. Preference 3.3 A logically weaker interpretation is preferred. This preference is strong when it maintains surface order, weak when it inverts sur- face order. 2 The quantifier order V'~ is weaker than ~/, ac- counting for the preferences in A man loves every woman. (14) Every man loves a woman. (15) In both sentences, the reading with wide-scope "eeerf is the preferred one; the reading with wide-scope "a" is possible for (14), but is very strained for (15). Rule 4 Raising a quantifier out of certain syntac- tic constituents changes the strength of its deter- miner. VanLehn presents an "embedding hierarchy" of the probability of a quantifier in the modifier of an NP being raised to have wider scope than the quantifier in the NP's head 2Vanl.mhn proposes a more general form of this preference--that, when comparing two quantifiers within the same ge~neral group, the "more numerous" one will have a preference for wider scope. For example, "many" would take wider scope over "few." However, for everything ex- cept "ever~'/"a," such preferences appear to he very slight. 35 PP > Reduced Relative Clause > Relative Clause A method frequently proposed to account for this distinction is to use, as a measure of the cost of raising, a count of the number of nodes in the syn- tactic structure over which the quantifier is raised. However, such accounts are acknowledged to have various deficiencies and to be overly sensitive to the syntactic representation used. We have cho- sen to permit rules to associate a cost for raising a quantifier with certain types of nodes (other nodes can be viewed as having zero costs). This capabil- ity of the system is currently invoked only on an all-or-nothing basis. Preference 4.1 A quantifier cannot be raised across more than one major clause boundary. A common rule in the quantifier-scoping litera- ture is "quantification is generally clause bound." While it is possible to generate sentences with acceptable readings when a quantifier has wider scope than the clause in which it occurs, we have been unable to find any examples showing that it can be raised out of two clauses. Preference 4.2 A quantifier cannot be raised out of a relative clause. This is a common restriction in many quantifier- scoping algorithms. In our system, this is not a special rule, but one of the preferences. Conse- quently, this could easily be modified from vever being permitted to being "highly unpreferred." Rule 5 In unscoped logical form, quantifiers can occur within the scope of an opaque operator. Whether or not to raise such a quantifier outside that operator is determined by a pairwise compar- ison between the operator and the determiner in the quantifier, as well as by their relative surface position. Preference 5.1 There is a strong preference for "some" to outscope negation. Preference 5.2 There is a preference for nega- tion to outscope %very." This preference is strong when it maintains surface order, weak when it doesn't. Different scopings of "some" and "every" under negation produce equivalent readings (3"~ is equiv- alent to --V). The preferred scopings for the two sentences John did not see someone. (16) John did not see everyone. 
(17) have equivalent logical forms quant(3,P, person'(P),not(see'(john',P))) not(quant(V,e, person'(e),see'(john',e))) Similarly, the preferred scopings of sentences Someone did not see John. (18) Everyone did not see John. (19) have equivalent logical forms quant(3,P, person'(P),not(see'(P, john'))) not(quant(V,e.person'(P),see'(e, john'))) The reading of (16), which would assign nar- row scope to "some" is produced by substituting "an~ 's for "some" : John did not see anyone. (20) This has the following logical form (no other scop- ings exist): not(q ua nt(3, P, person'(P),see'(joh n', P))) , which is logically equivalent to quant(V,e, per$on '(e),not(see'(john' ,e))) , which corresponds to the strongly "unpreferred" readings of (16) and (17). Similarly, the sentence No one saw John. (21) which has a scoped logical form of quant(V,P, person'(P),not(see' (p,john'))) corresponds to the "unpreferred" scoping for (18) and (19). One of LUNAR's scoping rules was that in the antecedent of "if-then" statements, quantifiers "some" and "anf should be assigned wide scope, and that "a" and "every" should be given nar- row scope. If such antecedents were treated as a negative environment (or equivalent thereto), the foregoing preferences could produce this effect. SThe CLE system does not currently provide a treat- merit of ",n~." However, within the qu~ati~er-scoping compon~t, "4n~" is treated ~ ~ potenti~dly am- biguotm between the usual universal quantifier, free- choice "any," and a ~cond form, polarity-sensitive "anlt," which occurs in conjunction with negative-polarlty items. Polarity-~mitive "anlh" is treated as & narrow.cope exis- telxtied quantifier (Ladtmaw, 1980). 36 Preference 5.3 There is a strong preference for free-choice "any" to have wider scope than modals. There is a strong preference for all other determin- ers that occur within the scope of a modal to have narrower scope than that modal. Did some student take every testf (22) Does some student take every test? (23) Some student took every test. (24) Some student takes every test. (25) Some student is taking every test. (26) For sentences (23), (25), and (26), there are two acceptable quantifier scopings. However, for (22) and (24), the scoping in which "every" is assigned narrower scope seems to be strongly preferred. We ascribe this to the presence in the logical form of a modal operator corresponding to the past tense. This effect is accentuated in (27), which ex- hibits an ambiguity resulting from whether "some teacher" is scoped inside or outside the modal, cor- responding to (28) and (29), respectively: Some teacher took every course. (27) Last summer, some teacher took every coarse(28) As a student, some teacher took every course~29) The scoping in which "every" outscopes "some ~ is possible, although unpreferred, for the reading • (28); but it is not a possible scoping for (29) in any dialect that we have encountered. Rule 6 If polarity-sensitive "any" occurs within a clause in which its trigger does not occur, it must be raised out of that clause. De Dicto/De Re The mechanism described here can provide an account for the de dicto/de re dis- tinction. Another ambiguity associated with quantifier terms is whether or not the referent is required to exist. In PTQ (Montagne, 1973), the sentence John seeks a unicorn. 
(30) is assigned a de dicto reading (which does not re- quire that any unicorns exist), seek'(~john ',%~(P,q uant(3,X,u nlcorn '(X),'P(X)))) and a de re reading (which requires the existence of some unicorn) quant(3,X,unicorn'(X),seek'Cjohn',^A(P,'P(X)))) In PTQ, this distinction is produced by syntactic rules. Cooper (1975, 1983) demonstrated that a mechanism using a store could produce both read- ings from a single logical form. Our mechanism obtains similar results. Starting from the unscoped logical form seek'Cjohn','A(P,:P(qterm(a',X,unicorn'(X))))) with the intension operator " treated as being op- tionally opaque, both readings are produced by the quantifier-scoping algorithm described here. Additional (unwarranted) scopings are not pro- duced because these are the only two sites at which quantifiers can be pulled from the store. Nonrule There is a strong preference for a noun phrase in a prepositional phrase complement to outscope the head noun. This criterion is used in many quantifier scoping mechanisms. It is a good heuristic, but it is not a reliable rule. In John visited every house on a street. (31) John visited every house with a dog. (32) the heuristic correctly predicts the preferred stop- ing for (31), but fails for (32). 4 This heuristic is not part of our scoping algorithm; we believe that its effects are part of the processing consigned by us to the second phase of quantifier scoping (future work). BASIC ALGORITHM The first level of our scoping algorithm gener- ates the possible scopings, as described by Hobbs and Shieber (1987). However, we implemented ~ this with a different algorithm, partly for reasons of effÉciency and partly because it could be easier expanded to include additional capabilities. The performance of the Hobbs and Shieber algorithm deteriorates as the number of quantifiers in the sentence increases---our analysis is that it spends a significant amount of time repeatedly travers- ing the logical form and doing structure copying (their goal was to produce a provably correct algo- rithm, not a highly efficient one). Our algorithm traverses the unscoped logical form, collecting the qterms (quantifier terms) into a store; then as the scoping for each qterm is determined, it is pulled out of the store, producing a scoped logical form. 4This was brought to my attention by Richard Crouch. 37 For a sentence with four qusatifiers, our algorithm is typically an order of magnitude faster than that presented by Hobbs sad Shieber. A simple example of the use of the store is pro- vided by the sentence "John saw a student," which has an unscoped logical form of see'(john',qterm(a',X,student'(X))) After quantifier scoping has placed the qterm in the store, the logical form is see'(john',X) sad the store is [ [ qterm(a',X,student'(X)) ] ] The scope for this quantifier is the whole sentence, so the qterm is puned out of the store to produce the scoped logical form quant(3,X,studeet'(X), see'~iohn',X)) The sentence "Few students pass most ezamg' has the unscoped logical form pass'(qterm(few',X,student'(X)), qterm(most'.V.exam'(V))) After the qterms have been extracted, the remain- ing logical form sad the store are p ss'(x,v) [ [ qterm(few',X,stud ent'(X)) ], [ qterm(rhost',Y,exam'(Y))) ] ] A qterm can have other qterms in its restric- tion sad our quantifier store is a structured col- lection (unlike the stores of Cooper sad LUNAR). The structure of qterms in the store corresponds to their relative positions in the unscoped logical form. 
For example, the unscoped logical form for "every student in a college attends the lecture' is atten d'(qterrn(every' ,X,and(student'(X), in'(X,qterm(a',Y,college'(Y))))), qterm(the',Z,lecture'(Z))) When such qterms are placed in the store, this re- lationship is maintained by representing the col- lected qterms as trees (called qtrees), with the outer qterm as the root and those in its restric- tion as daughters: [[ qterm(every',X,and(student'(X),in'(X,Y))), qterm(a' ,Y,college'(Y)) ], [ qterm(the',Z,lecture'(Z)) ] ] Consequently, the store is a forest of such qtrees, and the qterms occurring in the restriction of a qterm are themselves a forest of qtrees and are treated as if they were a store. As qterms are collected, they are inserted into the store in inverse order of preferencc c.g., the qterm that has narrowest-scope preference appears at the front of the list representing the forest. In implementing this algorithm in Prolog, we found that it was considerably easier to generate the scopings by working from the narrowest to the widest scope, rather than rice versa. As the vari- ous permutations of the quantifiers are generated, equivalent scopings are detected, and all but the most preferred one are then filtered out. In the following, both scopings of each sentence are logi- tally equivalent: Every student takes every test. Every student takes each test. A student takes a test. Some student takes a tes~. Each student takes the test. Eeery student takes the test. The student takes every test. (33) (34) (35) (36) (37) (38) (39) In (33), (35), (37), sad (39), the preferred order is the same as the surface order, while in (34), (36), sad (38), the stronger quantifier occurs second in surface order, sad the scoping that corresponds to surface order is discarded. Filtering of equiva- lent permutations is achieved simply by compar- ing the qtree currently being pulled from the store with the preceding one; if the qusatifiers in their head qterms are logically equivalent, this quantifier scoping is discarded unless the qtree being pulled has wide-scope preference over its predecessor (in which case the other logically equivalent ordering will be discarded). Logically equivalent scopings can also be pro- duced when a quantifier is raised out of the restric- tion of another. However, the quantifier permuta- tions that produce equivalent scopings by raising are a subset of those produced by permuting sib- lings: Every student in every race celebrated. (40) A student in a race celebrated. (41) Some student in a race celebrated. (42) 38 Each student in the race celebrated. (43) Every student in the race celebrated. (44) The student in every race celebrated. (45) Note that the scopings for (40) and (45) are not logically equivalent. The scopings in the others axe logically equivalent, but in (41) and (43), the preferred scoping is the one corresponding to con- stituent structure, whereas in (42) and (44), the preferred scoping has the NP from the PP raised to have wider scope over the head noun. When a qtree is pulled from the store, the algo- rithm tries to produce additional permutations by raising subsets of qterrns (actually of qtrees) out of that qtree's restriction. When a qtree is raised, it is put back into the store---since qtrees are being assigned scope from narrowest to widest, this en- sures that a raised qtree will receive wider scope than the qtree out of which it was raised. 
Because a raised qtree may have its strength reduced when it is placed back in the store (an option in our system), a set of logically equivalent scopings could have all instances filtered out by a naive implementation. The problem arises in the following manner. Before the qtree is raised, the algorithm determines that the unraised scoping is logically equivalent to a raised one and that the latter is preferred, so it discards the former. When the qtree is raised and its strength reduced, it becomes weaker than the qtree out of which it was raised. The algorithm detects that the raised scoping is logically equivalent to an unraised one, and determines, on the basis of the current strengths, that the unraised scoping is preferred, so it now discards the raised one. This problem is avoided by doing some additional bookkeeping.

The current implementation of the above rules is very coarse-grained. The "score" indicating whether or not a quantifier should be assigned wide scope over another quantifier, logical form operator (e.g., a modal, negation), or syntactic constituent is one of four values: always (narrow scope is impossible), never (wide scope is impossible), pref (wide scope is preferred, but narrow scope is acceptable), and unpref (narrow scope is preferred). In the current implementation of the above preferences, a strong preference to take wider scope is treated as an instance of always, and a weak preference is treated as pref. For example, Preferences (3.1)-(3.3) are given by the following rules, in which Pref is the preference of a determiner Det1 to take wider scope over another determiner Det2:

if Det1 and Det2 are both "each":
  - if Det1 precedes Det2 in surface order, Pref = pref,
  - otherwise, Pref = unpref.
otherwise, if Det1 is "each" (and Det2 is not), Pref = always.
otherwise, if Det1 is an interrogative determiner, Pref = always.
otherwise, if the logical forms for Det1 and Det2 are ∀ and ∃, respectively:
  - if Det1 precedes Det2 in surface order, Pref = always,
  - otherwise, Pref = pref.

(These rules transcribe directly into code; see the sketch at the end of this section.)

Overshoot

The method described here results in some quantifiers being assigned scopes that are wider than appropriate, relative to other predicates (but not quantifiers) in the logical form. The sentence "John visited every person on a committee" has an unscoped logical form of

visit'(john',qterm(every',P,and(person'(P),
       on'(P,qterm(a',C,committee'(C))))))

and its preferred scoping is

quant(∀,P,
      quant(∃,C,committee'(C),
            and(person'(P),on'(P,C))),
      visit'(john',P))

Note that person'(P) is independent of C; thus it can be outside the scope of the quantifier for C

quant(∀,P,
      and(person'(P),
          quant(∃,C,committee'(C),on'(P,C))),
      visit'(john',P))

Such transformations can have a significant impact on the performance of the system, substantially reducing the processing time of queries for even a modest database. Rather than pass additional information so that quantifiers could be pulled at the correct point in the traversal of the logical form, we chose to let the scoping algorithm "overshoot" its mark and then lower the quantifiers to the correct position. This was considerably easier to implement, and it does not seem to have any performance penalty in our system.
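A minimal transcription of the determiner-preference rules above into Python (the rule text is the paper's; the function name, the interrogative-determiner set, and the final fallback are our assumptions):

# Direct transcription (our naming) of the Det1-over-Det2 preference rules.
def det_pref(det1, det2, lf1, lf2, det1_first):
    if det1 == "each" and det2 == "each":
        return "pref" if det1_first else "unpref"
    if det1 == "each":
        return "always"
    if det1 in ("which", "what"):        # interrogative determiners (assumed set)
        return "always"
    if lf1 == "forall" and lf2 == "exists":
        return "always" if det1_first else "pref"
    return "unpref"                      # fallback; the paper leaves the default unstated

print(det_pref("each", "a", "forall", "exists", det1_first=False))  # always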
CONCLUSION

For lack of a reasonable corpus of human quantifier scoping preferences, the testing of this system has been limited to checking conformance to the stated rules.5 The semantic component of the CLE does not produce logical forms with mass or count NPs or collective readings, but that capability is currently being developed. The foregoing description of qterms is a slight simplification; an extended form is now being used to support generalized quantifiers in the new semantic rules.

Examples offered by VanLehn (1978) indicate that dative movement affects quantifier scoping, but the cause may actually be domain or discourse information. Our examples show that passivization affects quantifier scoping, but we have not yet found a means of determining whether the effect is due solely to the cost of raising out of the PP.

The algorithm does not handle "donkey sentences," nor is it intended to. A scheme for handling such sentences is being explored as part of the continuing development of the CLE (Fernando Pereira, personal communication). This would be a separate mechanism, rather than an extension of quantifier scoping.

5 The range of quantified noun phrases covered in the algorithm is larger than what is currently produced by the syntactic and semantic components of the CLE system. Such extensions have been tested by starting from the anticipated logical form.

ACKNOWLEDGMENTS

The research on which this paper is based was supported by the Natural Language Processing Club (NATTIE) of the Alvey Directorate program in Intelligent Knowledge-Based Systems (Project No. ALV/PRJ/IKBS/105). Most of it was performed while I was a member of SRI's Cambridge Computer Science Research Centre. This work benefited from extensive discussion with and suggestions from Robert C. Moore and Hiyan Alshawi.

REFERENCES

Alshawi, Hiyan; Moore, Robert C.; Moran, Douglas B.; and Pulman, Steven G. 1987. Research Programme in Natural-Language Processing, Annual Report to the Natural Language Processing Club (NATTIE) of the Alvey Directorate Program in Intelligent Knowledge-Based Systems, Cambridge Computer Science Research Centre, SRI International, Cambridge, England.

Barwise, Jon and Cooper, Robin 1981. Generalized Quantifiers and Natural Language. Linguistics and Philosophy 4(2): 159-219.

Cooper, Robin 1975. Montague's Semantic Theory and Transformational Syntax. Ph.D. dissertation, Department of Linguistics, University of Massachusetts at Amherst, Massachusetts.

Cooper, Robin 1983. Quantification and Syntactic Theory, D. Reidel, Dordrecht, Holland.

Grosz, Barbara J.; Appelt, Douglas E.; Martin, Paul A.; and Pereira, Fernando C.N. 1987. TEAM: An Experiment in the Design of Transportable Natural-Language Interfaces. Artificial Intelligence 32(2): 173-243.

Hobbs, Jerry R. and Shieber, Stuart M. 1987. An Algorithm for Generating Quantifier Scopings. Computational Linguistics, 13(1-2): 47-63.

Ladusaw, William 1980. Polarity Sensitivity as Inherent Scope Relations. Ph.D. dissertation, Department of Linguistics, University of Texas at Austin; published by Garland Press, New York, New York.

Montague, Richard 1973. The Proper Treatment of Quantification in Ordinary English. In: Hintikka, J.; Moravcsik, J.; and Suppes, P. (eds.) 1973. Approaches to Natural Language, D. Reidel, Dordrecht, Holland: 221-242. Reprinted in: Montague, Richard 1974.
Formal Philosophy: Selected Papers of Richard Montague, edited and with an introduction by Richmond Thomason, Yale University Press, New Haven, Connecticut: 247-270.

Moore, Robert C. 1981. Problems in Logical Form. In Proc. of the 19th Annual Meeting of the Association for Computational Linguistics: 117-124.

VanLehn, Kurt A. 1978. Determining the Scope of English Quantifiers. Report AI-TR-483, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts.

Woods, William A. 1977. Semantics and Quantification in Natural Language Question Answering. In: Advances in Computers, Volume 17, Academic Press, New York, New York: 1-87.
1988
5
A General Computational Treatment of Comparatives for Natural Language Question Answering

Bruce W. Ballard
AT&T Bell Laboratories
600 Mountain Avenue
Murray Hill, N.J. 07974

Abstract

We discuss the techniques we have developed and implemented for the cross-categorial treatment of comparatives in TELI, a natural language question-answering system that's transportable among both application domains and types of backend retrieval systems. For purposes of illustration, we shall consider the example sentences "List the cars at least 20 inches more than twice as long as the Century is wide" and "Have any US companies made at least 3 more large cars than Buick?" Issues to be considered include comparative inflections, left recursion and other forms of nesting, extraposition of comparative complements, ellipsis, the wh element "how", and the translation of normalized parse trees into logical form.

1. Introduction

We shall describe a general treatment of comparatives that has been implemented in the context of TELI, a question-answering system which is transportable among both domains of discourse and different types of backend retrieval systems.1 Comparatives are important because of the dramatic increase in expressive power they allow; they are interesting at least because of the variety of issues (from morphology on up) one must deal with in order to provide for them.

1. The examples in this paper illustrate TELI as a front-end to the Kandor knowledge representation system (Patel-Schneider, 1984); we will give examples in terms of a knowledge base of information about 1987 cars. TELI has produced queries for at least four different "backend" systems and has been adapted for over a dozen domains of data.

1.1 Goals

In seeking to provide TELI with general capabilities for comparatives, our primary goals have been

- to formulate cross-categorial techniques that treat the comparativizations of different syntactic elements (e.g. adjectives, quantifiers, and measure nouns) with the same mechanisms;
- to allow comparatives to be composed with themselves (e.g. "at least 3 more than 3 times as many") and with other syntactic features (e.g. wh elements);
- to be faithful to what is known from work in theoretical linguistics; we draw from Bresnan (1973), Cushing (1982), Dik (1980), Jackendoff (1977), Sells (1985), and Winograd (1983);
- to account for as many of the specific cases of comparatives found in the literature of implemented NL processors as possible.

1.2 Achievements

Letting <X> denote a grammatical category to be comparativized, we begin by providing for comparativized structures C{<X>} of the form

C{<X>}    → (<Qual>) CC{<X>} <Comp>
<Qual>    → at most | at least | no | exactly | precisely | just | only
CC{<X>}   → (CC{<X>}) (<Measure>) <c1> (<X>) <c2>
<Measure> → <Number> (<Ordinal> | percent | times) | <Ordinal>
<Ordinal> → half | thirds | ...
<Comp>    → <NP> <Etcx>
<c1>/<c2> → -er/than | less/than | as/as

where (...) denotes optionality; "/" indicates "agreement" between comparative particles; and <Etcx> accounts for items parallel to those in the matrix clause in which the comparative occurs (e.g. "cars that are longer than the Regal (is (wide))"). In addition, a variety of extrapositions (i.e. rightward and occasional leftward movement) from C{<X>} may (and sometimes must) occur. For example, both "cars larger than the Century" and "larger cars than the Century" are allowed.
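To make the rule shapes concrete, here is a tiny generator of our own (not TELI code) that produces comparativized specifier strings of the kind the rules license; the terminal vocabularies are illustrative assumptions, and the left-recursive CC{<X>} nesting is realized directly in the extraposed surface order exemplified by "at least 20 inches more than twice as long as".

# Sketch: generate comparativized specifiers per the C{<X>} rules above.
import random

QUAL = ["at most", "at least", "exactly", "just", "only"]
MEASURE = ["20 inches", "3 times", "forty percent", "half"]

def cc(adj, depth=0):
    if depth < 1 and random.random() < 0.5:
        # CC -> CC Measure -er/than: outer measure, "more than", inner CC
        return f"{random.choice(MEASURE)} more than {cc(adj, depth + 1)}"
    # base case: Measure c1 X c2, e.g. "twice as long as"
    return f"twice as {adj} as"

def comparativized(adj):
    return f"{random.choice(QUAL)} {cc(adj)}"

print(comparativized("long"))  # e.g. "at least 20 inches more than twice as long as"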
Since we wish to allow C{<X>} structures to occur wherever <X> could occur, arbitrarily complex interactions with quantifiers (within the complement), ordinals, superlatives, raisings, wh elements, and other constructs must be provided for. In addition to the structures indicated by the BNF above, we allow for some simpler expressions not conventionally classified as comparatives. Some examples are "6 cars" (cf. "as many as 6 cars") and "3 inches long" (cf. "as long as 3 inches"). We also provide for structures involving the nominal counterpart of an adjective, as in "more than 185 inches in length".

To date, we have fully implemented a wide variety of comparatives related to adjectives, quantifiers, and measure nouns (e.g. "cars that cost at least $100 more than the Park Avenue"). Due to the commonality among the comparativized syntactic structures, our grammar for these three types of comparatives is produced by meta-rules suggested by the BNF rules shown above. Although the feature agreement provided by our parser is used to eliminate spurious structures such as "cars more than 3 (inches/*dollars) long", we avoid conflicts between pure numbers and measure phrases that involve a unit (e.g. "companies that make more than 3 (*dollars) cars") by having two (very nearly identical) Quantity routines in the grammar.

1.3 Limitations

In addition to some specific limitations to be stated in the remainder of the paper, there are some general limitations of our work to date, many of which are being rectified by the work mentioned in Section 8.3. (1) By analogy with conjunctions, with which comparatives share a number of properties (cf. Sager 1981, pp. 196ff), our comparative particle pairs (-er/than etc.) provide for co-ordinate comparatives, in contrast to pairs such as so/that, as in "Buick makes so many cars that it's the largest company." (2) Comparative complements are expected in a limited number of places. For example, "Audi makes more large cars than Pontiac in France" is recognized but "Audi makes more large cars in France than Pontiac" is not. This is because we currently propagate the evidence of having found a comparative particle ("more") to the noun phrase headed by "cars", hence the complement ("than ...") can attach there, but not to the higher level verb phrase headed by "makes". This limitation also prevents our processing "What companies make a larger car than Buick", whose exact meaning(s) the reader is invited to ponder. (3) Since comparative complements are based on noun phrases, neither "Audi makes more large cars in France than in Germany" nor "Audi makes large cars more in France than in Germany" is recognized. (4) We attempt no pragmatic disambiguation of semantically ambiguous comparatives. Thus, when confronted with "more than 3 inches shorter" or "more than 3 fewer cars", we provide the compositional interpretation associated with our left recursive syntax. Even expressions such as "as many" and "as large" are ambiguous between at least and exactly. (5) We attempt no anaphora processing, and so comparatives without a complement, as in "Which cars are larger?", are not processed. (6) We provide general conversion of units of measure (e.g. "2 feet longer" is the same as "24 inches longer") but they are not fully incorporated into the system.

2. An Initial Example

The mechanisms we shall describe apply a conventional series of transformations to sentences containing one or more comparatives, ultimately resulting in an executable expression.
As an example of this process,2 we'll consider the input "List the cars at least 20 inches more than twice as long as the Century is wide" which contains a highly comparativized adjective. First, this input is scanned and parsed, yielding the parse tree shown in Figure 1. Note that each COMPAR node has a QUANTITY node and a MODE3 of its own. Also, the MODE of the top COMPAR (whose value is "equal") is co-indexed (indicated by the subscript i) with the MODE feature associated with the particle ("as") that intervenes between the ADJ and its COMPAR-ARG; this assures that -er/than, less/than, and as/as pairs collocate correctly. Next, we build a "normalized" parse tree by reconstructing elements that were discontinuous in the surface structure and by performing other simplifications. This yields the following structure, whose 2-place predicate, with P (parameter) and A (argument) as variables, corresponds to "at least 20 inches more than twice as ... as".

2. A formal account of the associated formalisms, including a BNF syntax and a denotational semantics for our "normalized parse trees" and "algebraic-logical form" language, is given in Ballard and Stumberger (1987).
3. Dashed lines indicate features, as distinct from lexical items, and empty nodes, which result from Whiz-deletion, are denoted by "?".

Normalized Parse Tree:
(CAR (NOUN CAR)
     (COMPAR (ADJ LONG)
             (λ (P A) (≥ P (+ 20 (* 2 A))))
             (CAR (= CENTURY))
             (ADJ WIDE)))

Next, user-defined meanings of words and phrases are looked up4 and the comparativization operations described in Section 6 are performed, yielding

Algebraic-Logical Form:
(SET (CAR P1)
     (≥ (Length-of-Car P1)
        (+ 20 (* 2 (Width-of-Car CENTURY)))))

Finally, this representation is converted into the executable expression indicated by

Final Executable Expression:
(SUBSET (λ (P1)
          (>> (KSV P1 @S{LENGTH})
              (+ 20 (* 2 (KSV @I{CENTURY} @S{WIDTH})))))
        (KI @F{CAR}))

where KSV and KI are primitive retrieval functions of the Kandor back-end; @I{...}, @F{...} and @S{...} are Lisp objects respectively denoting instances, frames, and slots in Kandor's taxonomic knowledge base; and >> is a coercion routine supplied by TELI to accommodate backend retrieval systems that produce numbers in disguise (e.g. a Lisp object or a singleton set) on which the standard Lisp functions would choke.5 However, since compositionally created structures such as the preceding one are often intolerably inefficient, optimizations are carried out while the executable expression is being formed. In the case at hand, the second argument of >> is constant, so it is evaluated, producing

Optimized Executable Expression:
(SUBSET (λ (P1) (>> (KSV P1 @S{LENGTH}) 158))
        (KI @F{CAR}))

4. In TELI, meanings may be arbitrary expressions in the extended first-order language discussed in Ballard and Stumberger (1987).
5. Similar functions are also supplied for arithmetic operators.

A second example, which illustrates a comparative quantifier, is given in an appendix where, as a result of optimizations analogous to those which produced the constant 158 above, the comparative "at least 3 more large cars than Buick" is eventually processed exactly as though it had been "at least 6 cars" (since Buick made 3 large cars).
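The tail end of this pipeline is easy to simulate. A toy Python sketch (assumed car data, not TELI's Kandor calls; the hypothetical width of 69 inches is chosen so the folded constant reproduces the 158 above):

# Toy sketch of the example's last two steps: the 2-place predicate for
# "at least 20 inches more than twice as long as", folded against the
# Century's width, then used as a subset filter over assumed lengths.
WIDTH = {"CENTURY": 69}
LENGTH = {"CENTURY": 189, "REGAL": 192, "ELECTRA": 221, "SKYHAWK": 150}

rel = lambda P, A: P >= 20 + 2 * A        # (λ (P A) (≥ P (+ 20 (* 2 A))))
threshold = 20 + 2 * WIDTH["CENTURY"]     # constant folding -> 158
print(threshold,
      [c for c, l in LENGTH.items() if rel(l, WIDTH["CENTURY"])])
# 158 ['CENTURY', 'REGAL', 'ELECTRA']  (the 150-inch car is filtered out)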
3. Lexical Provisions for Comparatives

Our current repertoire of domain-independent lexical items associated with comparatives includes "many", "few", and "much"; "more", with 3 readings (er, er+many, er+much), following Bresnan (1973) and similar to Robinson (1982, p. 28); "fewer" (er+few); "less", with 3 readings (less, er+few6, less+much); several formatives and adverbials ("at", "least", "most", "exactly", "precisely", "only", "just", "half", "again", "times", "percent"); and a handful of spelled-out ordinals ("thirds" etc.). Though not stored in the lexicon, both integers and floating-point numbers (cf. "3.45 inches") are also involved in comparativization. The domain-dependent portion of the lexicon includes members of the open categories of adjectives, measure nouns, and comparative inflections of adjectives. The scanner output for the comparative of the adjective A is er+A (e.g. "larger" becomes er+large).

6. To the possible horror of the prescriptive grammarian, this accounts for such atrocities as "less books".

4. Syntax for Comparatives

The basic syntax for comparatives adheres to the meta-rules given in Section 1.2. As indicated in the parse tree of Figure 1, COMPAR is never a primary tree node but is instead a daughter of the node being comparativized. Furthermore, since our grammar has recently taken on somewhat of an X-bar flavor (cf. Jackendoff, 1977), the complement for a comparativized item is found as either its sister or its parent's sister. Complex comparatives derive from left-recursive structures.7 Our present grammar for comparatives is set up partly by meta-rules8 and partly by hand-coded rules relating to such idiosyncracies as "more than 3 inches in length" (however, cf. "more than 6 in number").

7. Though our parser operates top-down, we've incorporated a general mechanism for left recursion that's also utilized by possessives (e.g. "the newest car's company's largest competitor's smallest car").
8. Meta-rules are also used to produce the grammar for relative clauses, yes-no questions, and a host of other structures (e.g. various slash categories) from a hand-coded grammar for basic declarative sentences.

5. Parse Tree Normalization

Letting Node{<X>} denote a node of the normalized parse tree associated with an element of type <X>, comparatives involve the replacement denoted by

Node{C{<X>}} → (COMPAR Node{<X>} <Rel> <Arg> <Etcx>)

where <Arg> corresponds to an optional noun phrase, <Etcx> captures non-elided material associated with the matrix clause, and the 2-place relation denoted by <Rel> is the most interesting (and by far the most complex) element produced. The algorithm that produces it converts "more", "less", and "times" respectively into +, -, and *. This process is left recursive; the relational operator is determined from the highest MODE, and by default it is assigned to be =.9 As indicated below, these algebraic and arithmetic symbols will be preserved in the executable expression unless the word being comparativized indicates a downward direction on the scale applicable to it (e.g. "fewer", "shorter"), in which case they will be reversed (e.g. ≥ becomes ≤ and + becomes -). Each 2-place relation is the body of a 2-place lambda whose variables, P and A, are associated with values obtained from a parameter and an argument against which a comparison is being made. Some example 2-place predicates are

more than 166 inches long                     (> P 166)
more than 15 feet long                        (> P 180)
at most 180 inches long                       (≤ P 180)
longer than                                   (> P A)
at least as long as                           (≥ P A)
1 inch longer than                            (= P (+ 1 A))
exactly twice as long as                      (= P (* 2 A))
3 times as long as                            (≥ P (* 3 A))
half again as long as                         (≥ P (* 1.5 A))
forty percent longer than                     (≥ P (* (+ (/ 40 100) 1) A))
less than one third as long as                (< P (* (/ 1 3) A))
at least 3 inches more than twice as long as  (≥ P (+ 3 (* 2 A)))

When the measure noun appearing in an English input differs from that by which the objects being tested are measured, as indicated by the second example above, a scalar conversion is required.
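The <Rel> construction of Section 5 can be sketched as a small compositional builder (our own formulation, not the paper's code), folding "times" and "more"/"less" into arithmetic on the argument side:

# Sketch of <Rel> building: terms are applied innermost-first to A.
def rel(mode, terms):
    def r(P, A):
        for op, n in terms:
            A = {"times": n * A, "more": A + n, "less": A - n}[op]
        return {">": P > A, ">=": P >= A, "=": P == A, "<": P < A}[mode]
    return r

# "at least 3 inches more than twice as long as" -> (≥ P (+ 3 (* 2 A)))
at_least_3_more_than_twice = rel(">=", [("times", 2), ("more", 3)])
print(at_least_3_more_than_twice(10, 3))   # 10 >= 3 + 2*3 -> True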
6. Semantics for Comparatives

The semantics of comparativization involves converting a one-place predicate into another one-place predicate by performing arbitrarily complex operations on it. For example, if "large car" has been defined as a car whose length exceeds 190 inches, then, letting "A" denote a noun phrase complement, some examples are

large                                  Length(x) ≥ 190
larger than 180 inches                 Length(x) > 180
larger than A                          Length(x) > Length(A)
no larger than A                       Length(x) ≤ Length(A)
twice as long as A is wide             Length(x) ≥ 2 * Width(A)
3 inches more than twice as long as A  Length(x) > 3 + 2 * Length(A)

where each of these right-hand-sides is the body of a one-place predicate whose single variable is x. As a second example, comparative quantifiers such as "more than 6" are handled by an identical process10, as indicated by11

x has many y's                    Size {y | Has(x,y)} ≥ Constant
x has more than 6 y's             Size {y | Has(x,y)} > 6
x has more y's than A             Size {y | Has(x,y)} > Size {y | Has(A,y)}
x has at least 2 more y's than A  Size {y | Has(x,y)} ≥ 2 + Size {y | Has(A,y)}

where the initial Constant denotes some arbitrary constant. In general, comparativizing a one-place predicate takes place as follows.

1. Find (a) an appropriate one-place function and (b) an associated relational operator that tells which direction on a linear scale indicates having "more" of the property.

2. Apply the relational operator located above to the modality of the comparison to determine the relational operator that will appear in the IR. If the relational operator of the definition being comparativized is either > or ≥, use the mode occurring in the IR; otherwise, "reverse" the mode by doing what would be a negation but leaving untouched the = portion of the operator. Thus, the reversal of < is >, the reversal of ≤ is ≥, and so forth. Similarly, + and - are switched.

3. Determine the argument being compared against (possibly a constant).

4. Link these pieces together. If the argument was not constant (e.g. "... longer than at least 3 foreign cars"), wrap its scope around the resulting expression.

9. This addresses the inherent ambiguity of as/as structures without an adverbial element, such as "exactly" or "at least". Thus, "people with 3 children" is interpreted as people with exactly 3 children.
10. That is, we have no special purpose processing for "more than", "how many" etc.
11. We use "has" in these examples for clarity; naturally, the scope of a comparative quantifier may contain an arbitrarily complex predicate.

For example, if "short car" has been defined as

"x is short": Length(x) < 160

then the 1-place function and relational operator are determined in step 1 to be Length and <, and thus we have

"shorter than A" → Length(x) < Length(A)
"exactly 3 inches shorter than A" → Length(x) = Length(A) - 3
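Steps 1-4 transcribe compactly. A minimal Python sketch with our own naming and toy data, showing the mode reversal of step 2 for a downward-pointing adjective:

# Sketch of section 6's recipe: from short(x) := Length(x) < 160, derive
# "shorter than A".  Names and data are ours, not TELI's.
import operator

REVERSE = {operator.lt: operator.gt, operator.le: operator.ge,
           operator.gt: operator.lt, operator.ge: operator.le}

def comparativize(measure, direction, mode):
    # direction is the operator in the adjective's definition; if it is
    # not > or >=, the comparison mode is reversed (step 2)
    if direction in (operator.lt, operator.le):
        mode = REVERSE[mode]
    return lambda x, a: mode(measure(x), measure(a))

length = {"mini": 120, "limo": 210}.get
shorter_than = comparativize(length, operator.lt, operator.gt)
print(shorter_than("mini", "limo"))   # Length(mini) < Length(limo) -> True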
7. Comparatives Containing a Wh Element

In addition to recognizing wh elements associated with a relative or interrogative clause,12 TELI recognizes the word how when it appears in place of a quantity, e.g. "how long" (cf. "6 inches long") and "how many more" (cf. "6 more"13). Wherever wh appears, however, we treat its semantics as roughly "solve for wh such that". In the case of interrogative pronouns (e.g. "what"), this leads rather obviously to an internal representation asking for a SET. In the case of "how", this treatment is also in order since it represents a (quantity) NP. For simplicity, we produce an expression containing an unbound wh and later give it wide scope.14 In particular, subsequent processing involves moving the wh element upward in the logical form tree15 by performing appropriate transformations.

12. To see that wh is less than a "word", consider pairs such as what/that, where/there and when/then. The advantage of recognizing sub-word units as the primitives on which syntax and/or semantic analysis is based should come as no surprise to anyone acquainted with the structure of languages other than English, which is unusual in coming so close to being treatable solely at the word level.
13. As stated earlier, we have adopted derivations suggested by Bresnan (1973) such as -er+many → more. In the case at hand, we must assume something like Q+many → Q, where Q denotes a quantity.
14. The scope is wide but not global because of inputs such as "How many cars does each US company make?"
15. Of course, our algebraic-logical forms, based on operators and their associated arguments, amount to being trees.

For illustration, consider the absurdly complicated example "Buick makes 3 more than how many percent more cars than Audi?" the comparative portion of whose internal representation16 is

(λ (P A) (= P (+ (* A (+ 1 (/ WH 100))) 3)))

At this point, we proceed with semantic processing, ignoring for the moment the presence of the unbound WH element. In the case at hand, this leads to

(= (COUNT (SET (CAR P1) (Make BUICK P1)))
   (+ (* (COUNT (SET (CAR P1) (Make AUDI P1)))
         (+ 1 (/ WH 100)))
      3))

after which we "solve for" WH to yield

(* (- (/ (- (COUNT (SET (CAR P1) (Make BUICK P1))) 3)
         (COUNT (SET (CAR P1) (Make AUDI P1))))
      1)
   100)

This process is not dependent on the position in which the wh occurred, and thus takes the place of special-purpose interpretation routines for "how many", "How <Adjective>", and so forth.17

16. The sentence is ambiguous, with readings indicated by "3 more than [how many percent]" and "[3 more than how many] percent". As indicated earlier, we presently take the reading that favors the use of left recursion.
17. Problematic situations can arise in which simple algebraic operations aren't sufficient. For example, in examples such as "Cars were sold to people with how many children?", we must move wh past a logical quantifier, rather than the arithmetic operators as shown above.
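The "solve for WH" step can be reproduced with an off-the-shelf computer algebra system; this sketch uses sympy (our choice of tool; the paper performs its own transformations on the logical-form tree):

# Sketch of the "solve for WH" step using sympy.
from sympy import symbols, Eq, solve

WH, nb, na = symbols('WH nb na')   # nb, na stand for the two COUNTs
# (= nb (+ (* na (+ 1 (/ WH 100))) 3))
wh = solve(Eq(nb, na * (1 + WH / 100) + 3), WH)[0]
print(wh)   # 100*(nb - na - 3)/na, i.e. ((nb - 3)/na - 1) * 100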
"more than $250") and can be prepared for by specifying a 2-place predicate in advance that's effectively equivalent to the 2-place predicate we construct from an underlying 1-place predicate by way of coercion into a 1-place function. This allows one to avoid some slippery problems of movement (which we have adressed but have certainly not disposed of), to ignore morphological subtleties (e.g. recognizing the "er" of "larger" or "more" as -er, a "word" to be input to the parser), and to take other shortcuts. 19 Second, although examples of various types of comparatives are not hard to come by, accounts of the actual mechatdsms that treat comparatives are harder to find, as are specific statements of the generality which authors believe themselves to have provided for. 8.2 Levels of Representation The architecture of TELI resembles that of similarly motivated question answering systems (cf. Grosz et al, 1987; Hafncr and Godden, 1985; Bates and Bobrow, 1983 and Bates et al 1985) by comprising a linear sequence of processing stages which produce successively -lower" level representations of the input. 2° Although our parse tree format is rather conventional, 21 what we have called "normalized 18. Evidence of the gap between what's been studied and what may actually be important is expressed, in the context of pronoun resolution, in Hobbs (1978, p. 343) as follows: "There are classes of examples from the literature which are not ... handled by the algorithm, but they occur rarely in actual texts, and in view of the fact that the algorithm fails on much more natural and common examples, there seems to be little point in greatly complicating the algorithm to handle them." 19. The extent to which "shortcuts" are justified, from either a psychological or system designer's standpoint, is not clear. As a possibly bizarre example, consider the word "after', which could be treated as "-er .aft than', where .aft is the Anglo- Saxon root (extant only on I:card ship) from which current English word derives. A perhaps even more bizarre opportunity may exist for treating "rather" as "-er .rathe', where ".rathe" is a Middle English adverb meaning "quickly'. 20. We're using "low" to refer to level of abstraction. Perhaps ironically, successively higher levels of cognitive information are involved in producing these "lower" level representation. 21. The methods whereby TELI produces parse trees are less conventional than the trees it produces, due to our provision for having the parser enforce agreements automatically while it is running, rather than doing subsequent filtering. parse tree" and "algebraic-logical form" correspond rather loosely to what in the literature are often called "logical form" and "meaning representation', respectively. Furthermore, in the most recent work with TELI, meaningful distinctions between modules have become blurred, although the relative order in which operations are carried out is largely the same. In seeking to compare our formalisms and processing strategies with others that have been proposed, we have found terms such as "logical form" being used in the literature in quite vague and often incompatible ways. Furthermore, we know of no compelling arguments that suggest that a psychologically plausible model of human information processing will require intermediate levels such as parse trees, logical forms, and the like. Is it even clear that there ought be be a finite number of successive "levels", whatever they might be? 
We are increasingly doubtful that the trappings spawned by linguists and philosophers can be put in a bag, sprinkled with Common Lisp, shaken, and expected to yield robust natural language processors. More of an interdisciplinary effort may be required than has yet been seen.

8.3 Current Work

The representation given in Section 5 fundamentally restricts us from handling comparatives whose complement is more than one level above the word being comparativized (e.g. "John persuaded his students to contribute to more museums than Bill did"). Our current work involves producing normalized parse tree structures of roughly the form

(COMPAR-2 Ci <Comp> (... (COMPAR-1 Ci ...) ...))

where the COMPAR-1 and <Comp> structures correspond to the COMPAR structure given in Section 5; Ci provides for co-indexing when multiple comparativizations are present; and the first "..." allows for arbitrarily many levels. This calls upon us to modify the semantic processing presented in Section 6, making it resemble the treatment given to wh elements as described in Section 7.

9. Conclusions

We have presented algorithms aimed at the morphological, syntactic, and semantic problems associated with a large variety of comparative structures that arise in the context of question answering. We believe the extent of our coverage equals in several ways and exceeds in some ways the capabilities known to us via the literature. However, comparatives operate as a "meta" phenomenon and thus cut across many issues; we have ignored certain problems and knowingly treated others inadequately. Further work is certainly required, and we hope to have presented a framework in which (1) some interesting and important capabilities can be provided for now and (2) further computational studies can be carried out.

10. Acknowledgements

The author wishes to acknowledge the many insights displayed by Mark Jones and Guy Story during a number of intense discussions concerning the issues discussed in this paper.

11. References

Ballard, B. The Syntax and Semantics of User-Defined Modifiers in a Transportable Natural Language Processor. 10th International Conference on Computational Linguistics, Stanford University, July 1984, 52-56.

Ballard, B. User Specification of Syntactic Case Frames in TELI, A Transportable, User-Customized Natural Language Processor. 11th International Conference on Computational Linguistics, University of Bonn, August 1986, 454-460.

Ballard, B., Lusth, J., and Tinkham, N. LDC-1: A Transportable Natural Language Processor for Office Environments. ACM Transactions on Office Information Systems 2, 1 (1984), 1-23.

Ballard, B. and Stumberger, D. Semantic Acquisition in TELI: A Transportable, User-Customized Natural Language Processor. 24th Annual Meeting of the Association for Computational Linguistics, Columbia University, June 1986, pp. 20-29.

Ballard, B. and Stumberger, D. The Design and Use of a Logic-Based Internal Representation Language for Backend-Independent Natural Language Processing. AT&T Bell Laboratories Technical Memorandum, October 1987.

Ballard, B. and Tinkham, N. A Phrase-Structured Grammatical Framework for Transportable Natural Language Processing. Computational Linguistics 10, 2 (1984), 81-96.

Bates, M., Moser, M. and Stallard, D. The IRUS Transportable Natural Language Interface. Proc. First Int. Workshop on Expert Database Systems, Kiawah Island, October 1984.

Bresnan, J. Syntax of the Comparative Clause Construction in English. Linguistic Inquiry 4, 3 (1973), 275-344.

Cushing, S.
Quantifier Meanings: A Study in the Dimensions of Semantic Competence. North-Holland, Amsterdam, The Netherlands, 1982.

Damerau, F. Problems and Some Solutions in Customization of Natural Language Database Front Ends. ACM Transactions on Office Information Systems 3, 2 (1985), 165-184.

Dik, S. Studies in Functional Grammar. Academic Press, London, England, 1980.

Ginsparg, J. "Natural Language Products", unpublished document, 1987.

Grosz, B., Appelt, D., Martin, P., and Pereira, F. TEAM: An Experiment in the Design of Transportable Natural-Language Interfaces. Artificial Intelligence, 32, 2 (1987), pp. 173-243.

Hafner, C. and Godden, C. Portability of Syntax and Semantics in Datalog. ACM Transactions on Office Information Systems 3, 2 (1985), 141-164.

Jackendoff, R. X-Bar Syntax: A Study of Phrase Structure. MIT Press, Cambridge, Mass., 1977.

Kirsch, R. Computer interpretation of English text and picture patterns. IEEE Trans. on Electronic Computers, 1964.

Moore, R. Problems in Logical Form. 19th Meeting of the Association for Computational Linguistics, Stanford, California, 1981, pp. 117-124.

Patel-Schneider, P. Small Can Be Beautiful in Knowledge Representation. Proceedings of the IEEE Workshop on Principles of Knowledge-Based Systems, Denver, Colorado, December 1984.

Robinson, J. DIAGRAM: a grammar for dialogues. Communications of the ACM, 25, 1 (1982), 27-47.

Sager, N. Natural Language Information Processing: A Computer Grammar of English and Its Applications. Addison-Wesley, 1981.

Sells, P. Lectures on Contemporary Syntactic Theories. Center for the Study of Language and Information, Stanford University, 1985.

Thompson, B. and Thompson, F. ASK Is Transportable in Half a Dozen Ways. ACM Trans. on Office Information Systems 3, 2 (1985), 185-203.

Woods, W. Semantics and Quantification in Natural Language Question Answering. Advances in Computers, Vol. 17, New York, Academic Press, 1978.

[Figure 1: Parse Tree for the Example of Section 2]

Appendix: Processing a Comparative Quantifier

English input: "Have any US companies made at least 3 more large cars than Buick?"

Normalized Parse Tree:
(VP (COMPAR ... CAR NIL NIL NIL)
    (SUBJ (COMPANY (QUANT ANY)
                   (COMPANY (ADJ US) (NOUN COMPANY))))
    (OBJ (CAR (COMPAR (QUANT MANY)
                      (λ Q (≥ Q 3))
                      (COMPANY (= BUICK)))
              (CAR (ADJ LARGE) (NOUN CAR)))))

Algebraic-Logical Form:
(QUANT (COMPANY P1) (λ Q (≥ Q 1))
  (US-Company P1)
  (≥ (COUNT (SET (CAR P2)
                 (AND (> (Length-of-Car P2) 190)
                      (= (Company-of-Car P2) P1))))
     (+ 3 (COUNT (SET (CAR P3)
                      (AND (> (Length-of-Car P3) 190)
                           (= (Company-of-Car P3) BUICK)))))))

Final Executable Expression:
(GPC-SOME '(1 CQ)
  (λ (P1)
    (AND (KI? P1 @F{US-COMPANY})
         (≥ (GPC-COUNT
              (SUBSET (λ (P2)
                        (AND (>> (KSV P2 @S{LENGTH}) 190)
                             (== (KSV P2 @S{COMPANY}) P1)))
                      (KI @F{CAR})))
            (GPC-+ 3
              (GPC-COUNT
                (SUBSET (λ (P2)
                          (AND (>> (KSV P2 @S{LENGTH}) 190)
                               (== (KSV P2 @S{COMPANY}) @I{BUICK})))
                        (KI @F{CAR})))))))
  (KI @F{COMPANY}))

Optimized Executable Expression:
(GPC-SOME '(1 CQ)
  (λ (P1)
    (GPC-SOME '(6 CQ)
      (λ (P2)
        (AND (>> (KSV P2 @S{LENGTH}) 190)
             (== (KSV P2 @S{COMPANY}) P1)))
      '(@I{INTEGRA} @I{NOVA} ...)))
  (KI @F{US-COMPANY}))
1988
6
PARSING AND INTERPRETING COMPARATIVES

Manny Rayner
SICS
Box 1263, S-164 28 KISTA, Sweden
Tel: +46 8 752 15 00

Amelie Banks
UPMAIL
Box 1205, S-750 02 UPPSALA, Sweden
Tel: +46 18 181051

ABSTRACT

We describe a fairly comprehensive handling of the syntax and semantics of comparative constructions. The analysis is largely based on the theory developed by Pinkham, but we advance arguments to support a different handling of phrasal comparatives - in particular, we use direct interpretation instead of C-ellipsis. We explain the reasons for dividing comparative sentences into different categories, and for each category we give an example of the corresponding Montague semantics. The ideas have all been implemented within a large-scale grammar for Swedish.

1. INTRODUCTION

This paper is written with two distinct audiences in mind. On the practical side, we try to present a cookbook which the natural language interface implementor can use if he wishes to incorporate comparative constructions into his system's coverage. This is, we trust, interesting in itself; a quick glance at Table 1 should be enough to show that this construction is more common than is perhaps generally realized. Thus in addition to the obvious more, less and as much as, used together with an adjective, adverb or determiner, we also include such words as same, before and after, used in appropriate ways. We also try to give a usable classification of the various kinds of constructions generally lumped together under the blanket heading of "Comparative Ellipsis".

Examples of comparatives

1) John is taller than Mary.                      Adjectival comparison
2) Few people run as fast as John.                Adverbial comparison with "as"
3) John bought more books than Mary.              Determiner comparison
4) John was happier in New York than in London.   Comparison on PP
5) John has more books than Mary has newspapers.  Clausal comparison
6) John had this job before me.                   "Before" comparison
7) John was born in the same city as Mary.        "Same" comparison
8) Mary had more friends than John thought.       "S-operator" comparison
9) More men than women bought the book.           Complex comparative determiner
10) Mary seems brighter than most of the pupils.  "Simple" phrasal comparison

Table 1

On the theoretical side, we want to reexamine some fundamental questions concerning the nature of the comparative construction; we are going to argue that our practical work fairly strongly supports a hypothesis that has already appeared in several forms in the theoretical literature, namely that "comparative ellipsis" is a semantic rather than syntactic phenomenon. We expand more on this theme in section 2. In section 3 we present our handling of clausal comparison, which is a straightforward implementation of Pinkham's theory. The next two sections cover non-clausal comparison, and constitute the main part of the paper. In section 4 we show how Pinkham's predicate copying analysis can be implemented within a Montague grammar framework so that duplication of material is not syntactic copying of parts of the parse-tree but is instead a double application of a higher level function. We demonstrate at length how this method can be used to handle three different kinds of elliptic construction, all of which present problems for the syntactic approach. In section 5 we describe our treatment of the base generated phrasal constructions from section B.2.3 of Pinkham's thesis. (We call these "simple" phrasal comparatives).
In the final section we summarize our results; in particular we address ourselves to the question of justifying our classification of comparatives into separate categories instead of providing a unified interpretation.

The current paper is a shortened version of (Rayner & Banks 88) ("the full paper"), which we will refer to from time to time. This includes among other things test examples and full program listings of a logic grammar based on the SNACK-85 implementation, which covers all forms of comparison discussed here.

2. PREVIOUS WORK

The traditional viewpoint has been to explain non-clausal comparatives by means of deletion rules; the first detailed account based on this idea was (Bresnan 73), which strongly influenced most work in the area during the following ten years. Recently, however, other researchers have pointed out problems with Bresnan's approach; a very thorough and detailed criticism appears in (Pinkham 85)1, which has been our main theoretical source. Pinkham gives examples of a wide range of constructions which are difficult or impossible to explain in terms of deletion phenomena, and suggests instead an approach in which at least some comparative constructions are base-generated phrasal and then interpreted using a rule which she calls "distributive copying". The following example2 shows how the scheme works in practice. Sentence 1a) receives the logical form 1b):

1a) I invited more men than women
1b) I INVITED (MORE [q1 (q1 men), q2 (q2 women)])

1 Hereafter "Pinkham".
2 From Pinkham, p. 123
Our analysis is heavily based on Pinkham's, and virtually amounts to an implementation of the second section of her thesis; we start by summarizing what we see as the main ideas in her treatment. The fundamental notion in Pinkham's analysis is to assume that there is an implicit element present in a comparative clause, which is linked to the head of the comparison 1 in a way similar to that in which a trace or gap is linked to its controller. This "trace" always contains a quantifier-like component. (We will adopt Pinkham's notation and symbolize this as Q). It may consist of just the Q on its own, or else be an implicit NP composed of the Q together with other material from the head of the comparison. Pinkham argues that there are essentially three cases; these are exemplified in sentences 2a) - 2c). In the first of these, just the Q is extraposed; in the second, a Q together with the CN books, taken from the 1 We endeavour throughout this paper to keep our terminology as close as possible to that used by Pinkham. The terms used are summarized in Appendix 1. 51 head more books. If the head contains a comparative adjective, as in 2c), then the extra material, consisting of the adjective and the main noun from the head, is obligatory. For a justification, and an explanation of several apparent exceptions, we refer to Pinkham, p. 33 - 40. 2a) John bought more books than Mary bought (Q) records. 2b) John bought more books than Mary could carry (Q books). 2c) John bought a more expensive vase than Mary bought (a Q expensive vase). A scheme of this kind can readily be implemented using any of the standard ways of handling traces. In our system, which is based on Extraposition Grammar (Pereira 83), we use the "extraposition list" to move the material from the head to the place in the comparative clause where it is going to appear; this corresponds to use of the HOLD register in an ATN, or "slash categories" in a GPSG-like framework. Although this method appears to work well in practice, thre is a theoretical problem arising from the possibility of sentences with crossing extrapositions. We refer to the full paper for further discussion. 4. DIRECT INTERPRETATION OF NON-CLAUSAL COMPARISON 4.1 Basic ideas Our first implementation (Banks 86) was based on the conventional interpretation of comparatives: all comparatives are explicit or elliptic forms of clausal comparatives, making the analysis of comparison essentially a syntactic process. In (Banks & Rayner 87) we presented this in outline and then described some problems we had encountered, which eventually caused us to abandon the approach. Briefly, it turned out that the exact formulation of the syntactic copying process was by no means straightforward: there appeared to be a strong parallel with the well-known arguments against the analogous point of view for co- ordination constructions. (See e.g. (Dowty et. al. 82), p. 271). As an example, we presented sentence 3) 3) Everyone spent more money in London than in New York. which is problematic for a reduction account. We suggested instead that the sentence be thought of as being composed of the following components: the initial everyone, the contrasted elements London and New York, and the duplicated part, which could be rendered (roughly) as is a P such that P spent an amount of money in where _. 
In a Montague- grammar-like formalism, this can then be given the following semantic analysis: 52 "Montagovian" analysis of comparative (spent(x,y,z) is to be read as "x spent amount y in the city z") than in New York 1. everyone 2. New York 3. London 4. spent m in 5. spent more in 6. spent more in London than in New York everyone spent more in London than in New York Table 2 . ~.QVx: person(x)--)Q(x) ~.QBz: [z=New YorkAQ(z)] ~.QBz: [z=LondonAQ(z)] ~.y~XzXx: spent(x,y,z)AP(y) XzXx3y: spent(x,y,z)A By': spent(x,y',New York)Ay>y' Xx.~y: spent(x,y,London)A By':spent(x,y',New York)Ay>y' Vx: person(x)~ [3y: spent(x,y, London)A 3y': spent(x,y',New York)Ay>y'] The key point is that the syntactic copying of the deletion approach has been replaced by a semantic operation, a double instantiation of a lambda- bound form. The following account summarizes how the idea is implemented within the structure of the SNACK-85 system. Semantic interpretation in SNACK-85 is performed by first converting the parse-tree to an intermediate form, which we call (following (Pereira 83)) a quant-tree. This is then subjected to rewriting rules before being converted into the final logical form. Normally, these rewriting rules formalize so- called scoping transformations; here, we will also use them to describe the interpretation of non-clausal comparison. The basic motivation is the same, namely to remove rules from the grammar which lack syntactic motivation. We introduce four new kinds of nodes in addition to those defined in (Pereira 83): we call these comparands, comparative-objects, comparisons, and comparison-placeholders. They interact as follows. (Stage 1) At the syntactic level, we view the comparative object as a constituent in its associated comparative AP; when the parse-tree is transformed into the quant-tree, the AP gets turned into a comparand node, in which there is a comparative-object subnode representing the comparative object. (Stage 2)Rewriting rules then move the comparative-object out of the comparand, leaving behind a placeholder. This is a triple consisting of the compared predicate (the adjective, adverb or whatever), and two logical variables (the "linking" variables), which correspond to the lambda-bound variables y and ~ above. (Stage 3) The "raised" comparative- object node is a 4-tuple. It consists of 53 • The two variables y and P (and is thus "linked" to the placeholder through them- hence the name), • The comparison type (more than, less than, same as etc.) • The quant subnode which represents the comparand NP or PP. The rewriting rules move it upwards until it finds a quant node that it can be compared against. At the moment, the only compatibility requirements are that the quant node and the comparative-object's quant subnode not have incompatible case-markings. This could be improved upon; one way would be to define preference heuristics which gave higher priority to comparisons between quant nodes whose variables are of similar type. The result of merging the two nodes is a comparison node, which is a 5-tuple consisting of • The comparative-object's quant node • The quant node it has been merged with • The comparison type • The two "linking variables", y and P When the quant-tree is converted into logical form, there should thus be only comparison nodes and placeholder nodes left, with the placeholders "below" the comparisons. 
In the final stage, the portion of the quant-tree under the comparison node is duplicated twice, and the linking variables instantiated in each copy in the manner described above. So in the "inner" copy, P gets instantiated to a a form 2y:comp(y,y'), where comp is the type of comparison and y and y' are the degree variables; in the "outer" copy, P is instantiated to the value of the inner form. In the next two subsections, we go further to show how a similar analysis can be used to assign a correct semantics to two other kinds of comparative construction without any recourse to C-ellipsis. 4.2. Comparatives with "s-operators" In this section, we are going to examine comparative constructions like those in 4a), 4b) and 4c). These have a long and honourable history in the semantics literature; 4c) is a famous example due to Russell. 4a) Mary had more friends than John had expected. 4b) Most people paid more than Mary sa/d. 4c) John's yacht was longer than I thought. In order tohandle examples like these within our framework, we need a syntactic representation which does not involve ellipsis. Our solution is to introduce a syntactic constituent which we call an "s-operator": we define this implicitly by saying that an "s- operator" and a sentential complement combine to form a clause. 1 Thus the italicized portions of the sentences above are deemed to be s-operators, and in each of them the s-operator's 1 In a categorial grammar framework like HPSG (Pollard & Sag 88), we could simply identify an s- operator with a constituent of the form S/S-COMP. It is fairly straightforward to define s-operators in XG-grammar. 54 missing complement is viewed as a kind of null pronoun. Although this move may in English seem syntactically quite unmotivated, there are other languages where evidence can be found to support the claim that these pronouns really exist. In Russian, where comparative constructions very closely follow the English and Swedish patterns, they can optionally appear in the surface structure as the pronoun ~ 1"0. The following sentence illustrates this. OH K~H'I'I4,rl 60JII, LUe KHWr qeH ~ 3TO He bought more books than I ~T0 n~Ma~. thought. Semantically, the analysis of such sentences is exactly parallel to that in the preceding subsection. Comparing 4b) with 3), the "initial part" is most people, and the "contrasted elements" are the s-operator Mary said and an implicit trivial s-operator which we can write as (it is true that). The "duplicated part" is the predicate is a P such that P paid amount of money where . We can sketch a "Montagovian" analysis similar to that in table 2 "Montagovian" analysis of s-operator comparative (paid(x,y) is to be read as "x paid y amount of money") 1. most people 2. Mary said 3. (it is true tha0 4. paid 5. paid more than Mary said 6. (it is true tha0 paid more than Mary said 7. most people paid more than Mary said ~.Q: most0~x:person(x).Q) ~.Q: said(m,Q) ~.y~.~ Xx: paid(x,y),<P(y) ~,x3y paid(x,y)A By'said(m,paid(x,y')Ay>y') ~.x3y paid(x,y)A 3y'said(m,paid(x,y')Ay>y') most(~x:person(x), Xx: 3y: paid(x,y)A 3y'said(m,paid(x,y')A y>y') Table 3 The implementation of this analysis in terms of quant-tree rewriting rules involves only a slight extension of the method described in section 4.1 above. The reader is referred to the program code in the full paper for the concrete details. 55 4.3. "Parallel" phrasal comparatives Comparative constructions of the type illustrated in 5a) have been the object of considerable controversy. 
The orthodox position was that they were "parallel" constructions: 5a) would thus be a reduced form of 5b). 5a) More women than men read '1-Iouse and Garden". 5b) More women read "House and Garden" than men read "House and Garden". Pinkham, however, gives good reasons for supposing that this is not the case, and that the construction is in some sense base generated phrasal (p.121- 123). It will presumably not come as a revelation to hear that we agree with this idea, though we express it in a somewhat different way. Our interpretation of Pinkham's analysis recasts the more ... than... construction as a special kind of determiner. We introduce an extra rule for NP formation: in addition to the normal NP --~ Det + CN, we also have NP --~ CompDet + CN + CN. (The details can be found in the full paper). This allows us as usual to give the constituent structure without use of ellipsis, and then to interpret it using a suitable predicate-copying operation. Once again we illustrate with a Montague-style example. "Montagovian" analysis of "paraUel" phrasal comparative (reads(x,y) is to be read as "x habitually reads y") 1. women 2. men 3. more 4. more women than men 5. "House and Garden" 6. read "House and Garden" 7. more women than men read "House and Garden" ~: woman(x) ~x: man(x) XP~.QM~ more(P, Q, R) M~,: more(~x: women(x), Xx: men(x), R) ~.x: x = "H & G" Xx: read(x,y) n y ="H & G" more( ~x: women(x), Xx: men(x), ~x: read(x,"H & G")) Table 4 It is interesting to compare our treatment with that suggested in (Keenan & Stavi 86) (p.282-284) for comparative adjectival constructions like that in 6a); they argue convincingly that these are to be regarded as directly interpreted, rather than as "reduced forms" of sentences like 6b). It seems to us that their arguments can be adapted to support the analysis of "parallel" phrasals given above; so if we were to extend their example, we would have that 6b) in its turn was also to be interpreted directly, rather than considered a reduction of 6c). 6a) More male than female students passed the exam. 56 6b) More male students than female students passed the exam. 6c) More male students passed the exam than female students passed the exam. 5 "SIMPLE" PHRASAL COMPARATIVES We finally turn our attention to a third type of comparative construction, which does not properly seem to fit into any of the patterns given above. We start by giving in 7) - 9) some examples of the kind of sentence we have in mind. 7) Mary seems brighter than most pupils. 8) He ran faster than the world record. 1 9) John needs a bigger 2 spanner than the No. 4. Pinkham uses constructions like these as her key examples when demonstrating the existence of base- generated phrasal comparatives. Looking for instance, at 9), we claim with Pinkham that the most natural solution is to treat bigger spanner than the No. 4 as a dosed constituent with a semantic interpretation which does not involve the rest of the sentence. It may not be obvious at first why this should be so, and we pause briefly to examine the possible alternatives. Firstly, suppose that we tried to use a reduction/predicate copying account. This would make 9) a form of 9a): 9a) John needs a (big to extent X) spanner, X such that John needs the (big to extent Y) No. 4. spanner, X>Y. implying that John needs the No. 4. This is clearly wrong; the "needs" isn't copied in any way, and in fact the scope of any copying operation must be limited to the phrase bigger spanner than the No. 4. 
5. "SIMPLE" PHRASAL COMPARATIVES

We finally turn our attention to a third type of comparative construction, which does not properly seem to fit into any of the patterns given above. We start by giving in 7) - 9) some examples of the kind of sentence we have in mind.

7) Mary seems brighter than most pupils.
8) He ran faster than the world record.¹
9) John needs a bigger² spanner than the No. 4.

¹ Pinkham's example 124a, p. 136.
² We will treat "bigger" as though it were actually "more big" for the usual reasons.

Pinkham uses constructions like these as her key examples when demonstrating the existence of base-generated phrasal comparatives. Looking, for instance, at 9), we claim with Pinkham that the most natural solution is to treat bigger spanner than the No. 4 as a closed constituent with a semantic interpretation which does not involve the rest of the sentence. It may not be obvious at first why this should be so, and we pause briefly to examine the possible alternatives.

Firstly, suppose that we tried to use a reduction/predicate-copying account. This would make 9) a form of 9a):

9a) John needs a (big to extent X) spanner, X such that John needs the (big to extent Y) No. 4 spanner, X > Y.

implying that John needs the No. 4. This is clearly wrong; the "needs" isn't copied in any way, and in fact the scope of any copying operation must be limited to the phrase bigger spanner than the No. 4.

If we are absolutely bent on using copying, it appears to us that the only way in which it can be done is to treat 9) as derived from 9c) through 9b):

9b) John needs a spanner which is bigger than the No. 4.
9c) John needs a spanner which is (big to extent X), X such that the No. 4 is (big to extent Y), X > Y.

To be honest, we can't completely discount this approach. However, since it makes bigger than the No. 4 into a constituent in the intermediate 9b), we think it simpler to interpret the phrase structure directly, as is illustrated in the following Montagovian analysis.

Montagovian analysis of "simple" phrasal comparative (needs(x, y) to be read as "x needs something of which the predicate y holds")

1. John
   λx: x = John
2. needs
   λx, y: needs(x, y)
3. No. 4
   λx: type_of(x, No. 4)
4. big
   λx, y: big(x, y)
5. spanner
   λx: spanner(x)
6. the
   λPλQ: the(P, Q)
7. more
   λPλQλR: λx: ∃y: P(x, y) ∧ R(Q, λz: ∃y′: P(z, y′) ∧ (y > y′))
8. more big than the No. 4
   λx: ∃y: big(x, y) ∧ the(λz: type_of(z, No. 4), λz: ∃y′: big(z, y′) ∧ (y > y′))
9. a bigger spanner than the No. 4
   λx: spanner(x) ∧ ∃y: big(x, y) ∧ the(λz: type_of(z, No. 4), λz: ∃y′: big(z, y′) ∧ (y > y′))
10. John needs a bigger spanner than the No. 4
   needs(John, λx: spanner(x) ∧ ∃y: big(x, y) ∧ the(λz: type_of(z, No. 4), λz: ∃y′: big(z, y′) ∧ (y > y′)))

Table 5

It will be apparent that bigger than the No. 4 turns up as a constituent here too, and thus our solution is in a sense equivalent with the alternative one proposed above. This is a striking illustration of the difficulties that can attend any efforts to make rigorous comparisons between different syntactic-semantic analyses of natural-language constructions.
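As a rough illustration of what direct interpretation of the NP buys us, the following sketch evaluates the property in row 9 of Table 5 over a toy domain. The degree values, names, and the Russellian rendering of the are our own assumptions, not the paper's code.

```python
# Toy domain for "a bigger spanner than the No. 4": every key is a
# spanner, and its value is the degree y to which big(x, y) holds.
SPANNERS = {"s1": 10, "s2": 25, "no4": 15}
TYPE_NO4 = {"no4"}

def the(restriction, scope):
    """Russellian 'the': a unique satisfier of the restriction exists
    and it satisfies the scope."""
    members = list(restriction)
    return len(members) == 1 and scope(members[0])

def bigger_spanner_than_no4(x):
    y = SPANNERS[x]  # degree to which x is big
    return the(TYPE_NO4, lambda z: SPANNERS[z] < y)

print([x for x in SPANNERS if bigger_spanner_than_no4(x)])  # ['s2']
```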
6. CONCLUSIONS

We have presented a method for syntactic and semantic interpretation of comparative sentences. This has been done by dividing our material into three separate groups, each of which is treated differently:

• Clausal comparatives (section 3), which are handled by extraposing a constituent containing a Q, following Pinkham's theoretical analysis.
• Phrasal comparatives (section 4), treated by direct interpretation using "predicate copying".
• "Simple" phrasals (section 5), handled by a different direct interpretation method.

We do not claim that this classification is the only way to explain the facts; as we have said above, it would be possible to rewrite simple phrasal comparatives into directly interpreted phrasal comparatives, and also to rewrite directly interpreted phrasal comparatives as clausal comparatives. We think, however, that this manoeuvre would give us nothing in the way of real gains; even though a unified solution might seem more elegant, the syntactic transformations needed are more complicated than the use of different categories. Thus our first argument against a unified approach is the practical one: we need do less work as implementors if we adopt the classification described here.

Despite this, we suspect that many readers (especially those more theoretically than practically inclined) would find it comforting to have some direct evidence that supports our point of view. In this connection we think that the following data from Swedish may be of interest.

Comparative constructions in Swedish are virtually identical to the corresponding ones in English. One significant difference, however, is the distribution of the relative pronoun vad ("what"); this can optionally be inserted after the comparative marker in some constructions, as shown in 10) and 11).¹

10) Johan köpte fler böcker än (vad) Maria gjorde.
    John bought more books than (what) Mary did.

11) Johan har ett dyrare hus än (vad) jag har.
    John has a more expensive house than (what) I have.

¹ This is also possible in some dialects of English.

Given the correspondences between clausal comparison and relative clauses described in section 4, it is very tempting to account for the vad as a relative pronoun realizing the normally null Q. If we are prepared to accept this, it then appears significant that vad may not be used in most phrasal comparatives, as shown in 12) and 13). This would seem problematic for a transformational account, but is quite natural if phrasal comparatives are treated by direct interpretation; there isn't any Q, so it can't be realized as a vad.

12) Johan köpte fler böcker än (*vad) Maria.
    John bought more books than (*what) Mary.

13) Fler kvinnor än (*vad) män läser "Hänt i Veckan".
    More women than (*what) men read "News of the World".

There is, however, one exception to the rule: vad may appear in the "s-operator" constructions from section 4.2 above, as shown in 14).

14) Johan köpte fler böcker än (vad) Maria trodde.
    John bought more books than (what) Mary thought.

We are not certain how to explain this, and leave the reader to judge the facts for himself²; but despite this irregularity, we think the other data gives our theory a considerable amount of concrete backing.

² One possibility is that this is a result of cognitive limitations in the human sentence-processing mechanism, since an arbitrary amount of text can separate a vad from the realization that the construction is s-operator rather than clausal comparison.

APPENDIX: TERMINOLOGY

Comparative Clause: the clause introduced by the comparison marker.

Compared Element: the largest constituent in the main or the comparative clause, the leftmost element of which is a comparison marker or the comparative quantifier Q.

Comparison Marker: words like than, as, before, after.

Head of the Comparison: refers to the compared element in the main clause.

Phrasal Comparative: a comparative complement which appears to be the reduced form of a comparative clause. This may be a remnant of the application of Comparative Ellipsis to a comparative clause, or it may be base generated.

Q: an (implicit or explicit) comparison quantifier which is extraposed in the interpretation of clausal comparatives.

REFERENCES

(Banks 86) Banks, A., Modifiers in Natural Language, Bachelor's Thesis, Uppsala University, 1986.
(Banks & Rayner 87) Banks, A. and Rayner, M., Comparatives in Logic Grammars - Two Viewpoints, Proceedings of the 2nd International Workshop on Natural Language Understanding and Logic Programming, pp. 131-137, 1987.
(Bresnan 73) Bresnan, J., Syntax of the Comparative Clause Construction in English, Linguistic Inquiry 4, pp. 275-343, 1973.
(Dowty et al. 82) Dowty, D., Wall, R.E. and Peters, S., Introduction to Montague Semantics, D. Reidel, 1982.
(Keenan & Stavi 86) Keenan, E.L. and Stavi, J., Natural Language Determiners, Linguistics and Philosophy 9, pp. 253-325, 1986.
(Levin 82) Levin, L., Sluicing: A Lexical Interpretation Procedure, in Bresnan, J. (ed.), The Mental Representation of Grammatical Relations, MIT Press, 1982.
(Pinkham 85) Pinkham, J., The Formation of Comparative Clauses in French and English, Garland Publishing Inc., New York, 1985.
(Pereira 83) Pereira, F.C.N., Logic for Natural Language Analysis, SRI Technical Note No. 275, 1983.
(Pollard & Sag 88) Pollard, C. and Sag, I., Information-Based Syntax and Semantics, Vol. 1, CSLI, 1988.
(Rayner & Banks 86) Rayner, M. and Banks, A., Temporal Relations and Logic Grammars, Proceedings of ECAI-86, Vol. 2, pp. 9-14, 1986.
Defining the Semantics of Verbal Modifiers in the Domain of Cooking Tasks

Robin F. Karlin
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104-6389

Abstract

SEAFACT (Semantic Analysis For the Animation of Cooking Tasks) is a natural language interface to a computer-generated animation system operating in the domain of cooking tasks. SEAFACT allows the user to specify cooking tasks using a small subset of English. The system analyzes English input and produces a representation of the task which can drive motion synthesis procedures. This paper describes the semantic analysis of verbal modifiers on which the SEAFACT implementation is based.

Introduction

SEAFACT is a natural language interface to a computer-generated animation system (Karlin, 1988). SEAFACT operates in the domain of cooking tasks. The domain is limited to a mini-world consisting of a small set of verbs chosen because they involve rather complex arm movements which will be interesting to animate. SEAFACT allows the user to specify tasks in this domain, using a small subset of English. The system then analyzes the English input and produces a representation of the task. An intelligent simulation system (Fishwick, 1985, 1987), which is currently being extended, will provide the final link between the SEAFACT representation and lower level motion synthesis procedures. The representation consists of a decomposition of verbs into primitive actions which are semantically interpretable by the motion synthesis procedures. It also includes default information for all knowledge which is not made explicit in the input, but must be explicit in the animated output. The representation contains sufficient non-geometric information needed to schedule task start and end times, describe concurrent actions, and provide reach, grasp, and motion goals.

An empirical, linguistic study of recipes was conducted with the goals of delimiting the scope of the cooking domain, identifying important verbal modifiers, and defining the semantics of those modifiers. This paper is concerned primarily with describing the results of this study and the implementation of some of the modifiers.

A Linguistic Analysis of Verbal Modifiers

An empirical study of approximately 110 sentences from nine cookbooks was carried out. Verbal modifiers were found to play an essential role in the expressive power of these sentences. Therefore, in order to develop a representation for the verbal modifiers, the study describes and categorizes their occurrences and provides a semantic analysis of each of the categories. Each of the categories is considered a semantic role in the representation of the natural language input. Temporal adverbials were found to be particularly prevalent in recipes because they are needed to specify temporal information about actions which is not inherent in the meaning of verbs and their objects. This paper discusses two categories of temporal modifiers, duration and repetitions, as well as speed modifiers. Other categories of modifiers which were analyzed include quantity of the object, end result, instrument, and force.

Passonneau (1986) and Waltz (1981, 1982) are concerned with developing semantic representations adequate for representing adverbial modification. Passonneau's work shows that to account for tense and grammatical aspect requires a much more complex representation of the temporal components of language than the one used in SEAFACT.
However, she does not look at as many categories of temporal adverbials, nor does she propose a specific representation for them. Waltz (1982) suggests that adverbs will be represented by the scales in his event shape diagrams. For example, time adverbials will be represented by the time scale and quantity adverbials by the scale for quantity of the verbal objects. This is similar to the approach taken in SEAFACT. In SEAFACT scales are replaced by default amounts for the category in question, for example the duration of a primitive action.

Aspectual Category of an Event

The aspectual category of an event is relevant because it affects which types of modifiers (e.g., repetitions, duration) can co-occur with the event. The analysis of aspect given in Moens (1987) (see also Moens (1988)) is adopted here. Moens and Steedman identify temporal/aspectual types following Vendler, but introduce new terminology. They apply these types to entire sentences, analyzed in their global context. Moens and Steedman's events are classified as culminated processes, culminations, points, or processes. The majority of events in the cooking domain are culminated processes.

"A culminated process is a state of affairs that also extends in time but that does have a particular culmination associated with it at which a change of state takes place." (Moens, 1987, p. 1)

Each process in cooking must have a culmination because any cooking task involves a finite sequence of steps whose goal is to bring about a state change. An important point about verbal modifiers in the cooking domain, revealed in this study, is that many of them are concerned with characterizing the culmination points of processes. In many cases a verb and object alone do not specify a clear culmination point. For example, the command beat the cream does not contain information about the desired culmination of the process, that is, when to stop the beating. Some sort of verbal modifier such as for 10 minutes or just until it forms peaks is necessary to specify the culmination of the process.

Another aspectual type is a culmination.

"A culmination is an event which the speaker views as accompanied by a transition to a new state of the world. This new state we will refer to as the 'consequent state' of the event." (Moens, 1987, p. 1)

Culminations, such as cover the pot, are not extended in time as are processes and culminated processes.

In addition to the sentential aspect discussed above, the SEAFACT implementation identifies the lexical aspect of the verb. The lexical aspect refers to the aspectual category which can be ascribed to a verb considered outside of an utterance. For example, the lexical aspect of the verb stir is a process. However, the sentential aspect of the sentence stir the soup for 2 minutes is a culminated process. The implementation checks that the sentential aspect of each input sentence containing a process verb is a culminated process. That is, there must be some verbal modifier which coerces the process into a culminated process. If this is not the case, as in the sentence stir the soup, then the input is rejected, since it would specify an animation without an ending time. The lexical aspect is also used in the analysis of speed modifiers, as discussed below.
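A minimal sketch of this acceptance check follows. The verb and modifier inventories and the function names are invented for illustration; SEAFACT's actual implementation is rule-based and considerably larger.

```python
# Sketch of the aspect check: a sentence headed by a process verb is
# accepted only if some modifier coerces it into a culminated process.
PROCESS_VERBS = {"stir", "beat", "simmer"}          # lexical aspect: process
CULMINATING_MODIFIERS = {"duration", "repetitions", "state_change"}

def sentential_aspect(verb, modifiers):
    """Aspect of the whole sentence, given the verb's lexical aspect
    and the semantic roles filled by its modifiers."""
    if verb in PROCESS_VERBS:
        if CULMINATING_MODIFIERS & set(modifiers):
            return "culminated_process"
        return "process"          # no endpoint: cannot be animated
    return "culmination"

def accept(verb, modifiers):
    return sentential_aspect(verb, modifiers) != "process"

print(accept("stir", ["duration"]))  # True:  "stir the soup for 2 minutes"
print(accept("stir", []))            # False: "stir the soup" is rejected
```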
The Number of Repetitions of the Action

Any expression which includes an endpoint, and therefore belongs to one of the aspectual classes of points, culminations, or culminated processes, can be described as having a number of discrete repetitions. When a culminated process is described as having a number of repetitions, it is the entire process which is repeated. Process-type events cannot have a number of repetitions associated with them, since they do not include the notion of an endpoint. The number of repetitions of the event can be specified as a cardinal number, as a frequency, or indirectly as a result of the object of the verb being plural, having multiple parts, or being a mass term.

Cardinal Count Adverbials

Cardinal count adverbials (Mourelatos, 1981, p. 205) specify an exact number of repetitions of the event.

(1) baste twice during the cooking period (Rombauer, 1931, p. 350)

Notice that in the case of certain verbs or sentential contexts it is not possible to specify a number of repetitions for a culminated process. This is the case when the culmination involves a state change to the object which makes a repetition of the action impossible or meaningless. Consider the example, *Freeze twice. Freeze is a culminated process, and once the culmination has taken place the new state of the substance makes a repetition of the process redundant. Talmy (1985) proposes a classification scheme of aspectual types of verb roots which formalizes this distinction. He would classify freeze as a one-way non-resettable verb and baste as a one-way resettable verb (Talmy, 1985, p. 77). He suggests that these types can be distinguished by their ability to appear with iterative expressions. This distinction can also be made by means of world knowledge about the verbs in question.

Frequency Adverbials

Frequency adverbials (Mourelatos, 1981, p. 205) describe the number of repetitions of an action using a continuous scale with gradable terms (Croft, 1984, p. 26) such as frequently, occasionally, and seldom.

(2) Bring to a boil, reduce the heat, and simmer 20 minutes, stirring occasionally, until very thick. (Poses, 1985, p. 188)

The meaning of frequency adverbials is best captured by stating the length of the intervals between repetitions of the action. For example, the meaning of occasionally is that the number of minutes between incidents of stirring is large. An additional complication is that frequency adverbials must be interpreted relative to the total length of time during which the event may be repeated. If the total time period is longer, the intervals must be proportionately longer. Like other gradable terms, such as tall and short, frequency adverbials are interpreted relative to their global context, in this case the cooking domain. Values must be determined for each of the gradable terms, based on knowledge of typical values in the domain. In the SEAFACT implementation these values consist of cardinal numbers which specify the length of an interval between repetitions of the action, expressed as a percentage of the total time period.

The following calculations are made when a frequency adverbial is present in a sentence. The length of a single interval between incidents of the action is calculated by using a percentage value associated with the frequency adverbial, such that IntervalTime = Percentage × TotalTime.
The number of intervals present during the total time period is calculated by dividing the total time period by the sum of the length of one incident of the action and the length of a single interval.

A simplifying assumption is made here that the intervals between repetitions are equal. Occasionally might then mean intervals which are 25 per cent of the total time period, and frequently might mean intervals which are 5 per cent of the total time period. This algorithm seems to coincide with the intuitive judgment that it is not normal to say stir occasionally during a very short time period such as 30 seconds. In such a case, the length of an individual stirring event might be longer than the total time. That is, for the domain in question there is some minimum interval between stirring events which is necessary for the term occasionally to be appropriate.
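The calculation can be summarized in a few lines. The percentage values and the default length of a stirring event below are invented stand-ins for SEAFACT's actual domain defaults.

```python
# Sketch of the frequency-adverbial interval calculation.
FREQUENCY_PERCENTAGE = {"frequently": 0.05, "occasionally": 0.25}

def repetition_schedule(adverb, total_time, action_length):
    """Return (interval_time, n_repetitions) for a frequency adverbial,
    assuming equal intervals between repetitions."""
    interval_time = FREQUENCY_PERCENTAGE[adverb] * total_time
    n_repetitions = int(total_time // (action_length + interval_time))
    return interval_time, n_repetitions

# "stir occasionally" over 120 seconds, one stirring event lasting 10 s:
print(repetition_schedule("occasionally", 120, 10))  # (30.0, 3)
```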
Plural Objects

The use of plural objects or mass terms with a verb may or may not indicate that the action is to be repeated. The verb may indicate a single action which is performed on multiple objects simultaneously, or it may indicate an action which is repeated for each of a number of objects. This distinction does not always coincide with a mental conception of the objects as a mass or as individuals. Rather, it depends on physical attributes of the objects such as size and consistency.

(3) chop the nuts

In (3), world knowledge tells us that since nuts are small and relatively soft they can be chopped together in a group, perhaps using a cleaver.

(4) chop the tomatoes with a knife

Here, world knowledge tells us that (4) usually requires a separate chopping event for each tomato, since tomatoes are large compared to knives and have skins which are not easily pierced. Notice that this is a case of repetition of a culminated process. Verbal modifiers may also be used to make explicit whether an action is to be performed separately on each object in a group or once on a group of objects together.

(5) beat in the eggs one at a time (Gourmet, 1986, p. 12)
(6) beat in 5 eggs until smooth

In (5), the phrase one at a time makes explicit that there is to be a separate beating process for each egg. In (6), a sentence without a verbal modifier, the culminated process beat in is performed once on the objects indicated.

The Duration of an Action

Any expression whose aspectual type is a process or culminated process can co-occur with a duration modifier. The duration of a culminated process refers to the amount of time it continues before the culmination of the process. Duration can be specified as a cardinal number or a gradable term, corresponding to the categories used for number of repetitions. Duration can also be specified as co-extensive with the duration of another event, in terms of the change which signals the culmination, and as a disjunction of an explicit duration and a state change.

Explicit Duration in Time Units

Verbal modifiers may specify an explicit duration by giving a length of time. This can be less exact when a range of time or a minimum is specified.

(7) stir for 1 minute; set aside. (Morash, 1982, p. 132)

Duration Given by Gradable Terms

The duration of an action can be specified by gradable terms on a continuous scale.

(8) blend very briefly (Robertson, 1976, p. 316)

Duration Co-extensive with the Duration of Another Action

In the cooking domain it is often necessary to do several actions simultaneously. In such cases it is most natural to express the duration of one of the activities in terms of the duration of the other one.

(9) Continue to cook while gently folding in the cheeses with a spatula. (Poses, 1985, p. 186)
(10) Reduce the heat to medium and fry the millet, stirring, for 5 minutes or until it is light golden. (Sahni, 1985, p. 283)

Duration Characterized by a State Change

All processes in the cooking domain must have culminations, since cooking consists of a finite number of steps executed with limited resources. The language used to describe these processes can convey their culminations in different ways. In some cases a verb may contain inherent information about the endpoint of the action which it describes. In other cases verbal modifiers characterize the endpoint.

(11) Chop the onion.

Example (11) specifies a culminated process whose endpoint is defined by the state of the onion. While the desired final state of the onion could be specified more exactly by some adverb such as finely or coarsely, in the absence of such a modifier an endpoint can be established based on lexical knowledge about the state of an object which has been chopped.

In many cases, however, the meaning of the process verb does not include information on the endpoint of the process, or the domain requires more specific information than that conveyed by the verb alone. For example, in many contexts the verb beat does not supply the duration or the particular end result of the beating which would determine the duration. This is because different amounts of beating bring about different final states for many substances.

Therefore, the cooking domain includes many examples of duration of an action characterized by the specification of a state change in the object being acted on. There must be some perceptual test which verifies when a state change has occurred. For visual changes the test consists of looking at the substance in question. A preparatory action is required only if the substance is not immediately visible, for example, if it is in the oven or in a closed pot. Changes which must be perceived by other senses usually require additional actions. For example, to perform a tactile test one must touch the substance either directly or with some instrument. The following is an example of a state change which can be perceived visually without an active test.

(12) Saute over high heat until moisture is evaporated (Morash, 1982, p. 131)

Disjunctions of Explicit Durations and State Changes

(13) steam 2 minutes or until mussels open (Poses, 1985, p. 83)

The meaning of sentences in this category is not the same as that of logical disjunction. Example (13) does not give the cook a choice between steaming for 2 minutes or until the mussels open. The actual meaning of these disjunctions is that the state change is to be used to determine the duration of the action. The explicit duration provides information on the usual amount of time that is needed for the state change to take place.

Ball (1985) discusses problems that arise in the semantic interpretation of what she calls metalinguistic or non-truth-functional disjunction. "The first clause is asserted, and the right disjunct provides an alternate, more accessible description of the referent of the left disjunct." (Ball, 1985, p. 3) The truth of these sentences depends on the truth of the first disjunct.
Ball claims that if the first disjunct is true and the second is not, then the sentence is still true, although "our impression will be that something has gone wrong." (Ball, 1985, p. 3) The disjunctions of explicit durations and state changes seem to be another type of metalinguistic disjunction. They are very similar to the examples given by Ball except that it is the right disjunct which determines the truth of the sentence and the left disjunct which provides an alternate description. Furthermore, this alternate does not have to be strictly synonymous with the right disjunct. The semantics of these disjunctions includes the notion that the left disjunct is only an approximation.

The Speed

The following verbal modifiers are gradable terms which characterize the speed of the action.

(14) quickly tilt and turn the dish (Heatter, 1965, p. 400)
(15) very gradually pour (Heatter, 1965, p. 393)

The SEAFACT implementation contains values for these terms based on knowledge of typical values in the domain. These values are the amount by which the default duration of an action should be multiplied to arrive at the new duration specified by the speed term. The lexical aspect of the verb is used to decide whether all or only a portion of the primitive actions which comprise the verbal action are affected by the speed factor. If the verb is a process then only a portion of the primitive actions are affected. For example, stir the soup quickly for 5 minutes means to make the repeated rotations of the instrument quickly, probably in order to prevent the soup from burning. It does not imply that the entire motion associated with stirring, which includes picking up the instrument and putting it in the soup and later removing it from the soup, must be done quickly. The latter interpretation would mean that the speed term was meant to modify the time which the entire action takes to complete. However, processes in this domain must be specified with a duration, and so the duration of the entire action is already fixed. In contrast, if the lexical aspect of the verb is a culmination or culminated process then the duration of the entire action is meant to be modified by the speed term. An example of this is cover the pot quickly.
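A minimal sketch of this rule is given below, with invented multipliers and primitive names; the core flag marks the repeated primitives that a process verb's speed modifier affects.

```python
# Sketch of the speed-modifier rule (values are not SEAFACT's actual ones).
SPEED_FACTOR = {"quickly": 0.5, "gradually": 2.0}

def apply_speed(primitives, speed, lexical_aspect):
    """Scale primitive-action durations by the speed factor. For a
    process verb, only the 'core' repeated primitives are affected;
    for a culmination or culminated process, every primitive is."""
    factor = SPEED_FACTOR[speed]
    return [
        (name, duration * factor)
        if lexical_aspect != "process" or core else (name, duration)
        for name, duration, core in primitives
    ]

# "stir ... quickly": only the repeated rotation is sped up.
stir = [("grasp-spoon", 2.0, False), ("rotate", 1.0, True),
        ("remove-spoon", 2.0, False)]
print(apply_speed(stir, "quickly", "process"))
# [('grasp-spoon', 2.0), ('rotate', 0.5), ('remove-spoon', 2.0)]
```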
The SEAFACT Implementation

There are several stages in the translation from English input to the final representation required by the animation simulator. The first stage includes parsing and the production of an intermediate semantic analysis of the input. This is accomplished by BUP, A Bottom Up Parser (Finin, 1984). BUP accepts an extended phrase structure grammar. The rules consist of the intermediate semantic representation and tests for rule application. The latter include selectional restrictions which access information stored in several knowledge bases. The intermediate semantic representation consists of roles and their values, which are taken from the input sentence.

SEAFACT includes a number of knowledge bases which are implemented using DC-RL, a frame-based knowledge representation language (Cebula, 1986). Two of these knowledge bases, the Object KB and the Linguistic Term KB, are used by the parser to enforce selectional restrictions attached to the grammatical rules.

The Object KB contains world knowledge about the objects in the domain. It contains a representation of each object which can be referred to in the natural language input. These objects are classified according to a very general conceptual structure. For example, all edible items are classified as food, cooking tools are classified as instruments, and cooking vessels are classified as containers. This information is used to enforce selectional restrictions in the rules for prepositional phrases. The selectional restrictions check the category to which the prepositional object belongs. For example, if the prepositional object is an instrument then the rule which applies builds an intermediate semantic representation of the form (INSTRUMENT prepositional-object). If the prepositional object denotes a time, and the preposition is for, then the rule which applies builds an intermediate semantic representation of the form (DURATION (EXPLICIT prepositional-object)).

The Linguistic Term KB contains a classification of adverbial modifiers which is used to enforce selectional restrictions on the rules for adverbial phrases. For example, if an adverb is classified as a frequency term then the rule which applies builds an intermediate semantic representation of the form (REPETITIONS (FREQUENCY frequency-term)).

The second stage in the processing is to create representations for the verb and the event. The event representation has roles for each of the temporal verbal modifiers. Each verb has its own representation containing roles for each of the verbal modifiers which can occur with that verb. The verb representations contain default values for any roles which are essential (Palmer, 1985). Essential roles are those which must be filled, but not necessarily from the input sentence. For example, the representation for the verb stir includes the essential role instrument with a default value of spoon. After the event and verb representations are created, the role values in those representations are filled in from the roles in the intermediate semantic representation. Default values are used for any roles which were not present in the input sentence. Each verb in the input is represented by a number of primitive actions which are interpretable by the animation software. In the second stage, the system also creates a representation of the final output which includes values for the starting time and duration of each of these actions.
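The role-filling step can be sketched as follows. The frame contents and function names are illustrative assumptions, not DC-RL code.

```python
# Sketch of stage two: filling a verb frame's roles from the parser's
# intermediate semantic representation, with defaults for essential roles.
VERB_FRAMES = {
    "stir": {"substance1": None,        # must come from the input
             "instrument": "spoon",     # essential role with default
             "duration": None,
             "repetitions": None},
}

def fill_roles(verb, intermediate_rep):
    """Copy role values into a fresh copy of the verb frame; unfilled
    essential roles keep their default values."""
    frame = dict(VERB_FRAMES[verb])
    for role, value in intermediate_rep:
        frame[role] = value
    return frame

# "Stir the batter with a whisk for 2 minutes"
rep = [("substance1", "batter"), ("instrument", "whisk"),
       ("duration", ("EXPLICIT", "2 minutes"))]
print(fill_roles("stir", rep))
```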
The second example, Stir the soup occasionally for 2 minutes is more complicated because of the fre- quency adverbial. The intermediate semantic repre- sentation includes s substance1 role filled by soup, a duration role filled by ~ minutes, and a repetitions role filled by occasionally. These values are inserted in the verb and event representations. The default value for the instrument role, spoon, is used. The MAC finds the frequency adverbial and checks for the presence of a duration. However, if no duration were specified, then the sentence would be rejected because the animation requires that each action be finite. The duration specifies the total time interval during which the frequency adverbial applies. The algorithm de- scribed above is used to compute the length of the intervals between stirring events. The length of a single stirring event is a default which is part of the representation of the primitive actions. The number of stirring events which fit in the total time period is calculated. The output consists of repetitions of pairs of the following type: the primitives for a stir- ring event and a specification for no action during the interval between stirring events. A planner could be used to insert some other action into the intervals of no action. Conclusion This analysis has identified categories of verbal mod- ifiers which are found frequently in recipes. While all of these categories are found in other domains as well, some of them are particularly prevalent in this domain because the purpose of recipes is to describe procedures. The temporal category which charac- terizes the duration of an action by a state change is particularly common in recipes for two reasons. First, the physical process of cooking always involves state changes to objects and second, the meaning of many verbs used to describe cooking processes does not include information about the state change which should trigger the culmination of the process. There- fore, verbal modifiers are necessary to make the de- sired state changes explicit. This analysis has also shown a relationship between aspectual categories of events and the modifiers which may co-occur with them. For example, the categories of modifiers which express the number of repetitions of an action can only modify expressions which in- clude an endpoint, that is, points, culminations, or culminated processes. The analysis of the verbal modifier categories re- veals many areas where common sense knowledge or physical knowledge about the world is required to rep- resent the semantics of these categories. For example, when an action is performed on a plural object, phys- ical knowledge about the size and consistency of the objects and about the action itself is necessary to ten us whether it must be repeated for each of the objects separately or performed on all the objects in a group. SEAFACT is a successful implementation of a nat- ural language interface to a computer-generated an- imation system, operating in the domain of cooking tasks. The primitive actions along with the timing information in the SEAFACT output are used to rep- resent the range of verbal modifiers discussed in this paper. The output will be interpreted by an interface to the lower level motion synthesis procedures. This interface (Badler, 1988, 1987a, 1987b) can interpret each type of information in the SEAFACT output: motion changes (e.g. rotation), motion goals, con- stralnts in position and orientation, and temporals. Acknowledgements I would like to thank Dr. 
Conclusion

This analysis has identified categories of verbal modifiers which are found frequently in recipes. While all of these categories are found in other domains as well, some of them are particularly prevalent in this domain because the purpose of recipes is to describe procedures. The temporal category which characterizes the duration of an action by a state change is particularly common in recipes for two reasons. First, the physical process of cooking always involves state changes to objects, and second, the meaning of many verbs used to describe cooking processes does not include information about the state change which should trigger the culmination of the process. Therefore, verbal modifiers are necessary to make the desired state changes explicit.

This analysis has also shown a relationship between aspectual categories of events and the modifiers which may co-occur with them. For example, the categories of modifiers which express the number of repetitions of an action can only modify expressions which include an endpoint, that is, points, culminations, or culminated processes.

The analysis of the verbal modifier categories reveals many areas where common sense knowledge or physical knowledge about the world is required to represent the semantics of these categories. For example, when an action is performed on a plural object, physical knowledge about the size and consistency of the objects and about the action itself is necessary to tell us whether it must be repeated for each of the objects separately or performed on all the objects in a group.

SEAFACT is a successful implementation of a natural language interface to a computer-generated animation system, operating in the domain of cooking tasks. The primitive actions along with the timing information in the SEAFACT output are used to represent the range of verbal modifiers discussed in this paper. The output will be interpreted by an interface to the lower level motion synthesis procedures. This interface (Badler, 1988, 1987a, 1987b) can interpret each type of information in the SEAFACT output: motion changes (e.g. rotation), motion goals, constraints in position and orientation, and temporals.

Acknowledgements

I would like to thank Dr. Bonnie Webber, Dr. Norman Badler, Dr. Mark Steedman, and Dr. Rebecca Passonneau for providing me with guidance and many valuable ideas. This research is partially supported by Lockheed Engineering and Management Services, NASA Grant NAG-2-4026, NSF CER Grant MCS-82-19196, NSF Grant IST-86-12984, and ARO Grant DAAG29-84-K-0061, including participation by the U.S. Army Human Engineering Laboratory.

References

Badler, Norman I., Jeffrey Esakov, Diana Dadamo, and Phil Lee, Animation Using Constraints, Dynamics, and Kinematics, in preparation, Technical Report, Department of Computer and Information Science, University of Pennsylvania, 1988.

Badler, Norman I., Computer Animation Techniques, in 2nd International Gesellschaft für Informatik Congress on Knowledge-Based Systems, Springer-Verlag, Munich, Germany, October 1987a, pp. 22-34.

Badler, Norman I., Kamran Manoochehri, and Graham Walters, Articulated Figure Positioning by Multiple Constraints, IEEE Computer Graphics and Applications, June 1987b, pp. 28-38.

Ball, Catherine N., On the Interpretation of Descriptive and Metalinguistic Disjunction, unpublished paper, University of Pennsylvania, August 1985.

Cebula, David P., The Semantic Data Model and Large Data Requirements, University of Pennsylvania, CIS Dept., Technical Report 87-79, Sept. 1986.

Croft, William, The Representation of Adverbs, Adjectives and Events in Logical Form, Technical Note 344, Artificial Intelligence Center, Computer Science and Technology Division, SRI International, Menlo Park, Ca., December 1984.

Finin, Tim and Bonnie Lynn Webber, BUP: A Bottom Up Parser, Technical Report MC-CIS-83-16, University of Pennsylvania, 1984.

Fishwick, Paul A., The Role of Process Abstraction in Simulation, submitted to IEEE Systems, Man and Cybernetics, April 1987.

Fishwick, Paul A., Hierarchical Reasoning: Simulating Complex Processes Over Multiple Levels of Abstraction, PhD Thesis, Technical Report, University of Pennsylvania, 1985.

Gourmet Magazine, Volume XLVI, Number 6, June 1986.

Karlin, Robin F., SEAFACT: Semantic Analysis for the Animation of Cooking Tasks, Technical Report MS-CIS-88-04, Graphics Lab 19, Computer and Information Science, University of Pennsylvania, 1988.

Moens, Marc and Mark Steedman, Temporal Ontology in Natural Language, in Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, ACL, 1987, pp. 1-7.

Moens, Marc and Mark Steedman, forthcoming, Computational Linguistics, Volume 14, Number 2, 1988.

Morash, Marion, The Victory Garden Cookbook, Alfred A. Knopf, N.Y., 1982.

Mourelatos, Alexander P. D., Events, Processes, and States, in Syntax and Semantics, Tense and Aspect, Vol. 14, Philip Tedeschi and Annie Zaenen (eds.), Academic Press, New York, 1981, pp. 191-212.

Palmer, Martha S., Driving Semantics for a Limited Domain, PhD Dissertation, University of Edinburgh, 1985.

Passonneau, Rebecca J., A Computational Model of the Semantics of Tense and Aspect, forthcoming, Computational Linguistics, Volume 14, Number 2, 1988; Tech. Memo 43, Dec. 17, 1986, Unisys, Paoli Research Center, Paoli, Pa.

Poses, Steven, Anne Clark, and Becky Roller, The Frog Commissary Cookbook, Doubleday & Company, Garden City, N.Y., 1985.

Rombauer, Irma S. and Marion Rombauer Becker, Joy of Cooking, Signet, New American Library, N.Y., 1931.

Sahni, Julie, Classic Indian Vegetarian and Grain Cooking, William Morrow and Co., Inc., N.Y., 1985.
Talmy, Leonard, Lexicalization Patterns: Semantic Structure in Lexical Forms, in Language Typology and Syntactic Description, Volume III, Grammatical Categories and the Lexicon, Timothy Shopen (ed.), Cambridge University Press, Cambridge, 1985.

Waltz, David L., Event Shape Diagrams, in AAAI-1982, pp. 84-87.

Waltz, David L., Toward a Detailed Model of Processing For Language Describing the Physical World, in IJCAI-1981, pp. 1-6.
THE INTERPRETATION OF TENSE AND ASPECT IN ENGLISH

Mary Dalrymple
Artificial Intelligence Center
SRI International
333 Ravenswood Avenue
Menlo Park, California 94025 USA

ABSTRACT

An analysis of English tense and aspect is presented that specifies temporal precedence relations within a sentence. The relevant reference points for interpretation are taken to be the initial and terminal points of events in the world, as well as two "hypothetical" times: the perfect time (when a sentence contains perfect aspect) and the progressive or during time. A method for providing temporal interpretation for nontensed elements in the sentence is also described.

1. Introduction

The analysis of tense and aspect requires specifying what relations can or cannot hold among times and events, given a sentence describing those events.¹ For example, a specification of the meaning of the past-tense sentence "John ate a cake" involves the fact that the time of the main event - in this case, the cake-eating event - precedes the time of utterance of the sentence. Various proposals have also been made regarding the analysis of aspect which involve auxiliary times or events, whereby the proper relationship of these auxiliary times or events to "real" main events is specified.

¹ The work presented here was supported by SRI International. I am grateful to Phil Cohen, Bill Croft, Doug Edwards, Jerry Hobbs, Doug Moran, and Fernando Pereira for helpful discussion and comments.

We provide an analysis of English tense and aspect that involves specifying relations among times rather than events. We also offer a means of interpreting tenseless elements like nouns and adjectives whose interpretation may be temporally dependent. For example, the noun phrase "the warm cakes" picks out different sets of cakes, depending on the time relative to which it receives an interpretation.

The analysis presented here has been implemented with the Prolog data base query system CHAT (Pereira 1983), and the representations are based on those used in that system. We shall show that an analysis of tense and aspect involving specification of relations among times rather than among events results in a clean analysis of various types of sentences.

2. Time Points

Harper and Charniak (1986) [henceforth H&C] provide an interesting and revealing analysis of English tense and aspect involving relations between events. There are several kinds of events: the utterance event, which is associated with the time of the utterance; the main event, or the event being described by the main verb of the sentence; the perfect event; and the progressive event. The representation of every sentence involves the utterance event and the main event; sentences with progressive or perfect aspect also involve progressive or perfect events.

This treatment is quite different from the Reichenbach (1947) conception of "reference time", which is assumed to be relevant for all sentences. To translate between the two systems, the reference time may be thought of as being represented by the perfect event in perfect sentences and by the progressive event in progressive sentences. In the case of perfect progressives, one might consider that there are two reference events, while in simple tenses there is no reference event at all.
Alternatively, in a system like Webber (1987), in which reference points for each sentence are used to construct an event structure, the tensed event (what H&C call the "anchor event") is the relevant one: the perfect event for sentences with perfect aspect; for sentences with progressive but no perfect aspect, the progressive event; or the main event for simple tense sentences.²

² Although instants rather than events are used in the representation described here, a similar strategy would be employable in building up a Webber-style event structure.

In accordance with H&C, we propose perfect reference points for sentences with perfect aspect and progressive reference points for sentences with progressive aspect. Thus, the interpretation of each sentence involves a number of relevant times: the beginning and end of the event described by the main verb for all sentences, the perfect time if it has perfect aspect, and the progressive time if it has progressive aspect. In our analysis, unlike H&C, what is relevant for the interpretation of sentences is not a set of events (which have potential duration and beginning and end points) but a set of times or instants. Instants, unlike events, have no beginning or end: they are one-dimensional points. This has several advantages over an analysis such as H&C's, in which the perfect and progressive reference points are events.

First, if the reference points for perfect and progressive sentences are events rather than instants, it ought to be possible to predicate duration of them. However, this is not a possible option for perfect and progressive sentences; durational adjuncts are only interpreted relative to the main event. The sentence "John has swum for three hours" is only true when the duration of the main event (the swimming event) is three hours.

Second, relations among events in H&C's system reduce anyway to relations between instants: the starting and ending points of events. That is, the primitives of systems like H&C's are relations among times. There seems to be little to be gained from constructing hypothetical events based on these relations when a simpler and cleaner analysis can be constructed on the basis of these primitive notions alone.

There might seem to be the following objection to adopting times as relevant for the interpretation of sentences: given a sentence like "John was frosting a cake from 3:00 to 4:00 yesterday", we know about the progressive reference point only that it lies between 3:00 and 4:00; there are infinitely many instants satisfying that condition. It would be impossible to iterate over all of these times to determine the truth of any utterance. In fact, though, to determine whether a sentence containing perfect or progressive aspect is true, it is unnecessary to instantiate the perfect or progressive reference times to specific values; it suffices to show that an interval exists within which such a point can be found. That is, they are merely existentially quantified, not instantiated to a value. In this manner, perfect or progressive times may give the appearance of being similar to events with a starting and an ending point, because they are constrained only to exist within some nonnull interval. Checking whether or not the sentence is true involves determining whether the interval exists.
The following is the representation for the simple past sentence "John frosted a cake", with words in upper case representing variables and words in lower case representing predicate names or constants:

(1) exists X Start End
    holds(frost(john, X), Start, End)
    & cake(X)
    & precede(End, now)

The predicate holds in the first clause of the representation takes three arguments, representing the predicate and the beginning and ending times of the event. In other words, John frosted X from time Start to time End. The predicate cake(X) specifies that the thing John frosted was a cake. We do not represent this with a holds predicate because we assume that the property of being a cake is a static property, not one that changes over time.³ The predicate precede(End, now) specifies that the ending time End of the cake-frosting event must precede now, the current time. In the course of validating this logical form, the variable End will be instantiated to a numerical value, and the atom now will be replaced by the value of the current time. The predicate precede represents the less-than-or-equal-to relation, while the predicate strictly_precede represents the less-than relation. Thus, the cake-frosting event must occur in the past.

³ This is not a necessary part of the analysis; the representation has been chosen in part for the sake of simplicity. It would also be possible to represent the predicate cake(X) inside a holds predicate, with the Start and End times representing when the cake began and ceased to exist.

Let us next consider the semantic representation of a sentence with perfect aspect, "John will have frosted a cake":

(2) exists X Start End Perfect
    holds(frost(john, X), Start, End)
    & cake(X)
    & precede(End, Perfect)
    & strictly_precede(now, Perfect)

The interpretation of perfect sentences involves a perfect time Perfect. This time is constrained to follow the main event; this is enforced by the clause precede(End, Perfect). Since this is a future perfect sentence, Perfect is constrained to be in the future. The future tense is represented by the predicate strictly_precede; the perfect time must follow now (not coincide with it).

Note, therefore, that in the case of future perfect sentences the main event is required only to end before a time in the future, and that (as with H&C) it is not a contradiction to say "John will have arrived by tomorrow, and he may already have arrived." Unlike analyses in which relations among all reference points are fully specified, this analysis allows the main event to be in the past even though the sentence itself is in the future perfect.

The following is a representation of the past progressive "John was frosting a cake":

(3) exists X Start End Progressive
    holds(frost(john, X), Start, End)
    & cake(X)
    & precede(Start, Progressive)
    & precede(Progressive, End)
    & precede(Progressive, now)

Here the progressive time, represented by the variable Progressive, must occur during the cake-frosting event; that is, it must occur after the start and before the end of the main event. Since the sentence is a past progressive, there is a final requirement on Progressive: it must precede now. Notice that past progressives differ from simple past sentences in that it is the progressive time and not the ending time of the main event that is required to be in the past.
Consequently, as in H&C, the interpretation of a past progressive like "John was frosting a cake" does not require that the main event lie entirely in the past, but only that some part of it be in the past. The present analysis allows for the possibility that sentences like the following can be true:

(4) John was frosting a cake at 3:00, and he is still frosting it.

We shall see in the next section that what was referred to as the progressive time in the foregoing example actually appears in the representation not only of progressives, but of every sentence, as what we shall call the during time. The during time will be used in the temporal interpretation of nontensed elements in the sentence. For this reason, the above representations of the simple past and future perfect sentences were only a first approximation; actually, their complete representations also contain a during time.

Finally, the representation of a sentence with both progressive and perfect aspect, like "John will have been frosting a cake", is the following:

(5) exists X Start End Progressive Perfect
    holds(frost(john, X), Start, End)
    & cake(X)
    & precede(Start, Progressive)
    & precede(Progressive, End)
    & precede(Progressive, Perfect)
    & strictly_precede(now, Perfect)

Progressive, the progressive or during time, occurs during the cake-frosting event. Progressive is constrained by the clause precede(Progressive, Perfect) to precede the perfect time Perfect. In other words, for a perfect progressive sentence, the requirement is that some portion of the main event lie before the perfect time. The perfect time is constrained by the clause strictly_precede(now, Perfect) to lie in the future.

In this analysis, underspecification of relations among times yields results that match the natural-language semantics of sentences.⁴ Use of a perfect and a progressive time allows uniform treatment of perfects and progressives without the complication of introducing unwarranted pseudo-events into the representation of simple tenses. Also, the progressive/during time is useful as an anchor for the interpretation of nontensed elements, as we will see below.

⁴ We have not yet enriched the representation of individual predicates to include inherent aspect, as described in, for example, Passonneau (1987). We feel, though, that the resulting representations will still involve the use of perfect and during times, and will still be amenable to the treatment of nontensed elements described in the next section.
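The purely existential status of these reference times can be made concrete with a small sketch. The code below is our own illustration (the paper's implementation is in Prolog, under CHAT); it tests satisfiability of the constraint sets in (2) and (3) directly, without ever enumerating instants.

```python
# Illustrative sketch only: perfect and progressive times are
# existentially quantified, so truth checking reduces to testing that
# a suitable instant exists within some nonnull interval.

def past_progressive_true(start, end, now):
    """(3): exists Progressive with start <= Progressive <= end and
    Progressive <= now; satisfiable iff start <= min(end, now)."""
    return start <= min(end, now)

def future_perfect_true(start, end, now):
    """(2): exists Perfect with end <= Perfect and now < Perfect; a
    Perfect beyond max(end, now) always exists, so the main event may
    even lie wholly in the past, as noted above."""
    return True

# A past progressive whose main event straddles 'now' is still true:
print(past_progressive_true(start=2.0, end=6.0, now=4.0))  # True
```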
3. Temporal Interpretation of Nontensed Elements

Not only tensed verbs, but also other nontensed elements in the sentence - adjectives, nouns, prepositions, and so on - must be temporally interpreted. Consider the sentence "Are there any warm cakes?" The adjective "warm" must be interpreted relative to some time: in this case, the present. The question is about cakes that are currently warm.

The interpretation of nontensed elements does not always depend on the utterance time, though. The sentence "The third-year students had to take an exam last year" can be interpreted in two ways. Under one interpretation, those who were third-year students last year (the current fourth-year students) had to take a test last year. The interpretation of the noun phrase "the third-year students" is dependent on the tense of the main verb in this case. Under the other interpretation, those who are currently third-year students took a test last year, when they were second-year students.

However, the interpretation of nontensed elements with respect to the tense of the main verb in the sentence is not entirely unconstrained. Consider the sentence "The wife of the president was working in K-Mart in 1975." "Wife" and "president" are both predicates that must be interpreted with respect to a particular time. The current president is not the same as the 1975 president; if he divorced and remarried, his 1975 wife is not necessarily the same person as his current wife. Given this, there ought to be four possible interpretations of this sentence. In fact, there are only three:

• He is the current president and she is his current wife
• He is the current president and she was his wife in 1975
• He was the president in 1975 and she was his wife then (but perhaps he is divorced and no longer president)

The missing interpretation is that

• He was the president in 1975 and she is his current wife (but was not his wife then)

A skeletal tree for this example is shown in Figure 1. The sentence involves the syntactic embedding of one NP ("the president") inside another NP ("the wife"). The unavailable interpretation is one in which the embedded NP is interpreted with respect to the event time of the higher verb, whereas the intervening NP is not. That is, the unavailable interpretation involves interpreting a discontinuous portion of the parse tree of the sentence with respect to the main verb.⁵

[Figure 1: a skeletal parse tree for "The wife of the president was working in K-Mart in 1975", with the NP "the president" embedded inside the NP "the wife" as the object of the preposition "of", and the VP "was working in K-Mart in 1975".]

⁵ As we will see in the next section, it is possible to construct a context in which the "missing interpretation" is in fact available for this sentence. The claim made here is that this interpretation is not available by means of the syntactic variable-passing mechanism discussed in this section, but is only available by appeal to the context constructed. The "missing interpretation" is missing when there is no context to refer to for additional interpretations.

One may think of the main-verb event time as being passed or disseminated through the tree. It may be passed down to embedded predicates in the tree only when it is passed through intermediate predicates and used in their interpretation. If a predication is interpreted with respect to the current time rather than to the event time of the main verb, all predications that are syntactically subordinate to it are also interpreted with respect to the current time. When this happens, the main-verb event time ceases to be passed down and may not be reinstated for interpretation.

Note, however, that the verb time and the time with respect to which the nontensed elements are interpreted are not always completely coextensive. Consider again the example "John will be frosting a warm cake at 3:00." Under the interpretation that the cake is warm while John is frosting it, the time span during which the cake is warm must include the time 3:00; however, the starting and ending points of the cake-frosting event need not coincide exactly with the starting and ending points of the interval at which the cake is warm. The only requirement is that both events must hold at 3:00.

Now consider the sentence "John built a new house." The building event can be thought of as beginning before the event of the house's being new. At the start of the building event, there is no house, nor, obviously, is there any event of the house's being new. In a situation like this, one does not want to require that the building event be coextensive with the event of the house's being new, but rather, merely to require that the two events should overlap.
Our claim is that, in general, temporal interpretation of nontensed elements relative to the tense of the main verb of the sentence requires only that the event denoted by the main verb overlap (not be coextensive with or be contained in) the events denoted by the nontensed elements. We shall accomplish this by positing a time for each tensed verb, the during time, and passing this time through the syntactic tree. The event denoted by the main verb, as well as the events denoted by any predicates interpreted relative to the main verb, must hold at this during time. For example, here is the logical form for the sentence "John frosted a warm cake":

(6) exists X Start1 End1 Start2 End2 During
    holds(frost(john, X), Start1, End1)
    & cake(X)
    & precede(End1, now)
    & precede(Start1, During)
    & precede(During, End1)
    & holds(warm(X), Start2, End2)
    & precede(Start2, During)
    & precede(During, End2)

There are two predicates in this example that are interpreted with respect to a temporal interval: warm and frost. There must be a during time During that occurs during both the cake-frosting event and the event of the cake's being warm: the two events must overlap.

We further note that all elements within an NP node are interpreted with respect to the same event. It is not possible, for example, to interpret some elements of a noun phrase with respect to the time of utterance, others with respect to the main verb's during time. Consider the sentence "John frosted three stale warm cakes yesterday." Despite the pragmatic predilection for interpreting "stale" and "warm" at different times (it is hard to imagine how cakes that are still warm could already be stale), this sentence has only two interpretations:

• John frosted three cakes that were both stale and warm yesterday.
• John frosted three cakes yesterday that are both stale and warm now.

It is not possible to give the sentence the interpretation that the cakes he frosted were warm yesterday and are stale now, or were stale yesterday and are warm now. Both adjectives must be interpreted with respect to the same time.

If a system like H&C's, in which events and not instants are taken to be the relevant reference points, were extended to include interpretation of nontensed elements as described here, such a system might use primitives such as those of Allen (1984). However, none of the primitives of Allen's system is suitable for defining the relation of the during time to the main event: during(DuringEvent, MainEvent) is not sufficient, since Allen's "during" relation does not permit the DuringEvent to coincide with the beginning or end points of the main event. The example "John built a new house" shows that this is necessary; in this case, it is precisely the end point of the building event that coincides with the beginning of the event of the house being new. In a system using Allen's primitives, the proper relation between the DuringEvent and the MainEvent would be a disjunction:

(7) during(DuringEvent, MainEvent)
    OR starts(DuringEvent, MainEvent)
    OR ends(DuringEvent, MainEvent)
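The overlap requirement expressed by the shared During variable, endpoints included, can be stated compactly. The following sketch is our own illustration (the paper's system is Prolog-based, and the interval data here is invented).

```python
# Sketch of the shared during-time constraint in (6): all predicates
# interpreted relative to the main verb must hold at one instant
# During, which is exactly the requirement that their intervals
# overlap, endpoints included.

def during_time_exists(intervals):
    """exists During with start_i <= During <= end_i for every
    (start_i, end_i): true iff max(starts) <= min(ends)."""
    latest_start = max(start for start, _ in intervals)
    earliest_end = min(end for _, end in intervals)
    return latest_start <= earliest_end

# "John built a new house": the house is new from the moment building
# ends, so the intervals share only an endpoint - still acceptable.
build = (0.0, 5.0)
new_house = (5.0, 9.0)
print(during_time_exists([build, new_house]))  # True (During = 5.0)
```

Note that this single inequality is equivalent to the three-way disjunction in (7), which is one reason instants make for a cleaner formulation than Allen-style event relations here.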
In addition, we proposed a constraint on the passing of this during time from the verb through its arguments and adjuncts, according to which predicates interpreted according to the during time must occupy a nondiscontinuous portion of the tree. From the point of view of the tenseless phrase, however, the same process can be seen in a different light. We may think of the interpretation of temporally dependent elements in a phrase as proceeding in the following manner:

• The phrase is interpreted with respect to a temporal modifier internal to the phrase; otherwise
• The phrase is interpreted with respect to the closest higher tensed element (allowing for restrictions on the distribution of the during variable); otherwise
• The phrase is interpreted with respect to some contextually relevant time.

Temporally dependent nontensed elements in previous sections were always contained in phrases that lacked internal temporal modifiers, so the first option was not applicable. One of two interpretations was given for tenseless elements: they were interpreted either with respect to the during time of the main verb or with respect to now, the time of utterance. Interpretation with respect to now seems to be a particular instance of the general possibility of interpretation with respect to a contextually relevant time; since no context was given for the examples in the previous sections, no other contextually relevant time was available. When a phrase contains a phrase-internal temporal modifier, the predicates in that phrase must be interpreted with respect to that modifier, as in the example "The 1975 president is living in California." The modifier "1975" in the phrase "the 1975 president" provides the temporal interpretation of the phrase: it must be interpreted with respect to that time. It is not possible to interpret "president" relative to the during time of the main verb.

Hinrichs (1987) also proposes that noun phrases be interpreted relative to a time restricted by the context; the difference between his analysis and ours is that, of the three options presented above, he offers only the last. He contends that the only option for temporal interpretation of nontensed elements is the third one, namely, by reference to context.

Given an analysis like that of Hinrichs, it is difficult to explain the facts noted in the preceding section. In the absence of context (or when the sole context is the moment of utterance), Hinrichs would not predict the absence of one reading for sentences such as "The wife of the president was working in K-Mart in 1975." In an analysis like the one presented here, where the interpretation of nontensed elements is determinable in some instances through syntactic processes, the absence of these readings is expected.

Enc (1981) and Hinrichs (1987) both argue convincingly that there are many instances in which a temporally dependent element is interpreted with respect to a time that is neither the during time nor now. Hinrichs furnishes the following example:

(8) Oliver North's secretary testified before the committee.

At the time she testified, she was no longer his secretary; she was also not his secretary at the time this sentence was uttered.
The sentence would receive the following interpretation:

(9) exists X Start1 End1 During1 Start2 End2 During2
    holds(secretary(north, X), Start1, End1)
    & precede(Start1, During1)
    & precede(During1, End1)
    & holds(testify(X), Start2, End2)
    & precede(Start2, During2)
    & precede(During2, End2)
    & precede(During2, now)

There are two events described in the logical form of this sentence: the event of X being North's secretary and the event of X testifying. During1 is a time during the being-a-secretary event, and During2 is a time during the testifying event. The events are not required to overlap, and only the "testify" event is restricted by the tense of the sentence to occur in the past. In a more complete representation, appropriate restrictions would be imposed on During1: the time during which X is a secretary would be restricted by the context, in line with Hinrichs' suggestions.

5. Further Results

It appears that the during time of the main clause is used in the interpretation of some tensed subordinate clauses: for example, in the interpretation of relative clauses. Consider the sentence "He will catch the dog that is running." Under one interpretation of this sentence, the catching event is simultaneous with the running event; both events take place in the future. In this case, the interpretation of the main verb in the relative clause depends on the during time of the main clause. There is also another interpretation, according to which the dog that will be caught later is running now. In this case, the interpretation of the relative clause depends on the time of utterance of the sentence.

One remaining task is to provide a reasonable analysis of the bare present using this system. We feel that such an analysis awaits the incorporation of a representation of inherent lexical aspect as in Passoneau (1987); without a representation of the distinction between (for example) states and activities, a coherent representation of simple present tense sentences is not possible.

7. Conclusion

We have shown that distributing an existentially quantified during-time variable throughout the tree enables interpretation of nontensed elements in the sentence according to the time of the main verb. Further, the during time is useful in the interpretation of several sentence types: progressives, statives, and sentences containing relative clauses. Finally, an analysis that utilizes underspecified relations among times (not events) provides a good prospect for analyzing tense and aspect in English.

References

Allen, James F. 1984. "Towards a General Theory of Action and Time." Artificial Intelligence 23:2, July 1984.

Enc, Murvet. 1981. "Tense without Scope: An Analysis of Nouns as Indexicals." Ph.D. dissertation, University of Wisconsin, Madison, Wisconsin.

Harper, Mary P. and Eugene Charniak. 1986. "Time and Tense in English." Proceedings of the ACL Conference, Columbia University, New York, New York.

Hinrichs, Erhard. 1987. "A Compositional Semantics of Temporal Expressions in English." Proceedings of the ACL Conference, Stanford University, Stanford, California.

Mathiessen, Christian. 1984. "Choosing Tense in English." ISI Research Report RR-84-143. Marina Del Rey, California: Information Sciences Institute.

Passoneau, Rebecca. 1987. "Situations and Intervals." Proceedings of the ACL Conference, Stanford University, Stanford, California.

Pereira, Fernando. 1983. "Logic for Natural Language Analysis." Technical Note 275. Menlo Park, California: SRI International.

Reichenbach, Hans. 1947. Elements of Symbolic Logic. New York, New York: Macmillan.

Webber, Bonnie. 1987. "The Interpretation of Tense in Discourse." Proceedings of the ACL Conference, Stanford University, Stanford, California.
A TRANSFER MODEL USING A TYPED FEATURE STRUCTURE REWRITING SYSTEM WITH INHERITANCE

Rémi Zajac
ATR Interpreting Telephony Research Laboratories
Sanpeidani Inuidani, Seika-cho, Soraku-gun, Kyoto 619-02, Japan
[zajac%[email protected]]

ABSTRACT

We propose a model for transfer in machine translation which uses a rewriting system for typed feature structures. The grammar definitions describe transfer relations which are applied on the input structure (a typed feature structure) by the interpreter to produce all possible transfer pairs. The formalism is based on the semantics of typed feature structures as described in [Aït-Kaci 84].

INTRODUCTION

We propose a new model for transfer in machine translation of dialogues. The goal is twofold: to develop a linguistically-based theory for transfer, and to develop a computer formalism with which we can implement such a theory, and which can be integrated with a unification-based parser. The desired properties of the grammar are (1) to accept as input a feature structure, (2) to produce as output a feature structure, (3) to be reversible, (4) to be as close as possible to current theories and formalisms used for linguistic description. From (1) and (2), we need a rewriting formalism where a rule takes a feature structure as input and gives a feature structure as output. From (3), this formalism should be in the class of unification-based formalisms such as PROLOG, and there should be no distinction between input and output. From (4), as the theoretical basis of grammar development in ATR is HPSG [Pollard and Sag 1987], we want the formalism to be as close as possible to HPSG.

To meet these requirements, a rewriting system for typed feature structures, based on the semantics of typed feature structures described in [Aït-Kaci 84], has been implemented at ATR by Martin Emele and the author [Emele and Zajac 89]. The type system has a lattice structure, and inheritance is achieved through the rewriting mechanism. Type definitions are applied by the interpreter on the input structure (a typed feature structure) using typed unification in a non-deterministic and monotonic way, until no constraint can be applied. Thus, the result is a set of all possible transfer pairs compatible with the input and with the constraints expressed by the grammar. Thanks to the properties of the rewriting formalism, the transfer grammar is reversible, and can even generate all possible pairs for the grammar, given only the start symbol TRANSLATE.

We give an outline of the model on a very simple example. The type inheritance mechanism is mainly used to classify common properties of the bilingual lexicon (sect. 1), and rewriting is fully exploited to describe the relation between a surface structure produced by a unification-based parser and the abstract structure used for transfer (sect. 2), and to describe the relation between Japanese and English structures (sect. 3). An example is detailed in sect. 4.

1. LEXICAL TRANSFER AS A HIERARCHY OF BILINGUAL LEXICAL DEFINITIONS

The type system is used to describe a hierarchy of concepts, where a sub-concept inherits all of the properties of its super-concepts. The use of type inheritance to describe the hierarchy of lexical types is advocated for example in [Pollard and Sag 1987, chap. 8]. We use a type hierarchy to describe properties which are common to bilingual classes of the bilingual lexicon.
The level of description of the bilingual lexicon is the logico-semantic level: a verb for example has a relational role and links different objects through semantic relations (agent, recipient, space-location, ...). Semantic relations in the bilingual lexicon are common to English and Japanese. Predicates can be classified according to the semantic relations they establish between objects. For example, predicates which have only an agent case are defined as Agent-Verbs, and verbs which also have a recipient role are defined as Agent-Recipient-Verbs, a sub-class of Agent-Verbs. On the leaves of the hierarchy, we find the actual bilingual entries, which describe only idiosyncratic properties, and thus are very simple. The translation relation defined by TRANSLATE is described in sect. 3. We shall concentrate on the propositional part PROP, defined here as a disjunction of types:

PROP = SPEAKER | HEARER | REG-FORM | BOOK | ASK | SEND | TOUCH | NEGATION ...

The simple hierarchy depicted graphically in Figure 1 is written as follows:

VERB = [japanese: JV[relation: JPROP],
        english: EV[relation: EPROP]].

AG-V = VERB[japanese: [agent: #j-ag],
            english: [agent: #e-ag],
            trans-ag: PROP[japanese: #j-ag,
                           english: #e-ag]].

This definition can be read: an Agent-Verb is-a Verb which has-properties agent for Japanese and English. We need to express how the arguments of a relation are translated. This is specified using a trans-ag slot with type symbol PROP, which will be used during the rewriting process (see details in sect. 3 and 4). Symbols prefixed with # are tags, which are used to represent co-references («sharing») of structures. In this definition, we have a one-to-one mapping between the agent arguments, and at this level of representation (semantic relations), this simple case arises frequently. However, we must also describe mappings between structures which do not have such a simple correspondence, such as idiomatic expressions. In that case, we have to describe the relation between predicate-argument structures in a more complex way, as shown for example in sect. 4.

AG-REC-V = AG-V[japanese: [recipient: #j-recp],
                english: [recipient: #e-recp],
                trans-recp: PROP[japanese: #j-recp,
                                 english: #e-recp]].

AG-REC-OBJ-V = AG-REC-V[japanese: [object: #j-obj],
                        english: [object: #e-obj],
                        trans-obj: PROP[japanese: #j-obj,
                                        english: #e-obj]].

NOUN = [japanese: JN, english: EN].

Actual bilingual entries are very simple thanks to the inheritance of types.

SEND = AG-REC-OBJ-V[japanese: [reln: OKURU-1],
                    english: [reln: SEND-1]].

ASK = AG-REC-V[japanese: [reln: OKIKI-1],
               english: [reln: ASK-1]].

REG-FORM = NOUN[japanese: TOUROKUYOUSHI-1,
                english: REGISTRATION-FORM-1].

B-HEARER = NOUN[japanese: J-HEARER, english: E-HEARER].

B-SPEAKER = NOUN[japanese: J-SPEAKER, english: E-SPEAKER].

Figure 1: a simple hierarchy of types (PROP dominating SPEAKER, HEARER, REG-FORM, ASK, SEND, ...).

The type system is interpreted using the rewriting mechanism described in [Aït-Kaci 84], which gives an operational semantics for type inheritance: a feature structure which has a type AG-V, for example, is unified with the definition of this type:

[japanese: [agent: #j-ag],
 english: [agent: #e-ag],
 trans-ag: PROP[japanese: #j-ag, english: #e-ag]]

and the type symbol AG-V is replaced with the super-type VERB in the result of the unification. If type VERB has a definition, the structure is further rewritten, thus achieving the operational interpretation of inheritance.
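To illustrate how rewriting by typed unification can work in practice, here is a minimal sketch in Python (our own illustration, not the ATR implementation; the dict encoding, the "#" tag convention, and the function names are all invented for exposition):

# Feature structures are nested dicts; strings starting with "#" are
# tags (co-references), resolved through a bindings table.

class UnifyError(Exception):
    pass

def resolve(x, bind):
    # Follow tag bindings until we reach a value or an unbound tag.
    while isinstance(x, str) and x.startswith("#") and x in bind:
        x = bind[x]
    return x

def unify(a, b, bind):
    a, b = resolve(a, bind), resolve(b, bind)
    if a == b:
        return a
    if isinstance(a, str) and a.startswith("#"):
        bind[a] = b
        return b
    if isinstance(b, str) and b.startswith("#"):
        bind[b] = a
        return a
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)
        for slot, val in b.items():
            out[slot] = unify(a[slot], val, bind) if slot in a else val
        return out
    raise UnifyError(f"{a!r} does not unify with {b!r}")

# One type definition: AG-V is-a VERB whose agent slots are linked
# through a trans-ag slot (cf. the AG-V definition above).
DEFS = {"AG-V": ({"japanese": {"agent": "#j-ag"},
                  "english": {"agent": "#e-ag"},
                  "trans-ag": {"japanese": "#j-ag",
                               "english": "#e-ag"}},
                 "VERB")}

def rewrite(fs, bind):
    # Unify fs with the body of its type, then promote to the supertype.
    body, supertype = DEFS[fs["_type"]]
    out = unify(fs, body, bind)
    out["_type"] = supertype
    return out

fs = {"_type": "AG-V",
      "japanese": {"reln": "OKURU-1", "agent": "J-HEARER"},
      "english": {"reln": "SEND-1", "agent": "#e-ag"}}
bind = {}
fs = rewrite(fs, bind)
# Rewriting a bilingual entry into the trans-ag slot links the tags:
unify(fs["trans-ag"], {"japanese": "J-HEARER", "english": "E-HEARER"}, bind)
print(resolve("#e-ag", bind))  # -> E-HEARER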
Disjunctions like PROP create a non-deterministic choice for further rewriting: the symbol PROP is replaced with the disjunction of symbols of the right-hand side, creating alternative paths in the rewriting process. This process of rewriting is applied on every sub-structure of a structure to be evaluated, until no type symbol can be rewritten. As the rewriting system does not have any explicit control mechanism for rule application, whenever several rules are applicable all paths are explored, and all solutions are produced in a non-deterministic way. This could be a drawback for a practical machine translation system, as only one translation should be produced in the end, and due to the non-deterministic behavior of the system, this could also lead to severe efficiency problems. However, the system is primarily intended to be used as a tool for developing a linguistic model, and thus the production of all possible solutions is necessary in order to make a detailed study of ambiguities. Furthermore, according to the principles of second generation MT systems [Yngve 57, Vauquois 75, Isabelle and Macklovitch 86], a transfer grammar should be purely contrastive, and should not include specific source or target language knowledge. As a result, the synthesis grammar should implement all necessary language-specific constraints in order to rule out ungrammatical structures that could be produced after transfer, and make appropriate pragmatic decisions.

2. RELATING SURFACE AND ABSTRACT SPEECH ACTS

A problem in translating dialogues is to translate adequately the speaker's communicative strategy which is marked in the utterance, a problem that does not arise in text machine translation where a structural translation is generally found sufficient [Kume et al. 88]. Indirectness, for example, cannot be translated directly from the surface structure produced by a syntactic parser and needs to be further analyzed in terms independent of the peculiarities of the language [Kogure et al. 1988]. For example, take the representation produced by the parser for the sentence [Yoshimoto and Kogure 1988]:

watashi-ni tourokuyoushi-wo o-okuri itadake-masu ka
I-dative registration-form-acc honor-send can-receive-a-favor polite interr

Figure 2: example of a Japanese sentence

The representation has already categorized surface speech act types to a certain extent. The level of analysis produced by the parser is the level of semantic relations (relation, agent, recipient, object, ...). The representation reduced to relation features is:

(... ... (CAN (RECEIVE-FAVOR (OKURU-1 (TOUROKUYOUSHI-1)))))

The level of representation we want for transfer can be basically characterized by (1) an abstract speech act type (request, declaration, question, promise, ...), (2) a manner (direct, indirect, ...), and (3) the propositional content of the speech act [Kume et al. 88]. A grammar, written in the same formalism, abstracts the meaning of the surface structure to:

JASA[speech-act-type: REQUEST,
     manner: INDIRECT-ASKING-POSSIBILITY,
     speaker: #speaker=J-SPEAKER,
     hearer: #hearer=J-HEARER,
     s-act: JV[relation: OKURU-1,
               agent: #hearer,
               recipient: #speaker,
               object: TOUROKUYOUSHI-1]]

and this is the input for the transfer module.
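As a rough sketch of how such an abstraction step might look procedurally (ours, not the ATR grammar; the nested-tuple encoding of the parser output and the fallback clause are invented for illustration), a surface pattern of the form CAN(RECEIVE-FAVOR(act)) can be mapped to a REQUEST with indirect manner:

def abstract_speech_act(surface):
    # surface is a nested pair encoding, e.g.
    # ("CAN", ("RECEIVE-FAVOR", ("OKURU-1", "TOUROKUYOUSHI-1")))
    if surface[0] == "CAN" and surface[1][0] == "RECEIVE-FAVOR":
        act = surface[1][1]
        return {"speech-act-type": "REQUEST",
                "manner": "INDIRECT-ASKING-POSSIBILITY",
                # In the real grammar the agent/recipient of the act are
                # tagged as hearer/speaker; we keep just the act here.
                "s-act": act}
    # Invented default for anything that does not match the pattern.
    return {"speech-act-type": "DECLARATION", "manner": "DIRECT",
            "s-act": surface}

print(abstract_speech_act(
    ("CAN", ("RECEIVE-FAVOR", ("OKURU-1", "TOUROKUYOUSHI-1")))))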
3. DEFINING THE TRANSFER RELATION AT THE LOGICO-SEMANTIC LEVEL

Each structure which represents an utterance has (1) an abstract speech act type, (2) a type of manner, and (3) a propositional content. Each sub-structure of the propositional content has (1) a lexical head, (2) a set of syntactic features (such as tense-aspect-modality, determination, gender, ...), and may have (3) a set of dependents which are analyzed as case roles (agent, time-location, condition, ...). The manner and abstract speech act categories are universals (or more exactly, common to this language pair for this corpus), and need not be translated: they are simply stated as identical by means of tag identity. The part which represents the propositional content is language dependent, and the translation relation defined between lexical heads, syntactic features and dependents of the heads is defined indirectly by means of transfer rules. Thus, this approach can be characterized as a mix of pivot and transfer approaches [Tsujii 87, Boitet 88].

Japanese:
  speech-act-type  REQUEST
  manner           INDIRECT-ASK-POSSIBILITY
  speaker          #0=J-SPEAKER
  hearer           #1=J-HEARER
  s-act            [relation: OKURU-1, agent: #1, recipient: #0,
                    object: TOUROKUYOUSHI-1]

English:
  speech-act-type  REQUEST
  manner           INDIRECT-ASK-POSSIBILITY
  speaker          #2=E-SPEAKER
  hearer           #3=E-HEARER
  s-act            [relation: SEND-1, agent: #3, recipient: #2,
                    object: REGISTRATION-FORM-1]

Figure 3: the translation relation: direct mapping by tagging; indirect mapping by rule application.

The definitions of the transfer grammar can be divided into three groups: 1) definitions that state equality of abstract speech act type and manner (the language independent parts), 2) lexical definitions that relate predicate-argument structures, 3) definitions that relate syntactic features (not yet included in our grammar).

Starting from the abstract speech act description, we need only one definition for specifying the direct mapping of Abstract Speech Acts by tagging, which also introduces the type symbol PROP that will trigger the rewriting process for the transfer grammar:

TRANSLATE = [japanese: JASA[speech-act-type: #sat,
                            manner: #manner,
                            speaker: #j-spk,
                            hearer: #j-hrr,
                            s-act: #j-act=JPROP],
             english: EASA[speech-act-type: #sat,
                           manner: #manner,
                           speaker: #e-spk,
                           hearer: #e-hrr,
                           s-act: #e-act=EPROP],
             trans-act: PROP[japanese: #j-act, english: #e-act],
             trans-spk: PROP[japanese: #j-spk, english: #e-spk],
             trans-hrr: PROP[japanese: #j-hrr, english: #e-hrr]].

In this simple example, the definition of the symbol PROP contains the full bilingual dictionary. Unifying a structure with PROP means that a structure is unified with a very large disjunction of definitions. There are several possible ways to overcome this problem. One can use the hierarchical type system to restrict the set of candidates to a small sub-set of definitions and, instead of using PROP, use the most adequate specific symbol for translating an argument: such a symbol can be viewed as the initial symbol of a sub-grammar which describes the transfer relation on a sub-class of lexemes. For example, one can write directly SPEAKER instead of PROP in the trans-spk slot of the above definition. Another possibility for a mono-directional system is to access the bilingual lexicon using the Japanese entry during parsing. This means that the dictionaries of the system would have to be organized as a single integrated bilingual lexical database.

4. A STEP BY STEP EXAMPLE

We give in this section a trace of a simple example for the sentence in Figure 4.
For translating, we need to add to the definition of PROP the following bilingual lexical definitions:

BOOK = NOUN[japanese: HON-1, english: BOOK-1].

HAND = NOUN[japanese: TE-1, english: HAND-1].

TOUCH = [japanese: [relation: FURERU-1,
                    object: TE-1,
                    spatial-destination: #0],
         english: [relation: TOUCH-1,
                   object: #1],
         trans0: PROP[japanese: #0, english: #1]].

hon-ni te-wo fure-naide kudasai
book-obl2 hand-obl1 touch-neg please

Figure 4: don't touch the books!

A lexical definition introduces the PROP symbol for the arguments of a predicate, and the translation relation is defined recursively between argument sub-structures. There could be a one-to-one mapping between two substructures, but as in the example of TOUCH, the relation is in general not purely compositional, and not one-to-one, and the argument description can be as refined as necessary. Here, the object TE-1 («hand») is a part of the meaning of «touch» in this kind of construction, and the semantic relation that links the predicate and the object being touched is a spatial destination in Japanese (perceived as a goal or a target) and an object in English.

INPUT: a structure representing a deep analysis of the sentence in Figure 4. The initial symbol that will be rewritten is TRANSLATE (symbols to be rewritten are in bold face).

TRANSLATE[japanese: JASA[speech-act-type: #sat=REQUEST,
                         manner: #man=DIRECT,
                         speaker: #j-sp=J-SPEAKER,
                         hearer: #j-hr=J-HEARER,
                         s-act: #j-act=JPROP[relation: NEGATE,
                                             object: [relation: FURERU-1,
                                                      object: TE-1,
                                                      spatial-destination: HON-1]]]]

STEP 1: rewrite TRANSLATE, which adds to the input structure the English EASA and new PROP symbols in the trans-act, trans-spk and trans-hrr slots.

[japanese: JASA[speech-act-type: #sat=REQUEST,
                manner: #man=DIRECT,
                speaker: #j-sp=J-SPEAKER,
                hearer: #j-hr=J-HEARER,
                s-act: #j-act=JPROP[relation: NEGATE,
                                    object: [relation: FURERU-1,
                                             object: TE-1,
                                             spatial-destination: HON-1]]],
 english: EASA[speech-act-type: #sat,
               manner: #man,
               speaker: #e-speaker,
               hearer: #e-hearer,
               s-act: #e-act=EPROP],
 trans-act: PROP[japanese: #j-act, english: #e-act],
 ...]

STEP 2 and 3: the new PROP symbols are rewritten as disjunctions. For the s-act slot, the unification with NEGATION is successful. It adds a new PROP symbol which is in turn rewritten, and this time the unification with TOUCH succeeds: it adds the English object and a new translate slot (trans0) for HON-1.

[japanese: JASA[speech-act-type: #sat=REQUEST,
                manner: #man=DIRECT,
                speaker: #j-sp=J-SPEAKER,
                hearer: #j-hr=J-HEARER,
                s-act: #j-act=JPROP[relation: #j-neg=J-NEG,
                                    object: #j-obj1=[relation: FURERU-1,
                                                     object: #j-obj2=TE-1,
                                                     spatial-destination: #sd=HON-1]]],
 english: EASA[speech-act-type: #sat=REQUEST,
               manner: #man=DIRECT,
               speaker: #e-sp=E-SPEAKER,
               hearer: #e-hr=E-HEARER,
               s-act: #e-act=EV[relation: #e-neg=E-NEG,
                                object: #e-obj=[relation: TOUCH-1,
                                                object: #e-obj2]]],
 trans-act: ...,
 trans-obj: [japanese: #j-obj1,
             english: #e-obj,
             trans0: PROP[japanese: #sd, english: #e-obj2]]]

STEP 4: the new PROP symbol is in turn rewritten as BOOK, which finally translates the last argument. The final structure produced by the interpreter is:

[japanese: JASA[speech-act-type: #sat=REQUEST,
                manner: #man=DIRECT,
                speaker: J-SPEAKER,
                hearer: J-HEARER,
                s-act: JPROP[relation: J-NEG,
                             object: [relation: FURERU-1,
                                      object: TE-1,
                                      spatial-destination: HON-1]]],
 english: EASA[speech-act-type: #sat,
               manner: #man,
               speaker: E-SPEAKER,
               hearer: E-HEARER,
               s-act: EPROP[relation: E-NEG,
                            object: [relation: TOUCH-1,
                                     object: BOOK-1]]],
 ...]
5. CONCLUSION

The rewriting formalism has been implemented in LISP by Martin Emele and the author at ATR in order to develop transfer and generation models of dialogues for a machine translation prototype [Emele and Zajac 89]. The two main characteristics of the formalism are (1) type inheritance, which provides a clean way of defining classes and sub-classes of objects, and (2) the rewriting mechanism based on typed unification of feature structures, which provides a powerful and semantically clear means of specifying (and computing) relations between classes of objects. This latter behavior is somehow similar to the PROLOG mechanism, and grammars can be written to be reversible, which is the case for our transfer grammar. We hope this feature will be useful in the future development of the grammar, allowing for a precise contrastive analysis of Japanese and English.

At present, the transfer grammar is in a very early stage of development but is nevertheless capable of translating a few elementary sentences. It covers basic sentence patterns; compound noun phrases and coordination of noun phrases; verb phrases including auxiliaries, modals and adverbs; sentence adverbials; conditionals. The transfer module and the generation module [Emele 89] use the same formalism, and integration is thus simple to achieve. As for efficiency considerations, the transfer and generation of the sentence in Figure 2 takes approximately 5 seconds on a Symbolics with our current implementation. However, this figure is not very meaningful because our dictionaries and grammars are still very small, and the implementation of the interpreter itself is still evolving. Full integration with the analysis module (a unification-based parser which produces a set of feature structures) remains to be worked out, but should not cause major problems. In this respect, the closest related works are a transfer model proposed by [Isabelle and Macklovitch 86] and a model in the LFG framework proposed by [Kudo and Nomura 86] (see also [Beaven and Whitelock 88]).

There are two major topics for further research: (1) the extension of the formalism to include full logical expressions, as described for example in [Smolka 88], and some kind of control mechanism in order to treat default values and prune some solutions (when an idiomatic expression is found, for example); (2) the development of a transfer grammar for a larger language fragment, using outputs of the parser already available described in [Yoshimoto and Kogure 1988].

REFERENCES

Hassan AIT-KACI. 1984. A Lattice Theoretic Approach to Computation Based on a Calculus of Partially Ordered Type Structures. Ph.D. Thesis, University of Pennsylvania.

John L. BEAVEN and Pete WHITELOCK. 1988. Machine Translation Using Isomorphic UCGs. Proceedings of COLING-88, Budapest.

Christian BOITET. 1988. Pros and Cons of the Pivot and Transfer Approaches in Multilingual Machine Translation. Proc. of the Intl. Conf. on New Directions in Machine Translation, BSO, Budapest.

Martin EMELE. 1989. A Typed Feature Structure Unification-based Approach to Generation. Proceedings of the WGNLC of the IECE, Oita University, Japan.

Martin EMELE and Rémi ZAJAC. 1989. RETIF: A Rewriting System for Typed Feature Structures. ATR Technical Report TR-I-0071.

Pierre ISABELLE and Eliot MACKLOVITCH. 1986. Transfer and MT Modularity. Proceedings of COLING-86, Bonn.

Kiyoshi KOGURE, Kei YOSHIMOTO, Hitoshi IIDA, and Teruaki AIZAWA. 1988. The Intention Translation Method, A New Machine Translation Method for Spoken Dialogues.
Submitted for IJCAI-89, Detroit.

Ikuo KUDO and Hirosato NOMURA. 1986. Lexical-Functional Transfer: A Transfer Framework in a Machine Translation System Based on LFG. Proceedings of COLING-86, Bonn.

Masako KUME, Gayle K. SATO and Kei YOSHIMOTO. 1988. A Descriptive Framework for Translating Speaker's Meaning. Proceedings of the 4th Conference of ACL-Europe, Manchester.

Carl POLLARD and Ivan A. SAG. 1987. Information-based Syntax and Semantics. CSLI, Lecture Notes Number 13, Stanford.

Gert SMOLKA. 1988. A Feature Logic with Subsorts. LILOG-REPORT 33, IBM Deutschland GmbH, Stuttgart.

Jun-Ichi TSUJII. 1987. What is pivot? Proceedings of the 1st MT Summit, Hakone.

Bernard VAUQUOIS. 1975. La traduction automatique à Grenoble. Document de Linguistique Quantitative 29, Dunod, Paris.

V.H. YNGVE. 1957. A Framework for Syntactic Translation. Mechanical Translation 4/3, 59-65.

Kei YOSHIMOTO and Kiyoshi KOGURE. 1988. Japanese Sentence Analysis by Means of Phrase Structure Grammar. ATR Technical Report TR-I-0049.
Word Association Norms, Mutual Information, and Lexicography

Kenneth Ward Church
Bell Laboratories
Murray Hill, N.J.

Patrick Hanks
Collins Publishers
Glasgow, Scotland

Abstract

The term word association is used in a very particular sense in the psycholinguistic literature. (Generally speaking, subjects respond quicker than normal to the word "nurse" if it follows a highly associated word such as "doctor.") We will extend the term to provide the basis for a statistical description of a variety of interesting linguistic phenomena, ranging from semantic relations of the doctor/nurse type (content word/content word) to lexico-syntactic co-occurrence constraints between verbs and prepositions (content word/function word). This paper will propose a new objective measure based on the information theoretic notion of mutual information, for estimating word association norms from computer readable corpora. (The standard method of obtaining word association norms, testing a few thousand subjects on a few hundred words, is both costly and unreliable.) The proposed measure, the association ratio, estimates word association norms directly from computer readable corpora, making it possible to estimate norms for tens of thousands of words.

1. Meaning and Association

It is common practice in linguistics to classify words not only on the basis of their meanings but also on the basis of their co-occurrence with other words. Running through the whole Firthian tradition, for example, is the theme that "You shall know a word by the company it keeps" (Firth, 1957).

"On the one hand, bank co-occurs with words and expressions such as money, notes, loan, account, investment, clerk, official, manager, robbery, vaults, working in a, its actions, First National, of England, and so forth. On the other hand, we find bank co-occurring with river, swim, boat, east (and of course West and South, which have acquired special meanings of their own), on top of the, and of the Rhine." [Hanks (1987), p. 127]

The search for increasingly delicate word classes is not new. In lexicography, for example, it goes back at least to the "verb patterns" described in Hornby's Advanced Learner's Dictionary (first edition 1948). What is new is that facilities for the computational storage and analysis of large bodies of natural language have developed significantly in recent years, so that it is now becoming possible to test and apply informal assertions of this kind in a more rigorous way, and to see what company our words do keep.

2. Practical Applications

The proposed statistical description has a large number of potentially important applications, including: (a) constraining the language model both for speech recognition and optical character recognition (OCR), (b) providing disambiguation cues for parsing highly ambiguous syntactic structures such as noun compounds, conjunctions, and prepositional phrases, (c) retrieving texts from large databases (e.g., newspapers, patents), (d) enhancing the productivity of computational linguists in compiling lexicons of lexico-syntactic facts, and (e) enhancing the productivity of lexicographers in identifying normal and conventional usage.

Consider the optical character recognizer (OCR) application. Suppose that we have an OCR device such as [Kahan, Pavlidis, Baird (1987)], and it has assigned about equal probability to having recognized "farm" and "form," where the context is either: (1) "federal ___ credit" or (2) "some ___ of." The proposed association measure can make use of the fact that "farm" is much more likely in the first context and "form" is much more likely in the second to resolve the ambiguity. Note that alternative disambiguation methods based on syntactic constraints such as part of speech are unlikely to help in this case since both "form" and "farm" are commonly used as nouns.
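To make the OCR application concrete, here is a hedged sketch (ours, not the paper's system; the association table values are invented) that scores each candidate word by its association with the surrounding context words and picks the better one:

# Invented association ratios I(x, y) for context/candidate pairs.
ASSOC = {("federal", "farm"): 5.2, ("farm", "credit"): 6.1,
         ("federal", "form"): 0.3, ("form", "credit"): 0.7,
         ("some", "farm"): 0.2, ("farm", "of"): 0.1,
         ("some", "form"): 2.8, ("form", "of"): 3.5}

def score(candidate, left, right):
    # Sum the candidate's association with its left and right context.
    return (ASSOC.get((left, candidate), 0.0)
            + ASSOC.get((candidate, right), 0.0))

for left, right in [("federal", "credit"), ("some", "of")]:
    best = max(["farm", "form"], key=lambda w: score(w, left, right))
    print(f"{left} ___ {right} -> {best}")
# -> federal ___ credit -> farm
# -> some ___ of -> form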
3. Word Association and Psycholinguistics

Word association norms are well known to be an important factor in psycholinguistic research, especially in the area of lexical retrieval. Generally speaking, subjects respond quicker than normal to the word "nurse" if it follows a highly associated word such as "doctor."

"Some results and implications are summarized from reaction-time experiments in which subjects either (a) classified successive strings of letters as words and nonwords, or (b) pronounced the strings. Both types of response to words (e.g., BUTTER) were consistently faster when preceded by associated words (e.g., BREAD) rather than unassociated words (e.g., NURSE)." [Meyer, Schvaneveldt and Ruddy (1975), p. 98]

Much of this psycholinguistic research is based on empirical estimates of word association norms such as [Palermo and Jenkins (1964)], perhaps the most influential study of its kind, though extremely small and somewhat dated. This study measured 200 words by asking a few thousand subjects to write down a word after each of the 200 words to be measured. Results are reported in tabular form, indicating which words were written down, and by how many subjects, factored by grade level and sex. The word "doctor," for example, is reported on pp. 98-100, to be most often associated with "nurse," followed by "sick," "health," "medicine," "hospital," "man," "sickness," "lawyer," and about 70 more words.

4. An Information Theoretic Measure

We propose an alternative measure, the association ratio, for measuring word association norms, based on the information theoretic concept of mutual information. The proposed measure is more objective and less costly than the subjective method employed in [Palermo and Jenkins (1964)]. The association ratio can be scaled up to provide robust estimates of word association norms for a large portion of the language. Using the association ratio measure, the five most associated words with "doctor" are (in order): "dentists," "nurses," "treating," "treat," and "hospitals."

What is "mutual information"? According to [Fano (1961), p. 28], if two points (words), x and y, have probabilities P(x) and P(y), then their mutual information, I(x,y), is defined to be

    I(x,y) = log2 ( P(x,y) / (P(x) P(y)) )

Informally, mutual information compares the probability of observing x and y together (the joint probability) with the probabilities of observing x and y independently (chance). If there is a genuine association between x and y, then the joint probability P(x,y) will be much larger than chance P(x) P(y), and consequently I(x,y) >> 0. If there is no interesting relationship between x and y, then P(x,y) ≈ P(x) P(y), and thus, I(x,y) ≈ 0. If x and y are in complementary distribution, then P(x,y) will be much less than P(x) P(y), forcing I(x,y) << 0. In our application, word probabilities, P(x) and P(y), are estimated by counting the number of observations of x and y in a corpus, f(x) and f(y), and normalizing by N, the size of the corpus.
(Our examples use a number of different corpora with different sizes: 15 million words for the 1987 AP corpus, 36 million words for the 1988 AP corpus, and 8.6 million tokens for the tagged corpus.) Joint probabilities, P(x,y), are estimated by counting the number of times that x is followed by y in a window of w words, fw(x,y), and normalizing by N.

The window size parameter allows us to look at different scales. Smaller window sizes will identify fixed expressions (idioms) and other relations that hold over short ranges; larger window sizes will highlight semantic concepts and other relationships that hold over larger scales. For the remainder of this paper, the window size, w, will be set to 5 words as a compromise; this setting is large enough to show some of the constraints between verbs and arguments, but not so large that it would wash out constraints that make use of strict adjacency.¹

1. This definition of fw(x,y) uses a rectangular window. It might be interesting to consider alternatives (e.g., a triangular window or a decaying exponential) that would weight words less and less as they are separated by more and more words.

Since the association ratio becomes unstable when the counts are very small, we will not discuss word pairs with f(x,y) ≤ 5. An improvement would make use of t-scores, and throw out pairs that were not significant. Unfortunately, this requires an estimate of the variance of f(x,y), which goes beyond the scope of this paper. For the remainder of this paper, we will adopt the simple but arbitrary threshold, and ignore pairs with small counts.

Technically, the association ratio is different from mutual information in two respects. First, joint probabilities are supposed to be symmetric: P(x,y) = P(y,x), and thus, mutual information is also symmetric: I(x,y) = I(y,x). However, the association ratio is not symmetric, since f(x,y) encodes linear precedence. (Recall that f(x,y) denotes the number of times that word x appears before y in the window of w words, not the number of times the two words appear in either order.) Although we could fix this problem by redefining f(x,y) to be symmetric (by averaging the matrix with its transpose), we have decided not to do so, since order information appears to be very interesting. Notice the asymmetry in the pairs below (computed from 36 million words of 1988 AP text), illustrating a wide variety of biases ranging from sexism to syntax.

Asymmetry in 1988 AP Corpus (N = 36 million)

x         y         f(x,y)  f(y,x)
doctors   nurses    81      10
man       woman     209     42
doctors   lawyers   25      16
bread     butter    14      0
save      life      106     8
save      money     155     8
save      from      144     16
supposed  to        982     21

Secondly, one might expect f(x,y) ≤ f(x) and f(x,y) ≤ f(y), but the way we have been counting, this needn't be the case if x and y happen to appear several times in the window. For example, given the sentence, "Library workers were prohibited from saving books from this heap of ruins," which appeared in an AP story on April 1, 1988, f(prohibited) = 1 and f(prohibited, from) = 2. This problem can be fixed by dividing f(x,y) by w − 1 (which has the consequence of subtracting log2(w − 1) ≈ 2 from our association ratio scores). This adjustment has the additional benefit of assuring that Σ f(x,y) = Σ f(x) = Σ f(y) = N.
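Before turning to the results, here is a minimal sketch (ours, not the authors' code) of how f(x), fw(x,y) and the association ratio could be computed with the conventions just described (window w = 5, counts of 5 or less discarded, joint counts divided by w − 1):

import math
from collections import Counter

def association_ratios(words, w=5, min_pair=6):
    N = len(words)
    f = Counter(words)
    fw = Counter()
    for i, x in enumerate(words):
        # y must follow x, within the next w - 1 positions.
        for y in words[i + 1 : i + w]:
            fw[(x, y)] += 1
    scores = {}
    for (x, y), n in fw.items():
        if n < min_pair:          # drop unstable pairs, f(x,y) <= 5
            continue
        # Divide by w - 1 so that the joint counts sum to N.
        p_xy = n / (w - 1) / N
        scores[(x, y)] = math.log2(p_xy / ((f[x] / N) * (f[y] / N)))
    return scores

# Toy usage; real estimates of course need millions of words.
corpus = ("the doctor met the nurse and the doctor thanked the nurse "
          * 10).split()
ratios = association_ratios(corpus, min_pair=2)
print(sorted(ratios.items(), key=lambda kv: -kv[1])[:3])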
When I(x,y) is large, the association ratio produces very credible results not unlike those reported in [Palermo and Jenkins (1964)], as illustrated in the table below. In contrast, when I(x,y) ≈ 0, the pairs are less interesting. (As a very rough rule of thumb, we have observed that pairs with I(x,y) > 3 tend to be interesting, and pairs with smaller I(x,y) are generally not. One can make this statement precise by calibrating the measure with subjective measures. Alternatively, one could make estimates of the variance and then make statements about confidence levels, e.g., with 95% confidence, P(x,y) > P(x) P(y).)

Some Interesting Associations with "Doctor" in the 1987 AP Corpus (N = 15 million)

I(x,y)  f(x,y)  f(x)   x          f(y)   y
11.3    12      111    honorary   621    doctor
11.3    8       1105   doctors    44     dentists
10.7    30      1105   doctors    241    nurses
9.4     8       1105   doctors    154    treating
9.0     6       275    examined   621    doctor
8.9     11      1105   doctors    317    treat
8.7     25      621    doctor     1407   bills
8.7     6       621    doctor     350    visits
8.6     19      1105   doctors    676    hospitals
8.4     6       241    nurses     1105   doctors

Some Uninteresting Associations with "Doctor"

0.96    6       621     doctor    73785  with
0.95    41      284690  a         1105   doctors
0.93    12      84716   is        1105   doctors

If I(x,y) << 0, we would predict that x and y are in complementary distribution. However, we are rarely able to observe I(x,y) << 0 because our corpora are too small (and our measurement techniques are too crude). Suppose, for example, that both x and y appear about 10 times per million words of text. Then, P(x) = P(y) = 10^-5 and chance is P(x) P(y) = 10^-10. Thus, to say that I(x,y) is much less than 0, we need to say that P(x,y) is much less than 10^-10, a statement that is hard to make with much confidence given the size of presently available corpora. In fact, we cannot (easily) observe a probability less than 1/N ≈ 10^-7, and therefore, it is hard to know if I(x,y) is much less than chance or not, unless chance is very large. (In fact, the pair (a, doctors) above appears significantly less often than chance. But to justify this statement, we need to compensate for the window size (which shifts the score downward by 2.0, e.g. from 0.96 down to −1.04) and we need to estimate the standard deviation, using a method such as [Good (1953)].)

5. Lexico-Syntactic Regularities

Although the psycholinguistic literature documents the significance of noun/noun word associations such as doctor/nurse in considerable detail, relatively little is said about associations among verbs, function words, adjectives, and other non-nouns. In addition to identifying semantic relations of the doctor/nurse variety, we believe the association ratio can also be used to search for interesting lexico-syntactic relationships between verbs and typical arguments/adjuncts. The proposed association ratio can be viewed as a formalization of Sinclair's argument:

"How common are the phrasal verbs with set? Set is particularly rich in making combinations with words like about, in, up, out, on, off, and these words are themselves very common. How likely is set off to occur? Both are frequent words; [set occurs approximately 250 times in a million words and] off occurs approximately 556 times in a million words... [T]he question we are asking can be roughly rephrased as follows: how likely is off to occur immediately after set? ... This is 0.00025 × 0.00055 [P(x) P(y)], which gives us the tiny figure of 0.0000001375 ... The assumption behind this calculation is that the words are distributed at random in a text [at chance, in our terminology]. It is obvious to a linguist that this is not so, and a rough measure of how much set and off attract each other is to compare the probability with what actually happens...
Set off occurs nearly 70 times in the 7.3 million word corpus [P(x,y) = 70/(7.3 × 10^6) >> P(x) P(y)]. That is enough to show its main patterning and it suggests that in currently-held corpora there will be found sufficient evidence for the description of a substantial collection of phrases... [Sinclair (1987b), pp. 151-152]

It happens that set ... off was found 177 times in the 1987 AP Corpus of approximately 15 million words, about the same number of occurrences per million as Sinclair found in his (mainly British) corpus. Quantitatively, I(set, off) = 5.9982, indicating that the probability of set ... off is almost 64 times greater than chance. This association is relatively strong; the other particles that Sinclair mentions have association ratios of: about (1.4), in (2.9), up (6.9), out (4.5), on (3.3) in the 1987 AP Corpus.

As Sinclair suggests, the approach is well suited for identifying phrasal verbs. However, phrasal verbs involving the preposition to raise an interesting problem because of the possible confusion with the infinitive marker to. We have found that if we first tag every word in the corpus with a part of speech using a method such as [Church (1988)], and then measure associations between tagged words, we can identify interesting contrasts between verbs associated with a following preposition to/in and verbs associated with a following infinitive marker to/to. (Part of speech notation is borrowed from [Francis and Kucera (1982)]; in = preposition; to = infinitive marker; vb = bare verb; vbg = verb + ing; vbd = verb + ed; vbz = verb + s; vbn = verb + en.) The association ratio identifies quite a number of verbs associated in an interesting way with to; restricting our attention to pairs with a score of 3.0 or more, there are 768 verbs associated with the preposition to/in and 551 verbs with the infinitive marker to/to. The ten verbs found to be most associated before to/in are:

• to/in: alluding/vbg, adhere/vb, amounted/vbn, relating/vbg, amounting/vbg, revert/vb, reverted/vbn, resorting/vbg, relegated/vbn

• to/to: obligated/vbn, trying/vbg, compelled/vbn, enables/vbz, supposed/vbn, intends/vbz, vowing/vbg, tried/vbd, enabling/vbg, tends/vbz, tend/vb, intend/vb, tries/vbz

Thus, we see there is considerable leverage to be gained by preprocessing the corpus and manipulating the inventory of tokens. For measuring syntactic constraints, it may be useful to include some part of speech information and to exclude much of the internal structure of noun phrases. For other purposes, it may be helpful to tag items and/or phrases with semantic labels such as *person*, *place*, *time*, *body-part*, *bad*, etc. Hindle (personal communication) has found it helpful to preprocess the input with the Fidditch parser [Hindle (1983a,b)] in order to identify associations between verbs and arguments, and postulate semantic classes for nouns on this basis.
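Reusing the association_ratios() sketch above (assumed to be defined in the same session), the same computation runs unchanged over tagged tokens; this toy fragment (ours; the tagged corpus is invented) shows how to/in and to/to simply become distinct vocabulary items:

# Encoding each token as "word/tag" makes the preposition/infinitive
# contrast for "to" directly measurable.
tagged = ["she/pps", "tends/vbz", "to/to", "agree/vb",
          "he/pps", "reverted/vbn", "to/in", "type/nn"] * 50
ratios = association_ratios(tagged, min_pair=2)
print(round(ratios[("tends/vbz", "to/to")], 2))  # positive: attraction
print(("tends/vbz", "to/in") in ratios)          # False: never co-occur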
6. Applications in Lexicography

Large machine-readable corpora are only just now becoming available to lexicographers. Up to now, lexicographers have been reliant either on citations collected by human readers, which introduced an element of selectivity and so inevitably distortion (rare words and uses were collected but common uses of common words were not), or on small corpora of only a million words or so, which are reliably informative for only the most common uses of the few most frequent words of English. (A million-word corpus such as the Brown Corpus is reliable, roughly, for only some uses of only some of the forms of around 4000 dictionary entries. But standard dictionaries typically contain twenty times this number of entries.)

The computational tools available for studying machine-readable corpora are at present still rather primitive. There are concordancing programs (see Figure 1 at the end of this paper), which are basically KWIC (key word in context [Aho, Kernighan, and Weinberger (1988), p. 122]) indexes with additional features such as the ability to extend the context, sort leftwards as well as rightwards, and so on. There is very little interactive software.

In a typical situation in the lexicography of the 1980s, a lexicographer is given the concordances for a word, marks up the printout with colored pens in order to identify the salient senses, and then writes syntactic descriptions and definitions. Although this technology is a great improvement on using human readers to collect boxes of citation index cards (the method Murray used in constructing the Oxford English Dictionary a century ago), it works well only if there are no more than a few dozen concordance lines for a word, and only two or three main sense divisions. In analyzing a complex word such as "take", "save", or "from", the lexicographer is trying to pick out significant patterns and subtle distinctions that are buried in literally thousands of concordance lines: pages and pages of computer printout. The unaided human mind simply cannot discover all the significant patterns, let alone group them and rank them in order of importance.

The AP 1987 concordance to "save" is many pages long; there are 666 lines for the base form alone, and many more for the inflected forms "saved," "saves," "saving," and "savings." In the discussion that follows, we shall, for the sake of simplicity, not analyze the inflected forms and we shall only look at the patterns to the right of "save".

Words Often Co-Occurring to the Right of "save"

I(x,y)  f(x,y)  f(x)  x     f(y)   y
9.5     6       724   save  170    forests
9.4     6       724   save  180    $1.2
8.8     37      724   save  1697   lives
8.7     6       724   save  301    enormous
8.3     7       724   save  447    annually
7.7     20      724   save  2001   jobs
7.6     64      724   save  6776   money
7.2     36      724   save  4875   life
6.6     8       724   save  1668   dollars
6.4     7       724   save  1719   costs
6.4     6       724   save  1481   thousands
6.2     9       724   save  2590   face
5.7     6       724   save  2311   son
5.7     6       724   save  2387   estimated
5.5     7       724   save  3141   your
5.5     24      724   save  10880  billion
5.3     39      724   save  20846  million
5.2     8       724   save  4398   us
5.1     6       724   save  3513   less
5.0     7       724   save  4590   own
4.6     7       724   save  5798   world
4.6     7       724   save  6028   my
4.6     15      724   save  13010  them
4.5     8       724   save  7434   country
4.4     15      724   save  14296  time
4.4     64      724   save  61262  from
4.3     23      724   save  23258  more
4.2     25      724   save  27367  their
4.1     8       724   save  9249   company
4.1     6       724   save  7114   month

It is hard to know what is important in such a concordance and what is not. For example, although it is easy to see from the concordance selection in Figure 1 that the word "to" often comes before "save" and the word "the" often comes after "save," it is hard to say from examination of a concordance alone whether either or both of these co-occurrences have any significance. Two examples will illustrate how the association ratio measure helps make the analysis both quicker and more accurate.

6.1 Example 1: "save ... from"

The association ratios (above) show that association norms apply to function words as well as content words.
For example, one of the words significantly associated with "save" is "from". Many dictionaries, for example Merriam-Webster's Ninth, make no explicit mention of "from" in the entry for "save", although British learners' dictionaries do make specific mention of "from" in connection with "save". These learners' dictionaries pay more attention to language structure and collocation than do American collegiate dictionaries, and lexicographers trained in the British tradition are often fairly skilled at spotting these generalizations. However, teasing out such facts, and distinguishing true intuitions from false intuitions, takes a lot of time and hard work, and there is a high probability of inconsistencies and omissions.

Which other verbs typically associate with "from," and where does "save" rank in such a list? The association ratio identified 1530 words that are associated with "from"; 911 of them were tagged as verbs. The first 100 verbs are:

refrain/vb, gleaned/vbn, stems/vbz, stemmed/vbd, stemming/vbg, ranging/vbg, stemmed/vbn, ranged/vbn, derived/vbn, ranged/vbd, extort/vb, graduated/vbd, barred/vbn, benefiting/vbg, benefitted/vbn, benefited/vbn, excused/vbd, arising/vbg, range/vb, exempts/vbz, suffers/vbz, exempting/vbg, benefited/vbd, prevented/vbd (7.0), seeping/vbg, barred/vbd, prevents/vbz, suffering/vbg, excluded/vbn, marks/vbz, profiting/vbg, recovering/vbg, discharged/vbn, rebounding/vbg, vary/vb, exempted/vbn, date/vb, banished/vbn, withdrawing/vbg, ferry/vb, prevented/vbn, profit/vb, bar/vb, excused/vbn, bars/vbz, benefit/vb, emerges/vbz, emerge/vb, varies/vbz, differ/vb, removed/vbn, exempt/vb, expelled/vbn, withdraw/vb, stem/vb, separated/vbn, judging/vbg, adapted/vbn, escaping/vbg, inherited/vbn, differed/vbd, emerged/vbd, withheld/vbd, leaked/vbn, strip/vb, insulating/vbg, discourage/vb, prevent/vb, withdrew/vbd, prohibits/vbz, borrowing/vbg, preventing/vbg, prohibit/vb, resulted/vbd (6.0), preclude/vb, divert/vb, distinguish/vb, pulled/vbn, fell/vbn, varied/vbn, emerging/vbg, suffer/vb, prohibiting/vbg, extract/vb, subtract/vb, recover/vb, paralyzed/vbn, stole/vbd, departing/vbg, escaped/vbn, prohibited/vbn, forbid/vb, evacuated/vbn, reap/vb, barring/vbg, removing/vbg, stolen/vbn, receives/vbz.

"Save ... from" is a good example for illustrating the advantages of the association ratio. Save is ranked 319th in this list, indicating that the association is modest, strong enough to be important (21 times more likely than chance), but not so strong that it would pop out at us in a concordance, or that it would be one of the first things to come to mind. If the dictionary is going to list "save ... from," then, for consistency's sake, it ought to consider listing all of the more important associations as well. Of the 27 bare verbs (tagged /vb) in the list above, all but 7 are listed in the Cobuild dictionary as occurring with "from". However, this dictionary does not note that vary, ferry, strip, divert, forbid, and reap occur with "from." If the Cobuild lexicographers had had access to the proposed measure, they could possibly have obtained better coverage at less cost.

6.2 Example 2: Identifying Semantic Classes

Having established the relative importance of "save ... from", and having noted that the two words are rarely adjacent, we would now like to speed up the labor-intensive task of categorizing the concordance lines.
Ideally, we would like to develop a set of semi-automatic tools that would help a lexicographer produce something like Figure 2, which provides an annotated summary of the 65 concordance lines for "save ... from."²

2. The last unclassified line, "...save shoppers anywhere from $50...," raises interesting problems. Syntactic "chunking" shows that, in spite of its co-occurrence of "from" with "save", this line does not belong here. An intriguing exercise, given the lookup table we are trying to construct, is how to guard against false inferences such as that, since "shoppers" is tagged [PERSON], "$50 to $500" must here count as either BAD or a LOCATION. Accidental coincidences of this kind do not have a significant effect on the measure, however, although they do serve as a reminder of the probabilistic nature of the findings.

The "save ... from" pattern occurs in about 10% of the 666 concordance lines for "save." Traditionally, semantic categories have been only vaguely recognized, and to date little effort has been devoted to a systematic classification of a large corpus. Lexicographers have tended to use concordances impressionistically; semantic theorists, AI-ers, and others have concentrated on a few interesting examples, e.g., "bachelor," and have not given much thought to how the results might be scaled up. With this concern in mind, it seems reasonable to ask how well these 65 lines for "save ... from" fit in with all other uses of "save". A laborious concordance analysis was undertaken to answer this question. When it was nearing completion, we noticed that the tags that we were inventing to capture the generalizations could in most cases have been suggested by looking at the lexical items listed in the association ratio table for "save". For example, we had failed to notice the significance of time adverbials in our analysis of "save," and no dictionary records this. Yet it should be clear from the association ratio table above that "annually" and "month"³ are commonly found with "save". More detailed inspection shows that the time adverbials correlate interestingly with just one group of "save" objects, namely those tagged [MONEY]. The AP wire is full of discussions of "saving $1.2 billion per month"; computational lexicography should measure and record such patterns if they are general, even when traditional dictionaries do not.

3. The word "time" itself also occurs significantly in the table, but on closer examination it is clear that this use of "time" (e.g., "to save time") counts as something like a commodity or resource, not as part of a time adjunct. Such are the pitfalls of lexicography (obvious when they are pointed out).

As another example illustrating how the association ratio tables would have helped us analyze the "save" concordance lines, we found ourselves contemplating the semantic tag ENV(IRONMENT) in order to analyze lines such as:

the trend to          save the forests[ENV]
it's our turn to      save the lake[ENV]
joined a fight to     save their forests[ENV]
can we get busy to    save the planet[ENV]?

If we had looked at the association ratio tables before labeling the 65 lines for "save ... from," we might have noticed the very large value for "save ... forests," suggesting that there may be an important pattern here. In fact, this pattern probably subsumes most of the occurrences of the "save [ANIMAL]" pattern noticed in Figure 2. Thus, the tables do not provide semantic tags, but they provide a powerful set of suggestions to the lexicographer for what needs to be accounted for in choosing a set of semantic tags. It may be that everything said here about "save" and other words is true only of 1987 American journalese. Intuitively, however, many of the patterns discovered seem to be good candidates for conventions of general English. A future step would be to examine other more balanced corpora and test how well the patterns hold up.

7. Conclusions

We began this paper with the psycholinguistic notion of word association norm, and extended that concept toward the information theoretic definition of mutual information. This provided a precise statistical calculation that could be applied to a very large corpus of text in order to produce a table of associations for tens of thousands of words. We were then able to show that the table encoded a number of very interesting patterns ranging from doctor ... nurse to save ... from. We finally concluded by showing how the patterns in the association ratio table might help a lexicographer organize a concordance.

In point of fact, we actually developed these results in basically the reverse order. Concordance analysis is still extremely labor-intensive, and prone to errors of omission. The ways that concordances are sorted don't adequately support current lexicographic practice. Despite the fact that a concordance is indexed by a single word, often lexicographers actually use a second word such as "from" or an equally common semantic concept such as a time adverbial to decide how to categorize concordance lines. In other words, they use two words to triangulate in on a word sense. This triangulation approach clusters concordance lines together into word senses based primarily on usage (distributional evidence), as opposed to intuitive notions of meaning. Thus, the question of what is a word sense can be addressed with syntactic methods (symbol pushing), and need not address semantics (interpretation), even though the inventory of tags may appear to have semantic values.

The triangulation approach requires "art." How does the lexicographer decide which potential cut points are "interesting" and which are merely due to chance? The proposed association ratio score provides a practical and objective measure which is often a fairly good approximation to the "art." Since the proposed measure is objective, it can be applied in a systematic way over a large body of material, steadily improving consistency and productivity. But on the other hand, the objective score can be misleading. The score takes only distributional evidence into account. For example, the measure favors "set ... for" over "set ... down"; it doesn't know that the former is less interesting because its semantics are compositional. In addition, the measure is extremely superficial; it cannot cluster words into appropriate syntactic classes without an explicit preprocess such as Church's parts program or Hindle's parser. Neither of these preprocesses, though, can help highlight the "natural" similarity between nouns such as "picture" and "photograph." Although one might imagine a preprocess that would help in this particular case, there will probably always be a class of generalizations that are obvious to an intelligent lexicographer, but lie hopelessly beyond the objectivity of a computer.
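As a rough illustration of this triangulation idea (ours, not a tool from the paper; the cue lists, helper name, and bucket labels are invented), concordance lines for a keyword can be bucketed by a second cue word or cue class:

TIME_ADVERBIALS = {"annually", "monthly", "yesterday", "month", "year"}

def triangulate(lines, keyword="save"):
    # Group concordance lines by a second co-occurring cue.
    buckets = {"save ... from": [], "save + TIME": [], "other": []}
    for line in lines:
        words = line.lower().split()
        tail = words[words.index(keyword) + 1:]   # right context only
        if "from" in tail:
            buckets["save ... from"].append(line)
        elif TIME_ADVERBIALS & set(tail):
            buckets["save + TIME"].append(line)
        else:
            buckets["other"].append(line)
    return buckets

lines = ["to save the boys from a creek",
         "could save enormous sums annually",
         "to save the majestic masterpiece"]
for bucket, members in triangulate(lines).items():
    print(bucket, len(members))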
A future step would be to examine other more balanced corpora and test how well the patterns hold up. 7. ConcluMom We began this paper with the psycholinguistic notion • of word association norm, and extended that concept toward the information theoretic def'mition of mutual information. This provided a precise statistical calculation that could be applied to a very 3. The word "time" itself also occurs significantly in the table, but on clco~ examination it is clear that this use of "time" (e.g., "to save time") counts as something like a commodity or resource, not as part of a time adjunct. Such are the pitfalls of lexicography (obvious when they are pointed out). 81 large corpus of text in order to produce a table of associations for tens of thousands of words, We were then able to show that the table encoded a number of very interesting patterns ranging from doctor ... nurse to save ... from. We finally concluded by showing how the patterns in the association ratio table might help a lexicographer organize a concordance. In point of fact, we actually developed these resuks in basically the reverse order. Concordance analysis is stilt extremely labor-intensive, and prone to errors of omission. The ways that concordances are sorted don't adequately support current lexicographic practice. Despite the fact that a concordance is indexed by a single word, often lexicographers actually use a second word such as "from" or an equally common semantic concept such as a time adverbial to decide how to categorize concordance lines. In other words, they use two words to triangulate in on a word sense. This triangulation approach clusters concordance Lines together into word senses based primarily on usage (distributional evidence), as opposed to intuitive notions of meaning. Thus, the question of what is a word sense can be addressed with syntactic methods (symbol pushing), and need not address semantics (interpretation), even though the inventory of tags may appear to have semantic values. The triangulation approach requires "art." How does the lexicographer decide which potential cut points are "interesting" and which are merely due to chance? The proposed association ratio score provides a practical and objective measure which is often a fairly good approximation to the "art." Since the proposed measure is objective, it can be applied in a systematic way over a large body of material, steadily improving consistency and productivity. But on the other hand, the objective score can be misleading. The score takes only distributional evidence into account. For example, the measure favors "set ... for" over "set ... down"; it doesn't know that the former is less interesting because its semantics are compositional. In addition, the measure is extremely superficial; it cannot cluster words into appropriate syntactic classes without an explicit preprocess such as Church's parts program "or Hindle's parser. Neither of these preprocesses, though, can help highlight the "natural" similarity between nouns such as "picture" and "photograph." Although one might imagine a preprocess that would help in this particular case, there will probably always be a class of generalizations that are obvious 82 to an intelligent lexicographer, but lie hopelessly beyond the objectivity of a computer. 
Despite these problems, the association ratio could be an important tool to aid the lexicographer, rather like an index to the concordances. It can help us decide what to look for; it provides a quick summary of what company our words do keep.

References

Church, K. (1988). "A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text," Second Conference on Applied Natural Language Processing, Austin, Texas.
Fano, R. (1961). Transmission of Information, MIT Press, Cambridge, Massachusetts.
Firth, J. (1957). "A Synopsis of Linguistic Theory 1930-1955," in Studies in Linguistic Analysis, Philological Society, Oxford; reprinted in Palmer, F. (ed. 1968), Selected Papers of J.R. Firth, Longman, Harlow.
Francis, W., and Kucera, H. (1982). Frequency Analysis of English Usage, Houghton Mifflin Company, Boston.
Good, I. J. (1953). "The Population Frequencies of Species and the Estimation of Population Parameters," Biometrika, Vol. 40, pp. 237-264.
Hanks, P. (1987). "Definitions and Explanations," in Sinclair (1987b).
Hindle, D. (1983a). "Deterministic Parsing of Syntactic Non-fluencies," ACL Proceedings.
Hindle, D. (1983b). "User Manual for Fidditch, a Deterministic Parser," Naval Research Laboratory Technical Memorandum 7590-142.
Hornby, A. (1948). The Advanced Learner's Dictionary, Oxford University Press.
Kahan, S., Pavlidis, T., and Baird, H. (1987). "On the Recognition of Printed Characters of Any Font or Size," IEEE Transactions PAMI, pp. 274-287.
Meyer, D., Schvaneveldt, R., and Ruddy, M. (1975). "Loci of Contextual Effects on Visual Word-Recognition," in Rabbitt, P., and Dornic, S. (eds.), Attention and Performance V, Academic Press, London, New York, San Francisco.
Palermo, D., and Jenkins, J. (1964). Word Association Norms, University of Minnesota Press, Minneapolis.
Sinclair, J., Hanks, P., Fox, G., Moon, R., Stock, P. (eds.) (1987a). Collins Cobuild English Language Dictionary, Collins, London and Glasgow.
Sinclair, J. (1987b). "The Nature of the Evidence," in Sinclair, J. (ed.), Looking Up: an account of the COBUILD Project in lexical computing, Collins, London and Glasgow.

Figure 1: Short Sample of the Concordance to "Save" from the AP 1987 Corpus

rs Sunday, calling for greater economic reforms to | save China from poverty.
mission asserted that "the Postal Service could | save enormous sums of money in contracting out individual c
Then, she said, the family hopes to | save enough for a down payment on a home.
out-of-work steelworker, "because that doesn't | save jobs, that costs jobs."
"We suspend reality when we say we'll | save money by spending $10,000 in wages for a public works
scientists has won the first round in an effort to | save one of Egypt's great treasures, the decaying tomb of R
about three children in a mining town who plot to | save the "pit ponies" doomed to be slaughtered.
GM executives say the shutdowns will | save the automaker $500 million a year in operating costs a
rtment as receiver, instructed officials to try to | save the company rather than liquidate it and then declared
The package, which is to | save the country nearly $2 billion, also includes a program
newly enhanced image as the moderate who moved to | save the country.
million offer from chairman Victor Posner to help | save the financially troubled company, but said Posner sail
after telling a delivery-room doctor not to try to | save the infant by inserting a tube in its throat to help i
h birthday Tuesday, cheered by those who fought to | save the majestic Beaux Arts architectural masterpiece.
at he had formed an alliance with Moslem rebels to | save the nation from communism.
"Basically we could | save the operating costs of the Pershings and ground-launch
We worked for a year to | save the site at enormous expense to us," said Leveillee.
their expensive robes, just like in wartime, to | save them from drunken Yankee brawlers," Tass said.
ard of many who risked their own lives in order to | save those who were passengers."
We must increase the amount Americans | save.
Figure 2: Some AP 1987 Concordance lines to 'save ... from,' roughly sorted into categories

save X from Y (65 concordance lines)

1. save PERSON from Y (23 concordance lines)
1.1 save PERSON from BAD (19 concordance lines)
    ( Robert DeNiro ) to save Indian tribes[PERSON] from genocide[DESTRUCT[BAD]] at the hands of
    "We wanted to save him[PERSON] from undue trouble[BAD] and lots[BAD] of money," Murphy
    was sacrificed to save more powerful Democrats[PERSON] from harm[BAD]
    "God sent this man to save my five children[PERSON] from being burned to death[DESTRUCT[BAD]] and
    Pope John Paul II to "save us[PERSON] from sin[BAD]."
1.2 save PERSON from (BAD) LOC(ATION) (4 concordance lines)
    rescuers who helped save the toddler[PERSON] from an abandoned well[LOC] will be feted with a parade
    while attempting to save two drowning boys[PERSON] from a turbulent[BAD] creek[LOC] in Ohio[LOC]

2. save INST(ITUTION) from (ECON) BAD (27 concordance lines)
    member states to help save the EEC[INST] from possible bankruptcy[ECON][BAD] this year.
    should be sought "to save the company[CORP[INST]] from bankruptcy[ECON][BAD].
    law was necessary to save the country[NATION[INST]] from disaster[BAD].
    operation "to save the nation[NATION[INST]] from Communism[BAD][POLITICAL].
    were not needed to save the system from bankruptcy[ECON][BAD].
    his efforts to save the world[INST] from the likes of Lothar and the Spider Woman

3. save ANIMAL from DESTRUCT(ION) (5 concordance lines)
    give them the money to save the dogs[ANIMAL] from being destroyed[DESTRUCT].
    program intended to save the giant birds[ANIMAL] from extinction[DESTRUCT].

UNCLASSIFIED (10 concordance lines)
    walnut and ash trees to save them from the axes and saws of a logging company.
    after the attack to save the ship from a terrible[BAD] fire, Navy reports concluded Thursday.
    statutes that would save shoppers[PERSON] anywhere from $50[MONEY][NUMBER] to $500[MONEY][NUMBER]
LEXICAL ACCESS IN CONNECTED SPEECH RECOGNITION

Ted Briscoe
Computer Laboratory
University of Cambridge
Cambridge, CB2 3QG, UK.

ABSTRACT

This paper addresses two issues concerning lexical access in connected speech recognition: 1) the nature of the pre-lexical representation used to initiate lexical look-up; 2) the points at which lexical look-up is triggered off this representation. The results of an experiment are reported which was designed to evaluate a number of access strategies proposed in the literature in conjunction with several plausible pre-lexical representations of the speech input. The experiment also extends previous work by utilising a dictionary database containing a realistic rather than illustrative English vocabulary.

THEORETICAL BACKGROUND

In most recent work on the process of word recognition during comprehension of connected speech (either by human or machine) a distinction is made between lexical access and word recognition (e.g. Marslen-Wilson & Welsh, 1978; Klatt, 1979). Lexical access is the process by which contact is made with the lexicon on the basis of an initial acoustic-phonetic or phonological representation of some portion of the speech input. The result of lexical access is a cohort of potential word candidates which are compatible with this initial analysis. (The term cohort is used descriptively in this paper and does not represent any commitment to the particular account of lexical access and word recognition provided by any version of the cohort theory (e.g. Marslen-Wilson, 1987).) Most theories assume that the candidates in this cohort are successively whittled down both on the basis of further acoustic-phonetic or phonological information, as more of the speech input becomes available, and on the basis of the candidates' compatibility with the linguistic and extralinguistic context of utterance. When only one candidate remains, word recognition is said to have taken place. Most psycholinguistic work in this area has focussed on the process of word recognition after a cohort of candidates has been selected, emphasising the role of further lexical or 'higher-level' linguistic constraints such as word frequency, lexical semantic relations, or syntactic and semantic congruity of candidates with the linguistic context (e.g. Bradley & Forster, 1987; Marslen-Wilson & Welsh, 1978). The few explicit and well-developed models of lexical access and word recognition in continuous speech (e.g. TRACE, McClelland & Elman, 1986) have small and unrealistic lexicons of, at most, a few hundred words and ignore phonological processes which occur in fluent speech. Therefore, they tend to overestimate the amount and reliability of acoustic information which can be directly extracted from the speech signal (either by human or machine) and make unrealistic and overly-optimistic assumptions concerning the size and diversity of candidates in a typical cohort. This, in turn, casts doubt on the real efficacy of the putative mechanisms which are intended to select the correct word from the cohort. The bulk of engineering systems for speech recognition have finessed the issues of lexical access and word recognition by attempting to map directly from the acoustic signal to candidate words, by pairing words with acoustic representations of the canonical pronunciation of the word in the lexicon and employing pattern-matching, best-fit techniques to select the most likely candidate (e.g. Sakoe & Chiba, 1971).
However, these techniques have only proved effective for isolated word recognition of small vocabularies with the system trained to an individual speaker, as, for example, Zue & Huttenlocher (1983) argue. Furthermore, any direct access model of this type which does not incorporate a pre-lexical symbolic representation of the input will have difficulty capturing many rule-governed phonological processes which affect the pronunciation of words in fluent speech, since these processes can only be characterised adequately in terms of operations on a symbolic, phonological representation of the speech input (e.g. Church, 1987; Frazier, 1987; Wiese, 1986). The research reported here forms part of an ongoing programme to develop a computationally explicit account of lexical access and word recognition in connected speech, which is at least informed by experimental results concerning the psychological processes and mechanisms which underlie this task. To guide research, we make use of a substantial lexical database of English derived from machine-readable versions of the Longman Dictionary of Contemporary English (see Boguraev et al., 1987; Boguraev & Briscoe, 1989) and of the Medical Research Council's psycholinguistic database (Wilson, 1988), which incorporates word frequency information. This specialised database system provides flexible and powerful querying facilities into a database of approximately 30,000 English word forms (with 60,000 separate entries). The querying facilities can be used to explore the lexical structure of English and simulate different approaches to lexical access and word recognition. Previous work in this area has often relied on small illustrative lexicons, which tends to lead to overestimation of the effectiveness of various approaches. There are two broad questions to ask concerning the process of lexical access. Firstly, what is the nature of the initial representation which makes contact with the lexicon? Secondly, at what points during the (continuous) analysis of the speech signal is lexical look-up triggered? We can illustrate the import of these questions by considering an example like (1) (modified from Klatt via Church, 1987).

(1) a) Did you hit it to Tom?
    b) [dIjE hIdI? tE tam]

(Where 'I' represents a high, front vowel, 'E' schwa, 'd' a flapped or neutralised stop, and '?' a glottal stop.)

The phonetic transcription of one possible utterance of (1a) in (1b) demonstrates some of the problems involved in any 'direct' mapping from the speech input to lexical entries not mediated by the application of phonological rules. For example, the palatalisation of final /d/ before /y/ in /did/ means that any attempt to relate that portion of the speech input to the lexical entry for did is likely to fail. Similar points can be made about the flapping and glottalisation of the /t/ phonemes in /hit/ and /it/, and the vowel reductions to schwa. In addition, (1) illustrates the well-known point that there are no 100% reliable phonetic or phonological cues to word boundaries in connected speech. Without further phonological and lexical analysis there is no indication in a transcription like (1b) of where words begin or end; for example, how does the lexical access system distinguish word-initial /I/ in /it/ from word-internal /I/ in /hit/? In this paper, I shall argue for a model which splits the lexical access process into a pre-lexical phonological parsing stage and then a lexical entry retrieval stage.
The model is similar to that of Church (1987); however, I argue, firstly, that the initial phonological representation recovered from the speech input is more variable and often less detailed than that assumed by Church and, secondly, that the lexical entry retrieval stage is more directed, in order to reduce the number of spurious lexical entries accessed and to compensate for likely indeterminacies in the initial representation.

THE PRE-LEXICAL PHONOLOGICAL REPRESENTATION

Several researchers have argued that phonological processes, such as the palatalisation of /d/ in (1), create problems for the word recognition system because they 'distort' the phonological form of the word. Church (1987) and Frazier (1987) argue persuasively that, far from creating problems, such phonological processes provide important clues to the correct syllabic segmentation of the input and thus to the location of word boundaries. However, this argument only goes through on the assumption that quite detailed 'narrow' phonetic information is recovered from the signal, such as aspiration of /t/ in /tE/ and /tam/ in (1), in order to recognise the preceding syllable boundaries. It is only in terms of this representation that phonological processes can be recognised and their effects 'undone' in order to allow correct matching of the input against the canonical phonological representations contained in lexical entries. Other researchers (e.g. Shipman & Zue, 1982) have argued (in the context of isolated word recognition) that the initial representation which contacts the lexicon should be a broad manner-class transcription of the stressed syllables in the speech signal. The evidence in favour of this approach is, firstly, that extraction of more detailed information is notoriously difficult and, secondly, that a broad transcription of this type appears to be very effective in partitioning the English lexicon into small cohorts. For example, Huttenlocher (1985) reports an average cohort size of 21 words for a 20,000 word lexicon using a six-category manner of articulation transcription scheme (employing the categories: Stop, Strong-Fricative, Weak-Fricative, Nasal, Glide-Liquid, and Vowel). This claim suggests that the English lexicon is functionally organised to favour a system which initiates lexical access from a broad manner class pre-lexical representation, because most of the discriminatory information between different words is concentrated in the manner of articulation of stressed syllables. Elsewhere, we have argued that these ideas are misleadingly presented and that there is, in fact, no significant advantage for manner information in stressed syllables (e.g. Carter et al., 1987; Carter, 1987, 1989). We found that there is no advantage per se to a manner class analysis of stressed syllables, since a similar analysis of unstressed syllables is as discriminatory and yields as good a partitioning of the English lexicon. However, concentrating on a full phonemic analysis of stressed syllables provides about 10% more information than a similar analysis of unstressed syllables. This research suggests, then, that the pre-lexical representation used to initiate lexical access can only afford to concentrate exclusively on stressed syllables if these are analysed (at least) phonemically.
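The partitioning experiment just described is easy to picture in code. The sketch below is a hedged illustration, not the reported system: the phoneme-to-class table is partial and invented, and real cohort statistics obviously depend on the full lexicon used.

    from collections import defaultdict

    # Six manner-of-articulation classes, as in the Huttenlocher scheme
    # named above.  The phoneme-to-class table is partial and illustrative.
    MANNER = {
        "p": "S", "t": "S", "k": "S", "b": "S", "d": "S", "g": "S",   # Stop
        "s": "SF", "z": "SF", "tS": "SF", "dZ": "SF",                 # Strong-Fricative
        "f": "WF", "v": "WF", "T": "WF", "D": "WF", "h": "WF",        # Weak-Fricative
        "m": "N", "n": "N", "N": "N",                                 # Nasal
        "l": "G", "r": "G", "w": "G", "j": "G",                       # Glide-Liquid
    }

    def broad_class(phonemes):
        """Map a phonemic transcription to a broad manner-class string;
        anything not listed above is treated here as a Vowel ("V")."""
        return "-".join(MANNER.get(p, "V") for p in phonemes)

    def cohorts(lexicon):
        """Partition a {word: phoneme list} lexicon by broad transcription;
        the size of each bucket is the cohort size for that pattern."""
        table = defaultdict(list)
        for word, phones in lexicon.items():
            table[broad_class(phones)].append(word)
        return table

The average bucket size of such a partition is exactly the "average cohort size" statistic reported above, which is why the same machinery can compare manner-class analyses of stressed and unstressed syllables.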
None of these studies consider the extractability of the classifications from speech input, however; whilst there is a general belief that it is easier to extract information from stressed portions of the signal, there is little reason to believe that manner class information is, in general, more or less accessible than other phonologically relevant features. A second argument which can be made against the use of broad representations to contact the lexicon (in the context of connected speech) is that such representations will not support the phonological parsing necessary to 'undo' such processes as palatalisation. For example, in (1) the final /d/ of did will be realised as /j/ and categorised as a strong-fricative followed by liquid-glide using the proposed broad manner transcription. Therefore, palatalisation will need to be recognised before the required stop-vowel-stop representation can be recovered and used to initiate lexical access. However, applying such phonological rules in a constrained and useful manner requires a more detailed input transcription. Palatalisation illustrates this point very clearly; not all sequences which will be transcribed as strong-fricative followed by liquid-glide can undergo this process by any means (e.g. /Sl/), but there will be no way of preventing the rule over-applying in many inappropriate contexts and thus presumably leading to the generation of many spurious word candidates. A third argument against the use of exclusively broad representations is that these representations will not support the effective recognition of syllable boundaries and some word boundaries on the basis of phonotactic and other phonological sequencing constraints. For example, Church (1987) proposes an initial syllabification of the input as a prerequisite to lexical access, but his syllabification of the speech input exploits phonotactic constraints and relies on the extraction of allophonic features, such as aspiration, to guide this process. Similarly, Harrington et al. (1988) argue that approximately 45% of word boundaries are, in principle, recognisable because they occur in phoneme sequences which are rare or forbidden word-internally. However, exploitation of these English phonological constraints would be considerably impaired if the pre-lexical representation of the input is restricted to a broad classification. It might seem self-evident that people are able to recognise phonemes in speech, but in fact the psychological evidence suggests that this ability is mediated by the output of the word recognition process rather than being an essential prerequisite to its success. Phoneme-monitoring experiments, in which subjects listen for specified phonemes in speech, are sensitive to lexical effects such as word frequency, semantic association, and so forth (see Cutler et al., 1987 for a summary of the experimental literature and a putative explanation of the effect), suggesting that information concerning at least some of the phonetic content of a word is not available until after the word is recognised. Thus, people's ability to recognise phonemes tells us very little about the nature of the representation used to initiate lexical access. Better (but still indirect) evidence comes from mispronunciation monitoring and phoneme confusion experiments (Cole, 1973; Miller & Nicely, 1955; Shepard, 1972) which suggest that listeners are likely to confuse or misreport phonemes along the dimensions predicted by distinctive feature theory.
Most errors result in reporting phonemes which differ in only one feature from the target. This result suggests that listeners are actively considering detailed phonetic information along a number of dimensions (rather than simply, say, manner of articulation). Theoretical and experimental considerations suggest, then, that, regardless of the current capabilities of automated acoustic-phonetic front-ends, systems must be developed to extract as phonetically detailed a pre-lexical phonological representation as possible. Without such a representation, phonological processes cannot be effectively recognised and compensated for in the word recognition process, and the 'extra' information conveyed in stressed syllables cannot be exploited. Nevertheless, in fluent connected speech, unstressed syllables often undergo phonological processes which render them highly indeterminate; for example, the vowel reductions in (1). Therefore, it is implausible to assume that any (human or machine) front-end will always output an accurate narrow phonetic, phonemic or perhaps even broad (say, manner class) transcription of the speech input. For this reason, further processes involved in lexical access will need to function effectively despite the very variable quality of information extracted from the speech signal. This last point creates a serious difficulty for the design of effective phonological parsers. Church (1987), for example, allows himself the idealisation of an accurate 'narrow' phonetic transcription. It remains to be demonstrated that any parsing techniques developed for determinate symbolic input will transfer effectively to real speech input (and such a test may have to await considerably better automated front-ends). For the purposes of the next section, I assume that some such account of phonological parsing can be developed and that the pre-lexical representation used to initiate lexical access is one in which phonological processes have been 'undone' in order to construct a representation close to the canonical (phonemic) representation of a word's pronunciation. However, I do not assume that this representation will necessarily be accurate to the same degree of detail throughout the input.

LEXICAL ACCESS STRATEGIES

Any theory of word recognition must provide a mechanism for the segmentation of connected speech into words. In effect, the theory must explain how the process of lexical access is triggered at appropriate points in the speech signal in the absence of completely reliable phonetic/phonological cues to word boundaries. The various theories of lexical access and word recognition in connected speech propose mechanisms which appear to cover the full spectrum of logical possibilities. Klatt (1979) suggests that lexical access is triggered off each successive spectral frame derived from the signal (i.e. approximately every 5 msecs.), McClelland & Elman (1986) suggest each successive phoneme, Church (1987) suggests each syllable onset, Grosjean & Gee (1987) suggest each stressed syllable onset, and Cutler & Norris (1988) suggest each prosodically strong syllable onset. Finally, Marslen-Wilson & Welsh (1978) suggest that segmentation of the speech input and recognition of word boundaries is an indivisible process in which the endpoint of the previous word defines the point at which lexical access is triggered again.
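The bottom-up strategies in this list differ only in which positions of the input they treat as potential word onsets, which a short sketch can make concrete. This is our illustration, under the assumption that the input is already segmented into syllables with strong/weak flags; the word-offset strategy is omitted because it depends on recognising the previous word rather than on the input alone.

    def trigger_points(syllables, strong):
        """Return, for three of the strategies above, the segment indices
        at which lexical look-up would be initiated.  `syllables` is a
        list of phoneme lists; `strong` is a parallel list of booleans."""
        phoneme, syllable, strong_syllable = [], [], []
        pos = 0
        for syll, is_strong in zip(syllables, strong):
            syllable.append(pos)                         # every syllable onset
            if is_strong:
                strong_syllable.append(pos)              # strong onsets only
            phoneme.extend(range(pos, pos + len(syll)))  # every phoneme
            pos += len(syll)
        return {"phoneme": phoneme, "syllable": syllable,
                "strong syllable": strong_syllable}

    # "rainbow": two syllables, the first strong
    print(trigger_points([["r", "eI", "n"], ["b", "Eu"]], [True, False]))
    # -> {'phoneme': [0, 1, 2, 3, 4], 'syllable': [0, 3], 'strong syllable': [0]}

The fewer trigger points a strategy generates, the fewer access paths it initiates, which is what the experiment below quantifies.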
Some of these access strategies have been evaluated with respect to three input transcriptions (which are plausible candidates for the pre-lexical representation on the basis of the work discussed in the previous section) in the context of a realistic sized lexicon. The experiment involved one sentence taken from a reading of the 'Rainbow passage' which had been analysed by several phoneticians for independent purposes. This sentence is reproduced in (2a) with the syllables which were judged to be strong by the phoneticians underlined.

(2) a) The _rain_bow is a di_vi_sion of _white_ _light_ into _ma_ny _beau_tiful _col_ours
    b) WF-V reIn bEu V-SF V S-V vI SF-V-N V-SF waIt laIt V-N S-V men V bju: S-V WF-V-G k^l V-SF

This utterance was transcribed: 1) fine class, using phonemic transcription throughout; 2) mid class, using phonemic transcription of strong syllables and a six-category manner of articulation transcription of weak syllables; 3) broad class, as mid class but suppressing voicing distinctions in the strong syllable transcriptions. (2b) gives the mid class transcription of the utterance. In this transcription, phonemes are represented in a manner compatible with the scheme employed in the Longman Dictionary of Contemporary English, and the manner class categories in capitals are Stop, Strong-Fricative, Weak-Fricative, Nasal, Glide-Liquid, and Vowel, as in Huttenlocher (1985) and elsewhere. The terms fine, mid and broad for each transcription scheme are intended purely descriptively and are not necessarily related to other uses of these terms in the literature. Each of the schemes is intended to represent a possible behaviour of an acoustic-phonetic front-end. The less determinate transcriptions can be viewed either as the result of transcription errors and indeterminacies or as the output of a less ambitious front-end design. The definition of syllable boundary employed is, of necessity, that built into the syllable parser which acts as the interface to the dictionary database (e.g. Carter, 1989). The parser syllabifies phonemic transcriptions according to the phonotactic constraints given in Gimson (1980) and utilises the maximal onset principle (Selkirk, 1978) where this leads to ambiguity. Each of the three transcriptions was used as a putative pre-lexical representation to test some of the different access strategies, which were used to initiate lexical look-up into the dictionary database. The four access strategies which were tested were: 1) phoneme, using each successive phoneme to trigger an access attempt; 2) word, using the offset of the previous (correct) word in the input to control access attempts; 3) syllable, attempting look-up at each syllable boundary; 4) strong syllable, attempting look-up at each strong syllable boundary. That is, the first strategy assumes a word may begin at any phoneme boundary, the second that a word may only begin at the end of the previous one, the third that a word may begin at any syllable boundary, and the fourth that a word may begin at a strong syllable boundary. The strong syllable strategy uses a separate look-up process for typically unstressed grammatical, closed-class vocabulary and allows the possibility of extending look-up 'backwards' over one preceding weak syllable. It was assumed, for the purposes of the experiment, that look-up off weak syllables would be restricted to closed-class vocabulary, would not extend into a strong syllable, and that this process would precede attempts to incorporate a weak syllable 'backwards' into an open-class word. The direct access approach was not considered because of its implausibility in the light of the discussion in the previous section. The stressed syllable account is very similar to the strong syllable approach, but given the problem of stress shift in fluent speech, a formulation in terms of strong syllables, which are defined in terms of the absence of vowel reduction, is preferable. Work by Marslen-Wilson and his colleagues (e.g. Marslen-Wilson & Warren, 1987) suggests that, whatever access strategy is used, there is no delay in the availability of information derived from the speech signal to further select from the cohort of word candidates. This suggests that a model in which units (say syllables) of the pre-lexical representation are 'pre-packaged' and then used to trigger a look-up attempt is implausible. Rather, the look-up process must involve the continuous integration of information from the pre-lexical representation immediately it becomes available. Thus the question of access strategy concerns only the points at which this look-up process is initiated. In order to simulate the continuous aspect of lexical access using the dictionary database, database look-up queries for each strategy were initiated using the two phonemes/segments from the trigger point and then again with three phonemes/segments and so on until no further English words in the database were compatible with the look-up query (except for closed-class access with the strong syllable strategy, where a strong syllable boundary terminated the sequence of accesses). The size of the resulting cohorts was measured for each successively larger query; for example, using a fine class transcription and triggering access from the /r/ of rainbow yields an initial cohort of 89 candidates compatible with /reI/. This cohort drops to 12 words when /n/ is added and to 1 word when /b/ is also included, and finally goes to 0 when the vowel of bow is added. Each sequence of queries of this type which all begin at the same point in the signal will be referred to as an access path. The difference between the access strategies is mostly in the number of distinct access paths they generate. Simulating access attempts using the dictionary database involves generating database queries consisting of partial phonological representations which return sets of words and entries which satisfy the query. For example, Figure 1 represents the query corresponding to the complete broad-class transcription of appoint. This query matches 37 word forms in the database.

[ [pron [nsylls 2]
    [s1 [peak ?]]
    [s2 [stress 2]
        [onset (OR b d g k p t)]
        [peak ?]
        [coda (OR m n N) (OR b d g k p t)]]]]

Figure 1 - Database query for 'appoint'.

The experiment involved generating sequences of queries of this type and recording the number of words found in the database which matched each query. Figure 2 shows the partial word lattice for the mid class transcription of the rainbow is, using the strong syllable access strategy. In this lattice, access paths involving successively larger portions of the signal are illustrated. The number under each access attempt represents the size of the set of words whose phonology is compatible with the query.
It was assumed, for the purposes of the experiment, that look- up off weak syllables would be restricted to closed-class vocabulary, would not extend into a strong syllable, and that this process would precede attempts to incorporate a weak syllable *backwards' into an open-class word. The direct access approach was not considered because of its implausibility in the light of the discussion in the previous section. The stressed syllable account is v=y slmilar to the strong syllable approach, but given the problem of stress shift in fluent speech, a formulation in unms of strong syllables, which are defined in terms of the absence of vowel reduction, is preferable. Work by Marslen-Wilson and his colleagues (e.g. Marslen-Wilson & Warren. 1987) suggests that, whatever access strategy is used, there is no delay in the availability of information derived fi'om the speech signal to furth= select from the cohort of word candidates. This suggests that s model in which units (say syllables) of the pre-lexical representation are 'pre-packaged' and then used to wlgser a look-up attempt are implausible. Rathe~ the look-up process must involve the continuous integration of information from the pre-lexical representation immediately it becomes available. Thus the question of access strategy concerns only the points at which this look-up process is initiated. In order to simulate the continuous aspect of lexlcel access using the dictionary database, d~:__M3_ase look-up queries for each strategy were initiated using the two phonemes/segments Horn the trigger point and then again with three phonemes/segmonts and so on until no hu~er English words in the database were compatible with the look-up query (except for closed-class access with the strong syllable strategy where a strong syllable boundary terminated the sequence of accesses). The size of the resulting cohorts was measured for each successively larger query;, for example, using a fine class transcription and triggering access from the /r/ of rainbow yields an initial cohort of 89 cmdidams compatible with/re//. This cohort drops to 12 words when /n/ is added and to 1 word when /b/ is also included and finally goes to 0 when the vowel of/s is -dO,'d= Each sequence of queries of this type which all begin at the same point in the signal will be refened to as an access path. The differ, tee between the access strategies is mostly in the number of distinct access paths they generate. Simulating access attempts using the dictionary d~tnbasc involves generating database queries consisting of partial phonological representatious which return sere of words and enlries which satisfy the query. For example, Figure 1 relxesents the query corresponding to the complete broad-class trenscription of appoint. This qu=y matches 37 word forms in the database. [ [pron [nsylls 2 ] [el [peak ?] [-.2 [etreee 2] [onzet (OR b d g k p t)] [peak ?] [coda (OR m n N) (OR b d g k p t)]]]] Figure 1 - Da'-bue query for 'aR?omt'. The ex~riment involved 8enera~8 s~uen~ of queries of this type and recording the number of words found in the database which matched each query. Figure 2 shows the partial word lattice for the mid class trauscription of th, e ra/nbow /s. using the strong syllable access strategy. In this lattice access paths involving r~o'~sively larger portions of the signal are illustrated. The m=nber under each access attempt represents the size of the set of words whose phonology is compatible 87 with the query. 
Lines preceded by an arrow indicate a query which forms part of an access path, adding a further segment to the query above it. Th o 14 r ai n b ow i s a ---I ---I --I -I 89 59 5 8 " >-I >---I 12 3 >---I >-I 1 o >--I I 1 0 >---I o Fisum 2 - Partial Word Lmi¢~ The corresponding complete word lattice for the same portion of input using a mid-class tr~cription and the strong syllable strategy is shown in Figure 3. In this lattice, only words whose complete phonology is compatible with the input are shown. Th e r ai n b ow i s a I--I I--I I--I I-I I 14 1 2 5 8 I .... I 3 I I Ir~re 3 - Complete Word The different strategies ware evaluated relative to the 3 trensc6ption schemes by summing the total number of partial words matched for the test scmtence under each strategy and trans=ipdon and also by looking at the total number of complete words matched. RESULTS Table 1 below gives a selection of the more important results for each strategy by transcription scheme for the test umtence in (2). Column 1 shows the total number of access paths initiated for the test sentence under each strategy. Columns 2 to 6 shows the number of words in all the cohorts produced by the particular access strategy for the test sentence after 2 to 6 phonemes/segments of the transcription have been incorporated into each access path. Column 7 shows the total number of words which achieve a complete match during the application of the particular access strategy to the test sentence. Table 1 provides m index of the efficiency of each access strategy in terms of the overall number of candidate words which appear in cohorts and also the overall number of words which receive a full match for the test sentence. In addition, the relative performance of each strategy as the ~ption scheme becomes less determinate is clear. The test sentence contains 12 words, 20 syllables, end 45 phonemes; for the purposes of this experiment the word a in the test sentence does not trigger a look- up attempt with the word strategy because cohort sizes were only recorded for sequences of two or more phonemes/segments. Assuming a fine class trmls=iption serving as lxe-lexical input, the phoneme strategy produces 41 full matches as compared to 20 for the strong syllable strategy. This demonstrates that the strong syllable strategy is more effective at ruling out spurious word candidates for the test sentence. Furthermore, the total number of candidates considered using the phoneme strategy is 1544 (after 2 phonemes/segments) but only 720 for the strong syllable strategy, again indicafng the greater effectiveness of the lanef strategy. When we A _c¢~___- Access Strategy Paths Fine Class Phoneme 45 Word 11 Syllable 20 StrongS 17 Mld Class Word 11 Synable 20 StrongS 17 Broad Class Syllable 20 $trongS 17 No. of words after x segments: 2 3 4 1544 251 46 719 193 32 1090 210 36 720 105 24 4701 1738 802 54 12995 3221 1530 103 760 232 89 13 13744 3407 1591 140 1170 228 100 18 Table I Complete 5 6 Matches 6 2 41 5 2 25 6 2 28 5 2 20 8 249 9 380 4 80 9 117 88 consider the less determinate tran.scriptlons it becomes even clearer that only the strung syllable slrategy remains reasonably effective and does not result in a ma~ive increase in the rmmber of spurious candidates accessed and fully matched. (The phonmne strategy resets are not reporud for mid end broad class tramcrlptlons because the cohort sizes were too large for the database query facilities to cope reliably.) 
The word candidates recovered using the phoneme strategy with a fine class transcription include 10 full matches resulting from accesses triggered at non-syllabic boundaries; for example, arraign is found using the second phoneme of the and rain. This problem becomes considerably worse when moving to a less determinate transcription, illustrating very clearly the undesirable consequences of ignoring the basic linguistic constraint that word boundaries occur at syllable boundaries. Systems such as TRACE (McClelland & Elman, 1986) which use this strategy appear to compensate by using a global best-fit evaluation metric for the entire utterance which strongly disfavours 'unattached' input. However, these models still make the implausible claim that candidates like arraign will be highly activated by the speech input. The results concerning the word based strategy presume that it is possible to determinately recognise the endpoint of the preceding word. This assumption is based on the Cohort theory claim (e.g. Marslen-Wilson & Welsh, 1978) that words can be recognised before their acoustic offset, using syntactic and semantic expectations to filter the cohort. This claim has been challenged experimentally by Grosjean (1985) and Bard et al. (1988) who demonstrate that many monosyllabic words in context are not recognised until after their acoustic offset. The experiment reported here supports this experimental result because even with the fine class transcription there are 5 word candidates which extend beyond the correct word boundary and 11 full matches which end before the correct boundary. With the mid class transcription, these numbers rise to 849 and 57, respectively. It seems implausible that expectation-based constraints could be powerful enough to correctly select a unique candidate before its acoustic offset in all contexts. Therefore, the results for the word strategy reported here are overly optimistic, because in order to guarantee that the correct sequence of words is in the cohorts recovered from the input, a lexical access system based on the word strategy would need to operate non-deterministically; that is, it would need to consider several potential word boundaries in most cases. Therefore, the results for a practical system based on this approach are likely to be significantly worse. The syllable strategy is effective under the assumption of a determinate and accurate phonemic pre-lexical representation, but once we abandon this idealisation, the effectiveness of this strategy declines sharply. Under the plausible assumption that the pre-lexical input representation is likely to be least accurate/determinate for unstressed/weak syllables, the strong syllable strategy is far more robust. This is a direct consequence of triggering look-up attempts off the more determinate parts of the pre-lexical representation. Further theoretical evidence in support of the strong syllable strategy is provided by Cutler & Carter (1987) who demonstrate that a listener is six times more likely to encounter a word with a prosodically strong initial syllable than one with a weak initial syllable when listening to English speech. Experimental evidence is provided by Cutler & Norris (1988) who report results which suggest that listeners tend to treat strong, but not weak, syllables as appropriate points at which to undertake pre-lexical segmentation of the speech input.
The architecture of a lexical access system based on the syllable strategy can be quite simple in terms of the organisation of the lexicon and its access routines. It is only necessary to index the lexicon by syllable types (Church, 1987). By contrast, the strong syllable strategy requires a separate closed-class word lexicon and access system, indexing of the open-class vocabulary by strong syllable, and a more complex matching procedure capable of incorporating preceding weak syllables for words such as division. Nevertheless, the experimental results reported here suggest that the extra complexity is warranted because the resulting system will be considerably more robust in the face of inaccurate or indeterminate input concerning the nature of the weak syllables in the input utterance.

CONCLUSION

The experiment reported above suggests that the strong syllable access strategy will provide the most effective technique for producing minimal cohorts guaranteed to contain the correct word candidate from a pre-lexical phonological representation which may be partly inaccurate or indeterminate. Further work to be undertaken includes the rerunning of the experiment with further input transcriptions containing pseudo-random typical phoneme perception errors and the inclusion of further test sentences designed to yield a 'phonetically-balanced' corpus. In addition, the relative internal discriminability (in terms of further phonological and 'higher-level' syntactic and semantic constraints) of the word candidates in the varying cohorts generated with the different strategies should be examined. The importance of making use of a dictionary database with a realistic vocabulary size in order to evaluate proposals concerning lexical access and word recognition systems is highlighted by the results of this experiment, which demonstrate the theoretical implausibility of many of the proposals in the literature when we consider the consequences in a simulation involving more than a few hundred illustrative words.

ACKNOWLEDGEMENTS

I would like to thank Longman Group Ltd. for making the typesetting tape of the Longman Dictionary of Contemporary English available to me for research purposes. Part of the work reported here was supported by SERC grant GR/D/4217. I also thank Anne Cutler, Francis Nolan and Tim Sholicar for useful comments and advice. All errors remain my own.

REFERENCES

Bard, E., Shillcock, R. & Altmann, G. (1988). The recognition of words after their acoustic offsets in spontaneous speech: effects of subsequent context. Perception & Psychophysics, 44, 395-408.
Boguraev, B. & Briscoe, E. (1989). Computational Lexicography for Natural Language Processing. Longman Limited, London.
Boguraev, B., Carter, D. & Briscoe, E. (1987). A multi-purpose interface to an on-line dictionary. 3rd Conference of Eur. Assoc. for Computational Linguistics, Copenhagen.
Bradley, D. & Forster, K. (1987). A reader's view of listening. Cognition, 25, 103-34.
Carter, D. (1987). An information-theoretic analysis of phonetic dictionary access. Computer Speech and Language, 2, 1-11.
Carter, D., Boguraev, B. & Briscoe, E. (1987). Lexical stress and phonetic information: which segments are most informative. Proc. of Eur. Conference on Speech Technology, Edinburgh.
Carter, D. (1989). LDOCE and speech recognition. In Boguraev & Briscoe (1989), pp. 135-52.
Church, K. (1987). Phonological parsing and lexical retrieval. Cognition, 25, 53-69.
Cole, R. (1973). Listening for mispronunciations: a measure of what we hear during speech.
Perception & Psychophysics, 1, 153-6.
Cutler, A. & Carter, D. (1987). The predominance of strong initial syllables in the English vocabulary. Computer Speech and Language, 2, 133-42.
Cutler, A., Mehler, J., Norris, D. & Segui, J. (1987). Phoneme identification and the lexicon. Cognitive Psychology, 19, 141-77.
Cutler, A. & Norris, D. (1988). The role of strong syllables in segmentation for lexical access. J. of Experimental Psychology: Human Perception and Performance, 14, 113-21.
Frazier, L. (1987). Structure in auditory word recognition. Cognition, 25, 157-87.
Gimson, A. (1980). An Introduction to the Pronunciation of English. 3rd Edition, Edward Arnold, London.
Grosjean, F. & Gee, J. (1987). Prosodic structure and spoken word recognition. Cognition, 25, 135-155.
Harrington, J., Watson, G. & Cooper, M. (1988). Word boundary identification from phoneme sequence constraints in automatic continuous speech recognition. Proc. of 12th Int. Conf. on Computational Linguistics, Budapest, pp. 225-30.
Huttenlocher, D. (1985). Exploiting sequential phonetic constraints in recognizing spoken words. MIT AI Lab. Memo 867.
Klatt, D. (1979). Speech perception: a model of acoustic-phonetic analysis and lexical access. Journal of Phonetics, 7, 279-312.
Marslen-Wilson, W. (1987). Functional parallelism in spoken word recognition. Cognition, 25, 71-102.
Marslen-Wilson, W. & Warren, P. (1987). Continuous uptake of acoustic cues in spoken word recognition. Perception & Psychophysics, 41, 262-75.
Marslen-Wilson, W. & Welsh, A. (1978). Processing interactions and lexical access during word recognition in continuous speech. Cognitive Psychology, 10, 29-63.
McClelland, J. & Elman, J. (1986). The TRACE model of speech perception. Cognitive Psychology, 18, 1-86.
Miller, G. & Nicely, P. (1955). Analysis of some perceptual confusions among some English consonants. Journal of the Acoustical Society of America, 27, 338-52.
Sakoe, H. & Chiba, S. (1971). A dynamic programming optimization for spoken word recognition. IEEE Transactions, Acoustics, Speech and Signal Processing, ASSP-26, 43-49.
Selkirk, E. (1978). On prosodic structure and its relation to syntactic structure. Indiana University Linguistics Club, Bloomington, Indiana.
Shepard, R. (1972). Psychological representation of speech sounds. In David, E. & Denes, P., Human Communication: A Unified View, New York: McGraw-Hill.
Shipman, D. & Zue, V. (1982). Properties of large lexicons: implications for advanced isolated word recognition systems. IEEE ICASSP, Paris, 546-549.
Wiese, R. (1986). The role of phonology in speech processing. Proc. of 11th Int. Conf. on Computational Linguistics, Bonn, pp. 608-11.
Wilson, M. (1988). MRC psycholinguistic database: machine-usable dictionary, version 2.0. Behaviour Research Methods, Instrumentation & Computers, 20, 6-10.
Zue, V. & Huttenlocher, D. (1983). Computer recognition of isolated words from large vocabularies. IEEE Conference on Trends and Applications.
DICTIONARIES, DICTIONARY GRAMMARS AND DICTIONARY ENTRY PARSING

Mary S. Neff
IBM T. J. Watson Research Center, P. O. Box 704, Yorktown Heights, New York 10598

Branimir K. Boguraev
IBM T. J. Watson Research Center, P. O. Box 704, Yorktown Heights, New York 10598; Computer Laboratory, University of Cambridge, New Museums Site, Cambridge CB2 3QG

Computerist: ... But, great Scott, what about structure? You can't just bang that lot into a machine without structure. Half a gigabyte of sequential file ...
Lexicographer: Oh, we know all about structure. Take this entry for example. You see here italics as the typical ambiguous structural element marker, being apparently used as an undefined phrase-entry lemma, but in fact being the subordinate entry headword address preceding the small-cap cross-reference headword address which is nested within the gloss to a defined phrase entry, itself nested within a subordinate (bold lower-case letter) sense section in the second branch of a forked multiple part of speech main entry. Now that's typical of the kind of structural relationship that must be made crystal-clear in the eventual database.

from "Taking the Words out of His Mouth" -- Edmund Weiner on computerising the Oxford English Dictionary (The Guardian, London, March, 1985)

ABSTRACT

We identify two complementary processes in the conversion of machine-readable dictionaries into lexical databases: recovery of the dictionary structure from the typographical markings which persist on the dictionary distribution tapes and embody the publishers' notational conventions, followed by making explicit all of the codified and elided information packed into individual entries. We discuss notational conventions and tape formats, outline structural properties of dictionaries, observe a range of representational phenomena particularly relevant to dictionary parsing, and derive a set of minimal requirements for a dictionary grammar formalism. We present a general purpose dictionary entry parser which uses a formal notation designed to describe the structure of entries and performs a mapping from the flat character stream on the tape to a highly structured and fully instantiated representation of the dictionary. We demonstrate the power of the formalism by drawing examples from a range of dictionary sources which have been processed and converted into lexical databases.

1. INTRODUCTION

Machine-readable dictionaries (MRD's) are typically available in the form of publishers' typesetting tapes, and consequently are represented by a flat character stream where lexical data proper is heavily interspersed with special (control) characters. These map to the font changes and other notational conventions used in the printed form of the dictionary and designed to pack, and present in a codified compact visual format, as much lexical data as possible. To make maximal use of MRD's, it is necessary to make their data, as well as structure, fully explicit, in a data base format that lends itself to flexible querying. However, since none of the lexical data base (LDB) creation efforts to date fully addresses both of these issues, they fail to offer a general framework for processing the wide range of dictionary resources available in machine-readable form. At one extreme, the conversion of an MRD into an LDB may be carried out by a 'one-off' program -- such as, for example, used for the Longman Dictionary of Contemporary English (LDOCE) and described in Boguraev and Briscoe, 1989.
While the resulting LDB is quite explicit and complete with respect to the data in the source, all knowledge of the dictionary structure is embodied in the conversion program. On the other hand, more modular architectures consisting of a parser and a grammar -- best exemplified by Kazman's (1986) analysis of the Oxford English Dictionary (OED) -- do not deliver the structurally rich and explicit LDB ideally required for easy and unconstrained access to the source data. The majority of computational lexicography projects, in fact, fall in the first of the categories above, in that they typically concentrate on the conversion of a single dictionary into an LDB: examples here include the work by e.g. Ahlswede et al., 1986, on The Webster's Seventh New Collegiate Dictionary; Fox et al., 1988, on The Collins English Dictionary; Calzolari and Picchi, 1988, on Il Nuovo Dizionario Italiano Garzanti; van der Steen, 1982, and Nakamura, 1988, on LDOCE. Even work based on multiple dictionaries (e.g. in bilingual context: see Calzolari and Picchi, 1986) appears to have used specialized programs for each dictionary source. In addition, not an uncommon property of the LDB's cited above is their incompleteness with respect to the original source: there is a tendency to extract, in a pre-processing phase, only some fragments (e.g. part of speech information or definition fields) while ignoring others (e.g. etymology, pronunciation or usage notes). We have built a Dictionary Entry Parser (DEP) together with grammars for several different dictionaries. Our goal has been to create a general mechanism for converting to a common LDB format a wide range of MRD's demonstrating a wide range of phenomena. In contrast to the OED project, where the data in the dictionary is only tagged to indicate its structural characteristics, we identify two processes which are crucial for the 'unfolding', or making explicit, of the structure of an MRD: identification of the structural markers, followed by their interpretation in context, resulting in detailed parse trees for individual entries. Furthermore, unlike the tagging of the OED, carried out in several passes over the data and using different grammars (in order to cope with the highly complex, idiosyncratic and ambiguous nature of dictionary entries), we employ a parsing engine exploiting unification and backtracking, and using a single grammar consisting of three different sets of rules. The advantages of handling the structural complexities of MRD sources and deriving corresponding LDB's in one operation become clear below. While DEP has been described in general terms before (Byrd et al., 1987; Neff et al., 1988), this paper draws on our experience in parsing the Collins German-English / Collins English-German (CGE/CEG) and LDOCE dictionaries, which represent two very different types of machine-readable sources vis-à-vis format of the typesetting tapes and notational conventions exploited by the lexicographers. We examine more closely some of the phenomena encountered in these dictionaries, trace their implications for MRD-to-LDB parsing, show how they motivate the design of the DEP grammar formalism, and discuss treatment of typical entry configurations.
2. STRUCTURAL PROPERTIES OF MRD'S

The structure of dictionary entries is mostly implicit in the font codes and other special characters controlling the layout of an entry on the printed page; furthermore, data is typically compacted to save space in print, and it is common for different fields within an entry to employ radically different compaction schemes and abbreviatory devices. For example, the notation T5a,b,3 stands for the LDOCE grammar codes T5a;T5b;T3 (Boguraev and Briscoe, 1989, present a detailed description of the grammar coding system in this dictionary), and many adverbs are stored as run-ons of the adjectives, using the abbreviatory convention ~ly (the same convention applies to certain types of affixation in general: er, less, ness, etc.). In CGE, German compounds with a common first element appear grouped together under it:

Kinder-: ~chor m children's choir; ~dorf nt children's village; ~ehe f child marriage.

Dictionaries often factor out common substrings in data fields, as in the following LDOCE and CEG entries:

in.cu.ba.tor ... a machine for a keeping eggs warm until they HATCH b keeping alive babies that are too small to live and breathe in ordinary air

Figure 1. Definition-initial common fragment

Bankrott m -(e)s, -e bankruptcy; (fig) breakdown, collapse; (moralisch) bankruptcy. ~ machen to become or go bankrupt; den ~ anmelden or ansagen or erklären to declare oneself bankrupt.

Figure 2. Definition-final common fragment

Furthermore, a variety of conventions exists for making text fragments perform more than one function (the capitalization of HATCH above, for instance, signals a close conceptual link with the word being defined). Data of this sort is not very useful to an LDB user without explicit expansion and recovery of compacted headwords and fragments of entries. Parsing a dictionary to create an LDB that can be easily queried by a user or a program therefore implies not only tagging the data in the entry, but also recovering elided information, both in form and content. There are two broad types of machine-readable source, each requiring a different strategy for recovery of implicit structure and content of dictionary entries. On the one hand, tapes may consist of a character stream with no explicit structure markings (as OED and the Collins bilinguals exemplify); all of their structure is implied in the font changes and the overall syntax of the entry. On the other hand, sources may employ mixed representation, incorporating both global record delimiters and local structure encoded in font change codes and/or special character sequences (LDOCE and Webster's Seventh). Ideally, all MRD's should be mapped onto LDB structures of the same type, accessible with a single query language that preserves the user's intuition about the structure of lexical data (Neff et al., 1988; Tompa, 1986). Dictionary entries can be naturally represented as shallow hierarchies with a variable number of instances of certain items at each level, e.g. multiple homographs within an entry or multiple senses within a homograph. The usual inheritance mechanisms associated with a hierarchical organisation of data not only ensure compactness of representation, but also fit lexical intuitions. The figures overleaf show sample entries from CGE and LDOCE and their LDB forms with explicitly unfolded structure. Within the taxonomy of normal forms (NF) defined by relational data base theory, dictionary entries are 'unnormalized' relations in which attributes can contain other relations, rather than simple scalar values; LDB's, therefore, cannot be correctly viewed as relational data bases (see Neff et al., 1988). Other kinds of hierarchically structured data similarly fall outside of the relational NF mould; indeed, recently there have been efforts to design a generalized data model which treats flat relations, lists, and hierarchical structures uniformly (Dadam et al., 1986).
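To make the hierarchical model concrete, here is one possible encoding of such unnormalized entries as nested nodes. This is our illustrative sketch, not DEP's or LQL's internal representation; the class and field names are assumptions.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Node:
        """One node of a hierarchical entry: a tag, an optional leaf value,
        and any number of nested child nodes (instances may repeat)."""
        tag: str
        value: Optional[str] = None
        children: List["Node"] = field(default_factory=list)

    # A fragment of the "title" entry shown in Figure 3 below:
    fragment = Node("tran_group", children=[
        Node("usage_note", "of chapter"),
        Node("tran", children=[Node("word", "Überschrift"), Node("gender", "f")]),
    ])

Because a node may carry any number of children of the same tag, this representation captures exactly the repeated homographs, senses and translation groups that break the relational normal forms.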
Within the taxonomy of normal forms .(NF) de- freed by relational data base theo~, dictionary entries are 'unnormalized relations in which at- tributes can contain other relations, rather than simple scalar values; LDB's, therefore, cannot be correctly viewed as relational data bases (see Neff et al., 1988). Other kinds of hierarchically struc- tured data similarly fall outside of the relational 92 .'t~le [...] n (a) Titel m (also Sport); (of chapter) Uberschrift f; (Film) Untertitel m; (form of address) Am'ede f. what -- do yon give a bishop? wie redet or spricht man ¢inen Bischof an? (b) (Jur) (right) (Rechts)anspruch (to auf + acc), Titel (spec) m; (document) Eigentumsurkunde f. entry +-hc:l~: title t • -$upert'K~ . . . +-pos : n ~-slns • -seflsflclm: a +- tran ._qroup l +-tran I ÷~rd: Titel I +-gendmr: m I +Sin: also Sport I ÷ - t ran_g roup I :-~_rlote: of chapter I I •-word: (lberschrift I •-gender: f I +-tran_.group I +-domain: Film I ÷-trim I +-woPd: Untertitel I +-~r: m I ÷-tran~r~3up I +-usaglt_note: form of address I ÷-÷ran I +-'NON: Ant÷de I +-gender: f +-collocat ÷-source: what -- ¢o you give a bishop? *-~rget ÷-~ease: wie redet /or/ spricht man ÷inert Bischof an? ÷-$11~1 ÷-$ensllum: b +-domain: Jur ÷-÷r-an_group ÷-usagl_noti: right t-train • -Nord: Rechtsanspruch ÷'-Nord: Anspruch +-comlmmmt I •-~r4)co~p: to I +-~Poomp: auf + acc ÷-gef~Br: m e-÷ran +-word: Titel +-style: spec ÷-~ndlr: m ÷-÷ran group ÷-usage_note: document ÷-÷ran +-Nord: Eigentumsurkunde ÷-gender: f Figure 3. LDB for a CEG entry NF mould; indeed recently there have been ef- forts to design a generalized data model which treats fiat relations, lists, and hierarchical struc- Ures uniformly (Dadam et al., 1986). Our LDB rmat and Lexical Query l_anguage (LQL) sup- port the hierarchical model for dictionary data; the output of the .parser, similar to the examples in Figure 3 and Figure 4, is compacted, encoded, and loaded into an LDB. nei.~,.ce/'nju:s~ns II 'nu:-: n I a person or an÷real that annoys or causes trouble, PEST: Don't make a nuisance of yourself." sit down and be quiet! 2 an action or state of affairs which causes trouble, offence, or unpleasantness: What a nuisance! I've forgotten my ticket 3 Commit no nuisance (as a notice in a public place) Do not use this place as a a lavatory b aTIP ~ entry • -I'wJb#: nuisance I +-SUlmPhom ÷-print foist1: nui.sance I +-primaw I ÷-peon strir~j: "nju:sFns II "nu:- +-syncat: n I +-sensa_def +-sense_no: 1 •-darn I •-implicit_xrf I I +-to: pest I ÷-def stril~: a person or animal that | annoys or causes trouble: I pest ÷-example ÷-eX stril~: Don't make a nuisance of yourself: sit down an¢ be quiet/ •-sense_def • -slmse .no: 2 +.-defn I ÷-def_string: an action or state of affairs [ which causes trouble, offence. I or unpleasantness +-example • -ex_strirlg: What a nuisancel i've forgotten my ticket +-sense_def ÷-sense no: 3 ÷-de~ - ÷-h¢~ j~rase: Commit no nuisance +-quail§let: as a notice in a public place +-sub defn I a I +-def_stril~: Do not use this place I as a lavatory ÷-~.~b_dlfn +-seq_no: b ÷--defn *-i.~li¢it_xrf I *-to: tip I ÷-h¢~ no: 4 ÷-dQf s]ril~J~: Do not use this place as a tip Figure 4. LDB for an LDOCE entry 3. DEP GRAMMAR FORMALISM The choice of the hierarchical model for the rep- resentation of the LDB entries (and thus the output of DEP) has consequences for the parsing mechanism. 
For us, parsing involves determining the structure of all the data, retrieving implicit information to make it explicit, reconstructing elided information, and filling a (recursive) template, without any data loss. This contrasts with a strategy that fills slots in predefined (and finite) sets of records for a relational system, often discarding information that does not fit. In order to meet these needs, the formalism for dictionary entry grammars must meet at least three criteria, in addition to being simply a notational device capable of describing any particular dictionary format. Below we outline the basic requirements for such a formalism.

3.1 Effects of context

The grammar formalism should be capable of handling 'mildly context sensitive' input streams, as structurally identical items may have widely differing functions depending on both local and global contexts. For example, parts of speech, field labels, paraphrases of cultural items, and many other dictionary fragments all appear in the CEG in italics, but their context defines their identity and, consequently, their interpretation. Thus, in the example entry in Figure 3 above, m, (also Sport), (of chapter), and (spec) acquire the very different labels of pos, domain, usage_note, and style. In addition, to distinguish between domain labels, style labels, dialect labels, and usage notes, the rules must be able to test candidate elements against a closed set of items. Situations like this, involving subsidiary application of auxiliary procedures (e.g. string matching, or the dictionary lookup required for an example below), require that the rules be allowed to selectively invoke external functions.

The assignment of labels discussed above is based on what we will refer to in the rest of this paper as global context. In procedural terms, this is defined as the expectations of a particular grammar fragment, reflected in the names of the associated rules, which will be activated on a given pass through the grammar. Global context is a dynamic notion, best thought of as a 'snapshot' of the state of the parser at any point of processing an entry. In contrast, local context is defined by finite-length patterns of input tokens, and has the effect of identifying typographic 'clues' to the structure of an entry. Finally, immediate context reflects very local character patterns which tend to drive the initial segmentation of the 'raw' tape character stream and its fragmentation into structure- and information-carrying tokens. These three notions underlie our approach to structural analysis of dictionaries and are fundamental to the grammar formalism design.

3.2 Structure manipulation

The formalism should allow operations on the (partial) structures delivered during parsing, and not as separate tree transformations once processing is complete. This is needed, for instance, in order to handle a variety of scoping phenomena (discussed in section 5 below), factor out items common to more than one fragment within the same entry, and duplicate (sub-)trees as complete LDB representations are being fleshed out. Consider the CEG entry for "abutment":

abutment [...] n (Archit) Flügel- or Wangenmauer f.

Here, as well as in "title" (Figure 3), a copy of the gender marker common to both translations needs to migrate back to the first tran.
In addition, a copy of the common second compound element -mauer also needs to migrate (note that identifying this needs a separate noun compound parser augmented with dictionary lookup):

    entry
    +-hdw: abutment
    +-superhom
      +-sens
        +-tran_group
          +-tran
          | +-word: Flügelmauer
          | +-gender: f
          +-tran
            +-word: Wangenmauer
            +-gender: f

An example of structure duplication is illustrated by our treatment of (implicit) cross-references in LDOCE, where a link between two closely related words is indicated by having one of them typeset in small capitals embedded in a definition of the other (e.g. "PEST" and "TIP" in the definitions of "nuisance" in Figure 4). The dual purpose such words serve requires them to appear on at least two different nodes in the final LDB structure: def_string and implicit_xrf. In order to perform the required transformations, the formalism must provide an explicit handle on partial structures, as they are being built by the parser, together with operations which can manipulate them -- both in terms of structure decomposition and node migration.

In general, the formalism must be able to deal with discontinuous constituents, a problem not dissimilar to the problems of discontinuous constituents in natural language parsing; however in dictionaries like the ones we discuss the phenomena seem less regular (if discontinuous constituents can be regarded as regular at all).

3.3 Graceful failure

The nature of the information contained in dictionaries is such that certain fields within entries do not use any conventions or formal systems to present their data. For instance, the "USAGE" notes in LDOCE can be arbitrarily complex and unstructured fragments, combining straight text with a variety of notational devices (e.g. font changes, item highlighting and note segmentation) in such a way that no principled structure may be imposed on them. Consider, for example, the annotation of "loan":

    loan 2 v ... esp. AmE to give (someone) the use of; lend ... USAGE It is perfectly good AmE to use loan in the meaning of lend: He loaned me ten dollars. The word is often used in BrE, esp. in the meaning 'to lend formally for a long period': He loaned his collection of pictures to the public GALLERY but many people do not like it to be used simply in the meaning of lend in BrE...

Notwithstanding its complexity, we would still like to be able to process the complete entry, recovering as much as we can from the regularly encoded information and only 'skipping' over its truly unparseable fragment(s). Consequently, the formalism and the underlying processing framework should incorporate a suitable mechanism for explicitly handling such data, systematically occurring in dictionaries.

The notion of graceful failure is, in fact, best regarded as 'selective parsing'. Such a mechanism has the additional benefit of allowing the incremental development of dictionary grammars with (eventually) complete coverage, and arbitrary depth of analysis, of the source data: a particular grammar might choose, for instance, to treat everything but the headword, part of speech, and pronunciation as 'junk', and concentrate on elaborate parsing of the pronunciation fields, while still being able to accept all input without having to assign any structure to most of it.

4. OVERVIEW OF DEP

DEP uses as input a collection of 'raw' typesetting images of entries from a dictionary (i.e. a typesetting tape
with 'begin-end' boundaries of entries explicitly marked) and, by consulting an externally supplied grammar specification for that particular dictionary, produces explicit structural representations for the individual entries, which are either displayed or loaded into an LDB. The system consists of a rule compiler, a parsing engine, a dictionary entry template generator, an LDB loader, and various development facilities, all in a PROLOG shell. User-written PROLOG functions and primitives are easily added to the system. The formalism and rule compiler use the Modular Logic Grammars of McCord (1987) as a point of departure, but they have been substantially modified and extended to reflect the requirements of parsing dictionary entries. The compiler accepts three different kinds of rules, corresponding to the three phases of dictionary entry analysis: tokenization, retokenization, and parsing proper. Below we present informally the highlights of the grammar formalism.

4.1 Tokenization

Unlike in sentence parsing, where tokenization (or lexical analysis) is driven entirely by blanks and punctuation, the DEP grammar writer explicitly defines token delimiters and token substitutions. Tokenization rules specify a one-to-one mapping from a character substring to a rewrite token; the mapping is applied whenever the specified substring is encountered in the original typesetting tape character stream, and is only sensitive to immediate context. Delimiters are usually font change codes and other special characters or symbols; substitutions are atoms (e.g. ital_correction, field_marker) or structured terms (e.g. font(italic), sup("1")). Tokenization breaks the source character stream into a mixture of tokens and strings; the former embody the notational conventions employed by the printed dictionary, and are used by the parser to assign structure to an entry; the latter carry the textual (lexical) content of the dictionary. Some sample rules for the LDOCE machine-readable source, marking the beginning and end of font changes, or making explicit special print symbols, are shown below (to facilitate readability, (*A8) represents the hexadecimal symbol X'A8').

    delim("(*A8)", font(italic)).
    delim("(*CA)", font(begin(small_caps))).
    delim("(*CB)", font(end(small_caps))).
    delim("(*A9)", ital_correction).
    delim("(*B0)", hyphen_mark).

Immediate context, as well as local string rewrite, can be specified by more elaborate tokenization rules, in which two additional arguments specify strings to be 'glued' to the strings on the left and right of the token delimiter, respectively. For CEG, for instance, we have

    delim(".>u4<", font(bold), ".", "").
    delim(",>u4<", font(bold), ",", "").
    delim(">u5<", font(roman)).

Tokenization operates recursively on the string fragments formed by an active rule; thus, application of the first rule above to the string "xxx.>u4<yyy" results in the following token list:

    "xxx." . font(bold) . "yyy"

4.2 Retokenization

Longer-range (but still local) context sensitivity is implemented via retokenization, the effect of which is the 'normalization' of the token list. Retokenization rules conform to a general rewrite format -- a pattern on the left-hand side defines a context as a sequence of (explicit or variable place holder) tokens, in which the token list should be adjusted as indicated by the right-hand side -- and can be used to perform a range of cleaning up tasks before parsing proper.

Streamlining the token list.
Tokens without information- or structure-bearing content, such as those associated with the codes for italic correction or thin space, are removed:

    ital_correction : +Seg <=> +Seg.

Superfluous font control characters can be simply deleted, when they follow or precede certain data-carrying tokens which also incorporate typesetting information (such as a homograph superscript symbol or a pronunciation marker indicating the beginning of the scope of a phonetic font):

    pron_marker : font(phonetic) <=> pron_marker.
    sup(N) : font(F) <=> sup(N).

(Re)adjusting the token list. New tokens can be introduced in place of certain token sequences:

    bra : font(italic) <=> begin(restriction).
    font(roman) : ket <=> end(restriction).

Reconstruction of string segments. Where the initial (blind) tokenization has produced spurious fragmentation, string segments can be suitably reconstructed. For instance, a hyphen-delimited sequence of syllables in place of the print form of a headword, created by tokenization on the hyphen mark, can be 'glued' back as follows:

    +Syl_1 : hyphen_mark : +Syl_2 :
      $stringp(Syl_1) : $stringp(Syl_2)
      <=> $join(Seg, Syl_1."-".Syl_2.nil) : +Seg.

This rule demonstrates a characteristic property of the DEP formalism, discussed in more detail later: arbitrary Prolog predicates can be invoked to e.g. constrain rule application or manipulate strings. Thus, the rule only applies to string tokens surrounding a hyphen character; it manufactures, by string concatenation, a new segment which replaces the triggering pattern.

Further segmentation. Often strings need to be split, with new tokens inserted between the pieces, to correct infelicities in the tapes, or to insert markers between recognizably distinct contiguous segments that appear in the same font. The rule below implements the CGE/CEG convention that a swung dash is an implicit switch to bold if the current font is not bold already.

    font(X) : $(-X=bold) : +E : $stringp(E) :
      $concat(A,B,E) : $concat("~",Rest,B)
      <=> font(X) : +A : font(bold) : +B.

Dealing with irregular input. Rules that rearrange tokens are often needed to correct errors in the tapes. In CEG/CGE, parentheses surrounding italic items often appear (erroneously) in a roman font. A suite of rules detaches the stray parentheses from the surrounding tokens, moves them around the font marker, and glues them to the item to which they belong.

    +E : $stringp(E) : $concat(")",E1,E)
      <=> ")" : +E1.                          /* detach */
    font(F) : ")"
      <=> ")" : retoken(font(F)).             /* move */
    +E : $stringp(E) : ")" : $concat(E,")",E1)
      <=> +E1.                                /* glue */

retoken invokes retokenization recursively on the sublist beginning with font(F) and including all tokens to its right. In principle, the three rules can be subsumed by a single one; in practice, separate rules also 'catch' other types of erroneous or noisy input.

Although retokenization is conceptually a separate process, it is interleaved in practice with tokenization, bringing improvements in performance. Upon completion, the tape stream corresponding, for instance, to the LDOCE entry

    au.tis.tic /ɔ:'tistik/ adj suffering from AUTISM: autistic child/behaviour -- ~ally adv [Wa4]

(a raw character stream of field separators, font-change codes and text) is converted into a token list interleaving structural tokens -- e.g. fld_sep, pron_marker and font(begin(small_caps)) -- with the data-carrying string segments "au-tis-tic", "ɔ:'tistik", "adj", "0000", "suffering from", "autism" and "autistic child/behaviour".
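The two-pass regime of sections 4.1 and 4.2 can be pictured with the following Python sketch (ours -- DEP itself compiles such rules into Prolog): a delimiter table drives blind tokenization of the raw tape, and a retokenization pass then normalizes the token list. The delimiter codes and the consecutive-font-changes clean-up are simplified illustrations, not the system's actual rule set.

    # Illustrative only: two-pass token-list construction.
    DELIMS = {"(*A8)": ("font", "italic"),
              "(*CA)": ("font", "begin_small_caps")}

    def tokenize(tape):
        """Split a raw tape string into a mixture of tokens and strings."""
        tokens, buf, i = [], "", 0
        while i < len(tape):
            for code, tok in DELIMS.items():
                if tape.startswith(code, i):
                    if buf:
                        tokens.append(buf)
                        buf = ""
                    tokens.append(tok)
                    i += len(code)
                    break
            else:
                buf += tape[i]
                i += 1
        if buf:
            tokens.append(buf)
        return tokens

    def retokenize(tokens):
        """Normalize the token list, e.g. keep only the last of two
        consecutive font changes (one clean-up named in section 5.1)."""
        out = []
        for tok in tokens:
            if out and isinstance(tok, tuple) and tok[0] == "font" \
                   and isinstance(out[-1], tuple) and out[-1][0] == "font":
                out.pop()                     # earlier font change is moot
            out.append(tok)
        return out

    print(retokenize(tokenize("abc(*A8)(*CA)def")))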
4.3 Parsing

Parsing proper makes use of unification and backtracking to handle identification of segments by context, and is heavily augmented with non-trivial manipulation of (partial) trees, as implicit and/or elided information packed in the entries is being recovered and reorganized. Parsing is a top-down depth-first operation, and only the first successful parse is used. This strategy, augmented by a 'junk collection' mechanism (discussed below) to recover from parsing failures, turns out to be adequate for handling all of the phenomena encountered while assigning structural descriptions to dictionary entries.

Dictionary grammars follow the basic notational conventions of logic grammars; however, we use additional operators tailored to the structure manipulation requirements of dictionary parsing. In particular, the right-hand side of grammar rules admits the use of four different types of operators, designed to deal with token list consumption, token list manipulation, structure assignment, and (local) tree transformations. These operators suitably modify the expansions of grammar rules; ultimately, all rules are compiled into Prolog.

Token consumption. Tokens are removed from the token list by the + and - operators; + also assigns them as terminal nodes under the head of the invoking rule. Typically, delimiters introduced by tokenization (and retokenization) are removed once they serve their primary function of identifying local context; string segments of the token list are assigned labels and migrate to appropriate places in the final structural representation of an entry. A simple rule for the part of speech fields in CEG (Figure 3) would be:

    pos ==> -font(italic) : +Seg.

A structured term s(pos, "n".nil) is built as a result of the rule consuming, for instance, the token "n". Rule names are associated with attributes in the LDB representation for a dictionary entry; structures built by rules are pairs of the form s(name, value), where value is a list of one or more elements (strings or further structures 'returned' by recursively invoked rules).

Token list manipulation. Adjustment of the token list may be required in, for instance, simple cases of recovering elided information or reordering tokens in the input stream. This is achieved by the ins and insx operators, which respectively insert single, or sequences of, tokens into the token list at the current position; and the ++ operator, which inserts tokens (or arbitrary tree fragments) directly into the structure under construction. Assuming a global variable, hwd, bound to the headword of the current entry, and the ability to invoke a Prolog string concatenation function from within a rule (via the $ operator; see below), abbreviated morphological derivations stored as run-ons -- e.g. "autistically" from the run-on "~ally" in the entry for "autistic" above -- might be recovered by:

    run_on ==> -run_on_mark : -font(bold) : -Seg :
      $concat("~", X, Seg) : $isa(X, suffix) :
      $concat(hwd, X, Deriv) : ++Deriv.

($isa is separately defined to test for membership of a closed class of suffixes.)

Structure assignment. The ++ operator can only assign arbitrary structures directly to the node in the tree which is currently under construction.
A more general mechanism for retaining structures for future use is provided by allowing variables to be (optionally) associated with grammar rules: in this way the grammar writer can obtain an explicit handle on tree fragments, in contrast to the default situation where each rule implicitly 'returns' the structure it constructs to its caller. The following rule, for example, provides a skeleton treatment of the situation exemplified in Figure 4, where a definition-initial substring is common to more than one sub-definition:

    defs ==> -Seg : $stringp(Seg) : subdefs(Seg).
    subdefs(X) ==> subdef(X) : opt(subdefs(X)).
    subdef(X) ==> -font(bold) : sd_letter : -font(roman) :
      -Seg : $concat(X, Seg, DefString) :
      ins(DefString) : def_string.
    sd_letter ==> +Seg : $verify(Seg, "abc").
    def_string ==> +Seg : $stringp(Seg).

The defs rule removes the definition-initial string segment and passes it on to the repeatedly invoked subdefs. This manufactures the complete definition string by concatenating the common initial segment, available as an argument instantiated two levels higher, with the continuation string specific to any given sub-definition.

Tree transformations. The ability to refer, by name, to fragments of the tree being constructed by an active grammar rule, allows arbitrary tree transformations using the complementary operators -% and +%. They can only be applied to non-terminal grammar rules, and require the explicit specification of a place-holder variable as a rule argument; this is bound to the structure constructed by the rule. The effect of these operators on the tree fragments constructed by the rules they modify is to prevent their incorporation into the local tree (in the case of -%), to explicitly splice it in (in the case of +%), or simply to capture it (%). The use of this mechanism in conjunction with the structure naming facility allows both permanent deletion of nodes, as well as their practically unconstrained migration between, and within, different levels of grammar (thus implementing node raising and reordering). It is also possible to write a rule which builds no structure (the utility of such rules, in particular for controlling token consumption and junk collection, is discussed in section 5).

Node-raising is illustrated by the grammar fragment below, which might be used to deal with certain collocation phenomena. Sometimes dictionaries choose to explain a word in the course of defining another related word by arbitrarily inserting mini-entries in their definitions:

    lach.ry.mal /'lækriməl/ adj [Wa5] of or concerning tears or the organ (lachrymal gland /'../) of the body that produces them

The potentially complex structure associated with the embedded entry specification does not belong to the definition string, and should be factored out as a separate node moved to a higher level of the tree, or even used to create a new tree entirely. The rule for parsing the definition fields of an entry makes a provision for embedded entries; the structure built as an embedded_entry is bound to the Struc argument in the defn rule. The -% operator prevents the embedded_entry node from being incorporated as a daughter to defn; however, by unification, it begins its migration 'upwards' through the tree, till it is 'caught' by the entry rule several levels higher and inserted (via +%) in its logically appropriate place.

    entry ==> head : pron : pos : code : defn(Embedded) :
      +%embedded_entry(Embedded).
    defn(Struc) ==> -Seg1 : $stringp(Seg1) :
      -%embedded_entry(Struc) :
      -Seg2 : $stringp(Seg2) :
      $concat(Seg1, Seg2, DefString) : ++DefString.
    embedded_entry ==> -bra : ... : -ket.

Capturing generalizations / execution control. The expressive power of the system is further enhanced by allowing optionality (via the opt operator), alternations (|) and conditional constructs in the grammar rules; the latter are useful both for more compact rule specification and to control backtracking while parsing. Rule application may be constrained by arbitrary tests (invoked, as Prolog predicates, via a $ operator), and a $string operator is available for sampling local context. The mechanism of escaping to Prolog, the motivation for which we discuss below, can also be invoked when arbitrary manipulation of lexical data -- ranging from e.g. simple string processing to complex morphological analysis -- is required during parsing.

Tree structures. Additional control over the shape of dictionary entry trees is provided by having two types of non-terminal nodes: weak and strong ones. The difference is in the explicit presence or absence of nodes, corresponding to the rule names, in the final tree: a structure fragment manufactured by a weak non-terminal is effectively spliced into the higher level structure, without an intermediate level of naming. One common use of such a device is the 'flattening' of branching constructions, typically built by recursive rules: the declaration

    strong_nonterminals(defs . subdef . nil).

when applied to the sub-definitions fragment above, would lead to the creation of a group of sister subdef nodes, immediately dominated by a defs node. Another use of the distinction between weak and strong non-terminals is the effective mapping from typographically identical entry segments to appropriately named structure fragments, with global context driving the name assignment. Thus, assuming a weak label rule which captures the label string for further testing, analysis of the example labels discussed in 3.1 could be achieved as follows (also see Figure 3):

    label(X) ==> -begin(restriction) : +X :
      $stringp(X) : -end(restriction).
    tran ==> opt( domain | style | dial | usage_note ) : word.
    domain ==> label(X) : $isa(X, dom_lab).
    style ==> label(X) : $isa(X, style_lab).
    dial ==> label(X) : $isa(X, dial_lab).
    usage_note ==> label(X).

Such a mechanism captures generalities in typographic conventions employed across any given dictionary, and yet preserves the distinct name spaces required for a meaningful unfolding of a dictionary entry structure.

5. RANGE OF PHENOMENA TO HANDLE

Below we describe some typical phenomena encountered in the dictionaries we have parsed and discuss their treatment.

5.1 Messy token lists: controlling token consumption

The unsystematic encoding of font changes before, as well as after, punctuation marks (commas, semicolons, parentheses) causes blind tokenization to remove punctuation marks from the data to which they are visually and conceptually attached. As already discussed (see 4.2), most errors of this nature can be corrected by retokenization. Similarly, the confusing effects of another pervasive error, namely the occurrence of consecutive font changes, can be avoided by having a retokenization rule simply remove all but the last one. In general, context sensitivity is handled by (re)adjusting the token list; retokenization, however, is only sensitive to local context.
Since global context cannot be determined unequivocally till parsing, the grammar writer is given complete control over the consumption and addition of tokens as parsing proceeds from left to right -- this allows for motivated recovery of elisions, as well as discarding of tokens in local transformations. For instance, spurious occurrences of a font marker before a print symbol such as an opening parenthesis, which is not affected by a font declaration, clearly cannot be removed by a retokenization rule

    font(roman) : bra <=> bra.

(The marker may be genuinely closing a font segment prior to a different entry fragment which commences with, e.g., a left parenthesis). Instead, a grammar rule anticipating a bra token within its scope can readjust the token list using either of:

    ... ==> ... : -font(roman) : -bra : ins(bra).
    ... ==> ... : -font(roman) : $string(bra.*).

(The $string operator tests for a token list with bra as its first element.)

5.2 The Peter-minus-1 principle: scoping phenomena

Consider the entry for "Bankrott" in Figure 2. Translations sharing the label (fig) ("breakdown, collapse") are grouped together with commas and separated from other lists with semicolons. The restriction (context or label) precedes the list and can be said to scope 'right' to the next semicolon. We place the right-scoping labels or context under the (semicolon-delimited) tran_group as sister nodes to the multiple (comma-delimited) tran nodes (see also the representation of "title" in Figure 3).

Two principles are at work here: maintaining implicit evidence of synonymy among terms in the target language responds to the "do not discard anything" philosophy; placing common data items as high as possible in the tree (the 'Peter-minus-1 principle') is in the spirit of Flickinger et al. (1985), and implements the notion of placing a terminal node at the highest position in the tree where its value is valid in combination with the values at or below its sister nodes. The latter principle also motivates sets of rules like

    entry ==> ... : pron : ... : homograph : ...
    homograph ==> ... : pron : ...

used to account for entries in English where the pronunciation differs for different homographs.

5.3 Tribal memory: rule variables

Some compaction or notational conventions in dictionaries require a mechanism for a rule to remember (part of) its ancestry or know its sisters' descendants. Consider the problem of determining the scope of gender or labels immediately following variants of the headword:

    Advokaturbüro nt (Sw), Advokaturskanzlei f (Aus) lawyer's office.
    Tippfräulein nt (inf), Tippse f -, -n (pej) typist.
    Alchemie (esp Aus), Alchimie f alchemy.

The first two entries show forms differing, respectively, in dialect and gender, and register and gender. The third illustrates other combinations. The rule accounting for labels after a variant must know whether items of like type have already been found after the headword, since items before the variant belong to the headword, different items of identical type following both belong individually, and all the rest are common to both. This 'tribal' memory is implemented using rule variables:

    entry ==> ... : ( (dial : $(N=dial)) | $(N=nodial) ) :
      ... : opt(subhwd(N)) : ...
    subhwd(N) ==> opt( $(N=nodial) : opt(dial) ) : ...

In addition to enforcing rule constraints via unification, rule arguments also act as 'channels' for node raising and as a mechanism for controlling rule behaviour depending on invocation context.
This latter need stems from a pervasive phenomenon in dictionaries: the notational conventions for a logical unit within an entry persist across different contexts, and the sub-grammar for such a unit should be aware of the environment it is activated in. Implicit cross-references in LDOCE are consistently introduced by font(begin(small_caps)), independent of whether the running text is a definition (roman font), an example (italic), or an embedded phrase or idiom (bold); by enforcing the return to the font active before the invocation of implicit_xrf, we allow the analysis of cross-references to be shared:

    implicit_xrf(X) ==> -font(begin(small_caps)) :
      ... : -font(X).
    df_txt ==> ... : implicit_xrf(roman) : ...
    ex_txt ==> ... : implicit_xrf(italic) : ...
    id_txt ==> ... : implicit_xrf(bold) : ...

5.4 Unpacking, duplication and movement of structures: node migration

The whole range of phenomena requiring explicit manipulation of entry fragment trees is handled by the mechanisms for node raising, reordering, and deletion. Our analysis of implicit cross-references in LDOCE factors them out as separate structural units participating in the make-up of a word sense definition, as well as reconstructs a 'text image' of the definition text, with just the orthography of the cross-reference item 'spliced in' (see Figure 4).

    defn ==> def_segs(D_String) : def_string(D_String).
    def_segs(Str_1) ==> def_nugget(Seg) :
      ( def_segs(Str_0) | $(Str_0 = "") ) :
      $concat(Seg, Str_0, Str_1).
    def_nugget(Ptr) ==>
      %implicit_xrf( s(implicit_xrf, s(to, Ptr.nil).Rest) ).
    def_nugget(Seg) ==> -Seg : $stringp(Seg).
    def_string(Def) ==> ++Def.

The rules build a definition string from any sequence of substrings or lexical items used as cross-references: by invoking the appropriate def_nugget rule, the simple segments are retained only for splicing the complete definition text; cross-reference pointers are extracted from the structural representation of an implicit cross-reference; and implicit_xrf nodes are propagated up to a sister position to the def_string. The string image is built incrementally (by string concatenation, as the individual def_nuggets are parsed); ultimately the def_string rule simply incorporates it into the structure for defn. Declaring defn, def_string and implicit_xrf to be strong non-terminals ultimately results in a clean structure similar to the one illustrated in Figure 4.

Copying and lateral migration of common gender labels in CEG translations, exemplified by "title" (Figure 3) and "abutment" (section 3.2), makes a different use of the % operators. To capture the leftward scope of gender labels, in contrast to common (right-scoping) context labels, we create, for each noun translation (tran), a gender node with an empty value. The comma-delimited tran nodes are collected by a recursive weak non-terminal trans rule.

    trans ==> tran(G) : opt( -comma : trans(G) ).
    tran(G) ==> ... word ... : opt( -%gender(G) ) : +%gender(G).

The (conditional) removal of gender in the second rule followed by (obligatory) insertion of a gender node captures the gender if present and 'digs a hole' for it if absent. Unification on the last iteration of trans fills the holes. Noun compound fragments, as in "abutment", can be copied and migrated forward or backward using the same mechanism. Since we have not implemented the noun compound parsing mechanism required for identification of segments to be copied, we have temporized by naming the fragments needing partners alt_pfx or alt_sfx.
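The 'dig a hole' treatment just described can be pictured with a short Python sketch (ours, rather than the system's Prolog): every tran gets a gender slot, possibly empty, and the value carried by the last tran in a comma-delimited list is propagated back into the open slots, which is the effect unification achieves in DEP.

    # Illustrative only: leftward scoping of gender labels.
    def fill_genders(trans):
        """trans: list of {'word': ..., 'gender': str or None}, in order."""
        pending = []                      # trans whose gender hole is open
        for tran in trans:
            if tran["gender"] is None:
                pending.append(tran)
            else:
                for earlier in pending:  # gender scopes leftward
                    earlier["gender"] = tran["gender"]
                pending = []
        return trans

    # "abutment ... Fluegel- or Wangenmauer f": only the last tran
    # carries the f marker on the tape (umlaut spelled out here).
    print(fill_genders([{"word": "Fluegelmauer", "gender": None},
                        {"word": "Wangenmauer", "gender": "f"}]))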
5.5 Conflated lexical entries: homograph unpacking

We have implemented a mechanism to allow creation of additional entries out of a single one, for example from orthographic, dialect, or morphological variants of the original headword. Some CGE examples were given in sections 2 and 5.3 above. To handle these, the rules build the second entry inside the main one and manufacture cross reference information for both main form and variant, in anticipation of the implementation of a splitting mechanism. Examples of other types appear in both CGE and CEG:

    vampire [...] n (lit) Vampir, Blutsauger (old) m; (fig) Vampir m. ~ bat n Vampir, Blutsauger (old) m.
    wader [...] n (a) (Orn) Watvogel m. (b) ~s pl (boots) Watstiefel pl.
    house in cpds Haus-; ~ arrest n Hausarrest m; ~boat n Hausboot nt; ~bound adj ans Haus gefesselt; ...
    house: ~-hunt vi auf Haussuche sein; they have started ~-hunting sie haben angefangen, nach einem Haus zu suchen; ~-hunting n Haussuche f; ...

The conventions for morphological variants, used heavily in e.g. LDOCE and Webster's Seventh, are different and would require a different mechanism. We have not yet developed a generalized rule mechanism for ordering any kind of split; indeed we do not know if it is possible, given the wide variation in seemingly ad hoc conventions for 'sneaking in' logically separate entries into related headword definitions: the case of "lachrymal gland" in 4.3 is just one instance of this phenomenon; below we list some more conceptually similar, but notationally different, examples, demonstrating the embedding of homographs in the variant, run-on, word-sense and example fields of LDOCE.

    daddy long.legs /ˌdædi 'lɔŋlegz/ also (infml) crane fly -- n ... a type of flying insect with long legs
    ac.ri.mo.ny ... n bitterness, as of manner or language -- -nious /ˌækri'məuniəs/ adj: an acrimonious quarrel -- -niously adv
    crash 1 ... v ... 6 infml also gatecrash -- to join (a party) without having been invited ...
    folk et.y.mol.o.gy /ˌ..'--/ n the changing of strange or foreign words so that they become like quite common ones: some people say sparrowgrass instead of ASPARAGUS: that is an example of folk etymology

5.6 Notational promiscuity: selective tokenization

Often distinctly different data items appear contiguous in the same font: the grammar codes of LDOCE (section 2) are just one example. Such run-together segments clearly need their own tokenization rules, which can only be applied when they are located during parsing. Thus, commas and parentheses take on special meaning in the string "X(to be)1,7", indicating, respectively, elision of data and optionality of phrase. This is a different interpretation from e.g. alternation (consider the meaning of "adj, noun") or the enclosing of italic labels in parentheses (Figure 3). Submission of a string token to further tokenization is best done by invoking a special purpose pattern matching module; thus we avoid global (and blind) tokenization on common (and ambiguous) characters such as punctuation marks. The functionality required for selective tokenization is provided by a $parse primitive; below we demonstrate the construction of a list of sister syncat nodes from a segment like "n, v, adj", repetitively invoking $parse to break a string into two substrings separated by a comma:

    syncats ==> -Seg : $stringp(Seg) :
      $parse(Hd.",".Rest.nil, Seg) :
      insx(Hd.Rest.nil) : syncat : opt(syncats).
    syncat ==> +Seg : $isa(Seg, partofspeech).
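The effect of the $parse-driven rule can be sketched in Python (ours; the closed class of parts of speech below is illustrative): the comma is given its tokenizing interpretation only when a rule is actually looking at a grammar-code field, so commas elsewhere in the entry are untouched.

    # Illustrative only: selective tokenization of a run-together field.
    PARTS_OF_SPEECH = {"n", "v", "adj", "adv"}

    def syncats(segment):
        """Split a run-together field into syncat nodes, but only if
        every piece is a known part of speech; otherwise leave the
        string token intact."""
        pieces = [p.strip() for p in segment.split(",")]
        if all(p in PARTS_OF_SPEECH for p in pieces):
            return [("syncat", p) for p in pieces]
        return [segment]                  # not a grammar-code field

    print(syncats("n, v, adj"))           # three sister syncat nodes
    print(syncats("also Sport"))          # left alone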
5.7 Parsing failures: junk collection

The systematic irregularity of dictionary data (see section 3.3) is only one problem when parsing dictionary entries. Parsing failures in general are common during grammar development; more specifically, they might arise due to the format of an entry segment being beyond (easy) capturing within the grammar formalism, or requiring non-trivial external functionality (such as compound word parsing or noun/verb phrase analysis). Typically, external procedures operate on a newly constructed string token which represents a 'packed' unruly token list. Alternatively, if no format need be assigned to the input, the grammar should be able to 'skip over' the tokens in the list, collecting them under a 'junk' node.

If data loss is not an issue for a specific application, there is no need even to collect tokens from irregular token lists; a simple rule to skip over USAGE fields might be written as

    usage ==> -usage_mark : use_field.
    use_field ==> -U_Token : $not(end_ufield) :
      opt(use_field).

(Rules like these, building no structure, are especially convenient when extensive reorganization of the token list is required -- typically in cases of grammar-driven token reordering or token deletion without token consumption.)

In order to achieve skipping over unparseable input without data loss, we have implemented a collective rule class. The structure built by such rules is the (transitive) concatenation of all the character strings in daughter segments. Coping with gross irregularities is achieved by picking up any number of tokens and 'packing' them together. This strategy is illustrated by a grammar fragment handling phrases conjoined with italic 'or' in example sentences and/or their translations (see Figure 3). The italic conjunction is surrounded by slashes in the resulting collected string as an audit trail. The extra argument to conj enforces, following the strategy outlined in section 5.3, rule application only in the correct font context.

    strong_nonterminals(source . targ . nil).
    collectives(conj . nil).
    source ==> conj(bold).
    targ ==> conj(roman).
    conj(X) ==> +Seg : -font(italic) : ++"/" : +"or" :
      ++"/" : -font(X) : +Seg.

Finally, for the most complex cases of truly irregular input, a mechanism exists for constraining junk collection to operate only as a last resort and only at the point at which parsing can go no further.

5.8 Augmenting the power of the formalism: escape to Prolog

Several of the mechanisms described above, such as contextual control of token consumption (section 5.1), explicit structure handling (5.4), or selective tokenization (5.6), are implemented as separate Prolog modules. Invoking such external functionality from the grammar rules allows the natural integration of the form- and content-recovery procedures into the top-down process of dictionary entry analysis. The utility of this device should be clear from the examples so far.

Such escape to the underlying implementation language goes against the grain of recent developments of declarative grammar formalisms (the procedural ramifications of, for instance, being able to call arbitrary LISP functions from the arcs of an ATN grammar have been discussed at length: see, for instance, the opening chapters in Whitelock et al., 1987). However, we feel justified in augmenting the formalism in such a way, as we are dealing with input which is different in nature from, and on occasions possibly more complex than, straight natural language.
Unhomogeneous mixtures of heavily formal notations and annotations in totally free format, interspersed with (occasionally incomplete) fragments of natural language phrases, can easily defeat any attempts at 'clean' parsing. Since the DEP system is designed to deal with an open-ended set of dictionaries, it must be able to confront a similarly open-ended set of notational conventions and abbreviatory devices. Furthermore, dealing in full with some of these notations requires access to mechanisms and theories well beyond the power of any grammar formalism: consider, for instance, what is involved in analyzing pronunciation fields in a dictionary, where alternative pronunciation patterns are marked only for the syllable(s) which differ from the primary pronunciation (as in arch.bish.op: /ˌɑ:tʃ'biʃəp || ˌɑ:r-/); where the pronunciation string itself is not marked for syllable structure; and where the assignment of syllable boundaries is far from trivial (as in fas.cist: /'fæʃist/)!

6. CURRENT STATUS

The run-time environment of DEP includes grammar debugging utilities, and a number of options. All facilities have been implemented, except where noted. We have very detailed grammars for CGE (parsing 98% of the entries), CEG (95%), and LDOCE (93%); less detailed grammars for Webster's Seventh (98%), and both halves of the Collins French Dictionary (approximately 90%).

The Dictionary Entry Parser is an integral part of a larger system designed to recover dictionary structure to an arbitrary depth of detail, convert the resulting trees into LDB records, and make the data available to end users via a flexible and powerful lexical query language (LQL). Indeed, we have built LDB's for all dictionaries we have parsed; further development of LQL and the exploitation of the LDB's via query for a number of lexical studies are separate projects.

Finally, we note that, in the light of recent efforts to develop an interchange standard for (English monolingual) dictionaries (Amsler and Tompa, 1988), DEP acquires additional relevance, since it can be used, given a suitable annotation of the grammar rules for the machine-readable source, to transduce a typesetting tape into an interchangeable dictionary source, available to a larger user community.

ACKNOWLEDGEMENTS

We would like to thank Roy Byrd, Judith Klavans and Beth Levin for many discussions concerning the Dictionary Entry Parser system in general, and this paper in particular. Any remaining errors are ours, and ours only.

REFERENCES

Ahlswede, T, M Evens, K Rossi and J Markowitz (1986) "Building a Lexical Database by Parsing Webster's Seventh New Collegiate Dictionary", Advances in Lexicology, Second Annual Conference of the UW Centre for the New Oxford English Dictionary, 65-78.

Amsler, R and F Tompa (1988) "An SGML-Based Standard for English Monolingual Dictionaries", Information in Text, Fourth Annual Conference of the UW Centre for the New Oxford English Dictionary, 61-79.

Boguraev, B and E Briscoe (Eds) (1989) Computational Lexicography for Natural Language Processing, Longman, Harlow.

Byrd, R, N Calzolari, M Chodorow, J Klavans, M Neff and O Rizk (1987) "Tools and Methods for Computational Lexicology", Computational Linguistics, vol. 13(3-4), 219-240.

Calzolari, N and E Picchi (1986) "A Project for a Bilingual Lexical Database System", Advances in Lexicology, Second Annual Conference of the UW Centre for the New Oxford English Dictionary, 79-92.
Calzolari, N and E Picchi (1988) "Acquisition of Semantic Information from an On-Line Dictionary", Proceedings of the 12th International Conference on Computational Linguistics, 87-92.

Collins (1980) Collins German Dictionary: German-English, English-German, Collins Publishers, Glasgow.

Garzanti (1984) Il Nuovo Dizionario Italiano Garzanti, Garzanti, Milano.

Longman (1978) Longman Dictionary of Contemporary English, Longman Group, London.

Dadam, P, K Kuespert, F Andersen, H Blanken, R Erbe, J Guenauer, V Lum, P Pistor and G Walsh (1986) "A DBMS Prototype to Support Extended NF2 Relations: An Integrated View on Flat Tables and Hierarchies", Proceedings of ACM SIGMOD'86: International Conference on Management of Data, 356-367.

Flickinger, D, C Pollard and T Wasow (1985) "Structure Sharing in Lexical Representation", Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, 262-267.

Fox, E, T Nutter, T Ahlswede, M Evens and J Markowitz (1988) "Building a Large Thesaurus for Information Retrieval", Proceedings of the Second Conference on Applied Natural Language Processing, 101-108.

Kazman, R (1986) "Structuring the Text of the Oxford English Dictionary through Finite State Transduction", University of Waterloo Technical Report No. TR-86-20.

McCord, M (1987) "Natural Language Processing and Prolog", in A Walker, M McCord, J Sowa and W Wilson (Eds) Knowledge Systems and Prolog, Addison-Wesley, Waltham, Massachusetts, 291-402.

Nakamura, J and M Nagao (1988) "Extraction of Semantic Information from an Ordinary English Dictionary and Its Evaluation", Proceedings of the 12th International Conference on Computational Linguistics, 459-464.

Neff, M, R Byrd and O Rizk (1988) "Creating and Querying Hierarchical Lexical Data Bases", Proceedings of the Second Conference on Applied Natural Language Processing, 84-93.

van der Steen, G J (1982) "A Treatment of Queries in Large Text Corpora", in S Johansson (Ed) Computer Corpora in English Language Research, Norwegian Computing Centre for the Humanities, Bergen, 49-63.

Tompa, F (1986) "Database Design for a Dictionary of the Future", University of Waterloo, unpublished.

W7 (1967) Webster's Seventh New Collegiate Dictionary, G.&C. Merriam Company, Springfield, Massachusetts.

Whitelock, P, M Wood, H Somers, R Johnson and P Bennett (Eds) (1987) Linguistic Theory and Computer Applications, Academic Press, New York.
SOME CHART-BASED TECHNIQUES FOR PARSING ILL-FORMED INPUT

Chris S. Mellish
Department of Artificial Intelligence, University of Edinburgh,
80 South Bridge, EDINBURGH EH1 1HN, Scotland.

ABSTRACT

We argue for the usefulness of an active chart as the basis of a system that searches for the globally most plausible explanation of failure to syntactically parse a given input. We suggest semantics-free, grammar-independent techniques for parsing inputs displaying simple kinds of ill-formedness and discuss the search issues involved.

THE PROBLEM

Although the ultimate solution to the problem of processing ill-formed input must take into account semantic and pragmatic factors, nevertheless it is important to understand the limits of recovery strategies that are based entirely on syntax and which are independent of any particular grammar. The aim of this work is therefore to explore purely syntactic and grammar-independent techniques to enable a parser to recover from simple kinds of ill-formedness in textual inputs. Accordingly, we present a generalised parsing strategy based on an active chart which is capable of diagnosing simple errors (unknown/misspelled words, omitted words, extra noise words) in sentences (from languages described by context free phrase structure grammars without e-productions). This strategy has the advantage that the recovery process can run after a standard (active chart) parser has terminated unsuccessfully, without causing existing work to be repeated or the original parser to be slowed down in any way, and that, unlike previous systems, it allows the full syntactic context to be exploited in the determination of a "best" parse for an ill-formed sentence.

EXPLOITING SYNTACTIC CONTEXT

Weischedel and Sondheimer (1983) present an approach to processing ill-formed input based on a modified ATN parser. The basic idea is, when an initial parse fails, to select the incomplete parsing path that consumes the longest initial portion of the input, apply a special rule to allow the blocked parse to continue, and then to iterate this process until a successful parse is generated. The result is a "hill-climbing" search for the "best" parse, relying at each point on the "longest path" heuristic. Unfortunately, sometimes this heuristic will yield several possible parses, for instance with the sentence:

    The snow blocks ↑ te road

(no partial parse getting past the point shown) where the parser can fail expecting either a verb or a determiner. Moreover, sometimes the heuristic will cause the most "obvious" error to be missed:

    He said that the snow ↑ the road
    The paper will ↑ the best news is the Times

where we might suspect that there is a missing verb and a misspelled "with" respectively. In all these cases, the "longest path" heuristic fails to indicate unambiguously the minimal change that would be necessary to make the whole input acceptable as a sentence. This is not surprising, as the left-right bias of an ATN parser allows the system to take no account of the right context of a possible problem element.

Weischedel and Sondheimer's use of the "longest path" heuristic is similar to the use of locally least-cost error recovery in Anderson and Backhouse's (1981) scheme for compilers. It seems to be generally accepted that any form of globally "minimum-distance" error correction will be too costly to implement (Aho and Ullman, 1977). Such work has, however, not considered heuristic approaches, such as the one we are developing.
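The tie described for "The snow blocks te road" can be reproduced with a toy computation (ours, with an invented mini-lexicon and two hand-picked category patterns standing in for parser paths): both analyses consume equally long prefixes and then blame different expectations, so the longest-path heuristic cannot choose between them.

    # Illustrative only: two partial analyses tie on prefix length.
    LEXICON = {"the": {"det"}, "snow": {"n", "v"}, "blocks": {"n", "v"},
               "road": {"n"}, "te": set()}          # "te" is unknown

    # category sequences a tiny grammar might try for a sentence prefix
    PREFIXES = [("det", "n", "v", "det"),           # NP(det n) V Det ...
                ("det", "n", "n", "v")]             # NP(det n n) V ...

    def furthest(words, pattern):
        """How many words does this pattern consume before failing?"""
        for i, cat in enumerate(pattern):
            if i >= len(words) or cat not in LEXICON.get(words[i], set()):
                return i, cat        # failure position and expectation
        return len(pattern), None

    for pat in PREFIXES:
        pos, expected = furthest("the snow blocks te road".split(), pat)
        print("consumed", pos, "words; then expected", expected)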
Another feature of Weischedel and Sondheimer's system is the use of grammar-specific recovery rules ("meta-rules" in their terminology). The same is true of many other systems for dealing with ill-formed input (e.g. Carbonell and Hayes (1983), Jensen et al. (1983)). Although grammar-specific recovery rules are likely in the end always to be more powerful than grammar-independent rules, it does seem to be worth investigating how far one can get with rules that only depend on the grammar formalism used.

    0 the 1 gardener 2 collects 3 manure 4 if 5 the 6 autumn 7

    <Need S from 0 to 7>          (hypothesis)
    <Need NP+VP from 0 to 7>      (by top-down rule)
    <Need VP from 2 to 7>         (by fundamental rule with NP found bottom-up)
    <Need VP+PP from 2 to 7>      (by top-down rule)
    <Need PP from 4 to 7>         (by fundamental rule with VP found bottom-up)
    <Need P+NP from 4 to 7>       (by top-down rule)
    <Need P from 4 to 5>          (by fundamental rule with NP found bottom-up)

    Figure 1: Focusing on an error.

In adapting an ATN parser to compare partial parses, Weischedel and Sondheimer have already introduced machinery to represent several alternative partial parses simultaneously. From this, it is a relatively small step to introduce a well-formed substring table, or even an active chart, which allows for a global assessment of the state of the parser. If the grammar formalism is also changed to a declarative formalism (e.g. CF-PSGs, DCGs (Pereira and Warren 1980), PATR-II (Shieber 1984)), then there is a possibility of constructing other partial parses that do not start at the beginning of the input. In this way, right context can play a role in the determination of the "best" parse.

WHAT A CHART PARSER LEAVES BEHIND

The information that an active chart parser leaves behind for consideration by a "post mortem" obviously depends on the parsing strategy used (Kay 1980, Gazdar and Mellish 1989). Active edges are particularly important from the point of view of diagnosing errors, as an unsatisfied active edge suggests a place where an input error may have occurred. So we might expect to combine violated expectations with found constituents to hypothesise complete parses. For simplicity, we assume here that the grammar is a simple CF-PSG, although there are obvious generalisations.

(Left-right) top-down parsing is guaranteed to create active edges for each kind of phrase that could continue a partial parse starting at the beginning of the input. On the other hand, bottom-up parsing (by which we mean left corner parsing without top-down filtering) is guaranteed to find all complete constituents of every possible parse. In addition, whenever a non-empty initial segment of a rule RHS has been found, the parser will create active edges for the kind of phrase predicted to occur after this segment. Top-down parsing will always create an edge for a phrase that is needed for a parse, and so it will always indicate by the presence of an unsatisfied active edge the first error point, if there is one. If a subsequent error is present, top-down parsing will not always create an active edge corresponding to it, because the second may occur within a constituent that will not be predicted until the first error is corrected. Similarly, right-to-left top-down parsing will always indicate the last error point, and a combination of the two will find the first and last, but not necessarily any error points in between.
On the other hand, bottom-up parsing will only create an active edge for each error point that comes immediately after a sequence of phrases corresponding to an initial segment of the RHS of a grammar rule. Moreover, it will not necessarily refine its predictions to the most detailed level (e.g. having found an NP, it may predict the existence of a following VP, but not the existence of types of phrases that can start a VP). Weischedel and Sondheimer's approach can be seen as an incremental top-down parsing, where at each stage the rightmost unsatisfied active edge is artificially allowed to be satisfied in some way. As we have seen, there is no guarantee that this sort of hill-climbing will find the "best" solution for multiple errors, or even for single errors. How can we combine bottom-up and top-down parsing for a more effective solution?

FOCUSING ON AN ERROR

Our basic strategy is to run a bottom-up parser over the input and then, if this fails to find a complete parse, to run a modified top-down parser over the resulting chart to hypothesise possible complete parses. The modified top-down parser attempts to find the minimal errors that, when taken account of, enable a complete parse to be constructed. Imagine that a bottom-up parser has already run over the input "the gardener collects manure if the autumn". Then Figure 1 shows (informally) how a top-down parser might focus on a possible error. To implement this kind of reasoning, we need a top-down parsing rule that knows how to refine a set of global needs and a fundamental rule that is able to incorporate found constituents from either direction.

When we may encounter multiple errors, however, we need to express multiple needs (e.g. <Need N from 3 to 4 and PP from 8 to 10>). We also need to have a fundamental rule that can absorb found phrases from anywhere in a relevant portion of the chart (e.g. given a rule "NP → Det Adj N" and a sequence "as marvellous sihgt", we need to be able to hypothesise that "as" should be a Det and "sihgt" a N). To save repeating work, we need a version of the top-down rule that stops when it reaches an appropriate category that has already been found bottom-up. Finally, we need to handle both "anchored" and "unanchored" needs. In an anchored need (e.g. <Need NP from 0 to 4>) we know the beginning and end of the portion of the chart within which the search is to take place. In looking for a NP VP sequence in "the happy blageon su'mpled the bait", however, we can't initially find a complete (initial) NP or (final) VP and hence don't know where in the chart these phrases meet. We express this by <Need NP from 0 to *, VP from * to 6>, the symbol "*" denoting a position in the chart that remains to be determined.
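Before giving the generalised machinery, a minimal sketch (ours, not the paper's implementation) of the edge representation as a data structure may help: a need pairs a category list with a span whose boundaries may be unknown, with None standing in for the "*" of the text.

    # Illustrative only: edges with anchored and unanchored needs.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Need:
        categories: List[str]     # e.g. ["NP"] or ["Det", "Adj", "N"]
        start: Optional[int]      # chart position, or None for "*"
        end: Optional[int]

    @dataclass
    class Edge:
        category: str
        start: int
        end: int
        needs: List[Need]         # empty list = inactive (complete) edge

        def anchored(self):
            return all(n.start is not None and n.end is not None
                       for n in self.needs)

    # <Need NP from 0 to *, VP from * to 6>:
    e = Edge("S", 0, 6, [Need(["NP"], 0, None), Need(["VP"], None, 6)])
    print(e.anchored())           # False: the NP/VP meeting point is unknown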
GENERALISED TOP-DOWN PARSING

If we adopt a chart parsing strategy with only edges that carry information about global needs, there will be considerable duplicated effort. For instance, the further refinement of the two edges:

    <Need NP from 0 to 3 and V from 9 to 10>
    <Need NP from 0 to 3 and Adj from 10 to 11>

can lead to any analysis of possible NPs between 0 and 3 being done twice. Restricting the possible format of edges in this way would be similar to allowing the "functional composition rule" (Steedman 1987) in standard chart parsing, and in general this is not done for efficiency reasons. Instead, we need to produce a single edge that is "in charge" of the computation looking for NPs between 0 and 3. When possible NPs are then found, these then need to be combined with the original edges by an appropriate form of the fundamental rule. We are thus led to the following form for a generalised edge in our chart parser:

    <C from S to E needs cs1 from s1 to e1,
                         cs2 from s2 to e2,
                         ...
                         csn from sn to en>

where C is a category, the csi are lists of categories (which we will show inside square brackets), and S, E, the si and the ei are positions in the chart (or the special symbol "*"). The presence of an edge of this kind in the chart indicates that the parser is attempting to find a phrase of category C covering the portion of the chart from S to E, but that in order to succeed it must still satisfy all the needs listed. Each need specifies a sequence of categories csi that must be found contiguously to occupy the portion of the chart extending from si to ei.

Now that the format of the edges is defined, we can be precise about the parsing rules used. Our modified chart parsing rules are shown in Figure 2. The modified top-down rule allows us to refine a need into a more precise one, using a rule of the grammar (the extra conditions on the rule prevent further refinement where a phrase of a given category has already been found within the precise part of the chart being considered). The modified fundamental rule allows a need to be satisfied by an edge that is completely satisfied (i.e. an inactive edge, in the standard terminology). A new rule, the simplification rule, is now required to do the relevant housekeeping when one of an edge's needs has been completely satisfied. One way that these rules could run would be as follows. The chart starts off with the inactive edges left by bottom-up parsing, together with a single "seed" edge for the top-down phase <GOAL from 0 to n needs [S] from 0 to n>, where n is the final position in the chart. At any point the fundamental rule is run as much as possible. When we can proceed no further, the first need is refined by the top-down rule (hopefully search now being anchored). The fundamental rule may well again apply, taking account of smaller phrases that have already been found. When this has run, the top-down rule may then further refine the system's expectations about the parts of the phrase that cannot be found. And so on. This is just the kind of "focusing" that we discussed in the last section. If an edge expresses needs in several separate places, the first will eventually get resolved, the simplification rule will then apply and the rest of the needs will then be worked on. For this all to make sense, we must assume that all hypothesised needs can eventually be resolved (otherwise the rules do not suffice for more than one error to be narrowed down). We can ensure this by introducing special rules for recognising the most primitive kinds of errors. The results of these rules must obviously be scored in some way, so that errors are not wildly hypothesised in all sorts of places.
toe.> c I ~ RHS (in the grammar) <cl from sl toe needs RHS from sx toe> where e = ff csl is not empty or e 1 ffi * then * else e x (el = * or CSl is non-empty or there is no category cl from sl to e:) Fundamental rule: <C from S mE needs [...cs n c l ...cs n] from s l to e x, cs 2 ...> <c ~ from S ~ to El needs <nothing>> <C fxom S toe needs csn from sx to S t, csx2 fxom E t to el, cs2 ...> (sl < Sx, el = * or El < e:) Simplification rule: <C fxom S toE needs ~ from s to s, c$2 from s2 to e2, ... cs. from s. me,,> <C from S toe needs cs2 from s2 to e2, ... cs. fxom s. toe.> Garbage rule: <C fronts toE needs I] from sl to el, c$2 from s2 to e2, ... cs. froms, toe.> <C fronts toE needs cs2 from s2 to e2, ... cs. from s. me.> (s, ~el) Empty category rule: <C from S toE needs [cl...csl] from s to s, cs2 from s2 to e2 .... ca. from s. toe.> <C fxom S toE needs cs2 from s2 to e2. ... cs. f~om s, toe,> Unknown word rule: <C from S toe needs [cl...csl] from sl to ex, cs2 from s2 to e2 .... cs. fzom s. toe.> <C from S toE needs cs~ from st+l to ex, cs2 from s2 to e2, ... cs. from s. toe.> (cl a lexical category, sl < the end of the chart and the word at s i not of category c ~). Figure 2: Generalised Top-down Parsing Rules SEARCH CONTROL AND EVALUATION FUNCTIONS Even without the extra rules for recognising primitive errors, we have now introduced a large parsing search space. For instance, the new funda- mental rule means that top-down processing can take place in many different parts of the chart. Chart parsers already use the notion of an agenda, in which possible additions to the chart are given priority, and so we have sought to make use of this in organising a heuristic search for the "best" poss~le parse. We have considered a number of parameters for deciding which edges should have priority: MDE (mode of formation)We prefer edges that arise from the fundamental rule to those that arise from the rap-down rule; we disprefer edges that arise from unanchored applications of the top-down nile. PSF (penalty so far) Edges resulting from the garbage, empty category and unknown word rules are given penalty scores. PSF counts the penalties that have been accumulated so far in an edge. PB (best penalty) This is an estimate of the best possible penalty that this edge, when complete. could have. This score can use the PSF, together with information about the parts of the chart covered - for 105 instance, the number of words in these parts which do not have lexical entries. GU$ (the ma~um number of words that have been used so far in a partial parse using this edge) We prefer edges that lead to parses accounting for more words of the input. PBG (the best possible penalty for any com- plete hypothesis involving this edge). This is a short- fall score in the sense of Woeds (1982). UBG (the best possible number of words that could be used in any complete hypothesis containing this edge). In our implementation, each rule calculates each of these scores for the new edge from those of the contributing edges. We have experimented with a number of ways of using these scores in comparing two possible edges to be added to the chart. At present, the most promising approach seems to be to compare in mm the scores for PBG, MDE, UBG, GUS, PSF and PB. As soon as a difference in scores is encountered, the edge that wins on this account is chosen as the preferred one. Putting PBG first in this sequence ensures that the first solution found will be a solution with a minimal penalty score. 
The rules for computing scores need to make estimates about the possible penalty scores that might arise from attempting to find given types of phrases in given parts of the chart. We use a number of heuristics to compute these. For instance, the presence of a word not appearing in the lexicon means that every parse covering that word must have a non-zero penalty score. In general, an attempt to find an instance of a given category in a given portion of the chart must produce a penalty score if the bottom-up parsing phase has not yielded an inactive edge of the correct kind within that portion. Finally, the fact that the grammar is assumed to have no e-productions means that an attempt to find a long sequence of categories in a short piece of chart is doomed to produce a penalty score; similarly a sequence of lexical categories cannot be found without penalty in a portion of chart that is too long.

Some of the above scoring parameters score an edge according to what sorts of parses it could contribute to, not just according to how internally plausible it seems. This is desirable, as we wish the construction of globally most plausible solutions to drive the parsing. On the other hand, it introduces a number of problems for chart organisation. As the same edge (apart from its score) may be generated in different ways, we may end up with multiple possible scores for it. It would make sense at each point to consider the best of the possible scores associated with an edge to be the current score. In this way we would not have to repeat work for every differently scored version of an edge. But consider the following scenario: Edge A is added to the chart. Later edge B is spawned using A and is placed in the agenda. Subsequently A's score increases because it is derived in a new and better way. This should affect B's score (and hence B's position on the agenda). If the score of an edge increases then the scores of edges on the agenda which were spawned from it should also increase. To cope with this sort of problem, we need some sort of dependency analysis, a mechanism for the propagation of changes and an easily resorted agenda. We have not addressed these problems so far - our current implementation treats the score as an integral part of an edge and suffers from the resulting duplication problem.
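The penalty estimates discussed above might be computed along the following lines. This is our own reconstruction of the heuristics, not the paper's code, and the chart and lexicon interfaces are invented:

```python
def estimate_best_penalty(cats, start, end, chart, lexicon):
    """Lower bound on the penalty incurred by finding the category
    sequence `cats` over the chart portion from `start` to `end`
    (`end` may be '*', i.e. open-ended)."""
    penalty = 0
    if end != '*':
        span = end - start
        # No e-productions: k categories need at least k words.
        if len(cats) > span:
            penalty += len(cats) - span
        # A sequence of purely lexical categories cannot fill a
        # portion longer than the sequence without added garbage.
        if cats and all(c in lexicon.lexical_categories for c in cats) \
                and span > len(cats):
            penalty += span - len(cats)
        # Every parse covering an unknown word must pay for it.
        penalty += sum(1 for w in chart.words[start:end]
                       if not lexicon.has_word(w))
    # A category for which bottom-up parsing produced no inactive
    # edge inside the portion must cost something (simplified: 1 each).
    missing = sum(1 for c in cats
                  if not chart.has_inactive_edge(c, start, end))
    return max(penalty, missing)
```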
PRELIMINARY EXPERIMENTS

To see whether the ideas of this paper make sense in practice, we have performed some very preliminary experiments, with an inefficient implementation of the chart parser and a small CF-PSG (84 rules and 34 word lexicon, 18 of whose entries indicate category ambiguity) for a fragment of English. We generated random sentences (30 of each length considered) from the grammar and then introduced random occurrences of specific types of errors into these sentences. The errors considered were none (i.e. leaving the correct sentence as it was), deleting a word, adding a word (either a completely unknown word or a word with an entry in the lexicon) and substituting a completely unknown word for one word of the sentence. For each length of original sentence, the results were averaged over the 30 sentences randomly generated. We collected the following statistics (see Table 1 for the results):

BU cycles - the number of cycles taken (see below) to exhaust the chart in the initial (standard) bottom-up parsing phase.

#Solns - the number of different "solutions" found. A "solution" was deemed to be a description of a possible set of errors which has a minimal penalty score and if corrected would enable a complete parse to be constructed. Possible errors were adding an extra word, deleting a word and substituting a word for an instance of a given lexical category. The penalty associated with a given set of errors was the number of errors in the set.

First - the number of cycles of generalised top-down parsing required to find the first solution.

Last - the number of cycles of generalised top-down parsing required to find the last solution.

TD cycles - the number of cycles of generalised top-down parsing required to exhaust all possibilities of sets of errors with the same penalty as the first solution found.

Table 1: Preliminary experimental results

Error                Length   BU cycles   #Solns   First   Last   TD cycles
None                    3         31          1       0      0         0
                        6         69          1       0      0         0
                        9        135          1       0      0         0
                       12        198          1       0      0         0
Delete one word         3         17          5      14     39        50
                        6         50          5      18     73       114
                        9        105          6      27    137       350
                       12        155          7      33    315      1002
Add unknown word        3         29          1       9     17        65
                        6         60          2      24     36       135
                        9        105          2      39     83       526
                       12        156          3     132    289      1922
Add known word          3         37          3      29     51        88
                        6         72          3      43     88       216
                        9        137          3      58    124       568
                       12        170          5      99    325      1775
Subst unknown word      3         17          2      17     28        46
                        6         49          2      23     35       105
                        9         96          2      38     56       300
                       12        150          3      42    109      1162

It was important to have an implementation-independent measure of the amount of work done by the parser, and for this we used the concept of a "cycle" of the chart parser. A "cycle" in this context represents the activity of the parser in removing one item from the agenda, adding the relevant edge to the chart and adding to the agenda any new edges that are suggested by the rules as a result of the new addition. For instance, in conventional top-down chart parsing a cycle might consist of removing the edge <S from 0 to 6 needs [NP VP] from 0 to 6> from the front of the agenda, adding this to the chart and then adding new edges to the agenda, as follows. First of all, for each edge of the form <NP from 0 to α needs <nothing>> in the chart the fundamental rule determines that <S from 0 to 6 needs [VP] from α to 6> should be added. Secondly, for each rule NP → γ in the grammar the top-down rule determines that <NP from 0 to * needs γ from 0 to *> should be added. With generalised top-down parsing, there are more rules to be considered, but the idea is the same. Actually, for the top-down rule our implementation schedules a whole collection of single additions ("apply the top-down rule to edge α") as a single item on the agenda. When such a request reaches the front of the queue, the actual new edges are then computed and themselves added to the agenda. The result of this strategy is to make the agenda smaller but more structured, at the cost of some extra cycles.
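Each "cycle" above corresponds to one iteration of the standard agenda-driven chart-parsing loop. A minimal sketch, reusing priority_key and schedule from the earlier sketch (the chart and rule interfaces here are our own invention):

```python
import heapq

def run(agenda, chart, rules):
    """Each iteration is one 'cycle': pop the best item, add its edge
    to the chart, and schedule whatever the rules now license."""
    cycles = 0
    while agenda:
        _, _, edge = heapq.heappop(agenda)
        if chart.contains(edge):
            continue                  # a duplicate, differently derived
        chart.add(edge)
        for rule in rules:            # fundamental, top-down, garbage, ...
            for new_edge, scores in rule.apply(edge, chart):
                schedule(agenda, new_edge, scores)
        cycles += 1
    return cycles
```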
EVALUATION AND FUTURE WORK

The preliminary results show that, for small sentences and only one error, enumerating all the possible minimum-penalty errors takes no worse than 10 times as long as parsing the correct sentences. Finding the first minimal-penalty error can also be quite fast. There is, however, a great variability between the types of error. Errors involving completely unknown words can be diagnosed reasonably quickly because the presence of an unknown word allows the estimation of penalty scores to be quite accurate (the system still has to work out whether the word can be an addition and for what categories it can substitute for an instance of, however). We have not yet considered multiple errors in a sentence, and we can expect the behaviour to worsen dramatically as the number of errors increases. Although Table 1 does not show this, there is also a great deal of variability between sentences of the same length with the same kind of introduced error. It is noticeable that errors towards the end of a sentence are harder to diagnose than those at the start. This reflects the left-right orientation of the parsing rules - an attempt to find phrases starting to the right of an error will have a PBG score at least one more than the estimated PB, whereas an attempt to find phrases in an open-ended portion of the chart starting before an error may have a PBG score the same as the PB (as the error may occur within the phrases to be found). Thus more parsing attempts will be relegated to the lower parts of the agenda in the first case than in the second.

One disturbing fact about the statistics is that the number of minimal-penalty solutions may be quite large. For instance, the ill-formed sentence:

who has John seen on that had

was formed by adding the extra word "had" to the sentence "who has John seen on that". Our parser found three other possible single errors to account for the sentence. The word "on" could have been an added word, the word "on" could have been a substitution for a complementiser and there could have been a missing NP after "on". This large number of solutions could be an artefact of our particular grammar and lexicon; certainly it is unclear how one should choose between possible solutions in a grammar-independent way. In a few cases, the introduction of a random error actually produced a grammatical sentence - this occurred, for instance, twice with sentences of length 5 given one random added word.

At this stage, we cannot claim that our experiments have done anything more than indicate a certain concreteness to the ideas and point to a number of unresolved problems. It remains to be seen how the performance will scale up for a realistic grammar and parser. There are a number of detailed issues to resolve before a really practical implementation of the above ideas can be produced. The indexing strategy of the chart needs to be altered to take into account the new parsing rules, and remaining problems of duplication of effort need to be addressed. For instance, the generalised version of the fundamental rule allows an active edge to combine with a set of inactive edges satisfying its needs in any order.

The scoring of errors is another area which should be better investigated. Where extra words are introduced accidentally into a text, in practice they are perhaps unlikely to be words that are already in the lexicon. Thus when we gave our system sentences with known words added, this may not have been a fair test. Perhaps the scoring system should prefer added words to be words outside the lexicon, substituted words to substitute for words in open categories, deleted words to be non-content words, and so on. Perhaps also the confidence of the system about possible substitutions could take into account whether a standard spelling corrector can rewrite the actual word to a known word of the hypothesised category. A more sophisticated error scoring strategy could improve the system's behaviour considerably for real examples (it might of course make less difference for random examples like the ones in our experiments).
Finally, the behaviour of the approach with realistic grammars written in more expressive notations needs to be established. At present, we are investigating whether any of the current ideas can be used in conjunction with Allport's (1988) "interesting corner" parser.

ACKNOWLEDGEMENTS

This work was done in conjunction with the SERC-supported project GR/D/16130. I am currently supported by an SERC Advanced Fellowship.

REFERENCES

Aho, Alfred V. and Ullman, Jeffrey D. 1977 Principles of Compiler Design. Addison-Wesley.
Allport, David. 1988 The TICC: Parsing Interesting Text. In: Proceedings of the Second ACL Conference on Applied Natural Language Processing, Austin, Texas.
Anderson, S. O. and Backhouse, Roland C. 1981 Locally Least-Cost Error-Recovery in Earley's Algorithm. ACM TOPLAS 3(3): 318-347.
Carbonell, Jaime G. and Hayes, Philip J. 1983 Recovery Strategies for Parsing Extragrammatical Language. AJCL 9(3-4): 123-146.
Gazdar, Gerald and Mellish, Chris. 1989 Natural Language Processing in LISP - An Introduction to Computational Linguistics. Addison-Wesley.
Jensen, Karen, Heidorn, George E., Miller, Lance A. and Ravin, Yael. 1983 Parse Fitting and Prose Fixing: Getting a Hold on Ill-Formedness. AJCL 9(3-4): 147-160.
Kay, Martin. 1980 Algorithm Schemata and Data Structures in Syntactic Processing. Research Report CSL-80-12, Xerox PARC.
Pereira, Fernando C. N. and Warren, David H. D. 1980 Definite Clause Grammars for Language Analysis - A Survey of the Formalism and a Comparison with Augmented Transition Networks. Artificial Intelligence 13(3): 231-278.
Shieber, Stuart M. 1984 The Design of a Computer Language for Linguistic Information. In Proceedings of COLING-84, 362-366.
Steedman, Mark. 1987 Combinatory Grammars and Human Language Processing. In: Garfield, J., Ed., Modularity in Knowledge Representation and Natural Language Processing. Bradford Books/MIT Press.
Weischedel, Ralph M. and Sondheimer, Norman K. 1983 Meta-rules as a Basis for Processing Ill-Formed Input. AJCL 9(3-4): 161-177.
Woods, William A. 1982 Optimal Search Strategies for Speech Understanding Control. Artificial Intelligence 18(3): 295-326.
1989
13
ON REPRESENTING GOVERNED PREPOSITIONS AND HANDLING "INCORRECT" AND NOVEL PREPOSITIONS

Hatte R. Blejer, Sharon Flank, and Andrew Kehler
SRA Corporation
2000 15th St. North
Arlington, VA 22201, USA

ABSTRACT

NLP systems, in order to be robust, must handle novel and ill-formed input. One common type of error involves the use of non-standard prepositions to mark arguments. In this paper, we argue that such errors can be handled in a systematic fashion, and that a system designed to handle them offers other advantages. We offer a classification scheme for preposition usage errors. Further, we show how the knowledge representation employed in the SRA NLP system facilitates handling these data.

1.0 INTRODUCTION

It is well known that NLP systems, in order to be robust, must handle ill-formed input. One common type of error involves the use of non-standard prepositions to mark arguments. In this paper, we argue that such errors can be handled in a systematic fashion, and that a system designed to handle them offers other advantages. The examples of non-standard prepositions we present in the paper are taken from colloquial language, both written and oral. The type of error these examples represent is quite frequent in colloquial written language. The frequency of such examples rises sharply in evolving sub-languages and in oral colloquial language. In developing an NLP system to be used by various U.S. government customers, we have been sensitized to the need to handle variation and innovation in preposition usage. Handling this type of variation or innovation is part of our overall capability to handle novel predicates, which are frequent in sub-language. Novel predicates created for sub-languages are less "stable" in how they mark arguments (ARGUMENT MAPPING) than general English "core" predicates which speakers learn as children. It can be expected that the eventual advent of successful speech understanding systems will further emphasize the need to handle this and other variation.

The NLP system under development at SRA incorporates a Natural Language Knowledge Base (NLKB), a major part of which consists of objects representing SEMANTIC PREDICATE CLASSES. The system uses hierarchical knowledge sources; all general "class-level" characteristics of a semantic predicate class, including the number, type, and marking of their arguments, are put in the NLKB. This leads to increased efficiency in a number of system aspects, e.g., the lexicon is more compact and easier to modify since it only contains idiosyncratic information. This representation allows us to distinguish between lexically and semantically determined ARGUMENT MAPPING and to formulate general class-level constraint relaxation mechanisms.

1.1 CLASSIFYING PREPOSITION USAGE

Preposition usage in English in positions governed by predicating elements, whether adjectival, verbal, or nominal, may be classified as (1) lexically determined, (2) syntactically determined, or (3) semantically determined. Examples are:

LEXICALLY DETERMINED: laugh at, afraid of
SYNTACTICALLY DETERMINED: by in passive sentences
SEMANTICALLY DETERMINED: move to/from

Preposition usage in idiomatic phrases is also considered to be lexically determined, e.g., with respect to.
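As a concrete illustration of this three-way split (our own sketch using only the examples above; the table contents are illustrative, not SRA's actual data):

```python
# How a governed position's preposition is determined:
# 'LEXICAL'   = fixed by the governing word,
# 'SYNTACTIC' = fixed by the construction,
# 'SEMANTIC'  = predictable from the semantic role being marked.
PREP_GOVERNMENT = {
    ('laugh',   'complement'): ('LEXICAL',   {'at'}),
    ('afraid',  'complement'): ('LEXICAL',   {'of'}),
    ('passive', 'agent'):      ('SYNTACTIC', {'by'}),
    ('move',    'GOAL'):       ('SEMANTIC',  {'to'}),
    ('move',    'SOURCE'):     ('SEMANTIC',  {'from'}),
}
```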
1.2 A TYPOLOGY OF ERRORS IN PREPOSITION USAGE

We have classified our corpus of examples of the use of non-standard prepositions into the following categories: (1) substitution of a semantically appropriate preposition -- either from the same class or another -- for a semantically determined one, (2) substitution of a semantically appropriate preposition for a lexically determined one, (3) false starts, (4) blends, and (5) substitution of a semantically appropriate preposition for a syntactically determined one. A small percentage of the non-standard use of prepositions appears to be random.

1.3 COMPUTATIONAL APPLICATIONS OF THIS WORK

In a theoretical linguistics forum (Blejer and Flank 1988), we argued that these examples of the use of non-standard prepositions to mark arguments (1) represent the kind of principled variation that underlies language change, and (2) support a semantic analysis of government that utilizes thematic roles, citing other evidence for the semantic basis of prepositional case marking from studies of language dysfunction (Aitchison 1987:103), language acquisition (Pinker 1982:678; Menyuk 1969:56), and typological, cross-linguistic studies on case-marking systems. More theoretical aspects of our work (including diachronic change and arguments for and against particular linguistic theories) were covered in that paper; here we concentrate on issues of interest to a computational linguistics forum. First, our natural language knowledge representation and processing strategies take into account the semantic basis of prepositional case marking, and thus facilitate handling non-standard and novel use of prepositions to mark arguments. The second contribution is our typology of errors in preposition usage. We claim that an NLP system which accepts naturally occurring input must recognize the type of the error to know how to compensate for it. Furthermore, the knowledge representation scheme we have implemented is an efficient representation for English and lends itself to adaptation to representing non-English case-marking as well.

There is wide variation in computational strategies for mapping from the actual natural language expression to some sort of PREDICATE-ARGUMENT representation. At issue is how the system recognizes the arguments of the predicate. At one end of the spectrum is an approach which allows any marking of arguments if the type of the argument is correct for that predicate. This approach is inadequate because it ignores vital information carried by the preposition. At the other extreme is a semantically constrained syntactic parse, in many ways a highly desirable strategy. This latter method, however, constrains more strictly than what humans actually produce and understand. Our strategy has been to use the latter method, allowing relaxation of those constraints under certain well-specified circumstances.

Constraint relaxation has been recognized as a viable strategy for handling ill-formed input. Most discussion centers around orthographic errors and errors in subject-verb agreement. Jensen, Heidorn, Miller, and Ravin (1983:158) note the importance of "relaxing restrictions in the grammar rules in some principled way." Knowing which constraints to relax and avoiding a proliferation of incorrect parses, however, is a non-trivial task. Weischedel and Sondheimer (1983:163ff) offer cautionary advice on this subject. There has been some discussion of errors similar to those cited in our paper.
Carbonell and Hayes (1983:132) observed that "problems created by the absence of expected case markers can be overcome by the application of domain knowledge" using case frame instantiation. We agree with these authors that the use of domain knowledge is an important element in understanding ill-formed input. However, in instances where the preposition is not omitted, but rather replaced by a non-standard preposition, we claim that an understanding of the linguistic principles involved in the substitution is necessary.

To explain how constraint relaxation is accomplished, a brief system description is needed. Our system uses a parser based on Tomita (1986), with modifications to allow constraints and structure-building. It uses context-free phrase structure rules, augmented with morphological, contextual, and semantic constraints. Application of the phrase structure rules results in a parse tree, similar to a Lexical-Functional Grammar (LFG) "c-structure" (Bresnan 1982). The constraints are unified at parse time to produce a functionally labelled template (FLT). The FLT is then input to a semantic translation module.
For example, in the NLKB the ARGUMENT MAPPING of predicates which denote a change in spatial relation specifies a GOAL argument, marked with prepositions which posit a GOAL relation (to, into, and onto) and a SOURCE argument, marked with prepositions which posit a SOURCE relation (from, out of, off of). A sub-class of these predicates, namely Vendler's (1967) achievements, mark the GOAL argument with prepositions which posit an OVERLAP relation (at, in). Compare: MOVE to/into/onto from/out of/off of ARRIVE at/in from The entries for these verbs in SRA's lexicon merely specify which semantic class they belong to (e.g., SPATIAL-RELATION), whether they are stative or dynamic, whether they allow an agent, and whether they denote an achievement. Their ARGUMENT MAPPING is not entered explicitly in the lexicon. The verb reach, on the other hand, which marks its GOAL idiosyncratically, as a direct object, would have this fact in its lexical entry. 2.1 GROUPING SEMANTIC ROLES Both on implementational and on theoretical grounds, we have grouped certain semantic roles into superclasses. Such groupings arc common in the literature on case and valency (see Somers 1987) and are also supported by cross- linguistic evidence. Our grouping of roles follows previous work. For example, the AGENT SUPERCLASS covers both animate agents as well as inanimate instruments. A GROUND SUPERCLASS (as discussed in Talmy 1985) includes both SOURCE and GOAL, and a GOAL SUPERCLASS includes GOAL, PURPOSE, an'd DIRECTION. Certain semantic roles, like GOAL and SOURCE, as well as being sisters are "privatives", that is, opposites semantically. Our representation scheme differentiates between lexically and semantically determined prepositions. We will show how this representation facilitates recognition of the type of error, and therefore principled relaxation of the constraints. Furthermore, a principled 112 relaxation of the constraints depends in many instances on knowing the relationship between the non-standard and the expected prepositions: are they sisters, privatives, or is the non-standard preposition a parent of the expected preposition. In the following section we present examples of the five types of preposition usage errors. In the subsequent section, we discuss how our system presently handles these errors, or how it might eventually handle them. 3.0 THE DATA We have classified the variation data according to the type of substitution. The main types are: (1) semantic for semantic (Section 3.1), (2) semantic for lexical (Section 3.2), (3) blends (Section 3.3), (4) false starts (Section 3.4), and (5) semantic for syntactic (Section 3.5). The data presented below are a representative sample of a larger group of examples. The current paper covers the classifications which we have encountered so far; we expect that analysis of additional data will provide further types of substitutions within each class. 3.1 SEMANTIC FOR SEMANTIC 3.1.1 To/From The substitution of the goal marker for the source marker cross-linguistically is recognized in the case literature (e.g., lkegami 1987). In English, this appears to be more pronounced in certain regional dialects. Common source/goal alternations cited by Ikegami (1987:125) include: averse from/to, different from/to, immune from/to, and distinction from/to. The majority of examples involve to substituting for from in lexical items which incorporate a negation of the predicate; the standard marker of GROUND in this class of predicates is a SOURCE marker, e.g., different from. 
The "positive" counterparts mark the GROUND with GOAL, e.g., similar to, as discussed in detail in Gruber (1976). Variation between to and from can only occur with verbs which incorporate a negative, otherwise the semantic distinction which these prepositions denote is necessary. (1) The way that he came on to that bereaved brother completely alienated me TO Mr. Bush. 9/26/88 MCS (2) At this moment I'm different TO primitive man. 10/12/88 The Mind, PBS 3.1.2 To/With Communication and transfer of knowledge can be expressed either as a process with multiple, equally involved participants, or as an asymmetric process with one of the participants as the "agent" of the transfer of information. Our data document the substitution of the GOAL marker for the CO-THEME marker; this may reflect the tendency of English to prefer "agent" focussing. The participants in a COMMUNICATION situation are similar in their semantic roles, the only difference being one of "viewpoint." By no means all communication predicates operate in this way: e.g., EXPLANATION, TRANSFER OF KNOWLEDGE are more clearly asymmetric. The system differentiates between "mutual" and "asymmetric" communication predicates. (3) The only reason they'll chat TO you is, you're either pretty, or they need something from your husband. 9/30/88 MCS (4) 171 have to sit down and explore this TO you. 10/16/88 3.2 SEMANTIC FOR LEXICAL 3.2.1 Goal Superclass (Goal/ Purpose/Direction) Goal and purpose are frequently expressed by the same case-marking, with the DIRECTION marker alternating with these at times. The standard preposition in these examples is lexically determined. In example (6), instead of the lexically determined to, which also marks the semantic role GOAL, another preposition within the same superclass is chosen. In example (5) the phrasally determined for is replaced by the GOAL marker. There is abundant cross-linguistic evidence for a GOAL SUPERCLASS which includes GOAL and PURPOSE; to a lesser extent DIRECTION also patterns with these cross- linguistically. (5) It's changing TO the better. 8/3/88 MCS (6) Mr. Raspberry is almost 200 years behind Washingtonians aspiring FOR full citizenship. 10/13/88 WP 113 3.2.2 On/Of Several examples involve lexical items expressing knowledge or cognition, for which the standard preposition is lexically determined. This preposition is uniformly replaced by on, also a marker of the semantic role of REFERENT. Examples include abreast of, grasp of, an idea of, and knowledge of. We claim that the association of the role REFERENT with knowledge and cognition (as well as with transfer-of-information predicates) is among the more salient associations that language learners encounter. (7) Terry Brown, 47, a truck driver, agreed; "with eight years in the White House," he said, "Bush ought to have a better grasp ON the details." 9/27/88 NYT p. B8 (8) I did get an idea ON the importance of consistency as far as reward and penalty are concerned. 11/88 ETM journal 3.2.3 With/From/To In this class, we believe that "mutual action verbs" such as marry and divorce routinely show a CO-THEME marker with being substituted for either to or from. Such predicates have a SECONDARY- MAPPING of PLURAL-THEME in the NLKB. Communication predicates are another class which allows a PLURAL- THEME and show alternation of GOAL and CO-THEME (Section 3.1.2). (9) Today Robin Givens said she won't ask for any money in her divorce WITH Mike Tyson. 
3.3 FALSE STARTS

The next set of examples suggests that the speaker has "retrieved" a preposition from a different ARGUMENT MAPPING for the verb or for a different argument than the one which is eventually produced. For example, confused with replaces confused by in (10), and say to replaces say about in (11). Such examples are more prevalent in oral language. Handling these examples is difficult since all sorts of contextual information -- linguistic and non-linguistic -- goes into detecting the error.

(10) They didn't want to be confused WITH the facts. 11/14/88 DRS
(11) The memorial service was really well done. The rabbi did a good job. What do you say TO a kid who died like that? 11/14/88

3.4 BLENDS

Here, a lexically or phrasally determined preposition is replaced by a preposition associated with a semantically similar lexical item. In (12) Quayle says he was smitten about Marilyn, possibly thinking of crazy about. In (13) he may be thinking of on the subject/topic of. The questioner in (14) may have in support/favor of in mind. In (15) Quayle may have meant we learn by making mistakes. In (16), the idiomatic phrase in support of is confused with the ARGUMENT MAPPING of the noun support, e.g., "he showed his support for the president".

(12) I was very smitten ABOUT her... I saw a good thing and I responded rather quickly and she did too. 10/20/88 WP, p. C8
(13) ON the area of the federal budget deficit ... 10/5/88 Sen. Quayle in VP debate (& NYT 10/7/88 p. B6)
(14) You made one of the most eloquent speeches IN behalf of contra aid. 10/5/88 Questioner in VP debate (& NYT 10/7/88 p. B6)
(15) We learn BY our mistakes. 10/5/88 Sen. Quayle in VP debate (& NYT 10/7/88 p. B6)
(16) We testified in support FOR medical leave. 10/22/88 FFS

3.5 SEMANTIC FOR SYNTACTIC -- WITH/BY

In the majority of the following examples, the syntactically governed by marking passives is replaced by WITH. This alternation of with and by in passives has been attested for hundreds of years, and we hypothesize that English may be in the process of reinterpreting by, as well as replacing it with with in certain contexts. On the one hand, by is being reinterpreted as a marker of "archetypal" agents, i.e., those high on the scale of AGENTIVITY (i.e., speaker > human > animate > potent > non-animate, non-potent). On the other hand, a semantically appropriate marker is being substituted for by. We analyze the WITH in these examples either as the less agentive AGENT (namely the INSTRUMENT) in example (18), or the less agentive CO-THEME in example (17). The substitutions are semantically appropriate and the substitutes are semantically related to AGENT.

(17) All of Russian life was accompanied WITH some kind of singing. 8/5/88 ATC
(18) Audiences here are especially enthused WITH Dukakis's description of the Reagan-Bush economic policies. 11/5/88 ATC

4.0 THE COMPUTATIONAL IMPLEMENTATION

Of the five types of errors cited in Section 3, substitutions of semantic for semantic (Section 3.1), semantic for lexical (Section 3.2), and semantic for syntactic (Section 3.5) are the simplest to handle computationally.

4.1 SEMANTIC FOR SEMANTIC OR LEXICAL

The representation scheme described above (Section 2) facilitates handling the semantic for semantic and semantic for lexical substitutions.
Semantic for semantic substitutions are allowed if (i) the predicate belongs to the communication class and the standard CO-THEME marker is replaced by a GOAL marker, or (ii) the predicate incorporates a negative and GOAL is substituted for a standard SOURCE, or vice versa. Semantic for lexical substitutions are allowed if (iii) the non-standard preposition is a non-privative sister of the standard preposition (e.g., in the GOAL SUPERCLASS), (iv) the non-standard preposition is the NLKB-inherited, "default" preposition for the predicate (e.g., REFERENT for predicates of cognition and knowledge), or (v) in the NLKB the predicate allows a SECONDARY-MAPPING of PLURAL-THEME (e.g., marital predicates as in the divorce with example).

Handling the use of a non-standard preposition marking an argument crucially involves "type-checking", wherein the "type" of the noun phrase is checked, e.g. for membership in an NLKB class such as animate-creature, time, etc. Type-checking is also used to narrow the possible senses of the preposition in a prepositional phrase, as well as to prefer certain modifier attachments. Prepositional phrases can have two relations to predicating expressions, i.e., a governed argument (PREP-ARG) or an ADJUNCT. During parsing, the system accesses the ARGUMENT MAPPING for the predicate; once the preposition is recognized as the standard marker of an argument, an ADJUNCT reading is disallowed. The rule for PREP-ARG is a separate rule in the grammar. When the preposition does not match the expected preposition, the system checks whether any of the above conditions (i-v) hold; if so, the parse is accepted, but is assigned a lower likelihood. If a parse of the PP as an ADJUNCT is also accepted, it will be preferred over the ill-formed PREP-ARG.

4.2 SEMANTIC FOR SYNTACTIC

The substitution of semantic marking for syntactic (WITH for BY) is easily handled: during semantic mapping, by phrases in the ADJUNCTS are mapped to the role of the active subject, assuming that "type checking" allows that interpretation of the noun phrase. It is also possible for such a sentence to be ambiguous, e.g., "he was seated by the man". We treat with phrases similarly, except that ambiguity between CO-THEME and PASSIVE SUBJECT is not allowed, based on our observation that with for by is used for noun phrases low on the animacy scale. Thus, only the CO-THEME interpretation is valid if the noun phrase is animate.
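A sketch of the condition check described in Section 4.1 follows. This is our own illustration of the relaxation logic; the predicate and NLKB helper names are invented, and the likelihood penalty is a placeholder:

```python
def prep_arg_allowed(pred, role, expected_preps, actual_prep, nlkb):
    """Return (accepted, penalized) for a PREP-ARG whose preposition
    may be non-standard, per conditions (i)-(v) above."""
    if actual_prep in expected_preps:
        return True, False                       # standard marking
    ok = (
        # (i) communication predicate: GOAL may replace CO-THEME
        (pred.in_class('COMMUNICATION') and role == 'CO-THEME'
         and nlkb.marks_role(actual_prep, 'GOAL')) or
        # (ii) negative-incorporating predicate: GOAL <-> SOURCE
        (pred.incorporates_negative and
         {nlkb.role_of(actual_prep), role} == {'GOAL', 'SOURCE'}) or
        # (iii) non-privative sister within a superclass
        nlkb.non_privative_sisters(actual_prep, expected_preps) or
        # (iv) NLKB-inherited default preposition for the class
        actual_prep in nlkb.default_preps(pred.semantic_class, role) or
        # (v) predicate allows SECONDARY-MAPPING of PLURAL-THEME
        (pred.secondary_mapping == 'PLURAL-THEME' and actual_prep == 'with')
    )
    return ok, ok   # accepted via relaxation, but at a lower likelihood
```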
4.3 FALSE STARTS AND BLENDS

False starts are more difficult, requiring an approach similar to that of case grammar. In these examples, the preposition is acceptable with the verb, but not to mark that particular argument. The type of the argument marked with the "incorrect" preposition must be quite inconsistent with that sense of the predicate for the error even to be noticed, since the preposition is acceptable with some other sense. We are assessing the frequency of false starts in the various genres in which our system is being used, to determine whether we need to implement a strategy to handle these examples. We predict that future systems for understanding spoken language will need to accommodate this phenomenon.

We do not handle blends currently. They involve a form of analogy, i.e., smitten is like mad, syntactically, semantically, and even stylistically; they may shed some light on language storage and retrieval. Recognizing the similarity in order to allow a principled handling seems very difficult. In addition, blends may provide evidence for a "top down" language production strategy, in which the argument structure is determined before the lexical items are chosen/inserted. Our data suggest that some people may be more prone to making this type of error than are others. Finally, blends are more frequent in genres in which people attempt to use a style that they do not command (e.g., student papers, radio talk shows).

5.0 DIRECTIONS FOR FUTURE WORK

In this paper we have described a frequent type of ill-formed input which NLP systems must handle, involving the use of non-standard prepositions to mark arguments. We presented a classification of these errors and described our algorithm for handling some of these error types. The importance of handling such non-standard input will increase as speech recognition becomes more reliable, because spoken input is less formal. In the near term, planned enhancements include adjusting the weighting scheme to more accurately reflect the empirical data. A frequency-based model of preposition usage, based on a much larger and broader sampling of text, will improve system handling of those errors.

ACKNOWLEDGEMENTS

We would like to express our appreciation of our colleagues' contributions to the SRA NLP system: Gayle Ayers, Andrew Fano, Ben Fine, Karyn German, Mary Dee Harris, David Reel, and Robert M. Simmons.

REFERENCES

1. Aitchison, Jean. 1987. Words in the Mind. Blackwell, NY.
2. Blejer, Hatte and Sharon Flank. 1988. More Evidence for the Semantic Basis of Prepositional Case Marking, delivered December 28, 1988, Linguistic Society of America Annual Meeting, New Orleans.
3. Bresnan, Joan, ed. 1982. The Mental Representation of Grammatical Relations. MIT Press, Cambridge.
4. Carbonell, Jaime and Philip Hayes. 1983. Recovery Strategies for Parsing Extragrammatical Language. American Journal of Computational Linguistics 9(3-4): 123-146.
5. Chierchia, Gennaro, Barbara Partee, and Raymond Turner, eds. 1989. Properties, Types and Meaning. Kluwer, Dordrecht.
6. Chomsky, Noam. 1981. Lectures on Government and Binding. Foris, Dordrecht.
7. Croft, William. 1986. Categories and Relations in Syntax: The Clause-Level Organization of Information. Ph.D. Dissertation, Stanford University.
8. Dahlgren, Kathleen. 1988. Naive Semantics for Natural Language Understanding. Kluwer, Boston.
9. Dirven, Rene and Gunter Radden, eds. 1987. Concepts of Case. Gunter Narr, Tubingen.
10. Dowty, David. 1989. On the Semantic Content of the Notion of 'Thematic Role'. In Chierchia, et al. II:69-129.
11. Foley, William and Robert Van Valin Jr. 1984. Functional Syntax and Universal Grammar. Cambridge Univ. Press, Cambridge.
12. Gawron, Jean Mark. 1988. Lexical Representations and the Semantics of Complementation. Garland, NY.
13. Gazdar, Gerald, Ewan Klein, Geoffrey Pullum, and Ivan Sag. (GKPS) 1985. Generalized Phrase Structure Grammar. Harvard Univ. Press, Cambridge.
14. Gruber, Jeffrey. 1976. Lexical Structures in Syntax and Semantics. North-Holland, Amsterdam.
15. Haiman, John. 1985. Natural Syntax: Iconicity and Erosion. Cambridge University Press, Cambridge.
16. Hirst, Graeme. 1987. Semantic Interpretation and the Resolution of Ambiguity. Cambridge University Press, Cambridge.
17. Ikegami, Yoshihiko. 1987. 'Source' vs. 'Goal': a Case of Linguistic Dissymmetry, in Dirven and Radden 122-146.
18. Jackendoff, Ray. 1983. Semantics and Cognition. MIT Press, Cambridge.
19. Jensen, Karen, George Heidorn, Lance Miller and Yael Ravin. 1983. Parse Fitting and Prose Fixing: Getting a Hold on Ill-formedness. American Journal of Computational Linguistics 9(3-4): 147-160.
20. Menyuk, Paula. 1969. Sentences Children Use. MIT Press, Cambridge.
21. Miller, George and Philip Johnson-Laird. 1976. Language and Perception. Harvard University Press, Cambridge.
22. Ostler, Nicholas. 1980. A Theory of Case Linking and Agreement. Indiana University Linguistics Club.
23. Pinker, Steven. 1982. A Theory of the Acquisition of Lexical Interpretive Grammars, in Bresnan 655-726.
24. Shopen, Timothy, ed. 1985. Language Typology and Syntactic Description. Cambridge University Press, Cambridge.
25. Somers, H. L. 1987. Valency and Case in Computational Linguistics. Edinburgh University Press, Edinburgh.
26. Talmy, Leonard. 1985. Lexicalization Patterns: Semantic Structure in Lexical Forms. In Shopen III:57-149.
27. Tomita, Masaru. 1986. Efficient Parsing for Natural Language. Kluwer, Boston.
28. Vendler, Zeno. 1967. Linguistics in Philosophy. Cornell University Press, Ithaca.
29. Weischedel, Ralph and Norman Sondheimer. 1983. Meta-rules as a Basis for Processing Ill-Formed Input. American Journal of Computational Linguistics 9(3-4): 161-177.

APPENDIX A. DATA SOURCES

ATC: National Public Radio news program, "All Things Considered"
ME: National Public Radio news program, "Morning Edition"
WE: National Public Radio news program, "Weekend Edition"
MCS: WAMU radio, Washington D.C., interview program, "The Mike Cuthbert Show"
DRS: WAMU radio, Washington D.C., interview program, "Diane Rehm Show"
FFS: WAMU radio, Washington D.C., interview program, "Fred Fiske Saturday"
AIH: Canadian Broadcasting Company radio news program, "As It Happens"
NYT: The New York Times
WP: The Washington Post
ETM: Student journal for "Effective Teaching Methods," a junior undergraduate course
1989
14
ACQUIRING DISAMBIGUATION RULES FROM TEXT

Donald Hindle
AT&T Bell Laboratories
600 Mountain Avenue
Murray Hill, NJ 07974-2070

Abstract

An effective procedure for automatically acquiring a new set of disambiguation rules for an existing deterministic parser on the basis of tagged text is presented. Performance of the automatically acquired rules is much better than the existing hand-written disambiguation rules. The success of the acquired rules depends on using the linguistic information encoded in the parser; enhancements to various components of the parser improve the acquired rule set. This work suggests a path toward more robust and comprehensive syntactic analyzers.

1 Introduction

One of the most serious obstacles to developing parsers to effectively analyze unrestricted English is the difficulty of creating sufficiently comprehensive grammars. While it is possible to develop toy grammars for particular theoretically interesting problems, the sheer variety of forms in English together with the complexity of interaction that arises in a typical syntactic analyzer makes each enhancement of parser coverage increasingly difficult. There is no question that we are still quite far from syntactic analyzers that even begin to adequately model the grammatical variety of English. To go beyond the current generation of hand-built grammars for syntactic analysis it will be necessary to develop means of acquiring some of the needed grammatical information from the regularities that appear in large corpora of naturally occurring text.

This paper describes an implemented training procedure for automatically acquiring symbolic rules for a deterministic parser on the basis of unrestricted textual input. In particular, I describe experiments in automatically acquiring a set of rules for disambiguation of lexical category (part of speech). Performance of the acquired rule set is much better than the set of rules for lexical disambiguation written for the parser by hand over a period of several years; the error rate is approximately half that of the hand-written rules. Furthermore, the error rate is comparable to recent probabilistic approaches such as Church (1987) and Garside, Leech and Sampson (1987). The current approach has the added advantage that, since the rules acquired depend on the parser's grammar in general, independent improvements in other modules of the parser can lead to improvement in the performance of the disambiguation component.

2 Categorial Ambiguity

Ambiguity of part of speech is a pervasive characteristic of English; more than a third of the word tokens in the million-word "Brown Corpus" of written English (Francis and Kucera 1982) are categorially ambiguous. It is possible to construct sentences in which every word is ambiguous, such as the following,

(1) Her hand had come to rest on that very book.

But even without such contrived exaggeration, ambiguity of lexical category is not a trivial problem. Nor can part of speech ambiguity be ignored in constructing models of natural language processing, since syntactic analysis (as well as higher levels of analysis) depends on correctly disambiguating the lexical category of both content words and function words like to and that.

It may seem that disambiguating lexical category should depend on complex reasoning about a variety of factors known to influence ambiguity in general, including semantic and pragmatic factors.
No doubt some aspects of disambiguating lexical category can be expressed in terms of such higher level decisions. But if disambiguation in fact depends on such higher level reasoning, there is little hope of succeeding in disambiguation on unrestricted text.

Fortunately, there is reason to believe that lexical disambiguation can proceed on more limited syntactic patterns. Indeed, recent increased interest in the problem of disambiguating lexical category in English has led to significant progress in developing effective programs for assigning lexical category in unrestricted text. The most successful and comprehensive of these are based on probabilistic modeling of category sequence and word category (Church 1987; Garside, Leech and Sampson 1987; DeRose 1988). These stochastic methods show impressive performance: Church reports a success rate of 95 to 99%, and shows a sample text with an error rate of less than one percent. What may seem particularly surprising is that these methods succeed essentially without reference to syntactic structure; purely surface lexical patterns are involved. In contrast to these recent stochastic methods, earlier methods based on categorical rules for surface patterns achieved only moderate success. Thus, for example, Klein and Simmons (1963) and Greene and Rubin (1971) report success rates considerably below recent stochastic approaches.

It is tempting to conclude from this contrast that robust handling of unrestricted text demands general probabilistic methods in preference to deeper linguistic knowledge. The Lancaster (UCREL) group explicitly takes this position, suggesting: "... if we analyse quantitatively a sufficiently large amount of language data, we will be able to compensate for the computer's lack of sophisticated knowledge and powers of inference, at least to a considerable extent." (Garside, Leech and Sampson 1987:3).

In this paper, I want to emphasize a somewhat different view of the role of large text corpora in building robust models of natural language. In particular, I will show that large corpora of naturally occurring text can be used, together with the rule-based syntactic analyzers we have today, to build more effective linguistic analyzers. As the information derived from text is incorporated into our models, it will help increase the sophistication of our linguistic models. I suggest that in order to move from our current impoverished natural language processing systems to more comprehensive and robust linguistic models we must ask: Can we acquire the linguistic information needed on the basis of text? If we can answer this question affirmatively - and this paper presents evidence that we can - then there is hope that we can make some progress in constructing more adequate natural language processing systems.

It is important to emphasize that the question whether we can acquire linguistic information from text is independent of whether the model is probabilistic, categorical, or some combination of the two. The issue is not, I believe, symbolic versus probabilistic rules, but rather whether we can acquire the necessary linguistic information instead of building systems completely by hand. No algorithm, symbolic or otherwise, will succeed in large scale processing of natural text unless it can acquire some of the needed knowledge from samples of naturally occurring text.
3 Lexical Disambiguation in a Deterministic Parser

The focus of this paper is the problem of disambiguating lexical category (part of speech) within a deterministic parser of the sort originated by Marcus (1980). Fidditch is one such deterministic parser, designed to provide a syntactic analysis of text as a tool for locating examples of various linguistically interesting structures (Hindle 1983). It has gradually been modified over the past several years to improve its ability to handle unrestricted text. Fidditch is designed to provide an annotated surface structure. It aims to build phrase structure trees, recovering complement relations and gapped elements. It has

• a lexicon of about 100,000 words listing all possible parts of speech for each word, along with root forms for inflected words
• a morphological analyzer to assign part of speech and root form for words not in the lexicon
• a complementation lexicon for about 4000 words
• a list of about 300 compound words, such as of course
• a set of about 350 regular grammar rules to build phrase structure
• a set of about 350 rules to disambiguate lexical category

Being a deterministic parser, Fidditch pursues a single path in analyzing a sentence and provides a single analysis. Of course, the parser is necessarily far from complete; neither its grammar rules nor its lexicon incorporate all the information needed to adequately describe English.
A simple disambiguation rule, both existing in the hand-written disambiguation rules and ac- quired by the training algorithm, looks like this: (9.) [PREP-{-TNS] "-TNS [N'ILV] Rule (2) says that a word that can be a preposi- tion or a tense marker (i.e. the word to) followed by a word which can be a noun or a verb is a tense marker followed by a verb. This rule is obvi- ously not always correct; there are two ways that 120 it can be overridden. For rule (2), a previous rule may have already disambiguated the PREP-t-TNS, for example by recognizing the phrase close to. Al- ternatively, a more specific current rule may apply, for example recognizing the specific noun date in to date. In general, the parser provides a window of attention that moves through a sentence from the beginning to the end. A rule that, considered in isolation, would match some sequence of words in a sentence, may not in fact apply, either be- cause a more specific rule matches, or because a different rule applied earlier. These disambiguation rules are obviously closely related to the bigrams and trigrams of stochastic disambiguation methods. The rules differ in that 1) they can refer to the 200 specified lexical items, and 9.) they can refer to the current incomplete node. Disambiguation of lexical category must occur before the regular grammar rules can run; regu- lar grammar rules only match nodes whose lexical category is disambiguated. 1 The grammatical categories Fidditch has 46 lexical categories (incltlding 8 punctuations), mostly encoding rather standard parts of speech, with inflections folded into the category set. This is many fewer than the 87 sim- ple word tags of the Brown Corpus or of related tagging systems (see Garside, Leech and Samp- son 1987:165-183). Most of the proliferation of tags in such systems is the result of encoding in- formation that is either lexically predictable or structurally predictable. For example, the Brown tagset provides distinct tags for subjective and ob- jective uses of pronouns. For I and me this dis- tinction is predictable both from the lexical items themselves and from the structure in which they occur. In Fidditch, both subjective and objective pronouns are tagged simply as PRo. One of the motivations of the larger tagsets is to facilitate searching the corpus: using only the elaborated tags, it is possible to recover some lex- ical and structural distinctions. When Fidditch is used to search for constructions, the syntactic structure and lexical identity of items is available and thus there is no need to encode it in the tagset. To use the tagged Brown Corpus for training and IMore recent approaches to deter~i-i~tic parsing may allow categorial disamhiguation to occur ~fler some of the syntactic properties of phrases are noted (Marcus, Hindle, and Fleck 1983). But in structure-b,,Hdln~ determlniRtlc parsers such ss Fidditch, lexical category must be disam- biguAted be/ore any ~m~r~ can he built. evaluating disambiguation rules, the Brown cate- gories were mapped onto the 46 lexical categories native to Fidditch. Errors in the hand-written disam- biguation rules Using the tagged Brown Corpus, we can ask how well the disambiguation rules of Fidditch perform in terms of the tagged Brown Corpus. Compar- ing the part of speech assigned by Fidditch to the (transformed) Brown part of speech, we find about 6.5% are assigned an incorrect category. 
Approxi- mately 30% of the word tokens in the Brown Cor- pus are categorially ambiguous in the Fidditch lex- icon; it is this 30% that we are concerned with in acquiring disambignation rules. For these ambigu- ons words, the error rate for the hand constructed disambignation rules is about 19%. That is, about 1 out of 5 of the ambiguous word tokens are in- correctly disambiguated. This means that there is a good chance that any given sentence wilt have an error in part of speech. Obviously, there is considerable motivation for improving the lexical disambiguation. Indeed, errors in lexical category disambignation are the biggest source of error for the parser. It has been my experience that the disambigna- tion rule set is particularly difficult to improve by hand. The disambiguation rules make less syn- tactic sense than the regular grammar rules, and therefore the effect of adding or deleting a rule on the parser performance is hard to predict. In the long run it is likely that these disambignation rules should be done away with, substituting dis- ambiguation by side effect as proposed by Milne (1986). But in the meantime, we are faced with the need to improve this model of lexical disana- bignation for a determinhtic parser. 4 The Training Procedure The model of deterministic parsing proposed by Marcus (1980) has several properties that aid in acquisition of symbolic rules for syntactic analy- sis, and provide a natural way to resolve the twin problems of discovering a) when it is necessary to acquire a new rule, and b) what new rule to ac- quire (see the discussion in Berwick 1985). The key features of this niodel of parsing relevant to acquisition are: • because the parser is deterministic and has a limited window of attention, failure (and therefore the need for a new rule) can be lo- calized. • because the rules of the parser correspond closely to the instantaneous description of the state of the parser, it is easy to determine the form of the new rule. • because there is a natural ordering of the rules acquired, there is never any ambiguity about which rule to apply. The ordering of new rules is fixed because more specific rules al- ways have precedence. These characteristics of the deterministic parser provide a way to acquire a new set of lexical disam- biguation rules. The idea is as follows. Beginning with a small set of disambiguation rules, proceed to parse the tagged Brown Corpus. Check each d~ambiguation action against the tags to see if the correct choice was made. If an incorrect choice was made, use the current state of the parser'to- gether with the current set of disambiguation rules to create a new disambiguation rule to make the correct choice. Once a rule has been acquired in this manner, it may turn out that it is not a correct rule. Al- though it worked for the triggering case, it may fail on other cases. If the rate of failure is sufficiently ~ high, it is deactivated. An additional phase of acquisition would be to generalize the rules to reduce the number of rules and widen their applicability. In the experiments reported here, no genera~.ation has been done. This makes the rule set more redundant and less compact than necessary. However, the simplicity of the rule patterns of this expanded rule set al- low a compact encoding and an ei~cient pattern matching. The initial state for the training has the com- plete parser grammar - all the rules for building structures - but only a minimal set of context in- dependent default disambiguation rules. 
Acquisition of the disambiguation rules proceeds in the course of parsing sentences. In this way, the current state of the parser - the sentence as analyzed thus far - is available as a pattern for the training. At each step in parsing, before applying any parser rule, the program checks whether a new disambiguation rule may be acquired. If neither the first nor the second buffer position contains an ambiguous word, no disambiguation can occur, and no acquisition will occur. When an ambiguous word is encountered in the first or second buffer position, the current set of disambiguation rules may change.

New rule acquisition

The training algorithm has two basic components. The first component - new rule acquisition - first checks whether the currently selected disambiguation rule correctly disambiguates the ambiguous items in the buffer. If the wrong choice is made, then a new, more specific rule may be added to the rule set to make the correct disambiguation choice. (Since the new rule is more specific than the currently selected rule, it will have precedence over the older rule, and thus will make the correct disambiguation for the current case, overriding any previous disambiguation choice.)

The pattern for the new rule is determined by the current parse state together with the current set of disambiguation rules. The new rule pattern must match the current state and also must be more specific than any currently matching disambiguation rule. (If an existing rule matches the current state, it must be doing the wrong disambiguation, otherwise we would not be trying to acquire a new rule.) If there is no available more specific pattern, no acquisition is possible, and the current rule set remains. Although the patterns for rules are quite restricted, referring only to the data structures of the parser with a restricted set of categories, there are nevertheless on the order of 10^9 possible disambiguation rules. The action for the new rule is simply to choose the correct part of speech.

Rule deactivation

The second component of the rule acquisition - rule deactivation - comes into play when the current disambiguation rule set makes the wrong disambiguation and yet no new rule can be acquired (because there is no available more specific rule). The incorrect rule may in this case be permanently deactivated. This deactivation occurs only when the proportion of incorrect applications reaches a given threshold (10 or 20% incorrect rule applications).
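In code, the deactivation bookkeeping might look like the following sketch: every application of a rule is scored against the gold tag, and a rule whose failure proportion reaches the threshold is switched off for good. The 20% figure is one of the thresholds mentioned in the text; the MIN_USES guard is an invented detail.

THRESHOLD = 0.20
MIN_USES = 5    # assumption: judge a rule only after a few applications

def record_use(stats, rule_id, was_correct):
    uses, errors = stats.get(rule_id, (0, 0))
    stats[rule_id] = (uses + 1, errors + (0 if was_correct else 1))

def still_active(stats, rule_id):
    uses, errors = stats.get(rule_id, (0, 0))
    return uses < MIN_USES or errors / uses < THRESHOLD

stats = {}
for ok in (True, True, False, True, False, False):
    record_use(stats, "rule-4", ok)
print(still_active(stats, "rule-4"))   # False: 3 errors in 6 uses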
Ideally we might expect that each disambiguation rule would be completely correct; an incorrect application would count as evidence that the rule is wrong. However, this is an inappropriate idealization, for several reasons. Most crucially, the grammatical coverage, as well as the range of linguistic processes modeled in Fidditch, are limited. (Note that this is a property of any current or foreseeable syntactic analyzer.) Since the grammar itself is not complete, the parser will have misanalyzed some constructions, leading to incorrect pattern matching. Moreover, some linguistic patterns that determine disambiguation (such as, for example, the influence of parallelism) cannot be incorporated into the current rules at all, leading to occasional failure. As the overall syntactic model is improved, such cases will become less and less frequent, but they will never disappear altogether. Finally, there are of course errors in the tagged input. Thus, we can't demand perfection of the trained rules; rather, we require that rules reach a certain level of success. For rules that disambiguate the first element (except the default disambiguation rules), we require 80% success; for the other rules, 90% success. These cutoff figures were imposed arbitrarily; other values may be more appropriate.

An example of a rule that is acquired and then deactivated is the following.

(4) [ADJ+N+V] = ADJ [N]

This rule correctly disambiguates some cases like sound health and light barbell but fails on a sufficient proportion (such cases as sound energy and light intensity) that it is permanently deactivated.

Interleaving of grammar and disambiguation

One of the advantages of embedding the training of disambiguation rules in a general parser is that independent parser actions can make the disambiguation more effective. For example, adverbs often occur in an auxiliary phrase, as in the phrase has immediately left. The parser effectively ignores the adverb immediately so that from its point of view, has and left are contiguous. This in turn allows the disambiguation rules to see that has is the left context for left and to categorize left as a past participle (rather than a past tense or an adjective or a noun).

5 The Training

The training text was 450 of the 500 samples that make up the Brown Corpus, tagged with part of speech transformed into the 46 grammatical categories native to Fidditch. Ten percent of the corpus, selected from a variety of genres, was held back for testing the acquired set of disambiguation rules.

The training set (consisting of about a million words) was parsed, beginning with the default rule set and acquiring disambiguation rules as described above. After parsing the training set once, a certain set of disambiguation rules had been acquired. Then it was parsed over again, a total of five times. Each time, the rule set is further refined. It is effective to reparse the same corpus because the acquisition depends both on the sentence parsed and on the current set of rules. Therefore, the same sentence can induce different changes in the rule set depending on the current state of the rule set. After the five iterations, 35000 rules have been acquired. For the training set, the overall error rate is less than 2% and the error rate for the ambiguous words is less than 5%. Clearly, the acquired rules effectively model the training set. Because the rule patterns are simple, they can be efficiently indexed and applied.

For the one tenth of the corpus held back (the test set), the performance of the trained set of rules is encouraging. Overall, the error rate for the test set is about 3%. For the ambiguous words the error rate is 10%.
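For concreteness, here is one way the two figures just quoted (overall error versus error on ambiguous words only) might be computed from parallel predicted and gold tag sequences; the representation is invented for illustration.

def error_rates(predicted, gold, is_ambiguous):
    n = len(gold)
    wrong = [predicted[i] != gold[i] for i in range(n)]
    amb = [i for i in range(n) if is_ambiguous[i]]
    overall = sum(wrong) / n
    ambiguous = sum(wrong[i] for i in amb) / len(amb) if amb else 0.0
    return overall, ambiguous

print(error_rates(["N", "V", "N"], ["N", "V", "V"], [False, True, True]))
# -> (0.3333333333333333, 0.5)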
Compared to the performance of the existing hand-written rules, this shows almost a 50% reduction in the error rate. Additionally, of course, there is a great saving in development time; to cut the error rate of the original hand-written rules in half by further hand effort would require an enormous amount of work. In contrast, this training algorithm is automatic (though it depends of course on the hand-written parser and set of grammar rules, and on the significant effort in tagging the Brown Corpus, which was used for training).

It is harder to compare performance directly to other reported disambiguation procedures, since the part of speech categories used are different. The 10% error rate on ambiguous words is the same as that reported by Garside, Leech and Sampson (1987:55). The program developed by Church (1987), which makes systematic use of relative tag probabilities, has, I believe, a somewhat smaller overall error rate.

Adding lexical relationships

The current parser models complementation relations only partially and it has no model at all of what word can modify what word (except at the level of lexical category). Clearly, a more comprehensive system would reflect the fact, for example, that public apathy is known to be a noun-noun compound, though the word public might be a noun or an adjective. One piece of evidence of the importance of such relationships is the fact that more than one fourth of the errors are confusions of adjective use with noun use as premodifier in a noun phrase. The current parser has no access to the kinds of information relevant to such modification and compound relationships, and thus does not do well on this distinction.

The claim of this paper is that the linguistic information embodied in the parser is useful to disambiguation, and that enhancing the linguistic information will result in improving the disambiguation. Adding that information about lexical relations to the parser, and making it available to the disambiguation procedure, should improve the accuracy of the disambiguation rules. In the long run the parser should incorporate general models of modification. However, we can crudely add some of this information to the disambiguation procedure, and take advantage of complementation information.

For each word in the training set, all word pairs including that word that might be lexically conditioned modification or complementation relationships are recorded. Any pair that occurs more than once and always has the same lexical category is taken to be a lexically significant collocation - either a complementation or a modification relationship - as sketched in code below. For example, for the word study the following lexical pairs are identified in the training set.

[ADJ][N]: recent study, present study, psychological study, graduate study, own study, such study, theoretical study
[N][N]: use study, place-name study, growth study, time-&-motion study, birefringence study
[VPPRT][N]: prolonged study, detailed study
[PREP][N]: under study
[V][N]: study dance
[V][PREP]: study at
[N][PREP]: study of, study on, study by

Obviously, only a small subset of the modification and complementation relations of English are included in this set. But missing pairs cause no trouble, since more general disambiguation rules will apply.
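A minimal sketch of that collocation filter, with invented names: record the category pairs observed for each word pair, and keep a pair only if it occurred more than once and always with the same category assignment.

from collections import defaultdict

def significant_pairs(observations):
    # observations: iterable of ((word1, word2), (cat1, cat2)) tuples
    seen = defaultdict(set)
    counts = defaultdict(int)
    for pair, cats in observations:
        seen[pair].add(cats)
        counts[pair] += 1
    return {pair: cats.pop()
            for pair, cats in seen.items()
            if counts[pair] > 1 and len(cats) == 1}

obs = [(("light", "barbell"), ("ADJ", "N")),
       (("light", "barbell"), ("ADJ", "N")),
       (("light", "intensity"), ("N", "N")),
       (("light", "intensity"), ("ADJ", "N"))]
print(significant_pairs(obs))   # {('light', 'barbell'): ('ADJ', 'N')}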
This is an instance of the general strategy of the parser to use specific information when it is available and to fall back on more general (and less accurate) information in case no specific pattern matches, permitting an incremental improvement of the parser. The set of lexical pairs does include many high frequency collocations involving potentially ambiguous words, such as close to (ADJ PREP) and long time (ADJ N).

The test set was reparsed using this lexical information. The error rate for disambiguation using these lexically related word pairs is quite small (3.5% of the ambiguous words), much better than the error rate of the disambiguation rules in general, resulting in an improved overall performance in disambiguation. Although this is only a crude model of complementation and modification relationships, it suggests how improvements in other modules of the parser will result in improvements in the disambiguation.

Using grammatical dependency

A second source of failure of the acquired disambiguation rules is that the acquisition algorithm is not paying enough attention to the information the parser provides. The large difference in accuracy between the training set and the test set suggests that the acquired set of disambiguation rules is matching idiosyncratic properties of the training set rather than general extensible properties; the rules are too powerful. It seems that the rules that refer to all three items in the buffer are the culprits. For example, the acquired rule

(5) [N] [PREP+TNS] = TNS [N+V] = V

applies to such cases as

(6) Shall we flip a coin to see which of us goes first?

In effect, this rule duplicates the action of another rule

(7) [PREP+TNS] = TNS [N+V] = V

In short, the rule set does not have appropriate shift invariance. The problem with disambiguation rule (5) is that it refers to three items that are not in fact syntactically related: in sentence (6), there is no structural relation between the noun coin and the infinitive phrase to see. It would be appropriate to acquire only rules that refer to constituents that occur in construction with each other, since the predictability of part of speech from local context arises because of structural relations among words; there should be no predictability across words that are not structurally related. We should therefore be able to improve the set of disambiguation rules by restricting new rules to only those involving elements that are in the same structure.

We use the grammar as implemented in the parser to decide what elements are related and thus to restrict the set of rules acquired. Specifically, the following restriction on the acquisition of new rules is proposed.

All the buffer elements referred to by a disambiguation rule must appear together in some other single rule.

This rules out examples like rule (5) because no single parser grammar rule ever refers to the noun, the to and the following verb at the same time. However, a rule like (7) is accepted because the parser grammar rule for infinitives does refer to to and the following verb at the same time. For training, an additional escape for rules was added: if the first element of the buffer is ambiguous, a rule may use the second element to disambiguate it whether or not there is any parser rule that refers to the two together. In these cases, if no new rule were added, the default disambiguation rules, which are notably ineffective, would match. (The default rules have a success rate of only 55% compared to over 94% for the disambiguation rules that depend on context.)
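The restriction can be rendered as a simple admissibility check. In this sketch, grammar rules are modeled, purely for illustration, as the sets of buffer positions they inspect; the escape for disambiguating the first element is folded in as stated above.

GRAMMAR_RULE_POSITIONS = [
    {1, 2},   # e.g. the infinitive rule looks at "to" plus the following verb
    {1},      # rules that look only at the first buffer cell
]

def admissible(candidate_positions, grammar=GRAMMAR_RULE_POSITIONS):
    # Escape from the text: a rule over the first two cells is always
    # allowed when it serves to disambiguate the first element.
    if candidate_positions <= {1, 2}:
        return True
    # Otherwise some single grammar rule must mention all the positions.
    return any(candidate_positions <= rule for rule in grammar)

print(admissible({1, 2}))      # True,  like rule (7)
print(admissible({1, 2, 3}))   # False, like rule (5): nothing spans all three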
Since the parser is not sufficiently complete to recognize all cases where words are related, this escape admits some local context even in the absence of parser-internal reasons to do so.

The training procedure was applied with this new constraint on rules, parsing the training set five times to acquire a new rule set. Restricting the rules to related elements had three notable effects. First, the number of disambiguation rules acquired was cut to nearly one third the number for the unrestricted rule set (about 12000 rules). Second, the difference between the training set and the test set is reduced; the error rate differs by only one percent. Finally, the performance of the restricted rule set is if anything slightly better than the unrestricted set (3427 errors for the restricted rules versus 3492 errors for the larger rule set). These results show the power of using the grammatical information encoded in the parser to direct the attention of the disambiguation rules.

6 Conclusion

I have described a training algorithm that uses an existing deterministic parser together with a corpus of tagged text to acquire rules for disambiguating lexical category. Performance of the trained set of rules is much better than the previous hand-written rule set (error rate reduced by half). The success of the disambiguation procedure depends on the linguistic knowledge embodied in the parser in a number of ways.

• It uses the data structures and linguistic categories of the parser, focusing the rule acquisition mechanism on relevant elements.

• It is embedded in the parsing process so that parser actions can set things up for acquisition (for example, adverbs are in effect removed within elements of the auxiliary, restoring the contiguity of auxiliary elements).

• It uses the grammar rules to identify words that are grammatically related, and are therefore relevant to disambiguation.

• It can use rough models of complementation and modification to help identify words that are related.

• Finally, the parser always provides a default action. This permits the incremental improvement of the parser, since it can take advantage of more specific information when it is available, but it will always disambiguate somehow, no matter whether it has acquired the appropriate rules or not.

This work demonstrates the feasibility of acquiring the linguistic information needed to analyze unrestricted text from text itself. Further improvements in syntactic analyzers will depend on such automatic acquisition of grammatical and lexical facts.

References

Berwick, Robert C. 1985. The Acquisition of Syntactic Knowledge. MIT Press.

Church, Kenneth. 1987. A stochastic parts program and noun phrase parser for unrestricted text. Proceedings of the Second ACL Conference on Applied Natural Language Processing.

DeRose, Stephen J. 1988. Grammatical category disambiguation by statistical optimization. Computational Linguistics 14.1:31-39.

Francis, W. Nelson and Henry Kucera. 1982. Frequency Analysis of English Usage. Houghton Mifflin Co.

Garside, Roger, Geoffrey Leech, and Geoffrey Sampson. 1987. The Computational Analysis of English. Longman.

Greene, Barbara B. and Gerald M. Rubin. 1971. Automated grammatical tagging of English. Department of Linguistics, Brown University.

Hindle, Donald. 1983. Deterministic parsing of syntactic non-fluencies.
Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics.

Klein, S. and R. F. Simmons. 1963. A computational approach to grammatical coding of English words. JACM 10:334-47.

Marcus, Mitchell P. 1980. A Theory of Syntactic Recognition for Natural Language. MIT Press.

Marcus, Mitchell P., Donald Hindle and Margaret Fleck. 1983. D-theory: talking about talking about trees. Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics.

Milne, Robert. 1986. Resolving Lexical Ambiguity in a Deterministic Parser. Computational Linguistics 12.1:1-12.
THE EFFECTS OF INTERACTION ON SPOKEN DISCOURSE

Sharon L. Oviatt
Philip R. Cohen
Artificial Intelligence Center
SRI International
333 Ravenswood Avenue
Menlo Park, California 94025-3493

ABSTRACT

Near-term spoken language systems will likely be limited in their interactive capabilities. To design them, we shall need to model how the presence or absence of speaker interaction influences spoken discourse patterns in different types of tasks. In this research, a comprehensive examination is provided of the discourse structure and performance efficiency of both interactive and noninteractive spontaneous speech in a seriated assembly task. More specifically, telephone dialogues and audiotape monologues are compared, which represent opposites in terms of the opportunity for confirmation feedback and clarification subdialogues. Keyboard communication patterns, upon which most natural language heuristics and algorithms have been based, also are contrasted with patterns observed in the two speech modalities. Finally, implications are discussed for the design of near-term limited-interaction spoken language systems.

INTRODUCTION

Many basic issues need to be addressed before technology will be able to leverage successfully from the natural advantages of speech. First, spoken interfaces will need to be structured to reflect the realities of speech instead of text. Historically, language norms have been based on written modalities, even though spoken and written communication differ in major ways (Chafe, 1982; Chapanis, Parrish, Ochsman, & Weeks, 1977). Furthermore, it has become clear that the algorithms and heuristics needed to design spoken language systems will be different from those required for keyboard systems (Cohen, 1984; Hindle, 1983; Oviatt & Cohen, 1988 & 1989; Ward, 1989). Among other things, speech understanding systems tend to have considerable difficulty with the indirection, confirmations and reaffirmations, nonword fillers, false starts and overall wordiness of human speech (van Katwijk, van Nes, Bunt, Muller & Leopold, 1979). To date, however, research has not yet provided accurate models of spoken language to serve as a basis for designing future spoken language systems.

People experience speech as a very rapid, direct, and tightly interactive communication modality, one that is governed by an array of conversational rules and is rewarding in its social effectiveness. Although a fully interactive exchange that includes confirmatory feedback and clarification subdialogues is the prototypical or natural form of speech, near-term spoken language systems are likely to provide only limited interactive capabilities. For example, lack of adequate confirmatory feedback, variable delays in interactive processing, and limited prosodic analysis all can be expected to constrain interactions with initial systems. Other speech technology, such as voice mail and automatic dictation devices (Gould, Conti & Hovanyecz, 1983; Jelinek, 1985), is designed specifically for noninteractive speech input. Therefore, to the extent that interactive and noninteractive spoken language differ, future SLSs may require tailoring to handle phenomena typical of noninteractive speech. That is, at least for the near term, the goal of designing SLSs based on models of fully interactive dialogue may be inappropriate. Instead, building accurate speech models for SLSs may depend on an examination of the discourse and performance characteristics of both interactive and noninteractive spoken language in different types of tasks.

Unfortunately, little is known about how the opportunity for interactive feedback actually influences a spoken discourse. To begin examining the influence of speaker interaction, the present research aimed to investigate the main distinctions between interactive and noninteractive speech in a hands-on assembly task. More specifically, it explored the discourse and performance features of telephone dialogues and audiotape monologues, which represent opposites on the spectrum of speaker interaction. Since keyboard is the modality upon which most current natural language heuristics and algorithms are based, the discourse and performance patterns observed in the two speech modalities also were contrasted with those of interactive keyboard. Modality comparisons were performed for teams in which an expert instructed a novice on how to assemble a hydraulic water pump. A hands-on assembly task was selected since it has been conjectured that speech may demonstrate a special efficiency advantage for this type of task.

One purpose of this research was to provide a comprehensive analysis of differences between the interactive and noninteractive speech modalities in discourse structure, referential characteristics, and performance efficiency. Of these, the present paper will focus on the predominant referential differences between the two speech modes. A fuller treatment of modality distinctions is provided elsewhere (Oviatt & Cohen, 1988). Another goal involved outlining patterns in common between the two speech modalities that differed from keyboard. A further objective was to consider the implications of any observed contrasts among these modalities for the design of prospective speech systems that are habitable, high quality, and relatively enduring. Since future SLSs will depend in part on adequate models of spoken discourse, a final goal of this research was to begin constructing a theoretical model from which several principal features of interactive and noninteractive speech could be derived. For a discussion of the theoretical model, which is beyond the scope of the present research summary, see Oviatt & Cohen (1988).

METHOD

The data upon which the present manuscript is based were originally collected as part of a larger study on modality differences in task-oriented communication. This project collected extensive audio and videotape data on the communicative exchanges and task assembly in five different modalities. It has provided the basis for a previous research report (Cohen, 1984) that compared communicative indirection and illocutionary style in the keyboard and telephone conditions. As indicated above, the present research focused on a comprehensive assessment of the discourse and performance features of speech. More specifically, it compares noninteractive audiotape and interactive telephone.

Thirty subjects, fifteen experts and fifteen novices, were included in the analysis for the present study. The fifteen novices were randomly assigned to experts to form a total of fifteen expert-novice pairs. For five of the pairs, the expert related instructions by telephone and an interactive dialogue ensued as the pump was assembled.
For another five pairs, the expert's spontaneous spoken instructions were recorded by audiotape, and the novice later assembled the pump as he or she listened to the taped monologue. In this condition, there was no opportunity for the audiotape speakers and listeners to confirm their understanding as the task progressed, or to engage in clarification subdialogues with one another. For the last five pairs, the expert typed instructions on a keyboard, and a typed interactive exchange then took place between the participants on linked CRTs. All three communication modalities involved spatial displacement of the participants, and participation in the noninteractive audiotape mode also was disjoint temporally. The fifteen pairs of participants were randomly assigned to the telephone, audiotape, and keyboard conditions.

Each expert participated in the experiment on two consecutive days, the first for training and the second for instructing the novice partner. During training, experts were informed that the purpose of the experiment was to investigate modality differences in the communication of instructions. They were given a set of assembly directions for the hydraulic pump kit, along with a diagram of the pump's labeled parts. Approximately twenty minutes was permitted for the expert to practice putting the pump together using these materials, after which the expert practiced administering the instructions to a research assistant. During the second session, the expert was informed of a modality assignment. Then the expert was asked to explain the task to a novice partner, and to make sure that the partner built the pump so that it would function correctly when completed. The novice received similar instructions regarding the purpose of the experiment, and was supplied with all of the pump parts and a tray of water for testing.

Written transcriptions were available as a hard copy of the keyboard exchanges, and were composed from audio-cassette recordings of the monologues and coordinated dialogues, the latter of which had been synchronized onto one audio channel. Signal distortion was not measured for the two speech modalities, although no subjects reported difficulty with inaudible or unintelligible instructions, and < 0.2% or 1 in 500 of the recorded words were undecipherable to the transcriber and experimenter. All dependent measures described in this research had interrater reliabilities ranging above .86, and all discourse and performance differences reported among the modalities were statistically significant based on either a priori t or Fisher's exact probability tests (Siegel, 1956).

RESULTS AND DISCUSSION

Compared to interactive telephone dialogues and keyboard exchanges, the principal referential distinction of the noninteractive monologues was profuse elaborative description. Audiotape experts' elaborations of piece and action descriptions, which formed the essence of these task instructions, were significantly more frequent, as well as averaging significantly longer. In addition, repetitions were significantly more common in the audiotape modality, in comparison with interactive telephone and keyboard. Although noninteractive speech was more elaborated and repetitive than interactive speech, these two speech modes did not differ in the total number of words used to convey instructions.

Noninteractive monologues also displayed a number of unusual elaborative patterns. In the telephone modality, the prototypical pattern of presentation involved describing one pump piece, a second piece, and then the action required to assemble them. In contrast, an initial audiotape piece description often continued to be elaborated even after the expert had described the main action for assembling the piece.
The following two examples illustrate this audiotape pattern of perseverative piece description:

"So the first thing to do is to take the metal rod with the red thing on one end and the green cap on the other end. Take that and then look in the other parts -- there are three small red pieces. Take the smallest one. It looks like a nail -- a little red nail -- and put that into the hole in the end of the green cap. There's a green cap on the end of the silver thing."

"...Now, the curved tube that you just put in that should be pointing up still. Take that, uh -- Take the the cylinder that's left over -- it's the biggest piece that's left over -- and place that on top of that, fit that into that curved tube that you just put on. This piece that I'm talking about is has a blue base on it and it's a round tube..."

These piece elaborations that followed the main assembly action were significantly more common in the audiotape modality. However, the frequency of piece elaborations in the more prototypical location preceding specification of the action did not differ significantly between the audiotape and telephone modes.
In the case of both per- severative and reverted piece elaborations, the novice had to decide whether the reference was anaphoric, or whether a new piece was being re- ferred to, since these elaborations were either discontinuous from the initial piece description or began with an indefinite article. Once estab- lished as anaphoric, the novice then had to suc- cessfully integrate the continued or reverted de- scription with the appropriate earlier one. For example, did it refine or correct the earlier de- scription? All of these characteristics produced more inferential strain in the audiotape modality. An evaluation of total assembly time indi- cated that the audiotape novices functioned sig- nificantly less efficiently than telephone novices. Furthermore, the length of novice assembly time demonstrated a strong positive correlation with the frequency of expert elaborations, implicat- ing the inefficiency of this particular discourse feature. Evidently, experts who elaborated their descriptions most extensively were the ones most likely to be part of a team in which novice assem- bly time was lengthy. The different patterns observed between inter- active and noninteractive speech may be driven by the presence or absence of confirmation feed- back. The literature indicates that access to con- firmation feedback is associated with increased dialogue efficiency in the form of shorter noun phrases with repeated reference (Krauss & Wein- heimer, 1966). During the present hands-on assembly interactions, all interactive telephone teams produced a high and stable rate of con- firmations, with 18% of the total verbal inter- action spent eliciting and issuing confirmations, and a confirmation forthcoming every 5.6 sec- onds. Confirmations were clearly a major vehi- cle available for the telephone listener to signal to the expert that the expert's communicative goals had been achieved and could now be discharged. Since audiotape experts had to operate without confirmation feedback from the novice, they had no metric for gauging when to finish a description and inhibit their elaborations. Therefore, it was not possible for audiotape experts to tailor a de- 129 scription to meet the information needs of their particular partner most efficiently. In this sense, their extensive and perseverative elaborating was an understandably conservative strategy. In spite of the fact that instructions in the two speech modalities were almost three-fold wordier than keyboard, novices who received spoken in- structions nonetheless averaged pump assembly times that were three times faster than keyboard novices (cf. Chapanis, Parrish, Ochsman, & Weeks, 1977). These data confirm that speech interfaces may be a particularly apt choice for use with hands-on assembly tasks, as well as provid- ing some calibration of the overall efficiency ad- vantage. For a more detailed account of the simi- larities and differences between the keyboard and speech modalities, see 0viatt & Cohen (1989). IMPLICATIONS FOR INTERACTIVE SPOKEN LANGUAGE SYSTEMS 1 A long-term goal for many spoken language systems is the development of fully interactive ca- pabilities. In practice, of course, speech applica- tions currently being developed are ill equipped to handle spontaneous human speech, and are only capable of interactive dialogue in a very lim- ited sense. One example of an ihteractional limi- tation is the fact that system responses typically are more delayed than the average human conver- sant. 
While the natural speed of human dialogue creates an efficiency advantage in tasks, it simul- taneously challenges current computing technol- ogy to produce more consistently rapid response times. In research on telephone conversations, transmission and access delays 2 of as little as .25 to 1.8 seconds have been found to disrupt the normal temporal pattern of conversation and to reduce referential efficiency (Krauss ~z Bricker, 1967; Krauss, Garlock, Bricker, & McMahon, x For a discussion of the implications of this research for non/nteractive speech technology, see Oviatt ~ Cohen (198S). 2A transmission delay refers to a relatively pure delay of each speaker's utterances for some defined time period. By contrast, an access delay prevents simultaneous speech by the listener, and then delays circuit access for a defined time period after the primary speaker ceases talking. 1977). These data reveal that the threshold for an acceptable time lag can be a very brief in- terval, and that even these minimal delays can alter the organization and efficiency of spoken discourse. Preliminary research on human-computer di- alogue has indicated that, beyond a certain threshold, language systems slower than real- time will elicit user input that has characteristics in common with noninteractive speech. For ex- ample, when system response is slow and prompt confirmations to support user-system interaction are not forthcoming, users will interrupt the sys- tem to elaborate and repeat themselves, which ultimately results in a negative appraisal of the system (van Katwijk, van Nes, Bunt, Muller, & Leopold, 1979). For practical purposes, then, people typically are unable to distinguish be- tween a slow response and no response at all, so their strategy for coping with both situations is similar. Unfortunately, since system delays typ- ically vary in length, their duration is not pre- dictable from the user's viewpoint. Under these circumstances, it seems unrealistic to expect that users will learn to anticipate and accommodate the new dialogue pace as if it had been reduced by some constant amount. Apart from system delay, another current limitation that will influence future interac- tive speech systems is the unavailability of full prosodic analysis. Since an interactive system must be able to analyze prosodic meaning in or- der to deliver appropriate and timely confirma- tions of received messages, limited prosodic anal- ysis may make the design of an effective confir- mation system more difficult. In spoken interac- tion, speakers typically convey requests for con- firmation prosodically, and such requests occur mid-sentence as well as at sentence end. For ex- ample: 130 Expert: Novice: Expert: Novice: "Put that on the hol~ on the side of that tube --" (pause) "Yeah." "-- that is nearest to the top or nearest to the green handle." "Okay." For a system to analyze and respond to re- quests for confirmation, it would need to detect rising intonation, pausing, and other characteris- tics of the speech signal which, although elemen- tary in appearance, cannot yet be performed in a reliable manner automatically (Pierrehumbert, 1983; Walbel, 1988). A system also would need to derive the contextually appropriate meaning for a given intonation pattern, by mapping the prosodic structure of an utterance onto a rep- resentation of the speaker's intentions at a par- ticular moment. 
Since the pragmatic analysis of prosody barely has begun (Pierrehumbert & Hirschberg, 1989; Waibel, 1988), this important capability is unlikely to be present in initial versions of interactive speech systems. Therefore, the typical prosodic vehicles that speakers use to request confirmation will remain unanalyzed such that confirmations are likely to be omitted. This may be especially true of mid-sentence confirmation requests that lack redundant grammatical cues to their function. To the extent that confirmation feedback is omitted, speakers' discourse can be expected to become more elaborative, repetitive, and generally similar to monologue as they attempt to engage in dialogue with limited-interaction systems.

If supplying apt and precisely timed confirmations for near-term spoken language systems will be difficult, then consideration is in order of the difficulties posed by noninteractive discourse phenomena for the design of preliminary systems. For one thing, the discourse phenomena of noninteractive speech differ substantially from the keyboard discourse upon which current natural language processing algorithms are based. Keyboard-based algorithms will require alteration, especially with respect to referential features and discourse macrostructure, if designers expect future systems to handle spontaneous human speech input. With respect to reference resolution, the system will have to identify whether a perseverative elaboration refers to a new part or a previously mentioned one, whether the initial descriptive expression is being further expanded, qualified, or corrected, and so forth. The potential difficulty of tracking noun phrases throughout a repetitive and elaborative discourse, especially segments that include perseverative descriptions displaced from one another and definite descriptions that revert to indefinite elaborations about the same part, is illustrated in the following brief monologue segment:

"and then you take the L-shaped clear plastic tube, another tube, there's an L-shaped one with a big base, and that big base happens to fit over the top of this hole that you just put the red piece on. Okay. So there's one hole with a blue piece and one with a red piece and you take the one with the red piece and put the L-shaped instrument on top of this, so that..."

For example, a system must distinguish whether "another tube" is a new tube or whether it co-refers with "the L-shaped clear plastic tube" uttered previously, or with the other two italicized phrases. In cases where description of a part persists beyond that of the basic assembly action, the system also must determine whether a new discourse assembly segment has been initiated and whether a new action now is being described. In the above illustration, the system must determine whether "and you take the one with the red piece and put the L-shaped instrument on top of this" refers to a new action, or whether it refers back to the previously described action in "that big base happens to fit over the top of this hole..." The system's ability to resolve such co-reference relations will determine the accuracy with which it interprets the basic assembly actions underway. To optimize the interpretation of spoken monologues, a system will have to continually reexamine whether further descriptive information supports or refutes current beliefs about part identity and action performance.
Depend- ing on the assignment of costs, it is possible for Tacitus to adopt a non-minimal individual as- sumption as part of a globally optimal discourse interpretation. Applying this general strategy to noun phrase interpretation, Tacitus' heuristics for referring expressions include a higher cost for assuming that a definite noun phrase refers to a new discourse entity than to a previously intro- duced one, as well as a higher cost for assuming that an indefinite noun phrase refers to a previ- ously introduced entity than to a new one. These heuristics could handle the prevalent noninterac- tive speech phenomenon of definite first reference to new pump parts, as well as elaborative re- versions, although both would entail higher-cost individual assumptions. That is, if it makes the most global sense, the system could interpret def- inite first references and reversions as referring to "new" and "old" entities, respectively, contrary 132 to the usual preferences in computational linguis- tics. Although such an interpretation strategy may sometimes be sufficient to establish the needed co-reference relations in elaborative discourses, due to the nature of Tacitus' global optimization approach one cannot be certain that any par- ticular case of elaboration will be resolved cor- rectly without first weighing all other local dis- course specifics. It is neither clear what percent- age of the phenomena would be handled correctly at present, nor whether Tacitus' heuristics could be extended to arrive at consistently correct in- terpretations. Furthermore, since Tacitus' usual strategy for determining what should be proven is simply to conjoin the meaning representations of two utterances, it would fail to provide correct interpretations for certain types of elaborations, such as corrections in which the latter descrip- tion supercedes an earlier one. Hobbs (1979) has recognized and attempted to define elaboration as a coherence relation in previous work, and is currently refining Tacitus' computational meth- ods in a manner that may yield improvements in the processing of elaborations. CONCLUSIONS In summary, the present results imply that near-term spoken language systems that are un- able to provide meaningful and timely confirma- tions may not be able to curtail speakers' elab- orations effectively, or the related discourse con- volutions typical of noninteractive speech. Cur- rent dialogue and text processing systems are not prepared to handle this type of elaborative dis- course. Clearly, new heuristics will need to be developed to accomodate speakers who try more than once to achieve their communicative goals, in the process using multiple utterances and var- ied speech acts. Under these circumstances, models of noninteractive speech may provide a more appropriate basis for designing near-term spoken language systems than either keyboard models or models of fully interactive dialogue. To model discourse accurately for interactive SLSs, further research will be needed to estab- lish the generality of these noninteractive speech phenomena across different tasks and applica- tions, and to determine whether speakers can be trained to alter these patterns. In addition, research also will be needed on the extent to which human-computer task-oriented speech dif- fers from that between humans. 
CONCLUSIONS

In summary, the present results imply that near-term spoken language systems that are unable to provide meaningful and timely confirmations may not be able to curtail speakers' elaborations effectively, or the related discourse convolutions typical of noninteractive speech. Current dialogue and text processing systems are not prepared to handle this type of elaborative discourse. Clearly, new heuristics will need to be developed to accommodate speakers who try more than once to achieve their communicative goals, in the process using multiple utterances and varied speech acts. Under these circumstances, models of noninteractive speech may provide a more appropriate basis for designing near-term spoken language systems than either keyboard models or models of fully interactive dialogue.

To model discourse accurately for interactive SLSs, further research will be needed to establish the generality of these noninteractive speech phenomena across different tasks and applications, and to determine whether speakers can be trained to alter these patterns. In addition, research also will be needed on the extent to which human-computer task-oriented speech differs from that between humans. At present, there is no well developed discourse theory of human-machine communication, and the few studies comparing human-machine with human-human communication have focused on the keyboard modality, with the exception of Hauptmann & Rudnicky (1988). These studies also have relied exclusively on the Wizard of Oz paradigm, although this technique entails unavoidable feedback delays due to the inherent deception, and it was never intended to simulate the interactional coverage of any particular system. Further work ideally would examine human-computer speech patterns as prototypes of interactive SLSs become available.

In short, our present research findings imply that designers of future spoken language systems should be vigilant to the possibility that their selected application may elicit noninteractive speech phenomena, and that these patterns may have adverse consequences for the technology proposed. By anticipating or at least recognizing when they occur, designers will be better prepared to develop speech systems based on accurate discourse models, as well as ones that are viable ergonomically.

ACKNOWLEDGMENTS

This research was supported by the National Institute of Education under contract US-NIE-C-400-76-0116 to the Center for the Study of Reading at the University of Illinois and Bolt Beranek and Newman, Inc., and by a contract from ATR International to SRI International.

References

[1] A. Chapanis, R. N. Parrish, R. B. Ochsman, and G. D. Weeks. Studies in interactive communication: II. The effects of four communication modes on the linguistic performance of teams during cooperative problem solving. Human Factors, 19(2):101-125, 1977.

[2] W. L. Chafe. Integration and involvement in speaking, writing, and oral literature. In D. Tannen, editor, Spoken and Written Language: Exploring Orality and Literacy, chapter 3, pages 35-53. Ablex Publishing Corp., Norwood, New Jersey, 1982.

[3] P. R. Cohen. The pragmatics of referring and the modality of communication. Computational Linguistics, 10(2):97-146, 1984.

[4] J. D. Gould, J. Conti, and T. Hovanyecz. Composing letters with a simulated listening typewriter. Communications of the ACM, 26(4):295-308, April 1983.

[5] B. J. Grosz and C. L. Sidner. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204, July-September 1986.

[6] A. G. Hauptmann and A. I. Rudnicky. Talking to computers: An empirical investigation. International Journal of Man-Machine Studies, 28:583-604, 1988.

[7] D. Hindle. Deterministic parsing of syntactic non-fluencies. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, pages 123-128, Cambridge, Massachusetts, June 1983.

[8] J. Hobbs. Coherence and coreference. Cognitive Science, 3(1):67-90, 1979.

[9] J. R. Hobbs, M. Stickel, P. Martin, and D. Edwards. Interpretation as abduction. In Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics, pages 95-103, Buffalo, New York, 1988.

[10] F. Jelinek. The development of an experimental discrete dictation recognizer. Proceedings of the IEEE, 73(11):1616-1624, November 1985.

[11] R. M. Krauss and P. D. Bricker. Effects of transmission delay and access delay on the efficiency of verbal communication. The Journal of the Acoustical Society of America, 41(2):286-292, 1967.

[12] R. M. Krauss, C. M. Garlock, P. D. Bricker, and L. E. McMahon.
The role of audible and visible back-channel responses in interpersonal communication. Journal of Personality and Social Psychology, 35(7):523-529, 1977.

[13] R. M. Krauss and S. Weinheimer. Concurrent feedback, confirmation, and the encoding of referents in verbal communication. Journal of Personality and Social Psychology, 4(3):343-346, 1966.

[14] D. J. Litman and J. F. Allen. Discourse processing and commonsense plans. In P. R. Cohen, J. Morgan, and M. E. Pollack, editors, Intentions in Communication. M.I.T. Press, Cambridge, Massachusetts, 1989.

[15] S. L. Oviatt and P. R. Cohen. Discourse structure and performance efficiency in interactive and noninteractive spoken modalities. Technical Report 454, Artificial Intelligence Center, SRI International, Menlo Park, California, 1988.

[16] S. L. Oviatt and P. R. Cohen. The contributing influence of speech and interaction on human discourse patterns. In J. W. Sullivan and S. W. Tyler, editors, Architectures for Intelligent Interfaces: Elements and Prototypes. Addison-Wesley Publishing Co., Menlo Park, California, 1989.

[17] J. Pierrehumbert. Automatic recognition of intonation patterns. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, pages 85-90, Cambridge, Massachusetts, June 1983.

[18] J. Pierrehumbert and J. Hirschberg. The meaning of intonational contours in the interpretation of discourse. In Intentions in Communication. Bradford Books, M.I.T. Press, Cambridge, Massachusetts, 1989.

[19] R. Reichman. Conversational coherency. Cognitive Science, 2(4):283-328, 1978.

[20] S. Siegel. Nonparametric Methods for the Behavioral Sciences. McGraw-Hill Publishing Co., New York, New York, 1956.

[21] A. F. van Katwijk, F. L. van Nes, H. C. Bunt, H. F. Muller, and F. F. Leopold. Naive subjects interacting with a conversing information system. IPO Annual Progress Report, Eindhoven, Netherlands, 14:105-112, 1979.

[22] A. Waibel. Prosody and Speech Recognition. Pitman Publishing, Ltd., London, U.K., 1988.

[23] W. Ward. Understanding spontaneous speech. In Proceedings of the DARPA Speech and Natural Language Workshop, February 1989. Morgan Kaufmann Publishers, Inc., Los Altos, California.
How to cover a grammar

René Leermakers
Philips Research Laboratories, P.O. Box 80.000
5600 JA Eindhoven, The Netherlands

ABSTRACT

A novel formalism is presented for Earley-like parsers. It accommodates the simulation of non-deterministic pushdown automata. In particular, the theory is applied to non-deterministic LR-parsers for RTN grammars.

1 Introduction

A major problem of computational linguistics is the inefficiency of parsing natural language. The most popular parsing method for context-free natural language grammars is the general context-free parsing method of Earley [1]. It was noted by Lang [2] that Earley-like methods can be used for simulating a class of non-deterministic pushdown automata (NPDA). Recently, Tomita [3] presented an algorithm that simulates non-deterministic LR-parsers, and claimed it to be a fast algorithm for practical natural language processing systems. The purpose of the present paper is threefold:

1 A novel formalism is presented for Earley-like parsers. A key role herein is played by the concept of bilinear grammars. These are defined as context-free grammars that satisfy the constraint that the right hand side of each grammar rule have at most two non-terminals. The construction of parse matrices for bilinear grammars can be accomplished in cubic time, by an algorithm called C-parser. It includes an elegant way to represent the (possibly infinite) set of parse trees. A case in point is the use of predict functions, which impose restrictions on the parse matrix, if part of it is known. The exact form and effectiveness of predict functions depend on the bilinear grammar at hand. In order to parse a general context-free grammar G, a possible strategy is to define a cover for G that satisfies the bilinear grammar constraint, and subsequently parse it with C-parser using appropriate predict functions. The resulting parsers are named Earley-like, and differ only in the precise description for deriving covers, and predict functions.

2 We present the Lang algorithm by giving a bilinear grammar corresponding to an NPDA. Employing the correct predict functions, the parser for this grammar is equivalent to Lang's algorithm, although it works for a slightly different class of NPDA's. We show that simulation of non-deterministic LR-parsers can be performed in our version of the Lang framework. It follows that Earley-like Tomita parsers can handle all context-free grammars, including cyclic ones, although Tomita suggested differently [3].

3 The formalism is illustrated by applying it to Recursive Transition Networks (RTN) [8]. Applying the techniques of deterministic LR-parsing to grammars written as RTN's has been the subject of recent studies [9,10]. Using this research, we show how to construct efficient non-deterministic LR-parsers for RTN's.

2 C-Parser

The simplest parser that is applicable to all context-free languages is the well-known Cocke-Younger-Kasami (CYK) parser. It requires the grammar to be cast in Chomsky normal form. The CYK parser constructs, for the sentence x1..xn, a parse matrix T. To each part x_{i+1}..x_j of the input corresponds the matrix element T_ij, the value of which is a set of non-terminals from which one can derive x_{i+1}..x_j. The algorithm can easily be generalized to work for any grammar, but its complexity then increases with the number of non-terminals at the right hand side of grammar rules. Bilinear grammars have the lowest complexity, disregarding linear grammars, which do not have the generative power of general context-free grammars. Below we list the recursion relation T must satisfy for general bilinear grammars. We write the grammar as a four-tuple (N, Σ, P, S), where N is the set of non-terminals, Σ the set of terminals, P the set of production rules, and S ∈ N the start symbol. We use variables I, J, K, L ∈ N, β1, β2, β3 ∈ Σ*, and i, j, k1..k4 as indices of the matrix T.[1]

I ∈ T_ij ≡ ∃ J,K ∈ N, i ≤ k1 ≤ k2 ≤ k3 ≤ k4 ≤ j (J ∈ T_k1k2 ∧ K ∈ T_k3k4 ∧ I → β1 J β2 K β3 ∧ β1 = x_{i+1}..x_{k1} ∧ β2 = x_{k2+1}..x_{k3} ∧ β3 = x_{k4+1}..x_j)
∨ ∃ J ∈ N, i ≤ k1 ≤ k2 ≤ j (J ∈ T_k1k2 ∧ I → β1 J β2 ∧ β1 = x_{i+1}..x_{k1} ∧ β2 = x_{k2+1}..x_j)
∨ (I → β1 ∧ β1 = x_{i+1}..x_j)

[1] Throughout the paper we identify a grammar rule I → α with the boolean expression 'I directly derives α'.

The relation can be solved for the diagonal elements T_ii independently of the input sentence. They are equal to the set of non-terminals that derive ε in one or more steps.
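The paper contains no code, but the relation above can be read directly as a recognizer. The following Python sketch is illustrative only: rules are (lhs, rhs) pairs with at most two non-terminals in each right hand side, and for simplicity it assumes a grammar without ε-rules, so the diagonal elements stay empty (unit chains within a single span would need iteration to a fixed point, omitted here).

def c_parse(rules, nonterminals, x):
    n = len(x)
    T = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for width in range(1, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for lhs, rhs in rules:
                if derives(rhs, i, j, T, x, nonterminals):
                    T[i][j].add(lhs)
    return T

def derives(rhs, i, j, T, x, N):
    # Can rhs produce x[i:j], matching terminals directly and
    # non-terminals against already-computed smaller entries of T?
    if not rhs:
        return i == j
    sym, rest = rhs[0], rhs[1:]
    if sym not in N:           # terminal: consume one input symbol
        return i < j and x[i] == sym and derives(rest, i + 1, j, T, x, N)
    return any(sym in T[i][k] and derives(rest, k, j, T, x, N)
               for k in range(i + 1, j + 1))

rules = [("S", ("A", "B")), ("A", ("a",)), ("B", ("b", "B")), ("B", ("b",))]
T = c_parse(rules, {"S", "A", "B"}, "abb")
print("S" in T[0][3])   # True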
Bilinear grammars have the lowest complexity, disregarding linear grammars, which do not have the generative power of general context-free grammars. Below we list the recursion relation T must satisfy for general bilinear grammars. We write the grammar as a four-tuple $(N, \Sigma, P, S)$, where N is the set of non-terminals, $\Sigma$ the set of terminals, P the set of production rules, and $S \in N$ the start symbol. We use variables $I, J, K, L \in N$, $\beta_1, \beta_2, \beta_3 \in \Sigma^*$, and $i, j, k_1 \ldots k_4$ as indices of the matrix T. (Throughout the paper we identify a grammar rule $I \to \beta$ with the boolean expression "I directly derives $\beta$".)

$I \in T_{ij} \equiv \exists_{J,K \in N,\, i \le k_1 \le k_2 \le k_3 \le k_4 \le j} (J \in T_{k_1 k_2} \wedge K \in T_{k_3 k_4} \wedge I \to \beta_1 J \beta_2 K \beta_3 \wedge \beta_1 = x_{i+1} \ldots x_{k_1} \wedge \beta_2 = x_{k_2+1} \ldots x_{k_3} \wedge \beta_3 = x_{k_4+1} \ldots x_j)$
$\vee\ \exists_{J \in N,\, i \le k_1 \le k_2 \le j} (J \in T_{k_1 k_2} \wedge I \to \beta_1 J \beta_2 \wedge \beta_1 = x_{i+1} \ldots x_{k_1} \wedge \beta_2 = x_{k_2+1} \ldots x_j)$
$\vee\ (I \to \beta_1 \wedge \beta_1 = x_{i+1} \ldots x_j)$

The relation can be solved for the diagonal elements $T_{ii}$ independently of the input sentence. They are equal to the set of non-terminals that derive $\epsilon$ in one or more steps. Algorithms that construct T for given input will be referred to as C-parsers. The time needed for constructing T is at most a cubic function of the input length n, while it takes an amount of space that is a quadratic function of n. The sentence is successfully parsed if $S \in T_{0n}$.
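The recursion relation above can be turned directly into a small recognizer. The following sketch is our own illustration (function and variable names are not from the paper): a grammar maps each non-terminal to a list of right-hand sides, each a tuple of symbols, and the diagonal elements are computed first as the set of non-terminals deriving the empty string.

```python
def nullable_set(grammar):
    """Non-terminals deriving the empty string: the diagonal elements T_ii,
    which are independent of the input sentence."""
    nullable, changed = set(), True
    while changed:
        changed = False
        for lhs, rhss in grammar.items():
            if lhs not in nullable and \
               any(all(s in nullable for s in rhs) for rhs in rhss):
                nullable.add(lhs)
                changed = True
    return nullable

def matches(rhs, i, j, T, x, nonterms):
    """Can the symbol sequence rhs derive x[i:j], given parse matrix T?"""
    if not rhs:
        return i == j
    head, rest = rhs[0], rhs[1:]
    if head not in nonterms:                      # terminal symbol
        return i < j and x[i] == head and matches(rest, i + 1, j, T, x, nonterms)
    # non-terminal: try every split point; the bilinear constraint
    # (at most two non-terminals per rule) keeps the whole parser cubic
    return any(head in T[i][k] and matches(rest, k, j, T, x, nonterms)
               for k in range(i, j + 1))

def c_parse(grammar, x):
    """Fill the matrix T bottom-up over span lengths; x is a list of terminals.
    The sentence is recognized iff the start symbol is in T[0][len(x)]."""
    nonterms = set(grammar)
    n = len(x)
    T = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    diag = nullable_set(grammar)
    for i in range(n + 1):
        T[i][i] = set(diag)
    for span in range(1, n + 1):
        for i in range(n - span + 1):
            j = i + span
            changed = True                        # iterate for unit-rule chains
            while changed:
                changed = False
                for lhs, rhss in grammar.items():
                    if lhs not in T[i][j] and \
                       any(matches(r, i, j, T, x, nonterms) for r in rhss):
                        T[i][j].add(lhs)
                        changed = True
    return T
```

For instance, with the bilinear grammar g = {'S': [('S', 'S'), ('a',)]}, the call c_parse(g, ['a', 'a', 'a']) yields a matrix with 'S' in T[0][3].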
From T, one can simply deduce an output grammar O, which represents the set of parse trees. Its non-terminals are triples $<I, i, j>$, where I is a non-terminal of the original bilinear grammar, and i, j are integers between 0 and n.

$<I, i, j> \to \beta_1 <J, k_1, k_2> \beta_2 <K, k_3, k_4> \beta_3 \equiv I \in T_{ij} \wedge I \to \beta_1 J \beta_2 K \beta_3 \wedge J \in T_{k_1 k_2} \wedge K \in T_{k_3 k_4} \wedge \beta_1 = x_{i+1} \ldots x_{k_1} \wedge \beta_2 = x_{k_2+1} \ldots x_{k_3} \wedge \beta_3 = x_{k_4+1} \ldots x_j$
$<I, i, j> \to \beta_1 <J, k_1, k_2> \beta_2 \equiv I \in T_{ij} \wedge I \to \beta_1 J \beta_2 \wedge J \in T_{k_1 k_2} \wedge \beta_1 = x_{i+1} \ldots x_{k_1} \wedge \beta_2 = x_{k_2+1} \ldots x_j$
$<I, i, j> \to \beta_1 \equiv I \in T_{ij} \wedge I \to \beta_1 \wedge \beta_1 = x_{i+1} \ldots x_j$

The grammar rules of O are such that they generate only the sentence that was parsed. The parse trees according to the output grammar are isomorphic to the parse trees generated by the original grammar. The latter parse trees can be obtained from the former by replacing the triple non-terminals by their first element.

Matrix elements of T are such that their members cover part of the input. This does not imply that all members are useful for constructing a possible parse of the input as a whole. In fact, many are useless for this purpose. Depending on the grammar, knowledge of part of T may give restrictions on the possibly useful contents of the rest of T. Making use of these restrictions, one may get more efficient parsers with the same functionality. As an example, one has the generalized Earley prediction. It involves functions $predict_j : 2^N \to 2^N$ (N is the set of non-terminals), such that one can prove that the useful contents of the $T_{ij}$ are contained in the elements of a matrix $\theta$ related to T by

$\theta_{00} = \theta^c \cap T_{00}$, and $\theta_{ij} = predict_{j-1}(\bigcup_k \theta_{k,j-1}) \cap T_{ij}$ if $j > 0$,

where $\theta^c$, called the initial prediction, is some constant set of non-terminals that derive $\epsilon$. It follows that $T_{ij}$ can be calculated from the matrix elements $\theta_{kl}$ with $i \le k$, $l \le j$, i.e. the occurrences of T at the right-hand side of the recurrence relation may be replaced by $\theta$. Hence $\theta_{ij}$, $j > 0$, can be calculated from the matrix elements $\theta_{kl}$ with $l \le j$:

$\theta_{ij} = predict_{j-1}(\bigcup_k \theta_{k,j-1}) \cap \{I \mid \exists_{J,K \in N,\, i \le k_1 \le k_2 \le k_3 \le k_4 \le j} (J \in \theta_{k_1 k_2} \wedge K \in \theta_{k_3 k_4} \wedge I \to \beta_1 J \beta_2 K \beta_3 \wedge \beta_1 = x_{i+1} \ldots x_{k_1} \wedge \beta_2 = x_{k_2+1} \ldots x_{k_3} \wedge \beta_3 = x_{k_4+1} \ldots x_j) \vee \exists_{J \in N,\, i \le k_1 \le k_2 \le j} (J \in \theta_{k_1 k_2} \wedge I \to \beta_1 J \beta_2 \wedge \beta_1 = x_{i+1} \ldots x_{k_1} \wedge \beta_2 = x_{k_2+1} \ldots x_j) \vee (I \to \beta_1 \wedge \beta_1 = x_{i+1} \ldots x_j)\}$

The algorithm that creates the matrix $\theta$ in this way, scanning the input from left to right, is called a restricted C-parser. The above relation does not determine the diagonal elements of $\theta$ uniquely, and a restricted C-parser is to find the smallest solution. Concerning the gain of efficiency, it should be noted that this is very grammar-dependent. For some grammars, restriction of the parser reduces its complexity, while for others predict functions may even be counter-productive [4].

3 Bilinear covers

A grammar G is said to be covered by a grammar C(G) if the language generated by both grammars is identical, and if for each sentence the set of parse trees generated by G can be recovered from the set of parse trees generated by C(G). The grammar C(G) is called a cover for G, and we will be interested in covers that are bilinear, and can thus be parsed by C-parser. It is rather surprising that at the heart of most parsing algorithms for context-free languages lies a method for deriving a bilinear cover.

3.1 Earley's method

Earley's construction of items is a clear example of a construction of a bilinear cover $C_E(G)$ for each context-free grammar G. The terminals of $C_E(G)$ and G are identical; the non-terminals of $C_E(G)$ are the items (dotted rules [1]) $I_k^i$, defined as follows. Let the non-terminal defined by rule i of grammar G be given by $N_i$; then $I_k^i$ is $N_i \to \alpha \cdot \beta$, with $|\alpha| + 1 = k$ ($\alpha$, $\beta$ are used for sequences of terminals and non-terminals). We assume that only one rule, rule 0, of G rewrites the start symbol S. The length of the right-hand side of rule i is given by $M_i - 1$. The rules of $C_E(G)$ are derived as follows.

• Let $I_k^i$ be an item of the form $A \to \alpha \cdot B\beta$, and hence $I_{k+1}^i$ be $A \to \alpha B \cdot \beta$. Then if B is a terminal, $I_{k+1}^i \to I_k^i B$, and if B is a non-terminal then $I_{k+1}^i \to I_k^i I_0^j$, for all j such that $N_j = B$.
• Initial items of the form $N_i \to \cdot\alpha$ rewrite to $\epsilon$.
• For each i one has the final rule $I_0^i \to I_{M_i}^i$.

In [4] a similar construction was given, leading to a grammar in canonical two-form for each context-free grammar. Among other things it differs from the above in the appearance of the final rules, which are indeed superfluous. We have introduced them to make the extension to RTN's, in section 4, more immediate. The description just given yields a set of production rules consisting of sections $P_i$ that have the following structure:

$P_i = \bigcup_{k=2}^{M_i} \{I_k^i \to I_{k-1}^i x_k^i\} \cup \{I_1^i \to \epsilon\} \cup \{I_0^i \to I_{M_i}^i\}$,

where $x_k^i \in \bigcup_j \{I_0^j\} \cup \Sigma$. Note that the start symbol of the cover is $I_0^0$. The construction of parse matrices T by C-parser yields the Earley algorithm, without its prediction part. By restricting the parser by the $predict_0$ function satisfying

$predict_0(W) = \{I_1^j \mid \exists_{i,l}(I_l^i \in W \wedge I_{l+1}^i \to I_l^i I_0^j)\}$,

the initial prediction $\theta^c$ being the smallest solution of

$\theta^c = predict_0(\theta^c \cup \{I_1^0\})$,

one obtains a conventional Earley parser ($predict_k$ is defined likewise for $k > 0$). The cover is such that usually the predict action speeds up the parser considerably.
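To make the item construction concrete, here is a small sketch (ours; names and encoding are not from the paper) of the $C_E(G)$ construction just described. Items $I_k^i$ are encoded as ('I', i, k), and ('I', i, 0) plays the rôle of the final non-terminal $I_0^i$.

```python
def earley_cover(rules):
    """rules: list of (lhs, rhs) pairs with rhs a tuple of symbols; rule 0 is
    the only rule rewriting the start symbol.  Returns a bilinear cover as a
    dict from cover non-terminals to lists of right-hand sides."""
    nonterms = {lhs for lhs, _ in rules}
    cover = {}
    def add(lhs, rhs):
        cover.setdefault(lhs, []).append(tuple(rhs))
    for i, (lhs, rhs) in enumerate(rules):
        add(('I', i, 1), ())                       # initial item rewrites to epsilon
        for k, sym in enumerate(rhs, start=1):     # dot after k symbols: item k+1
            if sym in nonterms:
                for j, (lhs2, _) in enumerate(rules):
                    if lhs2 == sym:                # expected non-terminal: use the
                        add(('I', i, k + 1), (('I', i, k), ('I', j, 0)))
            else:
                add(('I', i, k + 1), (('I', i, k), sym))
        add(('I', i, 0), (('I', i, len(rhs) + 1),))  # final rule
    return cover
```

Feeding this cover, with ('I', 0, 0) as start symbol, to the c_parse sketch above yields an Earley-style recognizer without the prediction part.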
There are many ways to define covers with dotted rules as non-terminals. For example, from recent work by Kruseman Aretz [6], we learn a prescription for a bilinear cover for G which is smaller in size compared to $C_E(G)$, at the cost of rules with longer right-hand sides. The prescription is as follows ($\alpha$, $\beta$, $\gamma$, $\zeta$ are sequences of terminals and non-terminals, $\delta$ stands for sequences of terminals only, and A, B, C are non-terminals):

• Let I be an item of the form $A \to \alpha \cdot B\zeta$, and K an item $B \to \gamma\cdot$; then $J \to I K \delta$, where either J is the item $A \to \alpha B\delta \cdot C\zeta'$ and $\zeta = \delta C\zeta'$, or J is the item $A \to \alpha B\delta\cdot$ and $\zeta = \delta$.
• Let I be an item of the form $A \to \delta \cdot B\alpha$ or $A \to \delta\cdot$; then $I \to \delta$.

3.2 Lang grammar

In a similar fashion the items used by Lang [2] in his algorithm for non-deterministic pushdown automata (NPDA) may be interpreted as non-terminals of a bilinear grammar, which we will call the Lang grammar. We adopt restrictions on NPDA's similarly to [2], the main one being that one or two symbols be pushed on the stack in a single move, and each stack symbol is removed when it is read. If two symbols are pushed on the stack, the bottom one must be identical to the symbol that is removed in the same transition. Formally we write an NPDA as a 7-tuple $(Q, \Sigma, \Gamma, \delta, q_0, \xi_0, F)$, where Q is the set of state symbols, $\Sigma$ the input alphabet, $\Gamma$ the pushdown symbols, $\delta : Q \times (\Gamma \cup \{\epsilon\}) \times (\Sigma \cup \{\epsilon\}) \to 2^{Q \times (\{\epsilon\} \cup \Gamma \cup (\Gamma \times \Gamma))}$ the transition function, $q_0 \in Q$ the initial state, $\xi_0 \in \Gamma$ the start symbol, and $F \subseteq Q$ the set of final states. If the automaton is in state p, $\alpha$ is the top of the stack, and the current symbol on the input tape is y, then it may make the following eight types of moves:

if $(r, \epsilon) \in \delta(p, \epsilon, \epsilon)$: go to state r
if $(r, \epsilon) \in \delta(p, \alpha, \epsilon)$: pop $\alpha$, go to state r
if $(r, \gamma) \in \delta(p, \alpha, \epsilon)$: pop $\alpha$, push $\gamma$, go to state r
if $(r, \epsilon) \in \delta(p, \epsilon, y)$: shift input tape, go to state r
if $(r, \gamma) \in \delta(p, \epsilon, y)$: push $\gamma$, shift tape, go to r
if $(r, \epsilon) \in \delta(p, \alpha, y)$: pop $\alpha$, shift tape, go to r
if $(r, \gamma) \in \delta(p, \alpha, y)$: pop $\alpha$, push $\gamma$, shift tape, go to r
if $(r, \gamma\alpha) \in \delta(p, \alpha, y)$: push $\gamma$, shift tape, go to r

We do not allow transitions such that $(r, \gamma) \in \delta(p, \epsilon, \epsilon)$ or $(r, \gamma\alpha) \in \delta(p, \alpha, \epsilon)$, and assume that the initial state cannot be reached from other states. The non-terminals of the Lang grammar are the start symbol S and four-tuple entities (Lang's 'items') of the form $<q, \alpha, p, \beta>$, where p and q are states, and $\alpha$ and $\beta$ stack symbols. The idea is that iff there exists a computation that consumes input symbols $x_{i+1} \ldots x_j$, starting at state p with a stack $\beta\varphi$ (the leftmost symbol is the top), and ending in state q with stack $\alpha\beta\varphi$, and if the stack $\beta\varphi$ does not re-occur in intermediate configurations, then $<q, \alpha, p, \beta> \to^* x_{i+1} \ldots x_j$. The rewrite rules of the Lang grammar are defined as follows (universal quantification over $p, q, r, s \in Q$; $\alpha, \beta, \gamma \in \Gamma$; $x \in \Sigma \cup \{\epsilon\}$, $y \in \Sigma$ is understood):

$S \to <p, \alpha, q_0, \xi_0> \equiv p \in F$ (final rules)
$<r, \beta, s, \gamma> \to <q, \beta, s, \gamma> <p, \alpha, q, \beta>\, x \equiv (r, \epsilon) \in \delta(p, \alpha, x)$
$<r, \gamma, q, \beta> \to <p, \alpha, q, \beta>\, x \equiv ((r, \gamma) \in \delta(p, \alpha, x)) \vee ((r, \epsilon) \in \delta(p, \epsilon, x) \wedge (\alpha = \gamma))$
$<r, \gamma, p, \alpha> \to y \equiv ((r, \gamma) \in \delta(p, \epsilon, y)) \vee ((r, \gamma\alpha) \in \delta(p, \alpha, y))$
$<q_0, \xi_0, q_0, \xi_0> \to \epsilon$ (initial rule)

From each NPDA one may deduce context-free grammars that generate the same language [5]. The above construction yields such a grammar in bilinear form. It only works for automata that have transitions like we use above. Lang grammars are rather big, in the rough form given above. Many of the non-terminals do not occur, however, in the derivation of any sentence. They can be removed by a standard procedure [5]. In addition, during parsing, predict functions can be used to limit the number of possible contents of parse matrix elements.
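The construction above is mechanical enough to write down. The sketch below is our own illustration and follows the rule schemas exactly as we have reconstructed them here (the encoding, names, and the treatment of quantified variables are ours, not the paper's); useless non-terminals are left in, to be removed by the standard procedure.

```python
from itertools import product

def lang_grammar(Q, Gamma, Sigma, delta, q0, X0, F):
    """Bilinear (Lang) grammar from an NPDA.  delta maps
    (p, popped-or-None, scanned-or-None) to a set of (r, pushed) moves,
    pushed being (), (g,) or (g, popped).  Items are tuples
    ('it', q, a, p, b); returns a list of (lhs, rhs) rules."""
    it = lambda q, a, p, b: ('it', q, a, p, b)
    rules = [(it(q0, X0, q0, X0), ())]                 # initial rule
    for p, a in product(F, Gamma):                     # final rules
        rules.append(('S', (it(p, a, q0, X0),)))
    for (p, a, y), moves in delta.items():
        x = () if y is None else (y,)
        for r, pushed in moves:
            if a is not None and pushed == ():         # pop a
                for q, s, b, g in product(Q, Q, Gamma, Gamma):
                    rules.append((it(r, b, s, g),
                                  (it(q, b, s, g), it(p, a, q, b)) + x))
            elif a is not None and len(pushed) == 1:   # replace a by g
                g = pushed[0]
                for q, b in product(Q, Gamma):
                    rules.append((it(r, g, q, b), (it(p, a, q, b),) + x))
            elif a is None and pushed == ():           # change state only
                for q, b, g in product(Q, Gamma, Gamma):
                    rules.append((it(r, g, q, b), (it(p, g, q, b),) + x))
            elif len(pushed) == 1:                     # push g, scan y
                g = pushed[0]
                for b in Gamma:
                    rules.append((it(r, g, p, b), x))
            else:                                      # push g above popped a
                g = pushed[0]
                rules.append((it(r, g, p, a), x))
    return rules
```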
The following initial prediction and predict functions render the restricted C-parser functionally equivalent to Lang's original algorithm, albeit that Lang considered a class of NPDA's which is slightly different from the class we alluded to above:

$\theta^c = \{<q_0, \xi_0, q_0, \xi_0>\}$
$predict_k(L) = \emptyset$ if $k = 0$, else
$predict_k(L) = \{<s, \alpha, q, \beta> \mid \exists_{r,\gamma}\ <q, \beta, r, \gamma> \in L\} \cup \{S \mid k = n\}$

(n is the sentence length). The Tomita parser [3] simulates an NPDA, constructed from a context-free grammar via LR-parsing tables. Within our formalism we can implement this idea, and arrive at an Earley-like version of the Tomita parser, which is able to handle general context-free grammars, including cyclic ones.

4 Extension to RTN's

In the preceding section we discussed various ways of deriving bilinear covers. Reversely, one may try to discover what kinds of grammars are covered by certain bilinear grammars. A bilinear grammar $C_E(G)$, generated from a context-free grammar by the Earley prescription, has peculiar properties. In general, the sections $P_i$ defined above constitute regular subgrammars, with the $I_0^j$ as terminals. Alternatively, $P_i$ may be seen as a finite state automaton with states $I_k^i$. Each rule $I_k^i \to I_{k-1}^i x_k^i$ corresponds to a transition from $I_{k-1}^i$ to $I_k^i$ labeled by $x_k^i$. This correspondence between regular grammars and finite state automata is in fact a special instance of the correspondence between Lang bilinear grammars and NPDA's. The $P_i$ of the above kind are very restricted finite state automata, generating only one string. It is a natural step to remove this restriction and study covers that are the union of general regular subgrammars. Such a grammar will cover a grammar consisting of rules of the form $N_i \to \xi$, where $\xi$ is a regular expression of terminals and non-terminals. Such grammars go under the names of RTN grammars [8], or extended context-free grammars [9], or regular right part grammars [10]. Without loss of generality we may restrict the format of the finite state automata, and stipulate that each have one initial state $I_i^{M_i}$ and one final state $I_i^0$, and only the following types of rules:

• final rules $I_i^0 \to I_i^j$
• rules $I_i^j \to I_i^k x$, where $x \in \bigcup_m \{I_m^0\} \cup \Sigma$, $k \ne 0$ and $j \ne M_i$
• the initial rule $I_i^{M_i} \to \epsilon$

For future reference we define the set I of non-terminals as $I = \bigcup_{i,j} \{I_i^j\}$, and its subset $I^0 = \bigcup_i \{I_i^0\}$. A covering prescription that turns an RTN into a set of such subgrammars reduces to $C_E$ if applied to normal context-free grammars, and will be referred to by the same name, although in general the above format does not determine the cover uniquely. For some example definitions of items for RTN's (i.e. the $I_i^j$), see [1,9].

5 The CNLR Cover

A different cover for RTN grammars may be derived from the one discussed in the previous section. So our starting point is that we have a bilinear grammar $C_E(G)$, consisting of regular subgrammars. We (approximately) follow the idea of Tomita, and construct an NPDA from an LR(0)-automaton, whose states are sets of items. In our case, the items are the non-terminals of $C_E(G)$. The full specification of the automaton is extracted from [9] in a straightforward way. Subsequently, the general prescription of chapter 3 yields a bilinear grammar. In this way we arrive at what we would like to call the canonical non-deterministic LR-parser (CNLR parser, for short).

5.1 LR(0) states

In order to derive the set Q of LR(0) states, which are subsets of I, we first need a few definitions.
Let s be an element of $2^I$; then closure(s) is the smallest element of $2^I$ such that

$s \subseteq closure(s) \wedge ((I_i^k \in closure(s) \wedge (I_i^j \to I_i^k I_m^0)) \Rightarrow I_m^{M_m} \in closure(s))$

Similarly, the sets $goto_1(s, x)$ and $goto_2(s, x)$, where $x \in I^0 \cup \Sigma$, are defined as

$goto_1(s, x) = closure(\{I_i^j \mid I_i^k \in s \wedge (I_i^j \to I_i^k x) \wedge k \ne M_i\})$
$goto_2(s, x) = closure(\{I_i^j \mid I_i^{M_i} \in s \wedge (I_i^j \to I_i^{M_i} x)\})$

The set Q then is the smallest one that satisfies

$closure(\{I_0^{M_0}\}) \in Q \wedge (s \in Q \Rightarrow (goto_1(s, x) = \emptyset \vee goto_1(s, x) \in Q) \wedge (goto_2(s, x) = \emptyset \vee goto_2(s, x) \in Q))$

The automaton we look for can be constructed in terms of the LR(0) states. In addition to the goto functions, we will need the predicate reduce, defined by

$reduce(s, x) \equiv \exists_{i,j}((x = I_i^0) \wedge (I_i^0 \to I_i^j) \wedge I_i^j \in s)$

A point of interest is the possible existence of stacking conflicts [9]. These arise if for some s, x both $goto_1(s, x)$ and $goto_2(s, x)$ are not empty. Stacking conflicts cause an increase of non-determinism that can always be avoided by removing the conflicts. One method for doing this has been detailed in [9], and consists of the splitting in parts of the right-hand sides of grammar rules that cause conflicts. Here we need not and will not assume anything about the occurrence of stacking conflicts. Grammars whose Earley covers do not give rise to stacking conflicts form a proper subset of the set of extended context-free grammars. It could very well be that natural language grammars, written as RTN's in order to produce 'natural' syntax trees, generally belong to this subset. For an example, see section 6.
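The closure and goto definitions of section 5.1 translate into a few lines of code. The sketch below is our own rendering (data layout and names are assumptions, not the paper's): a subgrammar is given by its transitions, mapping a state to its outgoing (label, target) arcs, plus a map from each final symbol $I_m^0$ to the initial state $I_m^{M_m}$ of subgrammar m.

```python
def closure(s, trans, init_of):
    """Whenever a state in the set has an outgoing transition labelled by a
    final symbol I_m^0, add the initial state I_m^{M_m} of subgrammar m."""
    out, work = set(s), list(s)
    while work:
        for label, _ in trans.get(work.pop(), ()):
            init = init_of.get(label)
            if init is not None and init not in out:
                out.add(init)
                work.append(init)
    return frozenset(out)

def goto(s, x, trans, init_of, initials):
    """goto1/goto2: transitions on x out of non-initial (goto1) versus
    initial (goto2) states of s."""
    g1 = {t for st in s if st not in initials
          for label, t in trans.get(st, ()) if label == x}
    g2 = {t for st in s if st in initials
          for label, t in trans.get(st, ()) if label == x}
    return closure(g1, trans, init_of), closure(g2, trans, init_of)

def lr0_states(start_init, trans, init_of, initials, labels):
    """The set Q: start from closure({I_0^{M_0}}) and close under non-empty
    goto1/goto2 images."""
    q0 = closure({start_init}, trans, init_of)
    Q, work = {q0}, [q0]
    while work:
        s = work.pop()
        for x in labels:
            for t in goto(s, x, trans, init_of, initials):
                if t and t not in Q:
                    Q.add(t)
                    work.append(t)
    return q0, Q
```

A state set for which both goto images are non-empty on the same label exhibits exactly the stacking conflict discussed above.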
5.2 The automaton

To determine the automaton we specify, in addition to the set of states Q, the set of stack symbols $\Gamma = Q \cup I^0 \cup \{\xi_0\}$, the initial state $q_0 = closure(\{I_0^{M_0}\})$, the final states $F = \{s \mid reduce(s, I_0^0)\}$, and the transition function $\delta$:

$\delta(s, \gamma, y) = \{(t, s\gamma) \mid \gamma \in \Gamma \wedge t = goto_2(s, y)\} \cup \{(t, \gamma) \mid \gamma \in \Gamma \wedge t = goto_1(s, y)\}$ for $y \in \Sigma$
$\delta(s, \gamma, \epsilon) = \{(t, \epsilon) \mid \gamma \in I^0 \wedge t = goto_1(s, \gamma)\} \cup \{(t, s) \mid \gamma \in I^0 \wedge t = goto_2(s, \gamma)\} \cup \{(\gamma, x) \mid \gamma \in Q \wedge reduce(s, x)\}$

5.3 The grammar

From the automaton, which is of the type discussed in section 3.2, we deduce the bilinear grammar

$S \to <s, \alpha, q_0, \xi_0> \equiv reduce(s, I_0^0)$
$<t, \gamma, q, \beta> \to <s, \gamma, q, \beta>\, y \equiv t = goto_1(s, y)$
$<t, s, s, \gamma> \to y \equiv t = goto_2(s, y)$
$<t, \beta, p, \alpha> \to <q, \beta, p, \alpha> <s, I_i^0, q, \beta> \equiv t = goto_1(s, I_i^0)$
$<t, s, q, \beta> \to <s, I_i^0, q, \beta> \equiv t = goto_2(s, I_i^0)$
$<p, I_i^0, q, \beta> \to <s, p, q, \beta> \equiv reduce(s, I_i^0)$
$<q_0, \xi_0, q_0, \xi_0> \to \epsilon$,

where $s, t, q, p \in Q$, $\gamma \in Q \cup \{\xi_0\}$, $\alpha, \beta \in \Gamma$, $y \in \Sigma$. As was mentioned in section 3.2, this grammar can be reduced by a standard algorithm to contain only useful non-terminals.

5.3.1 A reduced form

If the reduction algorithm of [5] is performed, it turns out that the structure of the above grammar is such that useful non-terminals $<p, \alpha, q, \beta>$ satisfy $\alpha \in Q \Rightarrow \alpha = q$ and $\alpha \notin Q \Rightarrow p = q$. Furthermore, two non-terminals that differ only in their fourth tuple-element always derive the same strings of terminals. Hence, the fourth element can safely be discarded, as can the second if it is in Q and the first if the second is not in Q. The non-terminals then become pairs $<\alpha, s>$, with $\alpha \in \Gamma$ and $s \in Q$. For such non-terminals, the predict functions, mentioned in section 2, must be changed:

$\theta^c = \{<\xi_0, q_0>\}$
$predict_k(L) = \emptyset$ if $k = 0$, else $predict_k(L) = \{<\alpha, s> \mid \exists_q\ <s, q> \in L\} \cup \{S \mid k = n\}$

The grammar gets the general form

$S \to <s, q_0> \equiv reduce(s, I_0^0)$
$<t, q> \to <s, q>\, y \equiv t = goto_1(s, y)$
$<t, s> \to y \equiv t = goto_2(s, y)$
$<t, q> \to <s, q> <I_i^0, s> \equiv t = goto_1(s, I_i^0)$
$<t, s> \to <I_i^0, s> \equiv t = goto_2(s, I_i^0)$
$<I_i^0, q> \to <s, q> \equiv reduce(s, I_i^0)$

Note that the non-terminal $<\xi_0, q_0>$ does not appear in this grammar, but will appear in the parse matrix because of the initial prediction $\theta^c$. Of course, when the automaton is fully specified for a particular language, the corresponding CNLR grammar can be reduced still further, see section 6.4.

5.3.2 Final form

Even the grammar in reduced form contains many non-terminals that derive the same set of strings. In particular, all non-terminals that only differ in their second component generate the same language. Thus, the second component only encodes information for the predict functions. The redundancy can be removed by the following means. Define the function $\sigma : \Gamma \to 2^Q$, such that $\sigma(\alpha) = \{s \mid <\alpha, s>$ is a useful non-terminal of the above grammar$\}$. Then we may simply parse with the 'bare' grammar, the non-terminals of which are the automaton stack symbols $\Gamma$:

$S \to s \equiv reduce(s, I_0^0)$
$t \to s\, y \equiv t = goto_1(s, y)$
$t \to y \equiv \exists_s(t = goto_2(s, y))$
$t \to s\, I_i^0 \equiv t = goto_1(s, I_i^0)$
$t \to I_i^0 \equiv \exists_s(t = goto_2(s, I_i^0))$
$I_i^0 \to s \equiv reduce(s, I_i^0)$,

using the predict functions

$\theta^c = \{q_0\}$
$predict_k(L) = \emptyset$ if $k = 0$, else $predict_k(L) = \{\alpha \mid \exists_s(s \in L \wedge s \in \sigma(\alpha))\} \cup \{S \mid k = n\}$.

The function $\sigma$ can also be deduced directly from the bare grammar, see section 7.

5.4 Parse trees

Each parse tree $\tau$ according to the original grammar can be obtained from a corresponding parse tree t according to the cover. Each subset of the set of nodes of t is partially ordered by the relation 'is descendant of'. Now consider the set of nodes of t that correspond to non-terminals $I_i^0$. The 'is descendant of' ordering defines a projected tree that contains, apart from the terminals, only these nodes. The desired parse tree $\tau$ is now obtained by replacing, in the projected tree, each node $I_i^0$ by a node labeled by $N_i$, the left-hand side of grammar rule i of the original grammar.

6 Example

The foregoing was rather technical and we will try to repair this by showing, very explicitly, how the formalism works for a small example grammar. In particular, we will, for a small RTN grammar, derive the Earley cover of section 4, and the two covers of sections 5.3.1 and 5.3.2.

6.1 The grammar

The following is a simple grammar for finite subordinate clauses in Dutch.

S → conj NP VP
VP → [NP] {PP} verb [S]
PP → prep NP
NP → det noun {PP}

So we have four regular expressions defining $N_0 = S$, $N_1 = VP$, $N_2 = PP$, $N_3 = NP$.

6.2 The Earley cover

The above grammar is covered by four regular subgrammars:

$P_0$: $I_0^0 \to I_0^1$; $I_0^1 \to I_0^2 I_1^0$; $I_0^2 \to I_0^3 I_3^0$; $I_0^3 \to I_0^4\, conj$; $I_0^4 \to \epsilon$
$P_1$: $I_1^0 \to I_1^1$; $I_1^0 \to I_1^2$; $I_1^1 \to I_1^2 I_0^0$; $I_1^2 \to I_1^3\, verb$; $I_1^2 \to I_1^4\, verb$; $I_1^2 \to I_1^5\, verb$; $I_1^3 \to I_1^3 I_2^0$; $I_1^3 \to I_1^4 I_2^0$; $I_1^3 \to I_1^5 I_2^0$; $I_1^4 \to I_1^5 I_3^0$; $I_1^5 \to \epsilon$
$P_2$: $I_2^0 \to I_2^1$; $I_2^1 \to I_2^2 I_3^0$; $I_2^2 \to I_2^3\, prep$; $I_2^3 \to \epsilon$
$P_3$: $I_3^0 \to I_3^1$; $I_3^0 \to I_3^2$; $I_3^1 \to I_3^1 I_2^0$; $I_3^1 \to I_3^2 I_2^0$; $I_3^2 \to I_3^3\, noun$; $I_3^3 \to I_3^4\, det$; $I_3^4 \to \epsilon$

Note that the $M_i$ in this case turn out as $M_0 = 4$, $M_1 = 5$, $M_2 = 3$, $M_3 = 4$.
6.3 The automaton

The construction of section 5.1 yields the following set of states:

$q_0 = \{I_0^4\}$; $q_1 = \{I_0^3, I_3^4\}$; $q_2 = \{I_0^2, I_1^5, I_2^3, I_3^4\}$; $q_3 = \{I_3^3\}$; $q_4 = \{I_0^1\}$; $q_5 = \{I_2^2, I_3^4\}$; $q_6 = \{I_1^4, I_2^3\}$; $q_7 = \{I_1^2, I_0^4\}$; $q_8 = \{I_1^3, I_2^3\}$; $q_9 = \{I_3^2, I_2^3\}$; $q_{10} = \{I_1^1\}$; $q_{11} = \{I_2^1\}$; $q_{12} = \{I_3^1, I_2^3\}$

The transitions are grouped into two parts. First we list the function $goto_2$:

$goto_2(q_0, conj) = q_1$; $goto_2(q_1, det) = q_3$; $goto_2(q_2, I_3^0) = q_6$; $goto_2(q_2, I_2^0) = q_8$; $goto_2(q_2, verb) = q_7$; $goto_2(q_2, prep) = q_5$; $goto_2(q_2, det) = q_3$; $goto_2(q_5, det) = q_3$; $goto_2(q_6, prep) = q_5$; $goto_2(q_7, conj) = q_1$; $goto_2(q_8, prep) = q_5$; $goto_2(q_9, prep) = q_5$; $goto_2(q_{12}, prep) = q_5$

Likewise, we have the $goto_1$ function, which gives the non-stacking transitions for our grammar:

$goto_1(q_1, I_3^0) = q_2$; $goto_1(q_2, I_1^0) = q_4$; $goto_1(q_3, noun) = q_9$; $goto_1(q_5, I_3^0) = q_{11}$; $goto_1(q_6, verb) = q_7$; $goto_1(q_6, I_2^0) = q_8$; $goto_1(q_8, verb) = q_7$; $goto_1(q_8, I_2^0) = q_8$; $goto_1(q_7, I_0^0) = q_{10}$; $goto_1(q_9, I_2^0) = q_{12}$; $goto_1(q_{12}, I_2^0) = q_{12}$

The predicate reduce holds for six pairs of states and non-terminals:

$reduce(q_4, I_0^0)$; $reduce(q_{10}, I_1^0)$; $reduce(q_7, I_1^0)$; $reduce(q_{11}, I_2^0)$; $reduce(q_9, I_3^0)$; $reduce(q_{12}, I_3^0)$

6.4 CNLR parser

Given the automaton, the CNLR grammar follows according to section 5.3. After removal of the useless non-terminals we arrive at the following grammar, which is of the format of section 5.3.1.

$S \to <q_4, q_0>$
$<q_9, q> \to <q_3, q>\, noun$, where $q \in [q_1, q_2, q_5]$
$<q_7, q_2> \to <q_6, q_2>\, verb$
$<q_7, q_2> \to <q_8, q_2>\, verb$
$<q_1, q> \to conj$, where $q \in [q_0, q_7]$
$<q_3, q> \to det$, where $q \in [q_1, q_2, q_5]$
$<q_7, q_2> \to verb$
$<q_5, q> \to prep$, where $q \in [q_2, q_6, q_8, q_9, q_{12}]$
$<q_2, q> \to <q_1, q> <I_3^0, q_1>$, where $q \in [q_0, q_7]$
$<q_4, q> \to <q_2, q> <I_1^0, q_2>$, where $q \in [q_0, q_7]$
$<q_8, q_2> \to <q_6, q_2> <I_2^0, q_6>$
$<q_8, q_2> \to <q_8, q_2> <I_2^0, q_8>$
$<q_{10}, q_2> \to <q_7, q_2> <I_0^0, q_7>$
$<q_{11}, q> \to <q_5, q> <I_3^0, q_5>$, where $q \in [q_2, q_6, q_8, q_9, q_{12}]$
$<q_{12}, q> \to <q_9, q> <I_2^0, q_9>$, where $q \in [q_1, q_2, q_5]$
$<q_{12}, q> \to <q_{12}, q> <I_2^0, q_{12}>$, where $q \in [q_1, q_2, q_5]$
$<q_6, q_2> \to <I_3^0, q_2>$
$<q_8, q_2> \to <I_2^0, q_2>$
$<I_0^0, q_7> \to <q_4, q_7>$
$<I_1^0, q_2> \to <q_{10}, q_2>$
$<I_1^0, q_2> \to <q_7, q_2>$
$<I_2^0, q> \to <q_{11}, q>$, where $q \in [q_2, q_6, q_8, q_9, q_{12}]$
$<I_3^0, q> \to <q_9, q>$, where $q \in [q_1, q_2, q_5]$
$<I_3^0, q> \to <q_{12}, q>$, where $q \in [q_1, q_2, q_5]$

From this grammar, the function $\sigma$ can be deduced. It is given by

$\sigma(q_1) = \sigma(q_2) = \sigma(q_4) = [q_0, q_7]$
$\sigma(q_3) = \sigma(q_9) = \sigma(q_{12}) = \sigma(I_3^0) = [q_1, q_2, q_5]$
$\sigma(q_6) = \sigma(q_7) = \sigma(q_8) = \sigma(q_{10}) = \sigma(I_1^0) = [q_2]$
$\sigma(q_5) = \sigma(q_{11}) = \sigma(I_2^0) = [q_2, q_6, q_8, q_9, q_{12}]$
$\sigma(I_0^0) = [q_7]$

Either by stripping the above cover, or by directly deducing it from the automaton, the bare cover can be obtained. We list it here for completeness.

$S \to q_4$; $q_9 \to q_3\, noun$; $q_7 \to q_6\, verb$; $q_7 \to q_8\, verb$; $q_1 \to conj$; $q_3 \to det$; $q_7 \to verb$; $q_5 \to prep$; $q_2 \to q_1 I_3^0$; $q_4 \to q_2 I_1^0$; $q_8 \to q_6 I_2^0$; $q_8 \to q_8 I_2^0$; $q_{10} \to q_7 I_0^0$; $q_{11} \to q_5 I_3^0$; $q_{12} \to q_9 I_2^0$; $q_{12} \to q_{12} I_2^0$; $q_6 \to I_3^0$; $q_8 \to I_2^0$; $I_0^0 \to q_4$; $I_1^0 \to q_{10}$; $I_1^0 \to q_7$; $I_2^0 \to q_{11}$; $I_3^0 \to q_9$; $I_3^0 \to q_{12}$

Together with the predict functions defined in section 5.3.2, this grammar should provide an efficient parser for our example grammar.
7 Tadpole Grammars

The function $\sigma$ has been defined, in section 5, via a grammar reduction algorithm. In this section we wish to show that an alternative method exists and, moreover, that it can be applied to the class of bilinear tadpole grammars. This class consists of all bilinear grammars without epsilon rules, and with no useless symbols, with non-terminals (the head) preceding terminals (the tail) at the right-hand side of rules. Thus, rules are of the form $A \to \alpha\delta$, where we use the symbol $\delta$ as a variable over possibly empty sequences of terminals, and $\alpha$ denotes a possibly empty sequence of at most two non-terminals. Capital roman letters are used for non-terminals. Note that a CNLR cover is a member of this class of grammars, as are all grammars that are in Chomsky normal form. First we change the grammar a little bit by adding $q_0$ to the set of non-terminals of the grammar, assuming that it was not there yet. Next, we create a new grammar, inspired by the grammar of 5.3.1, with pairs $<A, C>$ as non-terminals. The rules of the new grammar are such that (with implicit universal quantification over all variables, as before)

$<A, C> \to \delta \equiv A \to \delta$
$<A, C> \to <B, C>\, \delta \equiv A \to B\delta$
$<A, C> \to <B, C> <D, B>\, \delta \equiv A \to BD\delta$

The start symbol of the new grammar, which can be seen as a parametrized version of the tadpole grammar, is defined to be $<S, q_0>$. A non-terminal $<B, C>$ is a useful one, whence $C \in \sigma(B)$ according to the definition of $\sigma$, if it occurs in a derivation of the parametrized grammar: $<S, q_0> \to^* \kappa <B, C> \Lambda$, where $\kappa$ is an arbitrary sequence of non-terminals, and $\Lambda$ is a sequence of terminals and non-terminals. Then we conclude that

$q_0 \in \sigma(B) \equiv <S, q_0> \to^* <B, q_0> \Lambda$
$C \in \sigma(B) \wedge C \ne q_0 \equiv \exists_{A,\Lambda}(<A, C> \to^* <B, C> \Lambda \wedge <S, q_0> \to^* \kappa <C, D> <A, C> \Lambda)$

This definition may be rephrased without reference to the parametrized grammar. Define, for each non-terminal A, a set firstnonts(A), such that $firstnonts(A) = \{B \mid A \to^* B\Lambda\}$. The predict set $\sigma(B)$ then is obtainable as

$\sigma(B) = \{C \mid \exists_{A,D,\delta}(B \in firstnonts(A) \wedge D \to CA\delta)\} \cup \{q_0 \mid B \in firstnonts(S)\}$,

where S is the start symbol. As in section 5.3.2, the initial prediction is given by $\theta^c = \{q_0\}$.
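The rephrased definition computes directly. The sketch below is our own illustration of it (names are assumptions): firstnonts is the reflexive-transitive left-corner relation, and $\sigma$ is read off the rules whose right-hand sides start with two non-terminals, exactly the $D \to CA\delta$ case of a tadpole grammar.

```python
def firstnonts(grammar):
    """Reflexive-transitive left corners: B in fn[A] iff A =>* B Lambda."""
    fn = {A: {A} for A in grammar}
    changed = True
    while changed:
        changed = False
        for A, rhss in grammar.items():
            for rhs in rhss:
                if rhs and rhs[0] in grammar:
                    new = fn[rhs[0]] - fn[A]
                    if new:
                        fn[A] |= new
                        changed = True
    return fn

def predict_sets(grammar, start, q0='q0'):
    """sigma(B) as defined above: C in sigma(B) iff some rule D -> C A delta
    has B in firstnonts(A); q0 marks left corners of the start symbol."""
    fn = firstnonts(grammar)
    sigma = {B: set() for B in grammar}
    for B in fn[start]:
        sigma[B].add(q0)
    for D, rhss in grammar.items():
        for rhs in rhss:
            if len(rhs) >= 2 and rhs[0] in grammar and rhs[1] in grammar:
                C, A = rhs[0], rhs[1]
                for B in fn[A]:
                    sigma[B].add(C)
    return sigma
```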
In general, the least time will be more or less equivalent to the smallest content of the parse matrix. Naively, this content would be proportional to the size of the cover. Under this assumption, the smallest cover would be optimal. Note that the number of non-terminals of the CNLR cover is equal to the number of states of the LR-antomaton plus the number of non-terminals of the original grammar. The size of the Earley cover is given by the number of items. In worst case situations the size of the CNLR cover is an exponential function of the size of the original grammar, whereas the size of the Ea~ley cover dearly grows linearly with the size of the original grammar. For many grammars, however, the number of LR(0)-states, may be considerably smaller than the number of items. This seems to be the case for the nat- ural language grammaxs considered by Tomita[3]. His 141 data even suggest that the number of LR(0) states is a sub-linear function of the original grammar size. Note, however, that predict functions may influence the re- lation between grammar size and average parse matrix content, as some grammars may allow more restrictive predict functions then others. Summarizing, it seems unlikely, that a single parsing approach would be opti- mal for all grammars. A viable goal of research would be to find methods for determining the optimal cover for a given grammar. Such research should have a solid experimental back-bone. The matter gets still more complicated when the orig- inal grammar is an attribute grammar. Attribute evalu- ation may lead to the rejection of certain parse trees that are correct for the grammar without attributes. Then the ease and efficiency of on-the-fly attribute evalua- tion becomes important, in order to stop wrong parses as soon as possible. In the Rosetta machine transla- tion system [11,12], we use an attributed RTN during the analysis of sentences. The attribute evaluation is bottom-up only, and designed in such a way that the grammar is covered by an attributed Earley cover. Other points concerning efficiency that we would like to discuss, are issues of precomputation. In the con- ventional Earley parser, the calculation of the cover is done dynamically, while parsing a sentence. However, it could just as well be done statically, i.e. before parsing, in order to increase parsing performance. For instance, set operations can be implemented more efficiently if the set elements are known non-terminals, rather than un- known items, although this would depend on the choice of programming language. The procedure of generating bilinear covers from LR-antomata should always be per- formed statically, because of the amount of computation involved. Tomita has reported [3], that for a number of grammars, his parsing method turns out to be more efli- cient than the Earley ~gorithm. It is not clear, whether his results would still hold if the creation of the cover for the Earley parser were being done statically. Onedmight be inclined to think that if use is made of precomputed sets of items, as in LR-parsers, one is bound to have a parser that is significantly different from and probably faster than Earley's algorithm, which com- putes these sets at parse time. The question is much more subtle as we showed in this paper. On the one hand, non-deterministic LR-parsing comes down to the use of certain covers for the grammar at hand, just like the Earley algorithm. 
Reversely, we showed that the Earley cover can, with minor modifications, be obtained from the LL/LR-automaton, which also uses precomputed sets of items.

10 Conclusions

We studied parsing of general context-free languages by splitting the process into two parts. Firstly, the grammar is turned into bilinear grammar format, and subsequently a general parser for bilinear grammars is applied. Our view on the relation between parsers and covers is similar to the work on covers of Nijholt [7] for grammars that are deterministically parsable. We established that the Lang algorithm for simulating pushdown automata hides a prescription for deriving bilinear covers from automata that satisfy certain constraints. Reversely, the LR-parser construction technique has been presented as a way to derive automata from certain bilinear grammars. We found that the Earley algorithm is intimately related to an automaton that simulates non-deterministic LL-parsing and, furthermore, that non-deterministic LR-automata provide general parsers for context-free grammars, with the same complexity as the Earley algorithm. It should be noted, however, that there are as many parsers with this property as there are ways to obtain bilinear covers for a given grammar.

References

1 Earley, J. 1970. An Efficient Context-Free Parsing Algorithm. Communications of the ACM 13(2):94-102.
2 Lang, B. 1974. Deterministic Techniques for Efficient Non-deterministic Parsers. Springer Lecture Notes in Computer Science 14:255-269.
3 Tomita, M. 1986. Efficient Parsing for Natural Language. Kluwer Academic Publishers.
4 Graham, S.L., M.A. Harrison and W.L. Ruzzo 1980. An improved context-free recognizer. ACM Transactions on Programming Languages and Systems 2:415-462.
5 Aho, A.V. and J.D. Ullman 1972. The Theory of Parsing, Translation, and Compiling. Prentice Hall Inc., Englewood Cliffs, N.J.
6 Kruseman Aretz, F.E.J. 1989. A new approach to Earley's parsing algorithm. Science of Computer Programming, volume 12.
7 Nijholt, A. 1980. Context-free Grammars: Covers, Normal Forms, and Parsing. Springer Lecture Notes in Computer Science 93.
8 Woods, W.A. 1970. Transition network grammars for natural language analysis. Communications of the ACM 13:591-602.
9 Purdom, P.W. and C.A. Brown 1981. Parsing extended LR(k) grammars. Acta Informatica 15:115-127.
10 Nagata, I. and M. Sassa 1986. Generation of Efficient LALR Parsers for Regular Right Part Grammars. Acta Informatica 23:149-162.
11 Leermakers, R. and J. Rous 1986. The Translation Method of Rosetta. Computers and Translation 1:169-183.
12 Appelo, L., C. Fellinger and J. Landsbergen 1987. Subgrammars, Rule Classes and Control in the Rosetta Translation System. Proceedings of the 3rd Conference of the ACL, European Chapter, Copenhagen, 118-133.
The Structure of Shared Forests in Ambiguous Parsing

Sylvie Billot* Bernard Lang*
INRIA and Université d'Orléans
billot@inria.inria.fr lang@inria.inria.fr

Abstract

The Context-Free backbone of some natural language analyzers produces all possible CF parses as some kind of shared forest, from which a single tree is to be chosen by a disambiguation process that may be based on the finer features of the language. We study the structure of these forests with respect to optimality of sharing, and in relation with the parsing schema used to produce them. In addition to a theoretical and experimental framework for studying these issues, the main results presented are:
- sophistication in chart parsing schemata (e.g. use of look-ahead) may reduce time and space efficiency instead of improving it,
- there is a shared forest structure with at most cubic size for any CF grammar,
- when O(n^3) complexity is required, the shape of a shared forest is dependent on the parsing schema used.
Though analyzed on CF grammars for simplicity, these results extend to more complex formalisms such as unification based grammars.

Key words: Context-Free Parsing, Ambiguity, Dynamic Programming, Earley Parsing, Chart Parsing, Parsing Strategies, Parsing Schemata, Parse Tree, Parse Forest.

1 Introduction

Several natural language parsers start with a pure Context-Free (CF) backbone that makes a first sketch of the structure of the analyzed sentence, before it is handed to a more elaborate analyzer (possibly a coroutine) that takes into account the finer grammatical structure to filter out undesirable parses (see for example [24,28]). In [28], Shieber surveys existing variants to this approach before giving his own tunable approach based on restrictions that "split up the infinite nonterminal domain into a finite set of equivalence classes that can be used for parsing". The basic motivation for this approach is to benefit from the CF parsing technology whose development over 30 years has led to powerful and efficient parsers [1,7]. A parser that takes into account only an approximation of the grammatical features will often find ambiguities it cannot resolve in the analyzed sentences(1). A natural solution is then to produce all possible parses, according to the CF backbone, and then select among them on the basis of the complete features information. One hitch is that the number of parses may be exponential in the size of the input sentence, or even infinite for cyclic grammars or incomplete sentences [16]. However, chart parsing techniques have been developed that produce an encoding of all possible parses as a data structure with a size polynomial in the length of the input sentence. These techniques are all based on a dynamic programming paradigm. The kind of structure they produce to represent all parses of the analyzed sentence is an essential characteristic of these algorithms. Some of the published algorithms produce only a chart as described by Kay in [14], which only associates nonterminal categories to segments of the analyzed sentence [11,39,13,3,9], and which thus still requires non-trivial processing to extract parse-trees [26]. The worst size complexity of such a chart is only a square function of the size of the input(2).

*Address: INRIA, B.P. 105, 78153 Le Chesnay, France. The work reported here was partially supported by the Eureka Software Factory project.
(1) Ambiguity may also have a semantical origin.
However, practical parsing algorithms will often produce a more complex structure that explicitly relates the instances of nonterminals associated with sentence fragments to their constituents, possibly in several ways in case of ambiguity, with a sharing of some common subtrees between the distinct ambiguous parses [7,4,24,31,25](3). One advantage of this structure is that the chart retains only those constituents that can actually participate in a parse. Furthermore it makes the extraction of parse-trees a trivial matter. A drawback is that this structure may be cubic in the length of the parsed sentence, and more generally polynomial(4) for some proposed algorithms [31]. However, these algorithms are rather well behaved in practice, and this complexity is not a problem. In this paper we shall call shared forests such data structures used to represent simultaneously all parse trees for a given sentence. Several questions may be asked in relation with shared forests:

• How to construct them during the parsing process?
• Can the cubic complexity be attained without modifying the grammar (e.g. into Chomsky Normal Form)?
• What is the appropriate data structure to improve sharing and reduce time and space complexity?
• How good is the sharing of tree fragments between ambiguous parses, and how can it be improved?
• Is there a relation between the coding of parse-trees in the shared forest and the parsing schema used?
• How well formalized is their definition and construction?

These questions are of importance in practical systems because the answers impact both the performance and the implementation techniques. For example good sharing may allow a better factorization of the computation that filters parse trees with the secondary features of the language. The representation needed for good sharing or low space complexity may be incompatible with the needs of other components of the system. These components may also make assumptions about this representation that are incompatible with some parsing schemata. The issue of formalization is of course related to the formal tractability of correctness proofs for algorithms using shared forests. In section 2 we describe a uniform theoretical framework in which various parsing strategies are expressed and compared with respect to the above questions. This approach has been implemented into a system intended for the experimental study and comparison of parsing strategies. This system is described in section 3. Section 4 contains a detailed example produced with our implementation which illustrates both the working of the system and the underlying theory.

(2) We do not consider CF recognizers that have asymptotically the lowest complexity, but are only of theoretical interest here [?,5].
(3) There are several other published implementations of chart parsers [23,20,33], but they often do not give much detail on the output of the parsing process, or even side-step the problem altogether [33]. We do not consider here the well formed substring tables of Sheil [26], which fall somewhere in between in our classification. They do not use pointers and parse-trees are only "indirectly" visible, but may be extracted rather simply in linear time. The table may contain useless constituents.
(4) Space cubic algorithms often require the language grammar to be in Chomsky Normal Form, and some authors have incorrectly conjectured that cubic complexity cannot be obtained otherwise.
2 A Uniform Framework

To discuss the above issues in a uniform way, we need a general framework that encompasses all forms of chart parsing and shared forest building in a unique formalism. We shall take as a basis a formalism developed by the second author in previous papers [15,16]. The idea of this approach is to separate the dynamic programming constructs needed for efficient chart parsing from the chosen parsing schema. Comparison between the classifications of Kay [14] and Griffith & Petrick [10] shows that a parsing schema (or parsing strategy) may be expressed in the construction of a Push-Down Transducer (PDT), a well studied formalization of left-to-right CF parsers(5). These PDTs are usually non-deterministic and cannot be used as produced for actual parsing. Their backtrack simulation does not always terminate, and is often time-exponential when it does, while breadth-first simulation is usually exponential for both time and space. However, by extending Earley's dynamic programming construction to PDTs, Lang provided in [15] a way of simulating all possible computations of any PDT in cubic time and space complexity. This approach may thus be used as a uniform framework for comparing chart parsers(6).

2.1 The algorithm

The following is a formal overview of parsing by dynamic programming interpretation of PDTs. Our aim is to parse sentences in the language L(G) generated by a CF phrase structure grammar $G = (V, \Sigma, \Pi, N)$ according to its syntax. The notation used is V for the set of nonterminals, $\Sigma$ for the set of terminals, $\Pi$ for the rules, N for the initial nonterminal, and $\epsilon$ for the empty string. We assume that, by some appropriate parser construction technique (e.g. [12,6,1]), we mechanically produce from the grammar G a parser for the language L(G) in the form of a (possibly non-deterministic) push-down transducer (PDT) $T^G$. The output of each possible computation of the parser is a sequence of rules in $\Pi$(7) to be used in a left-to-right reduction of the input sentence (this is obviously equivalent to producing a parse-tree). We assume for the PDT $T^G$ a very general formal definition that can fit most usual PDT construction techniques. It is defined as an 8-tuple $T^G = (Q, \Sigma, \Delta, \Pi, \delta, \hat{q}, \$, F)$ where: Q is the set of states, $\Sigma$ is the set of input word symbols, $\Delta$ is the set of stack symbols, $\Pi$ is the set of output symbols(8) (i.e. rules of G), $\hat{q}$ is the initial state, $ is the initial stack symbol, F is the set of final states, and $\delta$ is a finite set of transitions of the form $(p\ A\ a \mapsto q\ B\ u)$, with $p, q \in Q$, $A, B \in \Delta \cup \{\epsilon\}$, $a \in \Sigma \cup \{\epsilon\}$, and $u \in \Pi^*$.

Let the PDT be in a configuration $p = (p\ A\alpha\ ax\ u)$, where p is the current state, $A\alpha$ is the stack contents with A on the top, $ax$ is the remaining input where the symbol a is the next to be shifted and $x \in \Sigma^*$, and u is the already produced output. The application of a transition $\tau = (p\ A\ a \mapsto q\ B\ v)$ results in a new configuration $p' = (q\ B\alpha\ x\ uv)$, where the terminal symbol a has been scanned (i.e. shifted), A has been popped and B has been pushed, and v has been concatenated to the existing output u. If the terminal symbol a is replaced by $\epsilon$ in the transition, no input symbol is scanned. If A (resp. B) is replaced by $\epsilon$ then no stack symbol is popped from (resp. pushed on) the stack. Our algorithm consists in an Earley-like(9) simulation of the PDT $T^G$.

(5) Griffith & Petrick actually use Turing machines for pedagogical reasons.
(6) The original intent of [15] was to show how one can generate efficient general CF chart parsers, by first producing the PDT with the efficient techniques for deterministic parsing developed for the compiler technology [6,12,1]. This idea was later successfully used by Tomita [31], who applied it to LR(1) parsers [6,1], and later to other pushdown based parsers [32].
(7) Implementations usually denote these rules by their index in the set $\Pi$.
(8) Actual implementations use output symbols from $\Pi \cup \Sigma$, since rules alone do not distinguish words in the same lexical category.
(9) We assume the reader to be familiar with some variation of Earley's algorithm. Earley's original paper uses the word state (from dynamic programming terminology) instead of item.
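The configuration and transition semantics just defined transcribe directly into code. The sketch below is our own illustration (names and encoding are not from the paper); stacks and inputs are tuples, and None encodes an epsilon component of a transition.

```python
def apply_transition(config, trans):
    """config = (state, stack, remaining_input, output), stack top first;
    trans = (p, A, a, q, B, v) with A, B, a possibly None (epsilon) and v a
    tuple of output symbols.  Returns the successor configuration, or None
    when the transition does not apply."""
    state, stack, inp, out = config
    p, A, a, q, B, v = trans
    if state != p:
        return None
    if A is not None:                    # pop A from the top of the stack
        if not stack or stack[0] != A:
            return None
        stack = stack[1:]
    if a is not None:                    # scan (shift) the input symbol a
        if not inp or inp[0] != a:
            return None
        inp = inp[1:]
    if B is not None:                    # push B
        stack = (B,) + stack
    return (q, stack, inp, out + v)      # concatenate the output v
```

A non-deterministic PDT simply allows several transitions to apply to the same configuration; the dynamic programming simulation below keeps all the resulting computations without enumerating them one by one.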
Using the terminology of [1], the algorithm builds an item set $S_i$ successively for each word symbol $x_i$ holding position i in the input sentence x. An item is constituted of two modes of the form (p A i), where p is a PDT state, A is a stack symbol, and i is the index of an input symbol. The item set $S_i$ contains items of the form ((p A i) (q B j)). These items are used as non-terminals of an output grammar
To our knowledge, this representation of parse-forests as grammars is the simplest and most tractable theoretical for- malization proposed so far, and the parser presented here is the only one for which the correctness of the output gram- mar -- i.e. of the shared-forest -- has ever been proved. Though in the examples we use graph(ical) representations for intuitive understanding (grammars axe also sometimes represented as graphs [37]), they are not the proper formal tool for manipulating shared forests, and developing formal- ized (proved) algorithms that use them. Graph formalization is considerably more complex and awkward to manipulate than the well understood, specialized and few concepts of CF grammars. Furthermore, unlike graphs, this grammar formalization of the shared forest may be tractably extended to other grammatical formalisms (ct: section 5). More importantly, our work on the parsing of incomplete sentences [16] has exhibited the fundamental character of our grammatical view of shared forests: when parsing the completely unknown sentence, the shared forest obtained is precisely the complete grammar of the analyzed language. This also leads to connections with the work on partial eval- nation [8]. 2.3 The shape of the forest For our shared-forest, x cubic space complexity (in the worst case -- space complexity is often linear in practice) is achieved, without requiring that the language grammar be in Chonmky Normal Form, by producing a grammar of parses that has at most two symbols on the right-hand side of its rules. This amounts to representing the list of sons of a parse tree node as a Lisp-like list built with binary nodes (see fig- ures 1 L- 2), and it allows partial sharing of the sons i0 The structure of the parse grammar, i.e. the shape of the parse forest, is tightly related to the parsing schema used, hence to the structure of the possible computation of the non-deterministic PDT from which the parser is constructed. First we need a precise characterization of parsing strategies, whose distinction is often blurred by superimposed optimiza- tions. We call bottom-up a strategy in which the PDT decides on the nature of a constituent (i.e. on the grammar rule that structures it), after having made this decision first on its subconstituents. It corresponds to a postfix left-to- right walk of the parse tree. Top-Down parsing recognizes a constituent before recognition of its subconstituents, and corresponds to a prefix walk. Intermediate strategies are also possible. The sequence of operations of a bottom-up parser is basi- cally of the following form (up to possible simplifying oi>. timizations): To parse a constituent A, the parser first parses and pushes on the stack each sub-constituent B~; at some point, it decides that it has all the constituents of A on the stack and it pops them all, and then it pushes A and outputs the (rule number ~- of the) recognized rule f : A -* Bl ... Bn,. Dynamic programming interpretation of such a sequence results in a shared forest containing parse- trees with the shape described in figure 1, i.e. where each node of the forest points to the beginning of the llst of its sons. A top-down PDT uses a different sequence of operations, detailed in appendix B, resulting in the shape of figure 2 where a forest node points to the end of the list of sons, which is itself chained backward. These two figures are only simple examples. Many variations on the shape of parse trees and forests may be obtained by changing the parsing schema. 
Sharing in the shared forest may correspond to sharing of a complete subtree, but also to sharing of a tail of a list of sons: this is what allows the cubic complexity. Thus bottom-up parsing may share only the rightmost subconstituents of a constituent, while top-down parsing may share only the leftmost subconstituents. This relation between parsing schema and shape of the shared forest (and type of sharing) is a consequence of intrinsic properties of chart parsing, and not of our specific implementation. It is for example to be expected that the bidirectional nature of island parsing leads to irregular structure in shared forests, when optimal sharing is sought for.

(10) This was noted by Sheil [26] and is implicit in his use of "2-form" grammars.

[Figure 1: Bottom-up parse-tree]
[Figure 2: Top-down parse-tree]

3 Implementation and Experimental Results

The ideas presented above have been implemented in an experimental system called Tin (after the woodman of Oz). The intent is to provide a uniform framework for the construction and experimentation of chart parsers, somewhat as systems like MCHART [29], but with a more systematic theoretical foundation. The kernel of the system is a virtual parsing machine with a stack and a set of primitive commands corresponding essentially to the operations of a practical Push-Down Transducer. These commands include for example: push (resp. pop) to push a symbol on the stack (resp. pop one), checkwindow to compare the look-ahead symbol(s) to some given symbol, checkstack to branch depending on the top of the stack, scan to read an input word, output to output a rule number (or a terminal symbol), goto for unconditional jumps, and a few others. However, these commands are never used directly to program parsers. They are used as machine instructions for compilers that compile grammatical definitions into Tin code according to some parsing schema. A characteristic of these commands is that they may all be marked as non-deterministic. The intuitive interpretation is that there is a non-deterministic choice between a command thus marked and another command whose address in the virtual machine code is then specified. However, execution of the virtual machine code is done by an all-paths interpreter that follows the dynamic programming strategy described in section 2.1 and appendix A. The Tin interpreter is used in two different ways:

1. to study the effectiveness for chart parsing of known parsing schemata designed for deterministic parsing. We have only considered formally defined parsing schemata, corresponding to established PDA construction techniques that we use to mechanically translate CF grammars into Tin code (e.g. LALR(1) and LALR(2) [6], weak precedence [12], LL(0) top-down (recursive descent), LR(0), LR(1) [1], ...);

2. to study the computational behavior of the generated code, and the optimization techniques that could be used on the Tin code (and more generally chart parser code) with respect to code size, execution speed and better sharing in the parse forest.

Experimenting with several compilation schemata has shown that sophistication may have a negative effect on the efficiency of all-path parsing(11). Sophisticated PDT construction techniques tend to multiply the number of special cases, thereby increasing the code size of the chart parser. Sometimes it also prevents sharing of locally identical subcomputations because of differences in context analysis. This in turn may result in lesser sharing in the parse forest and sometimes longer computation, as in example SBBL in appendix C, but of course it does not change the set of parse-trees encoded in the forest(12). Experimentally, weak precedence gives slightly better sharing than LALR(1) parsing. The latter is often viewed as more efficient, whereas it only has a larger deterministic domain. One essential guideline to achieve better sharing (and often also reduced computation time) is to try to recognize every grammar rule in only one place of the generated chart parser code, even at the cost of increasing non-determinism. Thus simpler schemata such as precedence, LL(0) (and probably LR(0)(13)) produce the best sharing. However, since they correspond to a smaller deterministic domain within the CF grammar realm, they may sometimes be computationally less efficient because they produce a larger number of useless items (i.e. edges) that correspond to dead-end computational paths. Slight sophistication (e.g. LALR(1) used by Tomita in [31], or LR(1)) may slightly improve computational performance by detecting earlier dead-end computations. This may however be at the expense of the forest sharing quality. More sophistication (say LR(2)) is usually losing on both accounts, as explained earlier. The duplication of computational paths due to distinct context analysis overweights the benefits of early elimination of dead-end paths. But there can be no absolute rule: if a grammar is "close" to the LR(2) domain, an LR(2) schema is likely to give the best result for most parsed sentences. Sophisticated schemata correspond also to larger parsers, which may be critical in some natural language applications with very large grammars.

(11) We mean here the sophistication of the CF parser construction technique rather than the sophistication of the language features chosen to be used by this parser.
(12) This negative behavior of some techniques originally intended to preserve determinism had been remarked and analyzed in a special case by Bouckaert, Pirotte and Snelling [3]. However, we believe their result to be weaker than ours, since it seems to rely on the fact that they directly interpret grammars rather than first compile them. Hence each interpretive step includes in some sense compilation steps, which are more expensive when look-ahead is increased. Their paper presents several examples that run less efficiently when look-ahead is increased. For all these examples, this behavior disappears in our compiled setting. However, the grammar SBBL in appendix C shows a loss of efficiency with increased look-ahead that is due exclusively to loss of sharing caused by irrelevant contextual distinctions. This effect is particularly visible when parsing incomplete sentences [16]. Efficiency loss with increased look-ahead is mainly due to state splitting [6]. This should favor LALR techniques over LR ones.
(13) Our results do not take into account a newly found optimization of PDT interpretation that applies to all and only to bottom-up PDTs. This should make simple bottom-up schemes competitive for sharing quality, and even increase their computational efficiency. However, it should not change qualitatively the relative performances of bottom-up parsers, and may emphasize even more the phenomenon that reduces efficiency when look-ahead increases.
The choice of a parsing schema depends, in the end, on the grammar used, on the corpus (or kind) of sentences to be analyzed, and on a balance between computational and sharing efficiency. It is best decided on an experimental basis with a system such as ours. Furthermore, we do not believe that any firm conclusion limited to CF grammars would be of real practical usefulness. The real purpose of the work presented is to get a qualitative insight into phenomena which are best exhibited in the simpler framework of CF parsing. This insight should help us with more complex formalisms (cf. section 5) for which the phenomena might be less easily evidenced.

Note that the evidence gained contradicts the common belief that parsing schemata with a large deterministic domain (see for example the remarks on LR parsing in [31]) are more effective than simpler ones. Most experiments in this area were based on incomparable implementations, while our uniform framework gives us a common theoretical yardstick.

4 A Simple Bottom-Up Example

The following is a simple example based on a bottom-up PDT generated by our LALR(1) compiler from the following grammar, taken from [31]:

    (0) '$ax ::= $ 's $
    (1) 's   ::= 'np 'vp
    (2) 's   ::= 's 'pp
    (3) 'np  ::= n
    (4) 'np  ::= det n
    (5) 'np  ::= 'np 'pp
    (6) 'pp  ::= prep 'np
    (7) 'vp  ::= v 'np

Nonterminals are prefixed with a quote symbol. The first rule is used for initialization and handling of the delimiter symbol $. The $ delimiters are implicit in the actual input sentence. The sample input is "n v det n prep n". It represents (for example) the sentence: "I see a man at home".

4.1 Output grammar produced by the parser

The grammar of parses of the input sentence is given in figure 3. The initial nonterminal is the left-hand side of the first rule. For readability, the nonterminals have been given computer-generated names of the form ntx, where x is an integer. All other symbols are terminal. Integer terminals correspond to rule numbers of the input language grammar given above, and the other terminals are symbols of the parsed language, except for the special terminal "nil" which indicates the end of the list of subconstituents of a sentence constituent, and may also be read as the empty string ε. Note the ambiguity for nonterminal nt4. It is possible to simplify this grammar to 7 rules without losing the sharing of common subparses. However it would then no longer exhibit the structure that makes it readable as a shared forest (though this structure could be retrieved).

    nt0  ::= nt1 0          nt19 ::= nt20 nil
    nt1  ::= nt2 nt3        nt20 ::= n
    nt2  ::= $              nt21 ::= nt22 nil
    nt3  ::= nt4 nt37       nt22 ::= nt23 6
    nt4  ::= nt5 2          nt23 ::= nt24 nt25
    nt4  ::= nt29 1         nt24 ::= prep
    nt5  ::= nt6 nt21       nt25 ::= nt26 nil
    nt6  ::= nt7 1          nt26 ::= nt27 3
    nt7  ::= nt8 nt11       nt27 ::= nt28 nil
    nt8  ::= nt9 3          nt28 ::= n
    nt9  ::= nt10 nil       nt29 ::= nt8 nt30
    nt10 ::= n              nt30 ::= nt31 nil
    nt11 ::= nt12 nil       nt31 ::= nt32 7
    nt12 ::= nt13 7         nt32 ::= nt14 nt33
    nt13 ::= nt14 nt15      nt33 ::= nt34 nil
    nt14 ::= v              nt34 ::= nt35 5
    nt15 ::= nt16 nil       nt35 ::= nt16 nt36
    nt16 ::= nt17 4         nt36 ::= nt22 nil
    nt17 ::= nt18 nt19      nt37 ::= nt38 nil
    nt18 ::= det            nt38 ::= $

    Figure 3: Grammar of parses of the input sentence

The two parses of the input sentence defined by this grammar are:

    $ n 3 v det n 4 7 1 prep n 3 6 2 $
    $ n 3 v det n 4 prep n 3 6 5 7 1 $

Here again the two $ symbols must be read as delimiters. The "nil" symbols, no longer useful, have been omitted in these two parses.
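To make the output grammar concrete, here is a small Prolog sketch of ours (not part of Tin; the rule/2 encoding and the predicate names are our assumptions) that stores the grammar of figure 3 as facts and enumerates the parses it encodes:

    % Output grammar of figure 3, one fact per rule.
    rule(nt0,[nt1,0]).      rule(nt1,[nt2,nt3]).
    rule(nt2,['$']).        rule(nt3,[nt4,nt37]).
    rule(nt4,[nt5,2]).      rule(nt4,[nt29,1]).     % the ambiguity
    rule(nt5,[nt6,nt21]).   rule(nt6,[nt7,1]).
    rule(nt7,[nt8,nt11]).   rule(nt8,[nt9,3]).
    rule(nt9,[nt10,nil]).   rule(nt10,[n]).
    rule(nt11,[nt12,nil]).  rule(nt12,[nt13,7]).
    rule(nt13,[nt14,nt15]). rule(nt14,[v]).
    rule(nt15,[nt16,nil]).  rule(nt16,[nt17,4]).
    rule(nt17,[nt18,nt19]). rule(nt18,[det]).
    rule(nt19,[nt20,nil]).  rule(nt20,[n]).
    rule(nt21,[nt22,nil]).  rule(nt22,[nt23,6]).
    rule(nt23,[nt24,nt25]). rule(nt24,[prep]).
    rule(nt25,[nt26,nil]).  rule(nt26,[nt27,3]).
    rule(nt27,[nt28,nil]).  rule(nt28,[n]).
    rule(nt29,[nt8,nt30]).  rule(nt30,[nt31,nil]).
    rule(nt31,[nt32,7]).    rule(nt32,[nt14,nt33]).
    rule(nt33,[nt34,nil]).  rule(nt34,[nt35,5]).
    rule(nt35,[nt16,nt36]). rule(nt36,[nt22,nil]).
    rule(nt37,[nt38,nil]).  rule(nt38,['$']).

    % yield(+Symbol, -Parse): on backtracking, each terminal string
    % (parse) derived from Symbol, with the "nil" markers erased.
    yield(nil, []).
    yield(S, [S]) :- S \== nil, \+ rule(S, _).   % terminal symbol
    yield(S, P) :-
        rule(S, Rhs),                            % pick a derivation
        yield_all(Rhs, P).

    yield_all([], []).
    yield_all([S|Ss], P) :-
        yield(S, P1), yield_all(Ss, P2), append(P1, P2, P).

    % ?- yield(nt1, P).  returns, on backtracking, exactly the two
    % parses displayed above.

Since the output grammar is acyclic, this enumeration terminates; sharing (for instance of nt8 between the two derivations of nt4) is what keeps the grammar small even when the number of parses grows.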
4.2 Parse shared-forest constructed from that grammar

To explain the structure of the shared forest, we first build a graph from the grammar, as shown in figure 4. Each node corresponds to one terminal or nonterminal of the grammar in figure 3, and is labelled by it. The labels at the right of small dashes are rule numbers from the parsed language grammar (see beginning of section 4). The basic structure is that of figure 1.

From this first graph, we can trivially derive the more traditional shared forest given in figure 5. Each node includes a label which is a non-terminal of the parsed language grammar, and for each possible derivation (several in case of ambiguity) there is the number of the grammar rule used for that derivation. Though this simplified version is more readable, it is not always adequate, since it cannot represent partial sharing of the sons of two nodes, i.e. of the subconstituents of a constituent.

Of course, the "constructions" given in this section are purely virtual. In an implementation, the data structure representing the grammar of figure 3 may be directly interpreted and used as a shared forest. A similar construction for top-down parsing is sketched in appendix B.

[Figure 4: Graph of the output grammar]

[Figure 5: The shared forest]

5 Extensions

As indicated earlier, our intent is mostly to understand phenomena that would be harder to evidence in more complex grammatical formalisms. This statement implies that our approach can be extended. This is indeed the case.

It is known that many simple parsing schemata can be expressed with stack based machines [32]. This is certainly the case for all left-to-right CF chart parsing schemata. We have formally extended the concept of PDA into that of Logical PDA, which is an operational push-down stack device for parsing unification based grammars [17,18] or other non-CF grammars such as Tree Adjoining Grammars [19]. Hence we are reusing and developing our theoretical [18] and experimental [36] approach in this much more general setting, which is more likely to be effectively usable for natural language parsing.

Furthermore, these extensions can also express, within the PDA model, non-left-to-right behavior such as is used in island parsing [38] or in Sheil's approach [26]. More generally they allow the formal analysis of agenda strategies, which we have not considered here. In these extensions, the counterpart of parse forests are proof forests of definite clause programs.

6 Conclusion

Analysis of all-path parsing schemata within a common framework exhibits in comparable terms the properties of these schemata, and gives objective criteria for choosing a given schema when implementing a language analyzer. The approach taken here supports both theoretical analysis and actual experimentation, both for the computational behavior of parsers and for the structure of the resulting shared forest.
Many experiments and extensions still remain to be made: improved dynamic programming interpretation of bottom-up parsers, more extensive experimental measurements with a variety of languages and parsing schemata, or generalization of this approach to more complex situations, such as word lattice parsing [21,30], or even handling of "secondary" language features. Early research in that latter direction is promising: our framework and the corresponding paradigm for parser construction have been extended to full first-order Horn clauses [17,18], and are hence applicable to unification based grammatical formalisms [27]. Shared forest construction and analysis can be generalized in the same way to these more advanced formalisms.

Acknowledgements: We are grateful to Véronique Donzeau-Gouge for many fruitful discussions. This work has been partially supported by the Eureka Software Factory (ESF) project.

References

[1] Aho, A.V.; and Ullman, J.D. 1972 The Theory of Parsing, Translation and Compiling. Prentice-Hall, Englewood Cliffs, New Jersey.

[2] Billot, S. 1988 Analyseurs Syntaxiques et Non-Déterminisme. Thèse de Doctorat, Université d'Orléans la Source, Orléans (France).

[3] Bouckaert, M.; Pirotte, A.; and Snelling, M. 1975 Efficient Parsing Algorithms for General Context-Free Grammars. Information Sciences 8(1): 1-26.

[4] Cocke, J.; and Schwartz, J.T. 1970 Programming Languages and Their Compilers. Courant Institute of Mathematical Sciences, New York University, New York.

[5] Coppersmith, D.; and Winograd, S. 1982 On the Asymptotic Complexity of Matrix Multiplication. SIAM Journal on Computing, 11(3): 472-492.

[6] DeRemer, F.L. 1971 Simple LR(k) Grammars. Communications ACM 14(7): 453-460.

[7] Earley, J. 1970 An Efficient Context-Free Parsing Algorithm. Communications ACM 13(2): 94-102.

[8] Futamura, Y. (ed.) 1988 Proceedings of the Workshop on Partial Evaluation and Mixed Computation. New Generation Computing 6(2,3).

[9] Graham, S.L.; Harrison, M.A.; and Ruzzo, W.L. 1980 An Improved Context-Free Recognizer. ACM Transactions on Programming Languages and Systems 2(3): 415-462.

[10] Griffiths, T.; and Petrick, S. 1965 On the Relative Efficiencies of Context-Free Grammar Recognizers. Communications ACM 8(5): 289-300.

[11] Hays, D.G. 1962 Automatic Language-Data Processing. In Computer Applications in the Behavioral Sciences (H. Borko ed.), Prentice-Hall, pp. 394-423.

[12] Ichbiah, J.D.; and Morse, S.P. 1970 A Technique for Generating Almost Optimal Floyd-Evans Productions for Precedence Grammars. Communications ACM 13(8): 501-508.

[13] Kasami, T. 1965 An Efficient Recognition and Syntax Analysis Algorithm for Context-Free Languages. Report of Univ. of Hawaii, also AFCRL-65-758, Air Force Cambridge Research Laboratory, Bedford (Massachusetts), also 1968, University of Illinois Coordinated Science Lab. Report No. R-257.

[14] Kay, M. 1980 Algorithm Schemata and Data Structures in Syntactic Processing. Proceedings of the Nobel Symposium on Text Processing, Gothenburg.

[15] Lang, B. 1974 Deterministic Techniques for Efficient Non-deterministic Parsers. Proc. of the 2nd Colloquium on Automata, Languages and Programming, J. Loeckx (ed.), Saarbrücken, Springer Lecture Notes in Computer Science 14: 255-269. Also: Rapport de Recherche 72, IRIA-Laboria, Rocquencourt (France).

[16] Lang, B. 1988 Parsing Incomplete Sentences. Proc. of the 12th Internat. Conf. on Computational Linguistics (COLING'88), Vol. 1: 365-371, D. Vargha (ed.), Budapest (Hungary).
[17] Lang, B. 1988 Datalog Automata. Proc. of the 3rd Internat. Conf. on Data and Knowledge Bases, C. Beeri, J.W. Schmidt, U. Dayal (eds.), Morgan Kaufmann Pub., pp. 389-404, Jerusalem (Israel).

[18] Lang, B. 1988 Complete Evaluation of Horn Clauses, an Automata Theoretic Approach. INRIA Research Report 913.

[19] Lang, B. 1988 The Systematic Construction of Earley Parsers: Application to the Production of O(n⁶) Earley Parsers for Tree Adjoining Grammars. In preparation.

[20] Li, T.; and Chun, H.W. 1987 A Massively Parallel Network-Based Natural Language Parsing System. Proc. of 2nd Int. Conf. on Computers and Applications, Beijing (Peking): 401-408.

[21] Nakagawa, S. 1987 Spoken Sentence Recognition by Time-Synchronous Parsing Algorithm of Context-Free Grammar. Proc. ICASSP 87, Dallas (Texas), Vol. 2: 829-832.

[22] Pereira, F.C.N.; and Warren, D.H.D. 1980 Definite Clause Grammars for Language Analysis -- A Survey of the Formalism and a Comparison with Augmented Transition Networks. Artificial Intelligence 13: 231-278.

[23] Phillips, J.D. 1986 A Simple Efficient Parser for Phrase-Structure Grammars. Quarterly Newsletter of the Soc. for the Study of Artificial Intelligence (AISBQ) 59: 14-19.

[24] Pratt, V.R. 1975 LINGOL -- A Progress Report. In Proceedings of the 4th IJCAI: 422-428.

[25] Rekers, J. 1987 A Parser Generator for Finitely Ambiguous Context-Free Grammars. Report CS-R8712, Computer Science/Dpt. of Software Technology, Centrum voor Wiskunde en Informatica, Amsterdam (The Netherlands).

[26] Sheil, B.A. 1976 Observations on Context Free Parsing. In Statistical Methods in Linguistics: 71-109, Stockholm (Sweden), Proc. of Internat. Conf. on Computational Linguistics (COLING-76), Ottawa (Canada). Also: Technical Report TR 12-76, Center for Research in Computing Technology, Aiken Computation Laboratory, Harvard Univ., Cambridge (Massachusetts).

[27] Shieber, S.M. 1984 The Design of a Computer Language for Linguistic Information. Proc. of the 10th Internat. Conf. on Computational Linguistics -- COLING'84: 362-366, Stanford (California).

[28] Shieber, S.M. 1985 Using Restriction to Extend Parsing Algorithms for Complex-Feature-Based Formalisms. Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics: 145-152.

[29] Thompson, H. 1983 MCHART: A Flexible, Modular Chart Parsing System. Proc. of the National Conf. on Artificial Intelligence (AAAI-83), Washington (D.C.), pp. 408-410.

[30] Tomita, M. 1986 An Efficient Word Lattice Parsing Algorithm for Continuous Speech Recognition. In Proceedings of the IEEE-IECE-ASJ International Conference on Acoustics, Speech, and Signal Processing (ICASSP 86), Vol. 3: 1569-1572.

[31] Tomita, M. 1987 An Efficient Augmented-Context-Free Parsing Algorithm. Computational Linguistics 13(1-2): 31-46.

[32] Tomita, M. 1988 Graph-structured Stack and Natural Language Parsing. Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics: 249-257.

[33] Uehara, K.; Ochitani, R.; Kakusho, O.; Toyoda, J. 1984 A Bottom-Up Parser based on Predicate Logic: A Survey of the Formalism and its Implementation Technique. 1984 Internat. Symp. on Logic Programming, Atlantic City (New Jersey): 220-227.

[34] U.S. Department of Defense 1983 Reference Manual for the Ada Programming Language. ANSI/MIL-STD-1815 A.

[35] Valiant, L.G. 1975 General Context-Free Recognition in Less than Cubic Time. Journal of Computer and System Sciences, 10: 308-315.
[36] Villemonte de la Clergerie, E.; and Zanchetta, A. 1988 Evaluateur de Clauses de Horn. Rapport de Stage d'Option, École Polytechnique, Palaiseau (France).

[37] Wirth, N. 1971 The Programming Language Pascal. Acta Informatica, 1(1).

[38] Ward, W.H.; Hauptmann, A.G.; Stern, R.M.; and Chanak, T. 1988 Parsing Spoken Phrases Despite Missing Words. In Proceedings of the 1988 International Conference on Acoustics, Speech, and Signal Processing (ICASSP 88), Vol. 1: 275-278.

[39] Younger, D.H. 1967 Recognition and Parsing of Context-Free Languages in Time n³. Information and Control, 10(2): 189-208.

A The algorithm

This is the formal description of a minimal dynamic programming PDT interpreter. The actual Tin interpreter has a larger instruction set. Comments are prefixed with --.

    Begin parse with input sentence x of length n

    step-A: -- Initialization
      S0 := {U0};   -- initialize item set S0 with the initial item
      P  := {};     -- rules of output grammar
      i  := 0;      -- input-scanner index is set before the first
                    -- input symbol

    step-B: -- Iteration
      while i < n loop
        for every item U = ((p A i) (q B j)) in Si do
          for every transition τ in δ do
            -- we consider four kinds of transitions, corresponding
            -- to the instructions of a minimal PDT interpreter.
            if τ = (p A ε ↦ r A z) then          -- OUTPUT z
              V := ((r A i) (q B j));
              Si := Si ∪ {V};  P := P ∪ {(V → U z)};
            if τ = (p ε ε ↦ r C ε) then          -- PUSH C
              V := ((r C i) (p A i));
              Si := Si ∪ {V};  P := P ∪ {(V → ε)};
            if τ = (p A ε ↦ r ε ε) then          -- POP A
              for every item Y = ((q B j) (s D k)) in Sj do
                V := ((r B i) (s D k));
                Si := Si ∪ {V};  P := P ∪ {(V → Y U)};
            if τ = (p A a ↦ r A ε) then          -- SHIFT a
              V := ((r A i+1) (q B j));
              Si+1 := Si+1 ∪ {V};  P := P ∪ {(V → U)};
        i := i+1;
      end loop;

    step-C: -- Termination
      for every item U = ((f $ n) (q $ 0)) such that f ∈ F do
        P := P ∪ {(U0 → U)};  -- U0 is the initial nonterminal of
                              -- the output grammar
      -- End of parse

B Interpretation of a top-down PDT

To illustrate the creation of the shared forest, we present here informally a simplified sequence of transitions in their order of execution by a top-down parser. We indicate the transitions as Tin instructions on the left, as defined in appendix A. On the right we indicate the item and the rule produced by execution of each instruction: the item is the left-hand side of the rule.

The pseudo-instruction scan is given in italics because it does not exist, and stands for the parsing of a sub-constituent: either several transitions for a complex constituent or a single shift instruction for a lexical constituent. The global behavior of scan is the same as that of shift, and it may be understood as a shift on the whole sub-constituent.

Items are represented by a pair of integers. Hence we give no details about states or input, but keep just enough information to see how items are inter-related when applying a pop transition: it must use two items of the form (a, b) and (b, c), as indicated by the algorithm.

The symbol r stands for the rule used to recognize a constituent s, and ri stands for the rule used to recognize its i-th sub-constituent si. The whole sequence, minus the first and the last two instructions, would be equivalent to "scan s".
              (6,6)
    push r    (7,6)  -> ε
    push r1   (8,7)  -> ε
    scan s1   (9,7)  -> (8,7) s1
    out r1    (10,7) -> (9,7) r1
    pop       (11,6) -> (7,6) (10,7)
    push r2   (12,11) -> ε
    scan s2   (13,11) -> (12,11) s2
    out r2    (14,11) -> (13,11) r2
    pop       (15,6) -> (11,6) (14,11)
    push r3   (16,15) -> ε
    scan s3   (17,15) -> (16,15) s3
    out r3    (18,15) -> (17,15) r3
    pop       (19,6) -> (15,6) (18,15)
    out r     (20,6) -> (19,6)
    pop       (21,5) -> (6,5) (20,6)

This grammar may be simplified by eliminating useless non-terminals, deriving on the empty string ε or on a single other non-terminal. As in section 4, the simplified grammar may then be represented as a graph which is similar, with more details (the rules used for the subconstituents), to the graph given in figure 2.

C Experimental Comparisons

This appendix gives some of the experimental data gathered to compare compilation schemata.

For each grammar, the first table gives the size of the PDTs obtained by compiling it according to several compilation schemata. This size corresponds to the number of instructions generated for the PDT, which is roughly the number of possible PDT states. The second table gives two figures for each schema and for several input sentences. The first figure is the number of items computed to parse that sentence with the given schema: it may be read as the number of computation steps and is thus a measure of computational efficiency. The second figure is the number of items remaining after simplification of the output grammar; it is thus an indicator of sharing quality. Sharing is better when this second figure is low. In these tables, columns headed with LR/LALR stand for the LR(0), LR(1), LALR(1) and LALR(2) cases (which often give the same results), unless one of these cases has its own explicit column.

Tests were run on the GRE, NSE, UBDA and RR grammars of [3]: they did not exhibit the loss of efficiency with increased look-ahead that was reported for the bottom-up look-ahead of [3].

We believe the results presented here are consistent and give an accurate comparison of the performance of the parsers considered, despite some implementation departures from the strict theoretical model, required by performance considerations. A first version of our LL(0) compiler gave results that were inconsistent with the results of the bottom-up parsers. This was due to a weakness in that LL(0) compiler, which was then corrected. We consider this experience to be a confirmation of the usefulness of our uniform framework.

It must be stressed that these are preliminary experiments. On the basis of their analysis, we intend a new set of experiments that will better exhibit the phenomena discussed in the paper. In particular we wish to study variants of the schemata and dynamic programming interpretation that give the best possible sharing.

C.1 Grammar UBDA

    A ::= A A | a

    LR(0)  LR(1)  LALR(1)  LALR(2)  preced.  LL(0)
      38     60      41       41       36      46

    input string   LR/LALR     preced.      LL(0)
    aa              14 - 9      15 - 9      41 - 9
    aaaa            23 - 15     29 - 15     75 - 15
    aaaaaaaa       249 - 156   226 - 124   391 - 112

C.2 Grammar RR

    A ::= x A | x

This grammar is LALR(1) but not LR(0), which explains the lower performance of the LR(0) parser.

    LR(0)  LR(1)  LALR(1)  LALR(2)  preced.  LL(0)
      34     37      37       37       48      46

    input string   LR(0)     LR/LALR    preced.     LL(0)
    x               14 - 9    14 - 9     15 - 9     28 - 9
    xx              23 - 13   20 - 13    25 - 13    43 - 13
    xxxxxx          99 - 29   44 - 29    56 - 29   123 - 29

C.3 Picogrammar of English

    S  ::= NP VP | S PP
    NP ::= n | det n | NP PP
    VP ::= v NP
    PP ::= prep NP

    LR(0)  LR(1)  LALR(1)  LALR(2)  preced.  LL(0)
     110    341     104      104       90     116
    input string      LR/LALR      preced.       LL(0)
    n v n prep n       71 - 47      72 - 47     169 - 43
    n v n (prep n)²   146 - 97     141 - 93     260 - 77
    n v n (prep n)³   260 - 172    245 - 161    371 - 122
    n v n (prep n)⁵   854 - 541    775 - 491    844 - 317

C.4 Grammar of Ada expressions

This grammar, too long for inclusion here, is the grammar of expressions of the language Ada as given in the reference manual [34]. This grammar is ambiguous.

In these examples, the use of look-ahead gives approximately a 25% gain in speed efficiency over LR(0) parsing, with the same forest sharing. However the use of look-ahead may increase the LR(1) parser size quadratically with the grammar size. Still, a better engineered LR(1) construction should not usually increase that size as dramatically as indicated by our experimental figure.

    LR(0)  LR(1)  LALR(1)  preced.
     587   32210    534      323

    input string   LR(0)      LR(1)      LALR(1)    preced.
    a*3             74 - 39    59 - 39    59 - 39    80 - 39
    (a*3)+b        137 - 75   113 - 75   113 - 75   293 - 75
    a*3+b**4       169 - 81   122 - 81   122 - 81   227 - 81

C.5 Grammar PB

    E ::= a A d | a B c | b A c | b B d
    A ::= e
    B ::= e

This grammar is LR(1) but is not LALR. For each compilation schema it gives the same result on all possible inputs: aed, aec, bec and bed.

    LR(0)  LR(1)  LALR(1) & (2)  preced.  LL(0)
      76    100        80           84     122

    LR(0)     LR(1)     LALR(1) & (2)   preced.    LL(0)
    26 - 15   23 - 15      26 - 15      29 - 15    47 - 15

C.6 Grammar SBBL

    E ::= X A d | X B c | Y A c | Y B d
    X ::= f
    Y ::= f
    A ::= e A | e
    B ::= e B | e

    LR(0)  LR(1)  LALR(1)  LALR(2)  preced.
     159    294     158      158      104

    input string   LR(0)     LR(1)     LALR(1) & (2)   preced.
    feed            50 - 21   57 - 37      50 - 21      84 - 36
    feeeed          62 - 29   75 - 49      62 - 29     110 - 44

The terminal f may be ambiguously parsed as X or as Y. This ambiguous left context uselessly increases the complexity of the LR(1) parser during recognition of the A and B constituents. Hence LR(0) performs better in this case, since it ignores the context.
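To reproduce the ambiguity that the SBBL figures measure, the grammar (as reconstructed above; the reconstruction itself is our reading of a degraded original) can be run directly as a DCG. This is a sketch of ours, not the Tin encoding, which compiles grammars to PDT code instead of interpreting them:

    e --> x, a, [d].     e --> x, b, [c].
    e --> y, a, [c].     e --> y, b, [d].
    x --> [f].           y --> [f].
    a --> [e], a.        a --> [e].
    b --> [e], b.        b --> [e].

    % ?- aggregate_all(count, phrase(e, [f,e,e,d]), N).  (SWI-Prolog)
    % N = 2: one parse takes f as an X and the e's as an A, the other
    % takes f as a Y and the e's as a B. This X/Y ambiguity is the
    % irrelevant left-context distinction that degrades LR(1) sharing
    % in the table above.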
A Calculus for Semantic Composition and Scoping

Fernando C.N. Pereira
Artificial Intelligence Center, SRI International
333 Ravenswood Ave., Menlo Park, CA 94025, USA

Abstract

Certain restrictions on possible scopings of quantified noun phrases in natural language are usually expressed in terms of formal constraints on binding at a level of logical form. Such reliance on the form rather than the content of semantic interpretations goes against the spirit of compositionality. I will show that those scoping restrictions follow from simple and fundamental facts about functional application and abstraction, and can be expressed as constraints on the derivation of possible meanings for sentences rather than constraints on the alleged forms of those meanings.

1 An Obvious Constraint?

Treatments of quantifier scope in Montague grammar (Montague, 1973; Dowty et al., 1981; Cooper, 1983), transformational grammar (Reinhart, 1983; May, 1985; Heim, 1982; Roberts, 1987) and computational linguistics (Hobbs and Shieber, 1987; Moran, 1988) have depended implicitly or explicitly on a constraint on possible logical forms to explain why examples¹ such as

(1) * A woman who saw every man disliked him

are ungrammatical, and why in examples such as

(2) Every man saw a friend of his
(3) Every admirer of a picture of himself is vain

the every... noun phrase must have wider scope than the a... noun phrase if the pronoun in each example is assumed to be bound by its antecedent. What exactly counts as bound anaphora varies between different accounts of the phenomena, but the rough intuition is that semantically a bound pronoun plays the role of a variable bound by the logical form (a quantifier) of its antecedent. Example (1) above is then "explained" by noting that its logical form would be something like

    ∃w.WOMAN(w) & (∀m.MAN(m) ⇒ SAW(w, m)) & DISLIKED(w, m)

but this is "ill-formed" because variable m occurs as an argument of DISLIKED outside the scope of its binder ∀m.² As for Examples (2) and (3), the argument is similar: wide scope for the logical form of the a... noun phrase would leave an occurrence of the variable that the logical form of every... binds outside the scope of this quantifier. For lack of an official name in the literature for this constraint, I will call it here the free-variable constraint.

In accounts of scoping possibilities based on quantifier raising or storage (Cooper, 1983; van Eijck, 1985; May, 1985; Hobbs and Shieber, 1987), the free-variable constraint is enforced either by keeping track of the set of free variables FREE(q) in each raisable (storable) term q and, when x ∈ FREE(q), blocking the raising of q from any context Bx.t in which x is bound by some binder B, or by checking after all applications of raising (unstoring) that no variable occurs outside the scope of its binder.

The argument above is often taken to be so obvious and uncontroversial that it warrants only a remark in passing, if any (Cooper, 1983; Reinhart, 1983; Partee and Bach, 1984; May, 1985; van Riemsdijk and Williams, 1986; Williams, 1986; Roberts, 1987), even though it depends on nontrivial assumptions on the role of logical form in linguistic theory and semantics.

¹ In all the examples that follow, the pronoun and its intended antecedent are italicized. As usual, starred examples are supposed to be ungrammatical.

² In fact, this is a perfectly good open well-formed formula, and therefore the precise formulation of the constraint is more delicate than seems to be realized in the literature.
First of all, and most immediately, there is the requirement for a logical-form level of representation, either in the predicate-logic format exemplified above or in some tree format as is usual in transformational grammar (Heim, 1982; Cooper, 1983; May, 1985; van Riemsdijk and Williams, 1986; Williams, 1986; Roberts, 1987).

Second, and most relevant to Montague grammar and related approaches, the constraint is formulated in terms of restrictions on formal objects (logical forms) which in turn are related to meanings through a denotation relation. However, compositionality as it is commonly understood requires meanings of phrases to be functions of the meanings rather than the forms of their constituents. This is a problem even in accounts based on quantifier storage (Cooper, 1983; van Eijck, 1985), which are precisely designed, as van Eijck puts it, to "avoid all unnecessary reference to properties of ... formulas" (van Eijck, 1985, p. 214). In fact, van Eijck proposes an interesting modification of Cooper storage that avoids Cooper's reliance on forbidding vacuous abstraction to block out cases in which a noun phrase is unstored while a noun phrase contained in it is still in store. However, this restriction does not deal with the case I have been discussing.

It is also interesting to observe that a wider class of examples of forbidden scopings would have to be considered if raising out of relative clauses were allowed, for example in

(4) An author who John has read every book by arrived

In this example, if we did not assume the restriction against raising from relative clauses, the every... noun phrase could in principle be assigned widest scope, but this would be blocked by the free-variable constraint, as shown by the occurrence of a free as an argument of BOOK-BY in

    ∀b.BOOK-BY(b, a) ⇒ (∃a.AUTHOR(a) & HAS-READ(JOHN, b) & ARRIVED(a))

That is, the alleged constraint against raising from relatives, for which many counterexamples exist (Vanlehn, 1978), blocks some derivations in which otherwise the free-variable constraint would be involved, specifically those associated with syntactic configurations of the form

    [NPj ... N [S ... [NPi ... Xi ...] ...] ...]

where Xi is a pronoun or trace coindexed with NPj and NPi is a quantified noun phrase. Since some of the most extensive Montague grammar fragments in the literature (Dowty et al., 1981; Cooper, 1983) do not cover the other major source of the problem, PP complements of noun phrases (replace S by PP in the configuration above), the question is effectively avoided in those treatments.

The main goal of this paper is to argue that the free-variable constraint is actually a consequence of basic semantic properties that hold in a semantic domain allowing functional application and abstraction, and are thus independent of a particular logical-form representation. As a corollary, I will also show that the constraint is better expressed as a restriction on the derivations of meanings of sentences from the meanings of their parts rather than a restriction on logical forms. The resulting system is related to the earlier system of conditional interpretation rules developed by Pollack and Pereira (1988), but avoids that system's use of formal conditions on the order of assumption discharge.
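To preview the pattern on example (2), here is our rendering of the two candidate scopings with the pronoun bound (the flat predicate notation and argument order are ours, chosen to match the formula given for (1) above). In the starred one, the variable m bound by the universal occurs outside its binder's scope, which is exactly what the free-variable constraint excludes:

    \forall m.\,\textsc{man}(m) \Rightarrow
      (\exists f.\,\textsc{friend-of}(f, m) \mathbin{\&} \textsc{saw}(m, f))

    *\;\exists f.\,\textsc{friend-of}(f, m) \mathbin{\&}
      (\forall m.\,\textsc{man}(m) \Rightarrow \textsc{saw}(m, f))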
2 Curry's Calculus of Functionality

Work in combinatory logic and the λ-calculus is concerned with the elucidation of the basic notion of functionality: how to construct functions, and how to apply functions to their arguments. There is a very large body of results in this area, of which I will need only a very small part.

One of the simplest and most elegant accounts of functionality, originally introduced by Curry and Feys (1968) and further elaborated by other authors (Stenlund, 1972; Lambek, 1980; Howard, 1980), involves the use of a logical calculus to describe the types of valid functional objects. In a natural deduction format (Prawitz, 1965), the calculus can be simply given by the two rules in Figure 1. The first rule states that the result of applying a function from objects of type A to objects of type B (a function of type A → B) to an object of type A is an object of type B. The second rule states that if from an arbitrary object of type A it is possible to construct an object of type B, then one has a function from objects of type A to objects of type B. In this rule and all that follow, the parenthesized formula at the top indicates the discharge of an assumption introduced in the derivation of the formula below it. Precise definitions of assumption and assumption discharge are given below.

The typing rules can be directly connected to the use of the λ-calculus to represent functions by restating the rules as shown in Figure 2. That is, if u has type A and v has type A → B then v(u) has type B, and if by assuming that x has type A we can show that u (possibly containing x) has type B, then the function represented by λx.u has type A → B.

                                 (A)
    A    A → B                    ⋮
    ----------                    B
        B                       -----
                                A → B

    Figure 1: Curry Rules

                                        (x : A)
    [app]: u : A    v : A → B   [abs]:     ⋮
           ------------------            u : B
               v(u) : B               ------------
                                      λx.u : A → B

    Figure 2: Curry Rules for Type Checking

To understand what inferences are possible with rules such as the ones in Figure 2, we need a precise notion of derivation, which is here adapted from the one given by Prawitz (1965). A derivation is a tree with each node n labeled by a formula φ(n) (the conclusion of the node) and by a set Γ(n) of formulas giving the assumptions of φ(n). In addition, a derivation D satisfies the following conditions:

i. For each leaf node n ∈ D, either φ(n) is an axiom, which in our case is a formula giving the type and interpretation of a lexical item, and then Γ(n) is empty, or φ(n) is an assumption, in which case Γ(n) = {φ(n)}.

ii. Each nonleaf node n corresponds either to an application of [app], in which case it has two daughters m and m' with φ(m) = u : A, φ(m') = v : A → B, φ(n) = v(u) : B and Γ(n) = Γ(m) ∪ Γ(m'), or to an application of [abs], in which case n has a single daughter m, and φ(m) = u : B, φ(n) = λx.u : A → B, and Γ(n) = Γ(m) − {x : A}.

If n is the root node of a derivation D, we say that D is a derivation of φ(n) from the assumptions Γ(n).

Notice that condition (ii) above allows empty abstraction, that is, the application of rule [abs] to some formula u : B even if x : A is not one of the assumptions of u : B. This is necessary for the Curry calculus, which describes all typed λ-terms, including those with vacuous abstraction, such as the polymorphic K combinator λx.λy.x : A → (B → A). However, in the present work, every abstraction needs to correspond to an actual functional dependency of the interpretation of a phrase on the interpretation of one of its constituents. Condition (ii) can be easily modified to block vacuous abstraction by requiring that x : A ∈ Γ(m) for the application of the [abs] rule to a derivation node m.³

The definition of derivation above can be generalized to arbitrary rules with n premises and one conclusion by defining a rule of inference as an (n+1)-place relation on pairs of formulas and assumption sets. For example, elements of the [app] relation would have the general form ((u : A, Γ₁), (v : A → B, Γ₂), (v(u) : B, Γ₁ ∪ Γ₂)), while elements of the [abs] rule without vacuous abstraction would have the form ((u : B, Γ), (λx.u : A → B, Γ − {x : A})) whenever x : A ∈ Γ. This definition should be kept in mind when reading the derived rules of inference presented informally in the rest of the paper.

³ Without this restriction on the abstraction rule, the types derivable using the rules in Figure 2 are exactly the consequences of the three axioms A → A, A → (B → A) and (A → (B → C)) → ((A → B) → (A → C)), which are the polymorphic types of the three combinators I, K and S that generate all the closed typed λ-calculus terms. Furthermore, if we interpret → as implication, these theorems are exactly those of the pure implicational fragment of intuitionistic propositional logic (Curry and Feys, 1968; Stenlund, 1972; Anderson and Belnap, 1975). In contrast, with the restriction we have the weaker system of pure relevant implication R→ (Prawitz, 1965; Anderson and Belnap, 1975).

3 Semantic Combinations and the Curry Calculus

In one approach to the definition of allowable semantic combinations, the possible meanings of a phrase are exactly those whose type can be derived by the rules of a semantic calculus from axioms giving the types of the lexical items in the phrase. However, this is far too liberal, in that the possible meanings of English phrases do not depend only on the types involved but also on the syntactic structure of the phrases. A possible way out is to encode the relevant syntactic constraints in a more elaborate and restrictive system of types and rules of inference. The prime example of a more constrained system is the Lambek calculus (Lambek, 1958) and its more recent elaborations within categorial grammar and semantics (van Benthem, 1986a; van Benthem, 1986b; Hendriks, 1987; Moortgat, 1988). In particular, Hendriks (1987) proposes a system for quantifier raising, which however is too restrictive in its coverage to account for the phenomena of interest here.

Instead of trying to construct a type system and type rules such that free application of the rules starting from appropriate lexical axioms will generate all and only the possible meanings of a phrase, I will instead take a more conservative route related to Montague grammar and early versions of GPSG (Gazdar, 1982) and use syntactic analyses to control semantic derivations.

First, a set of derived rules will be used in addition to the basic rules of application and abstraction. Semantically, the derived rules will add no new inferences, since they will merely codify inferences already allowed by the basic rules of the calculus of functionality. However, they provide the semantic counterparts of certain syntactic rules.

Second, the use of some semantic rules must be licensed by a particular syntactic rule, and the premises in the antecedent of the semantic rule must correspond in a rule-given way to the meanings of the constituents combined by the syntactic rule. As a simple example using a context-free syntax, the syntactic rule S → NP VP might license the function application rule [app] with A the type of the meaning of the NP and A → B the type of the meaning of the VP.

Third, the domain of types will be enriched with a few new type constructors, in addition to the function type constructor →. From a purely semantic point of view, these type constructors add no new types, but allow a convenient encoding of rule applicability constraints motivated by syntactic considerations. This enrichment of the formal universe of types for syntactic purposes is familiar from Montague grammar (Montague, 1973), where it is used to distinguish different syntactic realizations of the same semantic type, and from categorial grammar (Lambek, 1958; Steedman, 1987), where it is used to capture syntactic word-order constraints.

Together, the above refinements allow the syntax of language to restrict what potential semantic combinations are actually realized. Any derivations will be sound with respect to [app] and [abs], but many derivations allowed by these rules will be blocked.

                                       (x : trace)
    [trace+]: x : trace    [trace−]:       ⋮
              ---------                  s : t
                x : e                 ------------
                                      λx.s : e → t

    Figure 3: Rules for Relative Clauses

                                      (x : pron)
    [pron+]: x : pron    [pron−]:         ⋮
             --------             s : A       y : B
              x : e               -----------------
                                    (λx.s)(y) : A

    Figure 4: Bound Anaphora Rules

4 Derived Rules

In the rules below, we will use the two basic types e for individuals and t for propositions, the function type constructor → associating to the right, the formal type constructor quant(q), where q is a quantifier, that is, a value of type (e → t) → t, and the two formal types pron for pronoun assumptions and trace for traces in relative clauses. For simplicity in examples, I will adopt a "reverse Curried" notation for the meanings of verbs, prepositions and relational nouns. For example, the meaning of the verb to love will be LOVE : e → e → t, with x the lover and y the loved one in LOVE(y)(x). The assumptions corresponding to lexical items in a derivation will be appropriately labeled.

4.1 Trace Introduction and Abstraction

The two derived rules in Figure 3 deal with traces and the meaning of relative clauses. Rule [trace+] is licensed by the occurrence of a trace in the syntax, and rule [trace−] by the construction of a relative clause from a sentence containing a trace. Clearly, if u : e → t can be derived from some assumptions using these rules, then it can be derived using rule [abs] instead.

For an example of use of [trace+] and [trace−], assume that the meaning of the relative pronoun that is

    THAT ≡ λr.λn.λz.n(z) & r(z) : (e → t) → (e → t) → (e → t)

Given appropriate syntactic licensing, Figure 5 shows the derivation of a meaning for car that John owns. Each nonleaf node in the derivation is labeled with the rule that was used to derive it, and leaf nodes are labeled according to their origin (lexical entries for words in the phrase or syntactic traces). The assumptions at each node are not given explicitly, but can be easily computed by looking in the subtree rooted at the node for undischarged assumptions.

    [trace]    y : trace
    [trace+]   y : e
    [lexical]  OWN : e → e → t
    [app]      OWN(y) : e → t
    [lexical]  JOHN : e
    [app]      OWN(y)(JOHN) : t
    [trace−]   λy.OWN(y)(JOHN) : e → t
    [lexical]  THAT : (e → t) → (e → t) → (e → t)
    [app]      λn.λz.n(z) & OWN(z)(JOHN) : (e → t) → (e → t)
    [lexical]  CAR : e → t
    [app]      λz.CAR(z) & OWN(z)(JOHN) : e → t

    Figure 5: Using Derived Rules
As a simple example using a context-free syntax, the syntactic rule S -, NP VP might li- cense the function application rule [app] with A the type of the meaning of the NP and A --* B the type of the meaning of the VP. Third, the domain of types will be enriched with a few new type constructors, in addition to the function type constructor --*. From a purely se- mantic point of view, these type constructors add no new types, but allow a convenient encoding of rule applicability constraints motivated by syntac- tic considerations. This enrichment of the formal universe of types for syntactic purposes is famil- iar from Montague grammar (Montague, 1973), where it is used to distinguish different syntac- tic realizations of the same semantic type, and from categorial grammar (Lambek, 1958; Steed- man, 1987), where it is used to capture syntactic word-order constraints. Together, the above refinements allow the syn- x : trace) [trace+]. z- trace [trace-]" r: I; z:e ,~z.r : e --* I; Figure 3: Rules for Relative Clauses [pron+] : (X : pron) Z : pron [pron-] : s : A y : B z :e (Ax.s)(y) : A Figure 4: Bound Anaphora Rules tax of language to restrict what potential semantic combinations are actually realized. Any deriva- tions will be sound with respect to [app] and [abs], but many derivations allowed by these rules will be blocked. 4 Derived Rules In the rules below, we will use the two basic types • for individuals and t for propositions, the function type constructor --* associating to the right, the formal type constructor qua,at(q), where q is a quantifier, that is, a value of type (e --~ t) -* t, and the two formal types pron for pronoun assumptions and trace for traces in rel- ative clauses. For simplicity in examples, I will adopt a "reverse Curried" notation for the mean- ings of verbs, prepositions and relational nouns. For example, the meaning of the verb ~o love will be LOVe. : • ~ • ~ t, with z the lover and y the loved one in LOVE(y)(z). The assumptions corre- sponding to lexical items in a derivation will be appropriately labeled. 4.1 Trace Introduction and Ab- straction The two derived rules in Figure 3 deal with traces and the meaning of relative clauses. Rule [trace+] is licensed by the the occurrence of a trace in the syntax, and rule [trace-] by the construction of a relative clause from a sentence containing a trace. Clearly, if n : • --* t can be derived from some as- sumptions using these rules, then it can be derived using rule labs] instead. For an example of use of [trace+] and [trace-], assume that the meaning of relative pronoun that is THAT ~ Ar.An.Az.n(x)&r(z) : (e --* t) --* (e--* 155 [trace] y : 1;race I [trace+] Z/" e [lexical] OWN : • --* e ~ 1: lapp] OWN(y) : e --* 1; [[exica[] JOHN : e [app] OWN(y)(JOHN): ~, / [trace--] )ty.OWN(y)(JOHS) I e --+ l; [[exical] THAT: (e --+ 1;) --+ (e --+ 1;) ---+ (e ---+ t) [app] An.,,\z.n(z)~OWN(z)(JOHN): (e -'+ 1;) -'* (e ---* I;) [lexlcal] CAR: e ~ 1; [app] ~kz.CAR(Z)~OWN(z)(JOHN) " e -'~ 1; Figure 5: Using Derived Rules z) ~ (e --* t). Given appropriate syntactic licens- ing, Figure 5 shows the derivation of a meaning for car tha~ John o~#ns. Each nonleaf node in the derivation is labeled with the rule that was used to derive it, and leaf nodes are labeled accord- ing to their origin (lexical entries for words in the phrase or syntactic traces). The assumptions at each node are not given explicitly, but can be eas- ily computed by looking in the subtree rooted at the node for undischarged assumptions. 
4.2 Bound Anaphora Introduction and Elimination Another pair of rules, shown in Figure 4, is re- sponsible for introducing a pronoun and resolving it as bound anaphora. The pronoun resolution rule [pron-] applies only when B is trace or quant(q) for some quantifier q. Furthermore, the premise y : B does not belong to an immediate constituent of the phrase licensing the rule, but rather to some undischarged assumption of s : A, which will re- main undischarged. These rules deal only with the construction of the meaning of phrases containing bound anaphora. In a more detailed granunar, the li- censing of both rules would be further restricted by linguistic constraints on coreference -- for in- stance, those usually associated with c-command (Reinhart, 1983), which seem to need access to syntactic information (Williams, 1986). In partic- ular, the rules as given do not by themselves en- force any constraints on the possible antecedents of reflexives. The soundness of the rules can be seen by noting that the schematic derivation (z : pron) z.'e s:A y:B : A to a special case of the schematic corresponds derivation 2 : e) s:A y:e Az.s : e ---. A (Ax.s)Cy) : A The example derivation in Figure 7, which will be explianed in more detail later, shows the applica- tion of the anaphora rules in deriving an interpre- tation for example sentence (2). 156 [quant+] : q: (e --* 10 --* t z: quant(q) ~g:e [quant--] : (=: quant(~)) s:t q(A=.s) : t Figure 6: Quantifier Rules 4.3 Quantifier Raising The rules discussed earlier provide some of the auxiliary machinery required to illustrate the free- variable constraint. However, the main burden of enforcing the constraint falls on the rules responsi- ble for quantifier raising, and therefore I will cover in somewhat greater detail the derivation of those rules from the basic rules of functionality. I will follow here the standard view (Montague, 1973; Barwise and Cooper, 1981) that natural- language determiners have meanings of type (e --* t) --* (e --* 10 ---+ ¢. For example, the mean- ing of every might be Ar.As.Vz.r(z) ~ s(z), and the meaning of the noun phrase every man will be As.Vz.MAN(z) =~ s(z). To interpret the combina- tion of a quantified noun phrase with the phrase containing it that forms its scope, we apply the meaning of the noun phrase to a property s de- rived from the meaning of the scope. The pur- pose of devices such as quantifying-in in Montague grammar, Cooper storage or quantifier raising in transformational grammar is to determine a scope for each noun phrase in a sentence. From a se- mantic point of view, the combination of a noun phrase with its scope, most directly expressed by Montague's quantifying-in rules, 4 corresponds to the following schematic derivation in the basic cal- culus (rules lapp] and labs] only): (=: e) #:'G Az.s : e ---, l; q : (e ---, l:) ---, t q(t=.s) : ~ • where the assumption z : • is introduced in the derivation at a position corresponding to the oc- currence of the noun phrase with meaning q in the sentence. In Montague grammar, this corre- spondence is enforced by using a notion of syn- tactic combination that does not respect the syn- 4I!1 gmaered, quantifyilMg-in has to apply not only to proposition-type scopes but ahto to property-type scopes (meAnings of common-noun phrases and verb-phrases). Ex- tending the argument that foUows to those cases offers no difficulties. 157 tactic structure of sentences with quantified noun phrases. 
Cooper storage was in part developed to cure this deficiency, and the derived rules pre- sented below address the same problem. Now, the free-variable constraint is involved in situations in which the quantifier q itself depends on assumptions that must be discharged. The rel- evant incomplete schematic derivation (again in terms of [app] and labs] only) is (a) (z : e) (b) Y: • s : t q :(e --, t) --+ t (5) ~x.s : e-.-+ t ? q(Az.s) : t ? Given that the assumption y : • has not been dis- charged in the derivation of q : (e ---, ~) ---, t, that is, y : • is an undischarged assumption of q : (e ---, t) -* t, the question is how to com- plete the whole derivation. If the assumption were discharged before q had been combined with its scope, the result would be the semantic object Ay.q : • --, (e --, t) ---, t, which is of the wrong type to be combined by lapp] with the scope Az.s. Therefore, there is no choice but to discharge (b) after q is combined with its scope. Put in an- other way, q cannot be raised outside the scope of abstraction for the variable y occurring free in q," which is exactly what is going on in Example (4) ('An author who John has read every book by arrived'). A correct schematic derivation is then (a) (= : 0) : (b) (V: 0) 8:t Az., : • -. t ~ : (e ~ t) ----+ t q(~z.s) : ¢ u:A Ay.u : e--+ A In the schematic derivations above, nothing en- sures the association between the syntactic posi- EVERY MAN EVERY(MAN) (a) ~n: quant(EVERY(MAN)) (b) h :pron [quant-I-] rrt : e FRIEND-OF [pron-I-] h : e SAw(1)( ) I [quant--] A(FRIEND-OF(h))(Af.SAW(f)(m)) [pron--] A (FRIEND-OF (Ira)) (~f.SAW (f)(rn)) I [quant--] EVERY(MAN)(Am.A (FRIEND-OF(m))(Af.SAW (f)(m))) Most interpretation types and the inference rule label on uses of [app] have been omitted for simplicity. Figure 7: Derivation Involving Anaphora and Quantification tion of the quantified noun phrase and the intro- duction of assumption (a). To do this, we need the the derived rules in Figure 6. Rule [qusnt-t-] is licensed by a quantified noun phrase. Rule [qusnt-] is not keyed to any particular syntactic construction, but instead may be applied when- ever its premises are satisfied. It is clear that any use of [quant+] and [quant--] in a derivation z:e s:t q(Ax.s) : can be justified by translating it into an instance of the schematic derivation (5). The situation relevant to the free-variable con- straint arises when q in [quant+] depends on as- sumptions. It is straightforward to see that the 158 constraint on a sound derivation according to the basic rules discussed earlier in this section turns now into the constraint that an assumption of the form z : quant(q) must be discharged before any of the assumptions on which q depends. Thus, the free-variable constraint is reduced to a constraint on derivations imposed by the basic theory of func- tionality, dispensing with a logical-form represen- tation of the constraint. Figure 7 shows a deriva- tion for the only possible scoping of sentence (2) when erery man is selected as the antecedent of his. To allow for the selected coreference, the pro- noun assumption must be discharged before the quantifier assumption (a) for every man. Further- more, the constraint on dependent assumptions requires that the quantifier assumption (c) for a friend of his be discharged before the pronoun as- sumption (b) on which it depends. It then follows that assumption (c) will be discharged before as- sumption (a), forcing wide scope for every man. 
5 Discussion

The approach to semantic interpretation outlined above avoids the need for manipulations of logical forms in deriving the possible meanings of quantified sentences. It also avoids the need for such devices as distinguished variables (Gazdar, 1982; Cooper, 1983) to deal with trace abstraction. Instead, specialized versions of the basic rule of functional abstraction are used. To my knowledge, the only other approaches to these problems that do not depend on formal operations on logical forms are those based on specialized logics of type change, usually restrictions of the Curry or Lambek systems (van Benthem, 1986a; Hendriks, 1987; Moortgat, 1988). In those accounts, a phrase P with meaning p of type T is considered to have also an alternative meaning p' of type T', with the corresponding combination possibilities, if p' : T' follows from p : T in the chosen logic. The central problem in this approach is to design a calculus that will cover all the actual semantic alternatives (for instance, all the possible quantifier scopings) without introducing spurious interpretations. For quantifier raising, the system of Hendriks (1987) seems the most promising so far, but it is at present too restrictive to support raising from noun-phrase complements.

An important question I have finessed here is that of the compositionality of the proposed semantic calculus. It is clear that the application of semantic rules is governed only by the existence of appropriate syntactic licensing and by the availability of premises of the appropriate types. In other words, no rule is sensitive to the form of any of the meanings appearing in its premises. However, there may be some doubt as to the status of the basic abstraction rule and those derived from it. After all, the use of λ-abstraction in the consequent of those rules seems to imply the constraint that the abstracted object should formally be a variable. However, this is only superficially the case. I have used the formal operation of λ-abstraction to represent functional abstraction in this paper, but functional abstraction itself is independent of its formal representation in the λ-calculus. This can be shown either by using other notations for functions and abstraction, such as de Bruijn's (Barendregt, 1984; Huet, 1986), or by expressing the semantic derivation rules in λProlog (Miller and Nadathur, 1986) following existing presentations of natural deduction systems (Felty and Miller, 1988).

Acknowledgments

This research was supported by a contract with the Nippon Telephone and Telegraph Corp. and by a gift from the Systems Development Foundation as part of a coordinated research effort with the Center for the Study of Language and Information, Stanford University. I thank Mary Dalrymple and Stuart Shieber for their helpful discussions regarding this work.

Bibliography

Alan Ross Anderson and Nuel D. Belnap, Jr. 1975. Entailment: the Logic of Relevance and Necessity, Volume I. Princeton University Press, Princeton, New Jersey.

Henk P. Barendregt. 1984. The Lambda Calculus: its Syntax and Semantics. North-Holland, Amsterdam, Holland.

Jon Barwise and Robin Cooper. 1981. Generalized quantifiers and natural language. Linguistics and Philosophy, 4:159-219.

Robin Cooper. 1983. Quantification and Syntactic Theory. D. Reidel, Dordrecht, Netherlands.

Haskell B. Curry and Robert Feys. 1968. Combinatory Logic, Volume I. Studies in Logic and the Foundations of Mathematics. North-Holland, Amsterdam, Holland. Second printing.

David R. Dowty, Robert E. Wall, and Stanley Peters. 1981. Introduction to Montague Semantics, Volume 11 of Synthese Language Library. D. Reidel, Dordrecht, Holland.

Amy Felty and Dale Miller. 1988. Specifying theorem provers in a higher-order logic programming language. Technical Report MS-CIS-88-12, Department of Computer and Information Science, University of Pennsylvania, Philadelphia, Pennsylvania.

Gerald Gazdar. 1982. Phrase structure grammar. In P. Jacobson and G.K. Pullum, editors, The Nature of Syntactic Representation, pages 131-186. D. Reidel, Dordrecht, Holland.

Irene R. Heim. 1982. The Semantics of Definite and Indefinite Noun Phrases. Ph.D. thesis, Department of Linguistics, University of Massachusetts, Amherst, Massachusetts (September).

Herman Hendriks. 1987. Type change in semantics: the scope of quantification and coordination. In Ewan Klein and Johan van Benthem, editors, Categories, Polymorphism and Unification, pages 95-120. Centre for Cognitive Science, University of Edinburgh, Edinburgh, Scotland.

Jerry R. Hobbs and Stuart M. Shieber. 1987. An algorithm for generating quantifier scopings. Computational Linguistics, 13:47-63.

W.A. Howard. 1980. The formulae-as-types notion of construction. In J.P. Seldin and J.R. Hindley, editors, To H.B. Curry: Essays on Combinatory Logic, Lambda Calculus and Formalism, pages 479-490. Academic Press, London, England.

Gérard Huet. 1986. Formal structures for computation and deduction. First edition of the lecture notes of a course given in the Computer Science Department of Carnegie-Mellon University during the Spring of 1986 (May).

Joachim Lambek. 1958. The mathematics of sentence structure. American Mathematical Monthly, 65:154-170.

Joachim Lambek. 1980. From λ-calculus to cartesian closed categories. In J.P. Seldin and J.R. Hindley, editors, To H.B. Curry: Essays on Combinatory Logic, Lambda Calculus and Formalism, pages 375-402. Academic Press, London, England.

Robert May. 1985. Logical Form: its Structure and Derivation, Volume 12 of Linguistic Inquiry Monographs. MIT Press, Cambridge, Massachusetts.

Dale A. Miller and Gopalan Nadathur. 1986. Higher-order logic programming. In Ehud Shapiro, editor, Third International Conference on Logic Programming, Berlin, Germany. Springer-Verlag.

Richard Montague. 1973. The proper treatment of quantification in ordinary English. In Richmond H. Thomason, editor, Formal Philosophy. Yale University Press.

Michael Moortgat. 1988. Categorial Investigations: Logical and Linguistic Aspects of the Lambek Calculus. Ph.D. thesis, University of Amsterdam, Amsterdam, Holland (October).

Douglas B. Moran. 1988. Quantifier scoping in the SRI Core Language Engine. In 26th Annual Meeting of the Association for Computational Linguistics, pages 33-47, Morristown, New Jersey. Association for Computational Linguistics.

Barbara Partee and Emmon Bach. 1984. Quantification, pronouns and VP anaphora. In J.A.G. Groenendijk, T.M.V. Janssen, and M.B.J. Stokhof, editors, Truth, Interpretation and Information, pages 99-130. Foris, Dordrecht, Holland.

Martha E. Pollack and Fernando C.N. Pereira. 1988. An integrated framework for semantic and pragmatic interpretation. In 26th Annual Meeting of the Association for Computational Linguistics, pages 75-86, Morristown, New Jersey. Association for Computational Linguistics.

Dag Prawitz. 1965. Natural Deduction: A Proof-Theoretical Study. Almqvist and Wiksell, Uppsala, Sweden.

Tanya Reinhart. 1983. Anaphora and Semantic Interpretation. Croom Helm, London, England, corrected and revised printing, 1987 edition.

Craige Roberts. 1987. Modal Subordination, Anaphora and Distributivity. Ph.D. thesis, Department of Linguistics, University of Massachusetts, Amherst, Massachusetts (February).

Mark Steedman. 1987. Combinatory grammars and parasitic gaps. Natural Language and Linguistic Theory, 5(3):403-439.

Sören Stenlund. 1972. Combinators, λ-Terms and Proof Theory. D. Reidel, Dordrecht, Holland.

Johan van Benthem. 1986a. Categorial grammar and lambda calculus. In D. Skordev, editor, Mathematical Logic and its Application, pages 39-60. Plenum Press, New York, New York.

Johan van Benthem. 1986b. Essays in Logical Semantics, Volume 29 of Studies in Linguistics and Philosophy. D. Reidel, Dordrecht, Holland.

Jan van Eijck. 1985. Aspects of Quantification in Natural Language. Ph.D. thesis, University of Groningen, Groningen, Holland (February).

Henk van Riemsdijk and Edwin Williams. 1986. Introduction to the Theory of Grammar, Volume 12 of Current Studies in Linguistics. MIT Press, Cambridge, Massachusetts.

Kurt A. Vanlehn. 1978. Determining the scope of English quantifiers. Master's thesis, M.I.T. (June).

Edwin Williams. 1986. A reassignment of the functions of LF. Linguistic Inquiry, 17(2):265-299.
A Semantic-Head-Driven Generation Algorithm for Unification-Based Formalisms

Stuart M. Shieber,* Gertjan van Noord,† Robert C. Moore,* and Fernando C. N. Pereira*

*Artificial Intelligence Center, SRI International, Menlo Park, CA 94025, USA
†Department of Linguistics, Rijksuniversiteit Utrecht, Utrecht, Netherlands

Abstract

We present an algorithm for generating strings from logical form encodings that improves upon previous algorithms in that it places fewer restrictions on the class of grammars to which it is applicable. In particular, unlike an Earley deduction generator (Shieber, 1988), it allows use of semantically nonmonotonic grammars, yet unlike top-down methods, it also permits left-recursion. The enabling design feature of the algorithm is its implicit traversal of the analysis tree for the string being generated in a semantic-head-driven fashion.

1 Introduction

The problem of generating a well-formed natural-language expression from an encoding of its meaning possesses certain properties which distinguish it from the converse problem of recovering a meaning encoding from a given natural-language expression. In previous work (Shieber, 1988), however, one of us attempted to characterize these differing properties in such a way that a single uniform architecture, appropriately parameterized, might be used for both natural-language processes. In particular, we developed an architecture inspired by the Earley deduction work of Pereira and Warren (1983) but which generalized that work, allowing for its use in both a parsing and a generation mode merely by setting the values of a small number of parameters.

As a method for generating natural-language expressions, the Earley deduction method is reasonably successful along certain dimensions. It is quite simple, general in its applicability to a range of unification-based and logic grammar formalisms, and uniform, in that it places only one restriction (discussed below) on the form of the linguistic analyses allowed by the grammars used in generation. In particular, generation from grammars with recursions whose well-foundedness relies on lexical information will terminate; top-down generation regimes such as those of Wedekind (1988) or Dymetman and Isabelle (1988) lack this property, discussed further in Section 3.1.

Unfortunately, the bottom-up, left-to-right processing regime of Earley generation -- as it might be called -- has its own inherent frailties. Efficiency considerations require that only grammars possessing a property of semantic monotonicity can be effectively used, and even for those grammars, processing can become overly nondeterministic.

The algorithm described in this paper is an attempt to resolve these problems in a satisfactory manner. Although we believe that this algorithm could be seen as an instance of a uniform architecture for parsing and generation -- just as the extended Earley parser (Shieber, 1985b) and the bottom-up generator were instances of the generalized Earley deduction architecture -- our efforts to date have been aimed foremost toward the development of the algorithm for generation alone. We will have little to say about its relation to parsing, leaving such questions for later research.¹

¹ Martin Kay (personal communication) has developed a parsing algorithm that seems to be the parsing correlate to the generation algorithm presented here. Its existence might point the way towards a uniform architecture.

2 Applicability of the Algorithm

As does the Earley-based generator, the new algorithm assumes that the grammar is a unification-based or logic grammar with a phrase-structure backbone and complex nonterminals. Furthermore, and again consistent with previous work, we assume that the nonterminals associate to the phrases they describe logical expressions encoding their possible meanings. We will describe the algorithm in terms of an implementation of it for definite-clause grammars (DCG), although we believe
Furthermore, and again consistent with previous work, we assume that the nonterminals associate to the phrases they describe logical expressions encoding their possible meanings. We will describe the algorithm in terms of an implementation of it for definite-clause grammars (DCG), although we believe the underlying method to be more broadly applicable.

¹Martin Kay (personal communication) has developed a parsing algorithm that seems to be the parsing correlate to the generation algorithm presented here. Its existence might point the way towards a uniform architecture.

A variant of our method is used in Van Noord's BUG (Bottom-Up Generator) system, part of MiMo2, an experimental machine translation system for translating international news items of Teletext, which uses a Prolog version of PATR-II similar to that of Hirsh (1987). According to Martin Kay (personal communication), the STREP machine translation project at the Center for the Study of Language and Information uses a version of our algorithm to generate with respect to grammars based on head-driven phrase-structure grammar (HPSG). Finally, Calder et al. (1989) report on a generation algorithm for unification categorial grammar that appears to be a special case of ours.

3 Problems with Existing Generators

Existing generation algorithms have efficiency or termination problems with respect to certain classes of grammars. We review the problems of both top-down and bottom-up regimes in this section.

3.1 Problems with Top-Down Generators

Consider a naive top-down generation mechanism that takes as input the semantics to generate from and a corresponding syntactic category and builds a complete tree, top-down, left-to-right, by applying rules of the grammar nondeterministically to the fringe of the expanding tree. This control regime is realized, for instance, when running a DCG "backwards" as a generator. Clearly, such a generator may not terminate. For example, consider a grammar that includes the rule

    s/S --> np/NP, vp(NP)/S.

(The intention is that verb phrases like, say, "loves Mary" be associated with a nonterminal vp(X)/love(X, mary).) Once this rule is applied to the goal s/love(john, mary), the subgoal np/NP will be considered. But the generation search space for that goal is infinite and so has infinite branches, because all noun phrases, and thus arbitrarily large ones, match the goal. This is an instance of the general problem known from logic programming that a logic program may not terminate when called with a goal less instantiated than what was intended by the program's designer. Dymetman and Isabelle (1988), noting this problem, propose allowing the grammar-writer to specify a separate goal ordering for parsing and for generation. For the case at hand, the solution is to generate the VP first--from the goal vp(NP)/love(john, mary)--in the course of which the variable NP will become bound so that the generation from np/NP will terminate. Wedekind (1988) achieves this goal by expanding first nodes that are connected, that is, whose semantics is instantiated. Since the NP is not connected in this sense, but the VP is, the latter will be expanded first. In essence, the technique is a kind of goal freezing (Colmerauer, 1982) or implicit wait declaration (Naish, 1986). For cases in which the a priori ordering of goals is insufficient, Dymetman and Isabelle also introduce goal freezing to control expansion.
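To see the naive failure mode concretely, here is a minimal self-contained sketch (our own toy grammar, not the paper's; loadable in any standard Prolog with DCG support) of running a DCG backwards as a generator:

    % Toy grammar with a recursive NP rule.  Querying
    %   ?- phrase(s(love(john, mary)), String).
    % finds the intended string, but exhaustive backtracking into the
    % underinstantiated goal np(NP) enumerates ever larger noun phrases
    % ('john s mother', 'john s mother s mother', ...), so the search
    % space is never exhausted.
    s(S)          --> np(NP), vp(NP, S).
    np(john)      --> [john].
    np(mary)      --> [mary].
    np(mother(X)) --> np(X), [s, mother].
    vp(Subj, love(Subj, Obj)) --> [loves], np(Obj).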
Although vastly superior to the naive top-down algorithm, even this sort of amended top-down approach to generation based on goal freezing under one guise or another fails to terminate with certain linguistically plausible analyses. For example, the "complements" rule given by Shieber (1985a, pages 77-78) in the PATR-II formalism

    VP1 --> VP2 X
    (VP1 head)         = (VP2 head)
    (VP2 syncat first) = (X)
    (VP2 syncat rest)  = (VP1 syncat)

can be encoded as the DCG-style rule:

    vp(Head, Syncat) --> vp(Head, [Compl|Syncat]), Compl.

Top-down generation using this rule will be forced to expand the lower VP before its complement, since Compl is uninstantiated initially. But application of the rule can recur indefinitely, leading to nontermination.

The problem arises because there is no limit to the size of the subcategorization list. Although one might propose an ad hoc upper bound for lexical entries, even this expedient may be insufficient. In analyses of Dutch cross-serial verb constructions (Evers, 1975; Huybrechts, 1984), subcategorization lists such as these may be appended by syntactic rules (Moortgat, 1984; Steedman, 1985; Pollard, 1988), resulting in indefinitely long lists. Consider the Dutch sentence

    dat [Jan [Marie [de oppasser [de olifanten [zag helpen voeren]]]]]
    that John  Mary  the keeper   the elephants saw  help   feed
    'that John saw Mary help the keeper feed the elephants'

The string of verbs is analysed by appending their subcategorization lists as follows:

    V[e,k,m,j]
        V[m,j]      zag     'saw'
        V[e,k,m]
            V[k,m]  helpen  'help'
            V[e,k]  voeren  'feed'

Subcategorization lists under this analysis can have any length, and it is impossible to predict from a semantic structure the size of its corresponding subcategorization list merely by examining the lexicon.

In summary, top-down generation algorithms, even if controlled by the instantiation status of goals, can fail to terminate on certain grammars. In the case given above the well-foundedness of the generation process resides in lexical information unavailable to top-down regimes.

3.2 Problems with Bottom-Up Generators

The bottom-up Earley-deduction generator does not fall prey to these problems of nontermination in the face of recursion, because lexical information is available immediately. However, several important frailties of the Earley generation method were noted, even in the earlier work.

For efficiency, generation using this Earley deduction method requires an incomplete search strategy, filtering the search space using semantic information. The semantic filter makes generation from a logical form computationally feasible, but preserves completeness of the generation process only in the case of semantically monotonic grammars -- those grammars in which the semantic component of each right-hand-side nonterminal subsumes some portion of the semantic component of the left-hand side. The semantic monotonicity constraint itself is quite restrictive. Although it is intuitively plausible that the semantic content of subconstituents ought to play a role in the semantics of their combination--this is just a kind of compositionality claim--there are certain cases in which reasonable linguistic analyses might violate this intuition. In general, these cases arise when a particular lexical item is stipulated to occur, the stipulation being either lexical (as in the case of particles or idioms) or grammatical (as in the case of expletive expressions).
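The grammar fragment of Figure 1 in Section 4.2 contains just such a case: the particle complement p/up of the particle-verb entry contributes a semantics (up) that is not subsumed by any portion of the clause semantics call_up(S,O), so a bottom-up semantic filter keyed to the goal logical form would filter the particle's derivation out:

    vp(finite, [np(_)/O, p/up, np(3-sing)/S])/call_up(S,O) ---> [calls].
    p/up ---> [up].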
Second, the left-to-right scheduling of Earley parsing, geared as it is toward the structure of the string rather than that of its meaning, is inherently more appropriate for parsing than generation.² This manifests itself in an overly high degree of nondeterminism in the generation process. For instance, various nondeterministic possibilities for generating a noun phrase (using different cases, say) might be entertained merely because the NP occurs before the verb which would more fully specify, and therefore limit, the options. This nondeterminism has been observed in practice.

²Pereira and Warren (1983) point out that Earley deduction is not restricted to a left-to-right expansion of goals, but this suggestion was not followed up with a specific algorithm addressing the problems discussed here.

3.3 Source of the Problems

We can think of a parsing or generation process as discovering an analysis tree,³ one admitted by the grammar and satisfying certain syntactic or semantic conditions, by traversing a virtual tree and constructing the actual tree during the traversal. The conditions to be satisfied--possessing a given yield in the parsing case, or having a root node labeled with given semantic information in the case of generation--reflect the different premises of the two types of problem. From this point of view, a naive top-down parser or generator performs a depth-first, left-to-right traversal of the tree. Completion steps in Earley's algorithm, whether used for parsing or generation, correspond to a post-order traversal (with prediction acting as a pre-order filter). The left-to-right traversal order of both of these methods is geared towards the given information in a parsing problem, the string, rather than that of a generation problem, the goal logical form. It is exactly this mismatch between structure of the traversal and structure of the problem premise that accounts for the profligacy of these approaches when used for generation.

³We use the term "analysis tree" rather than the more familiar "parse tree" to make clear that the source of the tree is not necessarily a parsing process; rather the tree serves only to codify a particular analysis of the structure of the string.

Thus for generation, we want a traversal order geared to the premise of the generation problem, that is, to the semantic structure of the sentence. The new algorithm is designed to reflect such a traversal strategy respecting the semantic structure of the string being generated, rather than the string itself.

4 The New Algorithm

Given an analysis tree for a sentence, we define the pivot node as the lowest node in the tree such that it and all higher nodes up to the root have the same semantics. Intuitively speaking, the pivot serves as the semantic head of the root node. Our traversal will proceed both top-down and bottom-up from the pivot, a sort of semantic-head-driven traversal of the tree. The choice of this traversal allows a great reduction in the search for rules used to build the analysis tree.

To be able to identify possible pivots, we distinguish a subset of the rules of the grammar, the chain rules, in which the semantics of some right-hand-side element is identical to the semantics of the left-hand side. The right-hand-side element will be called the rule's semantic head.⁴ The traversal, then, will work top-down from the pivot using a nonchain rule, for if a chain rule were used, the pivot would not be the lowest node sharing semantics with the root.
Instead, the pivot's semantic head would be. After the nonchain rule is chosen, each of its children must be generated recursively.

The bottom-up steps to connect the pivot to the root of the analysis tree can be restricted to chain rules only, as the pivot (along with all intermediate nodes) has the same semantics as the root and must therefore be the semantic head. Again, after a chain rule is chosen to move up one node in the tree being constructed, the remaining (non-semantic-head) children must be generated recursively.

The top-down base case occurs when the nonchain rule has no nonterminal children, i.e., it introduces lexical material only. The bottom-up base case occurs when the pivot and root are trivially connected because they are one and the same node.

⁴In case there are two right-hand-side elements that are semantically identical to the left-hand side, there is some freedom in choosing the semantic head, although the choice is not without ramifications. For instance, in some analyses of NP structure, a rule such as

    np/NP --> det/NP, nbar/NP.

is postulated. In general, a chain rule is used bottom-up from its semantic head and top-down on the non-semantic-head siblings. Thus, if a non-semantic-head subconstituent has the same semantics as the left-hand side, a recursive top-down generation with the same semantics will be invoked. In theory, this can lead to nontermination, unless syntactic factors eliminate the recursion, as they would in the rule above regardless of which element is chosen as semantic head. In a rule for relative clause introduction such as the following (in highly abbreviated form)

    nbar/N --> nbar/N, sbar/N.

we can (and must) choose the nominal as semantic head to effect termination. However, there are other problematic cases, such as verb-movement analyses of verb-second languages, whose detailed discussion is beyond the scope of this paper.

4.1 A DCG Implementation

To make the description more explicit, we will develop a Prolog implementation of the algorithm for DCGs, along the way introducing some niceties of the algorithm previously glossed over. In the implementation, a term of the form node(Cat, P0, P) represents a phrase with the syntactic and semantic information given by Cat starting at position P0 and ending at position P in the string being generated. As usual for DCGs, a string position is represented by the list of string elements after the position. The generation process starts with a goal category and attempts to generate an appropriate node, in the process instantiating the generated string.

    gen(Cat, String) :-
        generate(node(Cat, String, [])).

To generate from a node, we nondeterministically choose a nonchain rule whose left-hand side will serve as the pivot. For each right-hand-side element, we recursively generate, and then connect the pivot to the root.

    generate(Root) :-
        % choose nonchain rule
        applicable_non_chain_rule(Root, Pivot, RHS),
        % generate all subconstituents
        generate_rhs(RHS),
        % generate material on path to root
        connect(Pivot, Root).

The processing within generate_rhs is a simple iteration.

    generate_rhs([]).
    generate_rhs([First | Rest]) :-
        generate(First),
        generate_rhs(Rest).

The connection of a pivot to the root, as noted before, requires choice of a chain rule whose semantic head matches the pivot, and the recursive generation of the remaining right-hand side.
We assume a predicate applicable_chain_rule(SemHead, LHS, Root, RHS) that holds if there is a chain rule admitting a node LHS as the left-hand side, SemHead as its semantic head, and RHS as the remaining right-hand-side nodes, such that the left-hand-side node and the root node Root can themselves be connected.

    connect(Pivot, Root) :-
        % choose chain rule
        applicable_chain_rule(Pivot, LHS, Root, RHS),
        % generate remaining siblings
        generate_rhs(RHS),
        % connect the new parent to the root
        connect(LHS, Root).

The base case occurs when the root and the pivot are the same. Identity checks like this one must be implemented correctly in the generator by using a sound unification algorithm with the occurs check. (The default unification in most Prolog systems is unsound in this respect.) For example, a grammar with a gap-threading treatment of wh-movement (Pereira, 1981; Pereira and Shieber, 1985) might include the rule

    np(Agr, [np(Agr)/Sem|X]-X)/Sem ---> [].

stating that an NP with agreement Agr and semantics Sem can be empty provided that the list of gaps in the NP can be represented as the difference list [np(Agr)/Sem|X]-X, that is, the list containing an NP gap with the same agreement features Agr (Pereira and Shieber, 1985, p. 128). Because the above rule is a nonchain rule, it will be considered when trying to generate any nongap NP, such as the proper noun np(3-sing,G-G)/john. The base case of connect will try to unify that term with the head of the rule above, leading to the attempted unification of X with [np(Agr)/Sem|X], an occurs-check failure. The base case, incorporating the explicit call to a sound unification algorithm, is thus as follows:

    connect(Pivot, Root) :-
        % trivially connect pivot to root
        unify(Pivot, Root).

Now, we need only define the notion of an applicable chain or nonchain rule. A nonchain rule is applicable if the semantics of the left-hand side of the rule (which is to become the pivot) matches that of the root. Further, we require a top-down check that syntactically the pivot can serve as the semantic head of the root. For this purpose, we assume a predicate chained_nodes that codifies the transitive closure of the semantic head relation over categories. This is the correlate of the link relation used in left-corner parsers with top-down filtering; we direct the reader to the discussion by Matsumoto et al. (1983) or Pereira and Shieber (1985, p. 182) for further information.

    applicable_non_chain_rule(Root, Pivot, RHS) :-
        % semantics of root and pivot are same
        node_semantics(Root, Sem),
        node_semantics(Pivot, Sem),
        % choose a nonchain rule
        non_chain_rule(LHS, RHS),
        % ...whose lhs matches the pivot
        unify(Pivot, LHS),
        % make sure the categories can connect
        chained_nodes(Pivot, Root).

A chain rule is applicable to connect a pivot to a root if the pivot can serve as the semantic head of the rule and the left-hand side of the rule is appropriate for linking to the root.

    applicable_chain_rule(Pivot, Parent, Root, RHS) :-
        % choose a chain rule
        chain_rule(Parent, RHS, SemHead),
        % ...whose sem. head matches the pivot
        unify(Pivot, SemHead),
        % make sure the categories can connect
        chained_nodes(Parent, Root).

The information needed to guide the generation (given as the predicates chain_rule, non_chain_rule, and chained_nodes) can be computed automatically from the grammar; a program to compile a DCG into these tables has in fact been implemented. The details of the process will not be discussed further.
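The sound unify/2 assumed here can be a thin wrapper over the ISO built-in unify_with_occurs_check/2 (a sketch; the paper does not fix a particular implementation):

    unify(X, Y) :- unify_with_occurs_check(X, Y).

    % The gap-threading base case then fails soundly instead of building
    % a cyclic term:
    % ?- unify(X, [np(_Agr)/_Sem|X]).
    % false.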
The careful reader will have noticed, however, that no attention has been given to the issue of terminal symbols on the right-hand sides of rules. During the compilation process, the right-hand side of a rule is converted from a list of categories and terminal strings to a list of nodes connected together by the difference-list threading technique used for standard DCG compilation. At that point, terminal strings can be introduced into the string threading and need never be considered further.

    sentence/decl(S) ---> s(finite)/S.                                 (1)
    sentence/imp(S) ---> vp(nonfinite,[np(_)/you])/S.
    s(Form)/S ---> Subj, vp(Form,[Subj])/S.                            (2)
    vp(Form,Subcat)/S ---> vp(Form,[Compl|Subcat])/S, Compl.           (3)
    vp(Form,[Subj])/S ---> vp(Form,[Subj])/VP, adv(VP)/S.
    vp(finite,[np(_)/O,np(3-sing)/S])/love(S,O) ---> [loves].
    vp(finite,[np(_)/O,p/up,np(3-sing)/S])/call_up(S,O) ---> [calls].  (4)
    vp(finite,[np(3-sing)/S])/leave(S) ---> [leaves].
    np(3-sing)/john ---> [john].                                       (5)
    np(3-pl)/friends ---> [friends].                                   (6)
    adv(VP)/often(VP) ---> [often].
    det(3-sing,X,P)/qterm(every,X,P) ---> [every].
    n(3-sing,X)/friend(X) ---> [friend].
    n(3-pl,X)/friend(X) ---> [friends].
    ...
    p/up ---> [up].                                                    (7)
    p/on ---> [on].

    Figure 1: Grammar Fragment

4.2 An Example

We turn now to a simple example to give a sense of the order of processing pursued by this generation algorithm. The grammar fragment in Figure 1 uses an infix operator / to separate syntactic and semantic category information. Subcategorization for complements is performed lexically. Consider the generation from the category sentence/decl(call_up(john,friends)). The analysis tree that we will be implicitly traversing in the course of generation is given in Figure 2. The rule numbers are keyed to the grammar; the number beside each node names the rule that expands it. The pivots chosen during the two clause-level generation steps are [a] and [f], and the branches [f]-[e], [e]-[d], and [d]-[b] correspond to the semantic head relation.

    [a] sentence/decl(call_up(john,friends))                           (1)
      [b] s(finite)/call_up(john,friends)                              (2)
        [c] np(3-sing)/john: "john"                                    (5)
        [d] vp(finite,[np(3-sing)/john])/call_up(john,friends)         (3)
          [e] vp(finite,[p/up,np(3-sing)/john])/call_up(john,friends)  (3)
            [f] vp(finite,[np(3-pl)/friends,p/up,np(3-sing)/john])
                   /call_up(john,friends): "calls"                     (4)
            [g] np(3-pl)/friends: "friends"                            (6)
          [h] p/up: "up"                                               (7)

    Figure 2: Analysis Tree Traversal

We begin by attempting to find a nonchain rule that will define the pivot. This is a rule whose left-hand-side semantics matches the root semantics decl(call_up(john,friends)) (although its syntax may differ). In fact, the only such nonchain rule is

    sentence/decl(S) ---> s(finite)/S.    (1)

We conjecture that the pivot is labeled sentence/decl(call_up(john,friends)). In terms of the tree traversal, we are implicitly choosing the root node [a] as the pivot. We recursively generate from the child's node [b], whose category is s(finite)/call_up(john,friends).

For this category, the pivot (which will turn out to be node [f]) will be defined by the nonchain rule

    vp(finite,[np(_)/O, p/up, np(3-sing)/S])/call_up(S,O) ---> [calls].    (4)

(If there were other forms of the verb, these would be potential candidates, but would be eliminated by the chained_nodes check, as the semantic head relation requires identity of the verb form of a sentence and its VP head.) Again, we recursively generate for all the nonterminal elements of the right-hand side of this rule, of which there are none. We must therefore connect the pivot [f] to the root [b]. A chain rule whose semantic head matches the pivot must be chosen.
The only choice is the rule

    vp(Form,Subcat)/S ---> vp(Form,[Compl|Subcat])/S, Compl.    (3)

Unifying in the pivot, we find that we must recursively generate the remaining RHS element np(_)/friends, and then connect the left-hand-side node [e] with category

    vp(finite,[p/up,np(3-sing)/john])/call_up(john,friends)

to the same root [b]. The recursive generation yields a node covering the string "friends" following the previously generated string "calls". The recursive connection will use the same chain rule, generating the particle "up", and the new node to be connected [d]. This node requires the chain rule

    s(Form)/S ---> Subj, vp(Form,[Subj])/S.    (2)

for connection. Again, the recursive generation for the subject yields the string "John", and the new node to be connected s(finite)/call_up(john,friends). This last node connects to the root [b] by virtue of identity.

This completes the process of generating top-down from the original pivot sentence/decl(call_up(john,friends)). All that remains is to connect this pivot to the original root. Again, the process is trivial, by virtue of the base case for connection. The generation process is thus completed, yielding the string "John calls friends up". The drawing summarizes the generation process by showing which steps were performed top-down or bottom-up by arrows on the analysis tree branches.

The grammar presented here was perforce trivial, for expository reasons. We have developed more extensive experimental grammars that can generate relative clauses with gaps and sentences with quantified NPs from quantified logical forms by using a version of Cooper storage (Cooper, 1983). We give an outline of our treatment of quantification in Section 6.2.

5 Important Properties of the Algorithm

Several properties of the algorithm are exhibited by the preceding example.

First, the order of processing is not left-to-right. The verb was generated before any of its complements. Because of this, the semantic information about the particle "up" was available, even though this information appears nowhere in the goal semantics. That is, the generator operated appropriately despite a semantically nonmonotonic grammar.

In addition, full information about the subject, including agreement information, was available before it was generated. Thus the nondeterminism that is an artifact of left-to-right processing, and a source of inefficiency in the Earley generator, is eliminated. Indeed, the example here was completely deterministic; all rule choices were forced.

Finally, even though much of the processing is top-down, left-recursive rules (e.g., rule (3)) are still handled in a constrained manner by the algorithm.

For these reasons, we feel that the semantic-head-driven algorithm is a significant improvement over top-down methods and the previous bottom-up method based on Earley deduction.

6 Extensions

We will now outline how the algorithm and the grammar it uses can be extended to encompass some important analyses and constraints.

6.1 Completeness and Coherence

Wedekind (1988) defines completeness and coherence of a generation algorithm as follows. Suppose a generator derives a string w from a logical form s, and the grammar assigns to w the logical form a. The generator is complete if s always subsumes a and coherent if a always subsumes s. The generator defined in Section 4.1 is not coherent or complete in this sense; it requires only that a and s be compatible, that is, unifiable.
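In Prolog terms, using the eat/2 logical forms of the example below, the distinction can be checked with the SWI-Prolog built-ins unifiable/3 and subsumes_term/2 (a sketch; the paper itself does not prescribe these predicates):

    % s and a may be compatible (unifiable) ...
    % ?- unifiable(eat(john, banana), eat(john, X), _).
    % true.
    % ... even though s does not subsume a, as completeness requires:
    % ?- subsumes_term(eat(john, banana), eat(john, X)).
    % false.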
If the logical-form language and semantic interpretation system provide a sound treatment of variable binding and scope, abstraction and application, completeness and coherence will be irrelevant because the logical form of any phrase will not contain free variables. However, neither semantic projections in lexical-functional grammar (LFG) (Halvorsen and Kaplan, 1988) nor definite-clause grammars provide the means for such a sound treatment: logical-form variables or missing arguments of predicates are both encoded as unbound variables (attributes with unspecified values in the LFG semantic projection) at the description level. Then completeness and coherence become important.

For example, suppose a grammar associated the following strings and logical forms.

    eat(john, X)                        'John ate'
    eat(john, banana)                   'John ate a banana'
    eat(john, nice(yellow(banana)))     'John ate a nice yellow banana'

The generator of Section 4.1 would generate any of these sentences for the logical form eat(john, X) (because of its incoherence) and would generate 'John ate' for the logical form eat(john, banana) (because of its incompleteness).

Coherence can be achieved by removing the confusion between object-level and metalevel variables mentioned above, that is, by treating logical-form variables as constants at the description level. In practice, this can be achieved by replacing each variable in the semantics from which we are generating by a new distinct constant (for instance with the numbervars predicate built into some implementations of Prolog). These new constants will not unify with any augmentations to the semantics. A suitable modification of our generator would be

    gen(Cat, String) :-
        cat_semantics(Cat, Sem),
        numbervars(Sem, 0, _),
        generate(node(Cat, String, [])).

This leaves us with the completeness problem. This problem arises when there are phrases whose semantics are not ground at the description level, but instead subsume the goal logical form of generation. For instance, in our hypothetical example, the string 'John ate' will be generated for semantics eat(john, banana). The solution is to test at the end of the generation procedure whether the feature structure that is found is complete with respect to the original feature structure. However, because of the way in which top-down information is used, it is unclear what semantic information is derived by the rules themselves, and what semantic information is available because of unifications with the original semantics. For this reason, so-called "shadow" variables are added to the generator that represent the feature structure derived by the grammar itself. Furthermore, a copy of the semantics of the original feature structure is made at the start of the generation process. Completeness is achieved by testing whether the semantics of the shadow is subsumed by the copy.

6.2 Quantifier Storage

We will outline here how to generate from a quantified logical form sentences with quantified NPs one of whose readings is the original logical form, that is, how to do quantifier-lowering automatically. For this, we will associate a quantifier store with certain categories and add to the grammar suitable store-manipulation rules. Each category whose constituents may create store elements will have a store feature.
Furthermore, for each such category whose semantics can be the scope of a quantifier, there will be an optional nonchain rule to take the top element of an ordered store and apply it to the semantics of the category. For example, here is the rule for sentences:

    s(Form, G0-G, Store)/quant(Q,X,R,S) --->
        s(Form, G0-G, [qterm(Q,X,R)|Store])/S.

The term quant(Q,X,R,S) represents a quantified formula with quantifier Q, bound variable X, restriction R, and scope S, and qterm(Q,X,R) is the corresponding store element.

In addition, some mechanism is needed to combine the stores of the immediate constituents of a phrase into a store for the phrase. For example, the combination of subject and complement stores for a verb into a clause store is done in one of our test grammars by lexical rules such as

    vp(finite, [np(_,SO)/O, np(3-sing,SS)/S], SC)/love(S,O) --->
        [loves], {shuffle(SS, SO, SC)}.

which states that the store SC of a clause with main verb 'love' and the stores SS and SO of the subject and object the verb subcategorizes for satisfy the constraint shuffle(SS, SO, SC), meaning that SC is an interleaving of elements of SS and SO in their original order.⁵

Finally, it is necessary to deal with the noun phrases that create store elements. Ignoring the issue of how to treat quantifiers from within complex noun phrases, we need lexical rules for determiners, of the form

    det(3-sing,X,P,[qterm(every,X,P)])/X ---> [every].

stating that the semantics of a quantified NP is simply the variable bound by the store element arising from the NP. For rules of this form to work properly, it is essential that distinct bound logical-form variables be represented as distinct constants in the terms encoding the logical forms. This is an instance of the problem of coherence discussed in the previous section.

The rules outlined here are less efficient than necessary because the distribution of store elements among the subject and complements of a verb does not check whether the variable bound by a store element actually appears in the semantics of the phrase to which it is being assigned, leading to many dead ends in the generation process. Also, the rules are sound for generation but not for analysis, because they do not enforce the constraint that every occurrence of a variable in logical form be outscoped by the variable's binder. Adding appropriate side conditions to the rules, following the constraints discussed by Hobbs and Shieber (1987), would not be difficult.

6.3 Postponing Lexical Choice

As it stands, the generation algorithm chooses particular lexical forms on-line. This approach can lead to a certain amount of unnecessary nondeterminism. For instance, the choice of verb form might depend on syntactic features of the verb's subject available only after the subject has been generated. This nondeterminism can be eliminated by deferring lexical choice to a postprocess. The generator will yield a list of lexical items instead of a list of words. To this list a small phonological front end is applied. BUG uses such a mechanism to eliminate much of the uninteresting nondeterminism in choice of word forms. Of course, the same mechanism could be added to any of the other generation techniques discussed in this paper.

⁵Further details of the use of shuffle in scoping are given by Pereira and Shieber (1985).

7 Further Research

Further enhancements to the algorithm are envisioned.
First, any system making use of a tabular link predicate over complex nonterminals (like the chained_nodes predicate used by the generation algorithm and including the link predicate used in the BUP parser (Matsumoto et al., 1983)) is subject to a problem of spurious redundancy in processing if the elements in the link table are not mutually exclusive. For instance, a single chain rule might be considered to be applicable twice because of the nondeterminism of the call to chained_nodes. This general problem has to date received little attention, and no satisfactory solution is found in the logic grammar literature.

More generally, the backtracking regimen of our implementation of the algorithm may lead to recomputation of results. Again, this is a general property of backtrack methods and is not particular to our application. The use of dynamic programming techniques, as in chart parsing, would be an appropriate augmentation to the implementation of the algorithm. Happily, such an augmentation would serve to eliminate the redundancy caused by the linking relation as well.

Finally, in order to incorporate a general facility for auxiliary conditions in rules, some sort of delayed evaluation triggered by appropriate instantiation (e.g., wait declarations (Naish, 1986)) would be desirable. None of these changes, however, constitutes restructuring of the algorithm; rather they modify its realization in significant and important ways.

Acknowledgments

Shieber, Moore, and Pereira were supported in this work by a contract with the Nippon Telephone and Telegraph Corp. and by a gift from the Systems Development Foundation as part of a coordinated research effort with the Center for the Study of Language and Information, Stanford University; van Noord was supported by the European Community and the Nederlands Bureau voor Bibliotheekwezen en Informatieverzorging through the Eurotra project. We would like to thank Mary Dalrymple and Louis des Tombe for their helpful discussions regarding this work.

Bibliography

Jonathan Calder, Mike Reape, and Henk Zeevat. 1989. An algorithm for generation in unification categorial grammar. In Proceedings of the 4th
Conference of the European Chapter of the Association for Computational Linguistics, pages 233-240, Manchester, England (10-12 April). University of Manchester Institute of Science and Technology.

Alain Colmerauer. 1982. PROLOG II: Manuel de référence et modèle théorique. Technical report, Groupe d'Intelligence Artificielle, Faculté des Sciences de Luminy, Marseille, France.

Robin Cooper. 1983. Quantification and Syntactic Theory, Volume 21 of Synthese Language Library. D. Reidel, Dordrecht, Netherlands.

Marc Dymetman and Pierre Isabelle. 1988. Reversible logic grammars for machine translation. In Proceedings of the Second International Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages, Pittsburgh, Pennsylvania. Carnegie-Mellon University.

Arnold Evers. 1975. The transformational cycle in German and Dutch. Ph.D. thesis, University of Utrecht, Utrecht, Netherlands.

Per-Kristian Halvorsen and Ronald M. Kaplan. 1988. Projections and semantic description in lexical-functional grammar. In Proceedings of the International Conference on Fifth Generation Computer Systems, pages 1116-1122, Tokyo, Japan. Institute for New Generation Computer Technology.

Susan Hirsh. 1987. P-PATR, a compiler for unification based grammars. In Veronica Dahl and Patrick Saint-Dizier, editors, Natural Language Understanding and Logic Programming, II. Elsevier Science Publishers.

Jerry R. Hobbs and Stuart M. Shieber. 1987. An algorithm for generating quantifier scopings. Computational Linguistics, 13:47-63.

Riny A. C. Huybrechts. 1984. The weak inadequacy of context-free phrase structure grammars. In G. de Haan, M. Trommelen, and W. Zonneveld, editors, Van Periferie naar Kern. Foris, Dordrecht, Holland.

Yuji Matsumoto, Hozumi Tanaka, Hideki Hirakawa, Hideo Miyoshi, and Hideki Yasukawa. 1983. BUP: a bottom-up parser embedded in Prolog. New Generation Computing, 1(2):145-158.

Michael Moortgat. 1984. A Fregean restriction on meta-rules. In Proceedings of NELS 14, pages 306-325, Amherst, Massachusetts. University of Massachusetts.

Lee Naish. 1986. Negation and Control in Prolog, Volume 238 of Lecture Notes in Computer Science. Springer-Verlag, Berlin, Germany.

Fernando C. N. Pereira and Stuart M. Shieber. 1985. Prolog and Natural-Language Analysis, Volume 10 of CSLI Lecture Notes. Center for the Study of Language and Information, Stanford, California. Distributed by Chicago University Press.

Fernando C. N. Pereira and David H. D. Warren. 1983. Parsing as deduction. In Proceedings of the 21st Annual Meeting, Cambridge, Massachusetts (June 15-17). Association for Computational Linguistics.

Fernando C. N. Pereira. 1981. Extraposition grammars. Computational Linguistics, 7(4):243-256 (October-December).

Carl Pollard. 1988. Categorial grammar and phrase structure grammar: an excursion on the syntax-semantics frontier. In R. Oehrle, E. Bach, and D. Wheeler, editors, Categorial Grammars and Natural Language Structures. D. Reidel, Dordrecht, Holland.

Stuart M. Shieber. 1985a. An Introduction to Unification-Based Approaches to Grammar, Volume 4 of CSLI Lecture Notes. Center for the Study of Language and Information, Stanford, California. Distributed by Chicago University Press.

Stuart M. Shieber. 1985b. Using restriction to extend parsing algorithms for complex-feature-based formalisms. In 23rd Annual Meeting of the Association for Computational Linguistics, pages 145-152, Morristown, New Jersey. Association for Computational Linguistics.

Stuart M. Shieber. 1988. A uniform architecture for parsing and generation. In Proceedings of the 12th International Conference on Computational Linguistics, pages 614-619, Budapest, Hungary.

Mark Steedman. 1985. Dependency and coordination in the grammar of Dutch and English. Language, 61(3):523-568.

Jürgen Wedekind. 1988. Generation as structure driven derivation. In Proceedings of the 12th International Conference on Computational Linguistics, pages 732-737, Budapest, Hungary.
A General Computational Treatment Of The Comparative

Carol Friedman*
Courant Institute of Mathematical Sciences
New York University
715 Broadway, Room 709
New York, NY 10005

Abstract

We present a general treatment of the comparative that is based on more basic linguistic elements so that the underlying system can be effectively utilized: in the syntactic analysis phase, the comparative is treated the same as similar structures; in the syntactic regularization phase, the comparative is transformed into a standard form so that subsequent processing is basically unaffected by it. The scope of quantifiers under the comparative is also integrated into the system in a general way.

1 Introduction

Recently there has been interest in the development of a general computational treatment of the comparative. Last year at the Annual ACL Meeting, two papers were presented on the comparative by Ballard [1] and Rayner and Banks [14]. Previous to that a comprehensive treatment of the comparative was incorporated into the syntactic analyzer of the Linguistic String Project [15]; in addition the DIALOGIC grammar utilized by TEAM [9] also contains some coverage of the comparative.

An interest in the comparative is not surprising because it occurs regularly in language, and yet is a very difficult structure to process by computer. Because it can occur in a variety of forms pervasively throughout the grammar, its incorporation into a NL system is a major undertaking which can easily render the system unwieldy. We will describe an approach to the computational treatment of the comparative which provides more general coverage of the comparative than that of other NLP systems while not obscuring the underlying system. This is accomplished by associating the comparative with simpler, more basic linguistic entities so that it could be processed by the system with only minor modifications. The implementation of the comparative described in this paper was done for the Proteus Question Answering System [8]¹ (referred to hereafter as Proteus QAS), and should be adaptable for other systems which have similar modules. A more detailed discussion of this work is given in [7].

*This work was supported by the Defense Advanced Research Projects Agency under Contract N00014-85-K-0163 from the Office of Naval Research. The author's current address is: Center for Medical Informatics, Columbia-Presbyterian Medical Center, Columbia University, 161 Fort Washington Avenue, Room 1310, New York NY 10032.

¹The treatment of the comparative in the syntactic analysis component was adapted from a previous implementation done by this author for the Linguistic String Project [15].

1.1 The Problem

The comparative is a difficult structure to process for both syntactic and semantic reasons. Syntactically the comparative is extraordinarily diverse. The following sentences illustrate a range of different types of comparative structures, some of which resemble other English structures, as noted by Sager [15]. In the examples below, sentences with the comparative that resemble other forms are followed by a sentence illustrating the similar form:

conjunction-like:
1a. Men eat more apples than oranges.
1b. Men eat apples and oranges.
2a. More men buy than write books.
2b. Men buy and write books.
3a. We are more for than against the plan.
3b. We are for or against the plan.
4a. He read more than 8 books.
4b. He read 2 or 3 books.

wh-relative-clause-like:
5a. More guests than we invited visited us.
5b. Guests that we invited visited us.

subordinate and adverbial:
6a. More visitors came than was expected.
6b. Visitors came, which was expected.
7a. More visitors came than usual.
7b. Many visitors came as usual.

special comparative constructions:
8. A taller man than John visited us.
9. John is taller than 6 ft.
10. A man taller than John visited us.
11. He ran faster than ever.

The problems in covering the syntax of the comparative are therefore at least as complex as the problems encountered for general coordinate conjunctions, relative clauses, and certain subordinate and adverbial clauses. Incorporating conjunction-like comparatives into a grammar is particularly difficult because that structure can occur almost anywhere in the grammar. Wh-relative-clause-like comparatives are complicated because they contain an omitted noun where the omission can occur arbitrarily deep within the comparative clause.

The comparative is difficult to process for semantic reasons also because the comparative marker can occur on different linguistic categories. Adjectives, quantifiers, and adverbs can all take the comparative form, as in: he is taller than John, he took more courses than John, and he ran faster than John. Therefore the semantics of the comparative has to be consistent with the semantics of different linguistic categories while retaining its own unique characteristics.

2 The Underlying System

Proteus QAS answers natural language queries relevant to a domain of student records. It is highly modular and contains fairly standard components which perform:

1. A syntactic analysis of the sentence using an augmented context-free grammar consisting of a context-free component which defines the grammatical structures, a restriction component which contains well-formedness constraints between constituents, and a lexicon which classifies words according to syntactic and semantic categories.
2. A syntactic regularization of the analysis using Montague-style compositional translation rules to obtain a uniform operator-operand structure.
3. A domain analysis of the regularized structure to obtain an interpretation in the domain.
4. An analysis of the scope of the quantifiers.
5. A translation to logical form.
6. Retrieval and answer generation.

The syntactic analyzer also covers general coordinate conjunction by containing a conjunction metarule mechanism which automatically adds a production containing conjunction to certain context-free definitions.

3 The Syntactic Analysis of the Comparative

In Section 1.1 it was shown that the comparative resembles other complex syntactic structures. This observation suggests that the comparative could be treated as general coordinate conjunctions, wh-relative clauses, and certain subordinate and adverbial clauses
Analogous minor grammar changes were made for the other types of similar structures shown above. Using this approach, a comprehen- sive comparative extension was obtained by a trivial modification of only a small number of grammar productions. Thus, a conjunction-like comparative struc- ture such as Sentence la. in Section 1.1 would be analyzed as consisting of an object which contains a conjoined noun phrase more apples CONJ 0 oranges where the value of CONJ is than, and where a quantifier phrase similar to more has been omitted which occurs with oranges. A relative-clause type of compara- tive structure such as Sentence 5a. would be analyzed as a relative clause than we invited 0 adjoined to more guests. Those construc- tions that are unique to the comparative, as shown in Sehtences 8 through 11, have to be uniquely defined. For example, the compara- tive clause in Sentence 8 is defined as a clause where the predicate is omitted, whereas the comparative clause in Sentence 9 is defined as a measure phrase. Although the comparative syntactically re- sembles other structures, this type of similar- ity does not carry over to the underlying struc- ture or to the semantics of the comparative, as will be discussed shortly. There are also some syntactic differences be- tween the comparative and the structures it resembles. For example, the comparative has zeroing patterns that are somewhat different from those associated with conjunctions: + John slept more than Mary [slept]. - John slept and Mary [slept]. The comparative constructions also have scope marker constraints that are not appli- cable to non-comparative structures. These differences are handled by special add-on con- straints that specifically deal with the com- parative, and do not interfere with the other restrictions. The treatment of the comparative marker is complicated because it can occur in a large number of different locations in the head clause 2, as illustrated by a few examples be- low: He wanted to travel to more coun- tries than he was able to. He is taller than Mary. He ate 3 more apples than Mary did. He ate more in the fall than in the winter. Because the comparative marker can occur in such a variety of locations and also be deeply embedded in the head clause, it cannot be con- veniently handled in the BNF component of the grammar. Instead, the constraint com- ponent deals with this problem by means of special constraints that assign and pass up the comparativ e marker; other constraints test that the comparative clause is in the scope of the marker. 4 Underlying Structure Basically, linguists such as Chomsky [3,4], Bresnan [2], Harris [10], and Pinkham [13] agree on fundamental aspects concerning the underlying structure of the comparative. They regard its underlying structure as con- sisting of two complete clauses where informa- tion in the comparative clause which is iden- tical to information in the head clause is re- quired to be zeroed. Harris' work is particularly suitable for computational purposes because he claims that one underlying structure is the source of 2This phrase was used by Bresnan [2] to refer to the clause of the comparative that contains the com- parative marker. 163 all comparative forms. We modified his in- terpretation somewhat to obtain a more con- venient form for computation. In our ver- sion, the underlying structure contains a main clause where the comparison is the primary relation; each quantity in the relation con- tains an embedded clause specifying the quan- tity being compared. 
An example of this form is shown below for the sentence John ate more apples than Mary, which resembles a conjunction-like comparative structure where the verb phrase has been omitted: Nx [John ate Nx apples] > N2 [Mary ate N2 apples] This form is also appropriate for all the different comparative forms shown in Sec- tion 1.1. For example, the underlying form for a relative-clause-like comparative, such as Sentence 5a. is: N1 [Nx guests visited us] > N2 [we invited N2 guests] The underlying form for a sentence such as a man taller than John visited us is slightly dif- ferent because the comparative structure it- self is embedded in a noun phrase. The main clause is a man visited us, and the compar- ative structure is a clause adjoining a man, whose underlying structure is: NI [the man is N1 tall] > N2 [John is N2 tall] The notion that there is one underlying form for all comparatives has important im- plications for a computational treatment: • Regularization procedures can be written to transform all comparative structures into one standard form consisting of a comparative operator and two complete clauses which specify the quantities be- ing compared. • In the standard form, each clause of the comparative operator is a simpler struc- ture which can be processed using basi- cally the usual procedures of the system. This means that further processing does not have to be modified for the compara- tive. This process can be illustrated by a simple ex- ample. When the sentence more guests than we invited visited us is regularized, a structure consisting of an operator connecting two com- plete clauses is obtained: (> (visited (er guests) (us)) (invited (we) (than guests))) The symbols er and than, shown above, roughly correspond to quantities being com- pared, and in subsequent processing they are each interpreted as denoting a certain type of quantity. Notice that each clause of the comparative is also in operator-operand form where generally the verb of a sentence is con- sidered the operator and the subject and ob- ject (and sometimes sentence adjunct phrases) are considered the operands z. Each of the two clauses can be processed in the usual manner provided that er and than are treated appro- priately. This will be described further in Sec- tion 5 which contains a discussion of semantics and the comparative. The regularization process was modified to be a two phase process. The first phase uses ordinary compositional translation rules to perform the standard regularization so that the surface analysis is transformed into a uni- form operator-operand form. The composi- tional regularization procedure is effective for fairly basic sentence structures but not for complex ones such as the comparative. The compositional rules associated with compara- tive structures only include labels categoriz- ing the type of comparative structure. The second phase, written specifically for the com- parative, completes the regularization process by filling in the missing elements, permuting the structures to obtain the correct operator- operand form, and supplying the appropriate quantifiers er and than to the items being comparativized. An example of this process is shown for the relative-clause type of com- parative in more guests than we invited visited as, where the comparative clause than we in- vited is analyzed syntactically as being a right adjunct modifier of guests. 3However, if the predicate is an ad~ectlvsl phrase, the adjective is considered the operator and the verb be the tense c~-rier. 
Thus, ignoring tense information, the regularized form of John is tall is: (tall (John)).

Phase 1:

    (visited (more guests (reln-than (invited (we) 0)))
             (us))

Phase 2:

    (> (visited (er guests) (us))
       (invited (we) (than guests)))

Another example is shown below for a conjunction-like comparative, such as John ate more apples than oranges:

Phase 1:

    (ate (John)
         (conj-than (more apples) (0 oranges)))

Phase 2:

    (> (ate (John) (er apples))
       (ate (John) (than oranges)))

There are a few key points that should be made concerning the regularization procedures. The Montague-style translation rules could not readily be used to regularize the comparative constructions as they were defined in the context-free component. To use the rules, the grammar would have to be modified substantially because the translation of the comparative is different and more complex than that of the structures it resembles. In particular, it would then not be possible to use the general conjunction mechanism to obtain coverage of that type of comparative structure. In the case of the usual relative clause, the regularized form is also substantially different from the regularized form of the relative-clause type of comparative shown above. For a typical relative clause, such as that we invited 0 in guests that we invited visited us, the regularized form occurs as a clause embedded in the main clause as follows:

    (visited (guests (invited (we) 0))
             (us))

The second important point is that because of regularization further processing of sentences containing a comparative is significantly simplified and only minor changes are required specifically for the comparative. In Proteus QAS, as well as other NLP systems, several other processing components are needed after syntactic regularization until the final result is obtained. Therefore a significant result of our approach is that subsequent components do not have to be modified for the comparative. As long as the underlying system can handle adjectives, degree expressions, quantifiers, and adverbs, the remainder of the processing of sentences with the comparative is basically no different than the processing of ordinary sentences because at that point the comparative is represented as being composed of fundamental linguistic entities.

5 Semantics of the Comparative

Semantically the comparative denotes the comparison of two quantities relative to a certain scale. This interpretation is consistent with work in formal semantics ([12,11], [6,5]), although our formalism is not the same.

Since the comparative marker can occur with adjectives, quantifiers, and adverbs, we would like to integrate its semantic treatment with the semantics of those fundamental linguistic categories and also remain true to the semantics and syntax of the comparative. This can be done by noting that once the comparative is regularized, the comparative marker becomes a higher order operator connecting two clauses and what remains of the marker within each clause functions as a quantitative phrase. For example, the regularized form for is John taller than Mary is:

    (> (tall (DEG er) (John))
       (tall (DEG than) (Mary)))

In this form er and than are each interpreted as a type of degree phrase that occurs with adjectives.
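As a concrete illustration of the second regularization phase for this adjectival case, here is a minimal Prolog sketch (our own reconstruction with hypothetical term encodings; Proteus QAS is not implemented this way) that rewrites a parsed comparative predication into the two-clause operator form just shown:

    % Rewrite a parsed adjectival comparative into the two-clause
    % operator form, distributing the er/than degree markers.
    comparative_reg(compare(Adj, Subj1, Subj2), '>'(Clause1, Clause2)) :-
        Clause1 =.. [Adj, deg(er), Subj1],
        Clause2 =.. [Adj, deg(than), Subj2].

    % ?- comparative_reg(compare(tall, john, mary), R).
    % R = '>'(tall(deg(er), john), tall(deg(than), mary)).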
In a question answering application such as that of Proteus QAS, each clause of this regularized form is equivalent to the regularized form of how tall is John, where how is also interpreted as a degree phrase modifying tall:

    (tall (DEG how) (John))

The interpretation of a sentence containing the comparative is therefore reduced to the interpretation of two similar simpler clauses, each containing an adjective operator and an operand which is a degree phrase. Issues concerning the correct scale and criteria of comparison for adjectives are non-trivial, but are generally not different from those issues concerning adjectives not being comparativized. For example, determining the scale and criteria that should be used to interpret is John more reliable than Jim raises similar issues to those for how reliable is Jim.

The semantic treatment of adverbs generally parallels that of adjectives; the interpretation of quantifiers in the comparative form is also equivalent to the interpretation of certain interrogatives. For example, the regularized form of did John take more courses than Mary consists roughly of the two clauses John took er courses and Mary took than courses, which is treated analogously to how many in how many courses did John take.

6 Quantifier Analysis

An interesting problem involving the comparative concerns the scope of quantifiers when there is a higher order sentential operator such as the comparative. The problem is not discussed much in the literature, but was discussed by Rayner and Banks [14] when they described their treatment of quantifiers for everyone spent more money in London than in New York. The basic issue is whether the quantifier every in everyone should be given wider scope than the comparative itself, in which case it is applicable to both clauses of the comparative. Our approach addresses this problem in a general way by adding a preliminary phase to the standard quantifier analysis. Our approach has several key features:

• The replication of a quantified noun phrase does not lead to impossible scoping combinations, as frequently happens when these phrases are replicated for the purpose of obtaining a complete clause.

• Our approach is applicable to all general higher order operators connecting two clauses.

• The scope of quantifiers is determined in a late stage of processing so that commitment is not done prematurely.

• A procedure using pragmatics and domain knowledge can easily be incorporated into the system as a separate component to aid in scope determination.

In Proteus QAS, the scope of quantifiers is determined subsequent to the regularization and domain analysis components in a manner similar to other NLP systems, as described by Woods [16]. The basic quantifier analysis procedure initially handled simple clauses, and therefore had to be modified to accommodate scope determination when a sentence contains a higher order operator such as a comparative or a coordinate conjunction. A preliminary quantifier analysis phase was added to find and label quantifiers which have a wider scope than the comparative. In addition, minor modifications were made to the component which translates the regularized form to logical form, in order to handle the translation of wider scope quantifiers.
Generally, in the case of the comparative, the criteria used for determining whether or not a quantifier should have a wider scope involve the location of the quantifier relative to the comparative marker in the surface form. Usually, a preference is given to the wider scope interpretation if the quantifier precedes the marker. Using this approach, the sentence everyone spent more money in London than in New York is first interpreted syntactically as consisting of two complete clauses, which are roughly everyone spent er money in London and everyone spent than money in New York. The semantics of each clause is interpreted the same as that of a simpler sentence how much money did everyone spend in London. The preliminary quantifier analysis phase prefers the reading where the scope of everyone is wider than the comparative operator because everyone precedes more. The sentence is translated to logical form so that the quantified expression ∀X : person(X) occurs outside the comparative operator, and therefore has scope over both clauses of the comparative. The interpretation is roughly:

∀X:person(X) (> (spent (X) (er money) (in London))
                (spent (X) (than money) (in New York)))

A different scope interpretation is obtained for more students read than wrote a book, where the two clauses are er students read a book and than students wrote a book. The narrow scope interpretation of a in a book is obtained because a follows more. In this case, the quantified expressions for each clause of the comparative are completely independent of each other.

7 Concluding Remarks

We have presented a method for incorporating general comparatives into a system without unduly complicating the system. This is done in the syntactic analysis component by treating the comparatives the same as similar structures so that features of the syntactic analyzer that already exist may be utilized. The various comparative structures are then regularized so that they are in a standard form consisting of a comparative operator and two complete clauses that contain a quantity er or than, which is interpreted by the semantic component as a quantity such as how, how many, or how much, as appropriate. A preliminary quantifier analysis component was added to determine whether a sentence containing a higher order operator has any quantifiers which have a wider scope than the operator, and to label those that do. The remainder of the processing is done as usual except for minor modifications.

The treatment of the comparative that we have presented is more extensive and general than that of other NLP systems to date, and also is simple to implement. Only a small number of productions of the BNF component were changed to cover the comparative structures described in this paper. In addition, three restrictions were modified for the comparative, and a set of separate add-on restrictions were included to handle comparative zeroing patterns and scope marker requirements. Special regularization procedures were written to regularize the different comparative forms so that the standard Montague-style compositional translation rules could be used prior to the comparative regularization phase.

Although we can process many forms of the comparative, there is still substantial work that remains, involving comparative sentences where the comparative clause itself has been omitted, as in New York banks are starting to offer higher interest rates.
In some cases the comparison is between two different time periods; in other cases the comparison involves different types of like objects, such as the interest rates of New York banks compared to the interest rates of Florida banks. The context can often be an aid in helping to recover the missing information, but the recovery problem is still quite a challenge. Sentences with this type of anaphora are very interesting because they occur surprisingly regularly in language, and yet the recovery possibilities are more limited and more controlled than those occurring in discourse in general. Possibly these types of sentences can provide us with clues as to what elements are significant for the recovery of the missing information.

Acknowledgements

I would like to thank Ralph Grishman, Naomi Sager, and Tomek Strzalkowski for their help and comments.

References

[1] B. Ballard. A general computational treatment of comparatives for natural language question answering. In Proc. of the 26th Annual Meeting of the Association for Computational Linguistics, pages 41-48, 1988.
[2] Joan W. Bresnan. Syntax of the comparative clause construction in English. Linguistic Inquiry, IV(3):275-343, 1973.
[3] Noam Chomsky. Aspects of the Theory of Syntax. M.I.T. Press, Cambridge, Mass., 1965.
[4] Noam Chomsky. On wh-movement. In P. Culicover, T. Wasow, and A. Akmajian, editors, Formal Syntax, pages 71-132, Academic Press, New York, 1977.
[5] M.J. Cresswell. Logics and Languages. Methuen, London, 1973.
[6] M.J. Cresswell. The semantics of degree. In B.H. Partee, editor, Montague Grammar, pages 261-292, Academic Press, New York, 1975.
[7] C. Friedman. A Computational Treatment of the Comparative. PhD thesis, New York University, 1989. Reprinted as PROTEUS Project Memorandum 21, New York University, Courant Institute of Mathematical Science, Proteus Project, New York, 1989.
[8] R. Grishman. PROTEUS Parser Reference Manual. PROTEUS Project Memorandum 4, New York University, Courant Institute of Mathematical Science, Proteus Project, New York, July 1986.
[9] B. Grosz, D. Appelt, P. Martin, and F. Pereira. Team: an experiment in the design of transportable natural-language interfaces. Artificial Intelligence, 32(2):173-243, 1987.
[10] Zellig Harris. A Grammar of English on Mathematical Principles. John Wiley and Sons, New York, N.Y., 1982.
[11] Ewan Klein. The interpretation of adjectival comparatives. Journal of Linguistics, (18):113-136, 1982.
[12] Ewan Klein. A semantics for positive and comparative adjectives. Linguistics and Philosophy, (4):1-45, 1980.
[13] J. Pinkham. The Formation of Comparative Clauses in French and English. Garland Publishing, New York, 1985.
[14] M. Rayner and A. Banks. Parsing and interpreting comparatives. In Proc. of the 26th Annual Meeting of the Association for Computational Linguistics, pages 49-60, 1988.
[15] Naomi Sager. Natural Language Information Processing: A Computer Grammar of English and Its Applications. Addison-Wesley, Reading, Mass., 1981.
[16] W.A. Woods. Semantics and quantification in natural language question answering systems. Advances in Computers, 17:1-87, 1978.
THE LEXICAL SEMANTICS OF COMPARATIVE EXPRESSIONS IN A MULTI-LEVEL SEMANTIC PROCESSOR

Duane E. Olawsky
Computer Science Dept.
University of Minnesota
4-192 EE/CSci Building
200 Union Street SE
Minneapolis, MN 55455
[olawsky@umn-cs.cs.umn.edu]

ABSTRACT

Comparative expressions (CEs) such as "bigger than" and "more oranges than" are highly ambiguous, and their meaning is context dependent. Thus, they pose problems for the semantic interpretation algorithms typically used in natural language database interfaces. We focus on the comparison attribute ambiguities that occur with CEs. To resolve these ambiguities our natural language interface interacts with the user, finding out which of the possible interpretations was intended. Our multi-level semantic processor facilitates this interaction by recognizing the occurrence of comparison attribute ambiguity and then calculating and presenting a list of candidate comparison attributes from which the user may choose.

1 PROBLEM DESCRIPTION.

Although there has been considerable work on the development of natural language database interfaces, many difficult language interpretation problems remain. One of these is the semantic interpretation of comparative expressions such as those shown in sentences (1) through (3).

(1) Does ACME construct better buildings than ACE?
(2) Does ACME construct buildings faster than ACE?
(3) Are more oranges than apples exported by Mexico?

To interpret a comparative expression (CE) a natural language processor must determine (1) the entities to be compared, and (2) the attribute(s) of those entities to consider in performing the comparison. The selection of comparison attributes is made difficult by the high level of lexical ambiguity exhibited by comparative predicates. For example, what pieces of data should be compared to answer query (1)? If the database contains information about foundation type, structural characteristics, wiring, and insulation, any of these attributes could be used. Similarly, when comparing orange and apple exports as in query (3), we might compare numeric quantity, weight, volume, or monetary value. To further complicate matters, the plausible comparison attributes for a comparative predicate change with the arguments to which that predicate is applied. Table 1 shows several examples of likely comparison attributes to use with the predicate "bigger" depending on the types of entity that are being compared. Since the system must determine for a comparative predicate the lexical definition intended by the user, this problem is, at heart, one of lexical ambiguity resolution.

Table 1: Examples of argument sensitivity in the meaning of "bigger".

  Argument type   Likely comparison attribute(s)
  hotels          number of rooms
  hospitals       number of beds
  houses          square feet, number of rooms, or number of bedrooms
  wheat farms     number of acres
  dairy farms     number of cows
  countries       number of people, or land area
  cars            length, curb weight, passenger space, or passenger limit

The problems discussed so far are similar to the well known vagueness and context sensitivity of adjectives (although they occur here even in sentences without adjectives such as (3)). Any proposed method of CE interpretation should also treat several other phenomena that are unique to comparatives. These are bipredicational comparisons, cross-class comparisons, and pairability constraints. Bipredicational comparisons involve two predicates, as shown in example (4) (the predicates are in boldface), and they use a different comparison attribute for each argument of the comparative.
(4) John's car is wider than Mary's car is long.

Bipredicational CEs have strong pairability constraints (Hale 1970). That is, there are restrictions on the pairing of predicates in a bipredicational CE. Example (5) gives a sentence that is semantically anomalous because it violates pairability constraints.

(5) ? Bob's car is wider than it is heavy.

A cross-class comparison involves arguments of radically different types as shown in (6).

(6) Is the Metrodome bigger than Ronald Reagan?[1]

Interpreting this comparison requires that we find a stadium attribute and a person attribute which are in some sense comparable (e.g. stadium-height and person-height). Pairability constraints also apply indirectly to cross-class comparisons as can be seen in the oddness of (7).

(7) ? The party was longer than my car.[2]

Although we have only one predicate ("longer") in this sentence, it is difficult to find a comparable pair of attributes. The attribute describing the length of a party is not comparable to any of the attributes describing the length of a car.

[Footnote 1: Although this is an unusual comparison to request, it is perfectly understandable, and the literal interpretation is easily answered. As pointed out to me by Karen Ryan, sentence (6) has several possible metaphoric interpretations (e.g. "Does the Metrodome get more news coverage than Ronald Reagan?"). In this paper we will generally ignore metaphoric interpretations. However, using the approach we describe below, they could be handled in much the same way as the more literal ones.]
[Footnote 2: Sentence (7) can perhaps be interpreted metaphorically (perhaps with humorous intent), but it seems more difficult to do so than it does with (6). It is certainly hard to imagine what truth conditions (7) might have!]

When faced with ambiguous input a natural language interface has two options. In the first one, it guesses at what the user wants and provides the answer corresponding to that guess. In the second, it interacts with the user to obtain a more completely specified query. Although Option 1 is easier to implement, it is also inflexible and can lead to miscommunication between the user and the interface. With Option 2, the system lets the user select the desired interpretation, resulting in greater flexibility and less chance of misunderstanding. It is the second option that we are exploring. To carry out Option 2 for CE interpretation the system must present to the user a list of the permissible comparison attribute pairs for the given CE. In Section 3 we will see how pairability constraints can be used to delimit these pairs. Comparatives add significant expressive power to an interface (Ballard 1988), and it is therefore important that reliable techniques be developed to resolve the lexical ambiguities that occur in CEs.

2 PRIOR WORK.

For purposes of discussion we will divide comparative expressions into the following commonly used classes: adjectival, adverbial, and adnominal, where the comparative element is based on an adjective, an adverb, or a noun, respectively. See (1)-(3) for an example of each type.

Within linguistics, adjectival comparatives are the most studied of these three varieties. (See (Rusiecki 1985) for a detailed description of the various types of adjectival comparative.) For work on the syntax of CEs see (Bresnan 1973), (Pinkham 1985) and (Ryan 1983).
Klein (1980, 1982) presents a formal semantics for adjectival CEs without using degrees or extents. It would be difficult to apply his work computationally since there is no easy way to determine the positive and negative extensions of adjectives upon which his theory rests. Hoeksema (1983) defines a set-theoretic semantics for adjectival comparatives based on primitive grading relations that order the domain with respect to gradable adjectives. His primary concern is the relationship of comparatives to coordination and quantification, and he pays little attention to lexical ambiguities. Cresswell's work (Cresswell 1976) handles both adjectivals and adnominals and is closer in spirit to our own (see Section 3.1). It contains analogs of our Codomain Agreement Principle, mappings and base orders. The main difference is that whereas Cresswell always uses degrees, we also allow base orders to be defined directly on the domain entities.

Most of the work done on lexical ambiguity resolution (e.g. (Hirst 1984) and (Wilks 1975)) has focussed on homonymy (when words have a small number of unrelated meanings) rather than polysemy (when words have many closely related meanings) as occurs with CEs. The techniques developed for homonymy depend on large semantic differences between meanings and thus are not as useful for CEs.

Although comparatives are frequently used as examples in the NLP literature (e.g. (Hendrix, Sacerdoti, Sagalowicz, and Slocum 1978), (Martin, Appelt, and Pereira 1983) and (Pereira 1983)), no one has presented a detailed treatment of the ambiguities in the selection of comparison attributes. Most NLP researchers provide neither a detailed explanation of how they treat comparatives nor any characterization of the breadth of their treatment. Two exceptions are the recent papers of Ballard (1988) and Rayner and Banks (1988). The former treats adjectival and adnominal comparatives, and is primarily concerned with the interpretation of expressions like "at least 20 inches more than twice as long as". The selection of comparison attributes is not discussed in any detail. Rayner and Banks (1988) describe a logic programming approach to obtaining a parse and an initial logical formula for sentences containing a fairly broad range of CEs. They do not discuss lexical semantics and thus do not deal with comparison attribute selection.

This paper is an abbreviated version of a longer paper (Olawsky 1989), to which the reader is referred for a more detailed presentation.

3 SOLUTION APPROACH.

In this section we describe a rule-based semantic processor that follows Option 2. To provide for user-controlled comparison attribute selection we augment the common lexical translation process (e.g. (Bronnenberg, Bunt, Landsbergen, Scha, Schoenmakers, and van Utteren 1980) and (Ryan, Root, and Olawsky 1988)) with a Mapping Selector that communicates with the user and returns the results to the rule-based translator. The implementation of the approach described here is in progress and is proceeding well.

3.1 Semantic Description of Comparatives.

We base our approach on the semantic interpretation of a comparative predicate as a set-theoretic relation.
A comparison defined by the relation R is true if the denotations of the first and second arguments of the comparative predicate (i.e. its subject and object[3]) form an element pair of R. It is tempting to claim that comparatives should be defined by orders rather than relations (we call this the Comparison Order Claim). However, it can be shown (Olawsky 1989) that the comparison relation Lw for a bipredicational comparative like longer than ... wide is neither asymmetric nor antisymmetric[4], and hence, Lw is not an order.[5]

Comparison relations are not defined directly in our semantic description. Instead they are specified in terms of three components: a base order, a subject mapping, and an object mapping. The base order is a set-theoretic order on some domain (e.g. the obvious order on physical lengths). The subject mapping is a mapping from the domain of the denotation of the subject of the CE to the domain of the base order (e.g. the mapping from a rectangle to its length). The object mapping is defined analogously. Let comparison relation R be defined by the base order B, and the subject and object mappings Ms and Mo. Then (a,b) ∈ R if and only if (Ms(a), Mo(b)) ∈ B. It should be noted here that comparison attribute selection is now recast as the selection of subject and object mappings.

[Footnote 3: Our reasons for calling the first and second arguments of a CE the subject and object are syntactic and beyond the scope of this paper (see (Ryan 1983)).]
[Footnote 4: It is also easy to show that Lw is nontransitive.]
[Footnote 5: Klein ((1980), p. 23) and Hoeksema ((1983), pp. 410-411) both make claims similar (but not identical) to the Comparison Order Claim. It seems to us that bipredicationals pose a problem for Hoeksema's analysis (see (Olawsky 1989)). Klein appears to relax his assumptions slightly when he deals with them. Cresswell (1976) clearly avoids the Comparison Order Claim.]

By definition, the subject and object mappings must have the same codomain, and this codomain must be the domain of the base order. We call this the Codomain Agreement Principle, and it is through this principle that pairability constraints are enforced. For example, when interpreting the CE in sentence (5), we must find a subject mapping for the width of Bob's car and an object mapping for its weight, and these mappings must have the same codomain. However, this is impossible since all width mappings will have LENGTH as a codomain, and all weight mappings will have WEIGHT as a codomain. The Codomain Agreement Principle also helps explain the interpretation of sentences (6) and (7).

Before concluding this section we consider the semantic description of CEs in TEAM ((Grosz, Haas, Hendrix, Hobbs, Martin, Moore, Robinson, and Rosenschein 1982) and (Martin, Appelt, and Pereira 1983)), comparing it to ours. Since comparative expressions were not the main focus in these papers, we must piece together TEAM's treatment of CEs from the examples that are given. In (Grosz, Haas, Hendrix, Hobbs, Martin, Moore, Robinson, and Rosenschein 1982), the CE "children older than 15 years" is translated to ((*MORE* OLD) child2 (YEAR 15)) where "*MORE* maps a predicate into a comparative along the scale corresponding to the predicate" (p. 11). This implies that TEAM requires the same mapping to be used for both the subject and object of the comparative. That would not work well for bipredicational CEs, and could also lead to problems for cross-class comparisons.
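The decomposition described above can be rendered as a short sketch (ours, with hypothetical attribute and domain names; the system itself is not implemented this way, and the code is for illustration only):

    # Sketch: a comparison relation R holds of (a, b) iff
    # (Ms(a), Mo(b)) is in the base order B; a pair of mappings is
    # admissible only if their codomains agree (Codomain Agreement).
    def make_comparison(base_order, subj_map, obj_map):
        if subj_map['codomain'] != obj_map['codomain']:
            raise ValueError('codomain mismatch: no admissible pairing')
        return lambda a, b: base_order(subj_map['fn'](a), obj_map['fn'](b))

    # Hypothetical mappings for "John's car is wider than Mary's car
    # is long": width and length both map into LENGTH, so the pairing
    # is admissible; a width/weight pairing, as in (5), would raise.
    width = {'codomain': 'LENGTH', 'fn': lambda car: car['width']}
    length = {'codomain': 'LENGTH', 'fn': lambda car: car['length']}
    wider_than_long = make_comparison(lambda x, y: x > y, width, length)
    print(wider_than_long({'width': 70}, {'length': 65}))  # True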
In (Martin, Appelt, and Pereira 1983) the examples contain predicates (e.g. salary-of and earn) which, on the surface, are similar to mappings. However, in contrast to our approach, it does not appear that any special significance is given to these predicates. There is nothing in either paper to indicate that the many types of CEs are consistently translated to a base order, subject mapping and object mapping as is done in our system. Furthermore, there is nothing analogous to the Codomain Agreement Principle discussed in either paper. Now, we move on to a presentation of how the semantic description presented above is applied in our system.

3.2 General Comments.

We use a multi-level semantic processor (see (Bates and Bobrow 1983), (Bronnenberg, Bunt, Landsbergen, Scha, Schoenmakers, and van Utteren 1980), (Grosz, Haas, Hendrix, Hobbs, Martin, Moore, Robinson, and Rosenschein 1982), (Martin, Appelt, and Pereira 1983) and (Ryan, Root, and Olawsky 1988) for descriptions of similar systems). At each level queries are represented by logic-based formulas (see (Olawsky 1989) for examples) with generalized quantifiers ((Barwise and Cooper 1981), (Moore 1981) and (Pereira 1983)) using predicates defined for that level. The initial level is based on often ambiguous English-oriented predicates. At the other end is a description of the query in unambiguous database-oriented terms (i.e. the relation and attribute names used in the database). Between these levels we have a domain model level where formulas represent the query in terms of the basic entities, attributes and relationships of the subject domain described in a domain model. These basic concepts are treated as unambiguous. Linking these levels are a series of translators, each of which is responsible for handling a particular semantic interpretation task.

In this paper we restrict our attention to the translation from the English-oriented level (EL) to the domain model level (DML) since this is where CEs are disambiguated by choosing unambiguous mappings and base orders from the domain model. To perform its task the EL-DML translator uses three sources of information. First, it has access to the domain model, a frame-based representation of the subject domain. Second, it uses the semantic lexicon which tells how to map each EL predicate into a DML formula. Finally, this translator will, when necessary, invoke the Mapping Selector--a program that uses the semantic lexicon and the domain model to guide the user in the selection of a comparison attribute pair.

For our semantic formulas we extend the usual ontology of the predicate calculus with three new classes: sets, mass aggregations, and bunches. Sets are required for count noun adnominal comparatives (e.g. "Has ACME built more warehouses than ACE?") where we compare set cardinalities rather than entity attribute values. Given a class of mass entities (e.g. oil), a mass aggregation is the new instance of that class resulting from the combination of zero or more old instances. For example, if John combines the oil from three cans into a large vat, the oil in that vat is an aggregation of the oil in the cans. It is not necessary that the original instances be physically combined; it is sufficient merely to consider them together conceptually. Mass aggregations are needed for mass noun adnominal comparatives. Finally, we define the term bunch to refer ambiguously both to sets and to mass aggregations.
Bunches are used in EL where mass aggregations and sets are not yet distinguished. Sets, mass aggregations and bunches are described in semantic formulas by the *SET-OF*, *MASS-OF*, and *BUNCH-OF* relations, respectively. These relations are unusual in that their second arguments are unary predicates serving as characteristic functions defining the components of the first argument--a set, aggregation or bunch. For example, (*MASS-OF* m (λw [(wheat w)])) is true in case m is the aggregation of all mass entities e such that (λw [(wheat w)])(e) is true (i.e. e is wheat).

3.3 Base Orders and Mappings.

EL and DML formulas contain, for each CE, a base order and two mappings. Two sample EL base orders are more and less. DML base orders are typically defined on domains such as VOLUME and INTEGER, but they can also be defined on domains that are not usually numerically quantified such as BUILDING-QUALITY or CLEVERNESS. More and less are ambiguous between the more specific DML orders.

Most EL mappings correspond one-for-one with an English adjective (or adverb). They are binary relations where the first argument is an entity e from the domain and the second is the degree to which e possesses the corresponding property. For example, if big' is an EL mapping, then in (big' e b), b is the degree of bigness for e. Of course, big' is ambiguous. In contrast to adjectival and adverbial CEs, all adnominals use the ambiguous EL mapping *MUCH-MANY* which pairs a bunch with its size.

In most cases, a DML mapping is a relation whose first argument is an entity from some class in the core of the domain model and whose second argument is from the domain of a base order. In the mapping predication (DM_w-storage-volume w v) the first argument is a warehouse, and the second is a volume. DM_w-storage-volume could serve as the translation of big' when applied to a warehouse. CEs based on count nouns generally use the *CARDINALITY* mapping which is like other mappings except that its first argument is a set of entities from a domain model class rather than a member of the class. The second argument is always an integer. Mass noun comparatives require a slightly different approach. Since we are dealing with a mass aggregation rather than a set, the *CARDINALITY* mapping is inapplicable. To measure the size of an aggregation we combine, according to some function, the attribute values (e.g. weight or volume) of the components of the aggregation.[6] Thus, the mappings used for mass adnominal comparatives are based on the attributes of the appropriate class of mass entities.

[Footnote 6: Although the aggregation function would likely be SUM for attributes such as weight, volume, and value, other functions are possible. For example, AVERAGE might be used for a nutritional-quality attribute of an agricultural commodity. The aggregation function is not explicitly reflected in our system until the database level.]

3.4 EL-DML Translation Rules.

As stated above, EL and DML are linked by a translator that uses rules defined in the semantic lexicon (see (Olawsky 1989) for sample rules). These rules constitute definitions of the EL predicates in terms of DML formulas. Our system employs three kinds of translation rules--Trans, MTrans, and BTrans. Trans rules have four components: a template to be matched against an EL predication, an EL context specification, a DML context specification, and the DML translation of the EL predication.[7] The context specifications are used to resolve ambiguities on the basis of other predications in the EL formula and the (incomplete) DML formula. A rule is applicable only if its context specifications are satisfied. Although a predication in an EL context specification must unify with some predication in the context, subsumption relationships are used in matching DML context specifications.

[Footnote 7: Trans rules are nearly identical to the lexical translation rules used in the ATOZ system (Ryan, Root, and Olawsky 1988). However, our rules do have some additional features, one of which will be discussed below.]
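As an illustration of the applicability test (ours, not the actual rule interpreter; the toy subsumption table below is a hypothetical stand-in for the domain model's class hierarchy):

    # Sketch: DML context specifications may be satisfied through
    # subsumption, unlike EL specifications, which must unify exactly.
    SUBSUMES = {'DM_building': {'DM_building', 'DM_warehouse'}}  # toy hierarchy

    def dml_spec_satisfied(spec, context):
        """spec and context predications are (predicate, argument) pairs."""
        pred0, arg0 = spec
        allowed = SUBSUMES.get(pred0, {pred0})
        return any(pred in allowed and arg == arg0
                   for (pred, arg) in context)

    # (DM_building b) is satisfied by (DM_warehouse b):
    assert dml_spec_satisfied(('DM_building', 'b'), [('DM_warehouse', 'b')])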
For example, the DML context specification (DM_building b) will be satisfied by (DM_warehouse b) since DM_building subsumes DM_warehouse. MTrans rules are intended for the translation of subject and object mapping predications from EL to DML. They have two extra components that indicate the base order and the mapping to be used in DML. This additional information is used to enforce the Codomain Agreement Principle and to help in the user interaction described in Section 3.5. Finally, BTrans rules are used to translate *BUNCH-OF* predications to DML.

One noteworthy feature of our translation rules is that they can look inside a functional λ-argument to satisfy a context specification.[8] We call these λ-context specifications, and they may be used inside both EL and DML context specifications for rules of all three types. However, it is only in BTrans rules that they can occur as a top level specification. Top level λ-context specifications (e.g. (λb [(DM_building b)])) are matched to the functional argument of the relevant *BUNCH-OF* predication. This match is performed by treating the body of the λ-context specification as a new, independent context specification which must be satisfied by predications inside the body of the functional argument. In Trans and MTrans rules, a λ-context specification can occur only as an argument of some normal predicational context specification. For example, the specification (*MASS-OF* b (λc [(DM_commodity c)])) can be used in any DML context specification. It checks whether b is a mass of some commodity. Just as standard context specifications provide a way to examine the properties of the arguments of a predication being translated, λ-context specifications provide a way to determine the contents of a bunch by inspecting the definition of its characteristic function.

[Footnote 8: This is an extension to the rules used in ATOZ (Ryan, Root, and Olawsky 1988) which do not allow functions as arguments and therefore never need this kind of context checking.]

Before continuing, we compare our context matching mechanism to the similar one used in the PHLIQA1 system (Bronnenberg, Bunt, Landsbergen, Scha, Schoenmakers, and van Utteren 1980). This system uses a typed semantic language, and context checking is based entirely on the type system. As a result, PHLIQA1 can duplicate the effect of context specifications like (DM_building b) by requiring that b have type DM_building. However, PHLIQA1 cannot handle more complex specifications such as ((DM_building b) (DM_b-owner b ACME)) since there is no semantic type in PHLIQA1 that would correspond to this subset of the buildings in the domain.[9]

[Footnote 9: One could perhaps modify the PHLIQA1 world model to contain such subclasses of buildings, but this would eventually lead to a very complex model. It would also be difficult or impossible to keep such a model hierarchical in structure.]

The same comments apply to λ-context specifications, which can be declared in PHLIQA1
by specifying a functional semantic type. That is, (λb (DM_building b)) is written as the type DM_building → truthvalue, a function from buildings to truth values. As with standard context specifications, (λb (DM_building b) (DM_b-owner b ACME)) cannot be expressed as a type restriction. Thus, the context specifications used in PHLIQA1 offer less discrimination power than those used in our system.

There is one other difference regarding λ-context specifications that should be noted here. The context specification (λb (DM_building b)) will be satisfied by the expression (λw (DM_warehouse w)). However, in PHLIQA1 the type DM_building → truthvalue will not match the type DM_warehouse → truthvalue. From this, we see that PHLIQA1 does not use subsumption information in matching λ-context specifications, while our system does.

3.5 Translation and Mapping Selection.

When translating an input sentence containing a comparative expression from EL to DML, the system first applies Trans and BTrans rules to translate the predications that do not represent mappings or base orders. Next, comparison attributes must be selected. The system recognizes comparison attribute ambiguity when there is more than one applicable MTrans rule for a particular EL mapping predicate. We define a candidate mapping as any DML mapping that, on the basis of an applicable MTrans rule, can serve as the translation of a mapping in an EL formula. Assume that for an EL predication (big' w a) in a given context there are three applicable MTrans rules translating big' to the three DML mappings DM_w-storage-volume, DM_w-storage-area, and DM_b-total-area, respectively. All three of these DML mappings would then be candidates with either VOLUME or AREA as the corresponding base order.

The system examines the semantic lexicon to determine a list of candidate mappings for each EL mapping. A candidate is removed from one of these lists if there is no compatible mapping in the other list. Compatible mappings are those that allow the Codomain Agreement Principle to be satisfied, and they are easily identified by examining the base order component of the MTrans rules being used. All of the remaining candidates in one of the lists are presented to the user who may select a candidate mapping. Next, the semantic processor presents to the user those candidates for the other EL mapping that are compatible with her first choice. She must select one of these remaining candidates as the translation for the second mapping. Based on her choices, two MTrans rules (one for each EL mapping) are applied, and in this way the EL mapping predications are translated to DML formulas. Once this is completed, the processor can easily translate the EL base order to the DML base order listed in both of the MTrans rules it used (with any necessary adjustments in the direction of comparison).

4 COMMENTS AND CONCLUSIONS.

We are currently examining some additional issues. First, once candidate mappings are obtained, how should they be explained to the user? In the present design text is stored along with the declaration of each mapping, and that text is used to describe the mapping to the user. This approach is somewhat limited, especially for adnominal comparatives given their flexibility and the relatively small information content of the *CARDINALITY* mapping. A more general technique would use natural language generation to explain the semantic import of each mapping as applied to its arguments.
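For concreteness, the candidate pruning step of Section 3.5 can be summarized in a short sketch (ours; the mapping names reuse the big' example above, and the two-list arrangement is an illustrative assumption):

    # Sketch: a candidate (mapping, base order) survives only if some
    # candidate in the other list shares its base order, so that the
    # Codomain Agreement Principle can be satisfied.
    def prune_candidates(subj_cands, obj_cands):
        subj_orders = {order for (_, order) in subj_cands}
        obj_orders = {order for (_, order) in obj_cands}
        return ([c for c in subj_cands if c[1] in obj_orders],
                [c for c in obj_cands if c[1] in subj_orders])

    subj = [('DM_w-storage-volume', 'VOLUME'), ('DM_w-storage-area', 'AREA')]
    obj = [('DM_b-total-area', 'AREA')]
    print(prune_candidates(subj, obj))
    # -> ([('DM_w-storage-area', 'AREA')], [('DM_b-total-area', 'AREA')])
    # The surviving candidates are what the user is asked to choose among.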
Perhaps there are compromise approaches between these two extremes of stored text and full generation (e.g. some kind of pseudo-English explanation).

Second, it seems desirable that the system could work automatically without asking the user which mappings to use. Perhaps the system could choose a mapping, do the query, present the results and then tell the user what interpretation was assumed (and offer to try another interpretation). This works well as long as either (a) the system almost always selects the mapping intended by the user, or (b) the cost of an incorrect choice (i.e. the wasted query time) is small. If the system frequently makes a poor choice and wastes a lot of time, this approach could be quite annoying to a user. Crucial to the success of this automatic approach is the ability to reliably predict the resources required to perform a query so that the risk of guessing can be weighed against the benefits.

A similar issue was pointed out by an anonymous reviewer. We noted in Section 1 that for sentence (3) (repeated here as (8))

(8) Are more oranges than apples exported by Mexico?

the comparison could be based on quantity, weight, volume, or value. If the answer is the same regardless of the basis for comparison, a "friendly" system would realize this and not require the user to choose comparison attributes. Unfortunately, this realization is based on extensional rather than intentional equivalence, and hence, the system must perform all four (in this case) queries and compare the answers. The extra cost could be prohibitive. Again, the system must predict query performance resource requirements to know whether this approach is worthwhile for a particular query. See (Olawsky 1989) for more information on further work.

To summarize, we have examined a number of issues associated with the semantic interpretation of comparative expressions and have developed techniques for representing the semantics of CEs and for interacting with the user to resolve comparison attribute ambiguities. These techniques will work for adjectival, adverbial, and adnominal comparatives and for both numerically and non-numerically based comparisons (see (Olawsky 1989) for more on this). We are presently completing the implementation of our approach in Common Lisp using the SunView[10] window system as a medium for user interaction. Most previous techniques for handling lexical ambiguity work best with homonymy since they depend on large semantic differences between the possible interpretations of a lexical item. Our approach, on the other hand, does not depend solely on these semantic differences and handles polysemy well.

[Footnote 10: SunView is a trademark of Sun Microsystems, Inc.]

5 ACKNOWLEDGEMENTS.

I wish to thank the University of Minnesota Graduate School for supporting this research through the Doctoral Dissertation Fellowship program. I also want to thank Maria Gini, Michael Kac, Karen Ryan, Ron Zacharski, and John Carlis for discussions and suggestions regarding this work.

References

Ballard, Bruce W. June 1988 A General Computational Treatment of Comparatives for Natural Language Question Answering. In: 26th Annual Meeting of the Association for Computational Linguistics. Buffalo, NY.

Barwise, Jon and Cooper, Robin. 1981 Generalized Quantifiers and Natural Language. Linguistics and Philosophy 4(2): 159-219.

Bates, Madeleine and Bobrow, Robert J. 1983 Information Retrieval Using a Transportable Natural Language Interface.
In: Research and Development in Information Retrieval: Proceedings of the Sixth Annual International ACM SIGIR Conference, Bethesda, Md. New York: 81-86.

Bresnan, Joan W. 1973 Syntax of the Comparative Clause Construction in English. Linguistic Inquiry 4(3): 275-343.

Bronnenberg, W. J. H. J.; Bunt, H. C.; Landsbergen, S. P. J.; Scha, R. J. H.; Schoenmakers, W. J.; and van Utteren, E. P. C. 1980 The Question-Answering System PHLIQA1. In: Bolc, L., Ed., Natural Language Question Answering Systems. Macmillan.

Cresswell, M. J. 1976 The Semantics of Degree. In: Partee, Barbara, Ed., Montague Grammar. Academic Press: 261-292.

Grosz, Barbara; Haas, Norman; Hendrix, Gary; Hobbs, Jerry; Martin, Paul; Moore, Robert; Robinson, Jane; and Rosenschein, Stanley. November 1982 DIALOGIC: A Core Natural-Language Processing System. Tech. Note 270, Artificial Intelligence Center, SRI International, Menlo Park, California.

Hale, Austin. 1970 Conditions on English comparative clause pairings. In: Jacobs, R. A. and Rosenbaum, P., Eds., Readings in English Transformational Grammar. Ginn & Co., Waltham, Mass.: 30-50.

Hendrix, Gary G.; Sacerdoti, Earl D.; Sagalowicz, Daniel; and Slocum, Jonathan. 1978 Developing a Natural Language Interface to Complex Data. ACM Transactions on Database Systems 3(2): 105-147.

Hirst, Graeme John. May 1984 Semantic Interpretation Against Ambiguity. PhD thesis, Computer Science Dept., Brown University.

Hoeksema, Jack. 1983 Negative Polarity and the Comparative. Natural Language and Linguistic Theory 1: 403-434.

Klein, Ewan. 1980 A Semantics for Positive and Comparative Adjectives. Linguistics and Philosophy 4: 1-45.

Klein, Ewan. 1982 The Interpretation of Adjectival Comparatives. Linguistics 18: 113-136.

Martin, Paul; Appelt, Douglas; and Pereira, Fernando. 1983 Transportability and Generality in a Natural-Language Interface System. In: Proceedings of the Eighth International Joint Conference on Artificial Intelligence, Karlsruhe, West Germany. William Kaufmann, Inc., Los Altos: 573-581.

Moore, Robert C. 1981 Problems in Logical Form. In: Proceedings of the 19th Annual Meeting, Association for Computational Linguistics, Stanford, California: 117-124.

Olawsky, Duane E. April 1989 The Lexical Semantics of Comparative Expressions in a Multi-Level Semantic Processor. Technical Report CSci TR 89-19, Computer Science Dept., University of Minnesota, Minneapolis, MN.

Pereira, Fernando. 1983 Logic for Natural Language Analysis. Technical Note 275, Artificial Intelligence Center, Computer Science and Technology Division, SRI International, Menlo Park, California. Ph.D. dissertation, Department of Artificial Intelligence, University of Edinburgh.

Pinkham, Jessie Elizabeth. 1985 The Formation of Comparative Clauses in French and English. Garland Publishing Inc, New York. Also available from Indiana University Linguistics Club, Bloomington, IN, August 1982.

Rayner, Manny and Banks, Amelie. June 1988 Parsing and Interpreting Comparatives. In: 26th Annual Meeting of the Association for Computational Linguistics. Buffalo, NY.

Rusiecki, Jan. 1985 Adjectives and Comparison in English: A Semantic Study. Longman Inc., New York.

Ryan, Karen L.; Root, Rebecca; and Olawsky, Duane. February 1988 Application-Specific Issues in NLI Development for a Diagnostic Expert System. In: Association for Computational Linguistics Second Conference on Applied Natural Language Processing. Austin, Texas.

Ryan, Karen L. 1983 A Grammar of the English Comparative.
PhD thesis, University of Minnesota. Reproduced by Indiana University Linguistics Club, Bloomington, Indiana, 1986.

Wilks, Yorick. 1975 An Intelligent Analyzer and Understander of English. CACM 18(5): 264-274.
AUTOMATIC ACQUISITION OF THE LEXICAL SEMANTICS OF VERBS FROM SENTENCE FRAMES*

Mort Webster and Mitch Marcus
Department of Computer and Information Science
University of Pennsylvania
200 S. 33rd Street
Philadelphia, PA 19104

*This work has been partially supported by the DARPA grant N00014-85-K0018, and ARO grant DAA29-84-9-0027. The authors also wish to thank Beth Levin and the anonymous reviewers of this paper for many helpful comments. We also benefited greatly from discussion of issues of verb acquisition in children with Lila Gleitman.

ABSTRACT

This paper presents a computational model of verb acquisition which uses what we will call the principle of structured overcommitment to eliminate the need for negative evidence. The learner escapes from the need to be told that certain possibilities cannot occur (i.e., are "ungrammatical") by one simple expedient: It assumes that all properties it has observed are either obligatory or forbidden until it sees otherwise, at which point it decides that what it thought was either obligatory or forbidden is merely optional. This model is built upon a classification of verbs based upon a simple three-valued set of features which represents key aspects of a verb's syntactic structure, its predicate/argument structure, and the mapping between them.

1 INTRODUCTION

The problem of how language is learned is perhaps the most difficult puzzle in language understanding. It is necessary to understand learning in order to understand how people use and organize language. To build truly robust natural language systems, we must ultimately understand how to enable our systems to learn new forms themselves.

Consider the problem of learning new lexical items in context. To take a specific example, how is it that a child can learn the difference between the verbs look and see (inspired by Landau and Gleitman (1985))? They clearly have similar core meanings, namely "perceive by sight". One initially attractive and widely-held hypothesis is that word meaning is learned directly by observation of the surrounding non-linguistic context. While this hypothesis ultimately only begs the question, it also runs into immediate substantive difficulties here, since there is usually looking going on at the same time as seeing and vice versa. But how can one learn that these verbs differ in that look is an active verb and see is stative? This difference, although difficult to observe in the environment, is clearly marked in the different syntactic frames the two verbs are found in. For example, see, being a stative perception verb, can take a sentence complement:

(1) John saw that Mary was reading.

while look cannot:

(2) * John looked that Mary was reading.

Also look can be used in an imperative,

(3) Look at the ball!

while it sounds a bit strange to command someone to see,

(4) ? See the ball!

(Examples like "look Jane, see Spot run!" notwithstanding.) This difference reflects the fact that one can command someone to direct their eyes (look) but not to mentally perceive what someone else perceives (see). As this example shows, there are clear semantic differences between verbs that are reflected in the syntax, but not obvious by observation alone. The fact that children are able to correctly learn the meanings of look and see, as well as hundreds of other verbs, with minimal exposure suggests that there is some correlation between syntax and semantics that facilitates the learning of word meaning.
Still, this and similar arguments ignore the fact that children do not have access to the negative evidence crucial to establishing the active/stative distinction of the look/see pair. Children cannot know that sentences like (2) and (4) do not occur, and it is well established that children are not corrected for syntactic errors. Such evidence renders highly implausible models like that of Pinker (198?), which depend crucially on negative examples. How then can this semantic/syntactic correlation be exploited?

2 STRUCTURED OVERCOMMITMENT AND A LEARNING ALGORITHM

In this paper, we will present a computational model of verb acquisition which uses what we will call the principle of structured overcommitment to eliminate the need for such negative evidence. In essence, our learner learns by initially jumping to the strongest conclusions it can, simply assuming that everything within its descriptive system that it hasn't seen will never occur, and then later weakening its hypotheses when faced with contradictory evidence. Thus, the learner escapes from the need to be told that certain possibilities cannot occur (i.e. are "ungrammatical") by the simple expedient of assuming that all properties it has observed are either always obligatory or always forbidden. If and when the learner discovers that it was wrong about such a strong assumption, it reclassifies the property from either obligatory or forbidden to merely optional.

Note that this learning principle requires that no intermediate analysis is ever abandoned; analyses are only further refined by the weakening of universals (X ALWAYS has property P) to existentials (X SOMETIMES has property P). It is in this sense that the overcommitment is "structured." For such a learning strategy to work, it must be the case that the set of features which underlies the learning process are surface observable; the learner must be able to determine of a particular instance of (in this case) a verb structure whether some property is true or false of it. This would seem to imply, as far as we can tell, a commitment to the notion of learning as selection widely presupposed in the linguistic study of generative grammar (as surveyed, for example, in Berwick (1985)). Thus, we propose that the problem of learning the category of a verb does not require that a natural language understanding system synthesize de novo a new structure to represent its semantic class, but rather that it determine to which of a predefined, presumably innate set of verb categories a given verb belongs. In what follows below, we argue that a relevant classification of verb categories can be represented by simple conjunctions of a finite number of predefined quasi-independent features with no need for disjunction or complex boolean combinations of features.

Given such a feature set, the Principle of Structured Overcommitment defines a partial ordering (or, if one prefers, a tangled hierarchy) of verbs as follows: At the highest level of the hierarchy is a set of verb classes where all the primary four features, where defined, are either obligatory or forbidden. Under each of these "primary" categories there are those categories which differ from it only in that some feature which is obligatory or forbidden in the higher class is optional in the lower class. Note that both obligatory and forbidden categories at one level lead to the same optional category at the next level down.
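The update rule itself is small; the following sketch (our rendering, using "+", "-", and "0" for obligatory, forbidden, and optional as in Section 3 below) shows the only two operations the learner ever performs:

    # Sketch: first exposure commits every observed feature to
    # OBLIGATORY ('+') or FORBIDDEN ('-'); a contradicting usage
    # demotes it to OPTIONAL ('0'), and no value ever moves otherwise,
    # so classification descends the hierarchy monotonically.
    def initial_entry(observed):
        """observed: dict of feature -> True/False from the first usage."""
        return {f: ('+' if present else '-') for f, present in observed.items()}

    def weaken(entry, observed):
        for f, present in observed.items():
            if (entry[f] == '+' and not present) or \
               (entry[f] == '-' and present):
                entry[f] = '0'  # weaken a universal to an existential
        return entry

    # An intransitive first sighting followed by a transitive usage:
    verb = initial_entry({'OBJ': False, 'AGT': False, 'THEME': True})
    weaken(verb, {'OBJ': True, 'AGT': True, 'THEME': True})
    print(verb)  # {'OBJ': '0', 'AGT': '0', 'THEME': '+'}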
The learning system, upon encountering a verb for the first time, will necessarily classify that verb into one of the ten top-level categories. This is because the learner assumes, for example, that if a verb is used with an object upon first encounter, that it always has an object; if it has no object, that it never has an object, etc. The learner will leave each verb classification unchanged upon encountering new verb instances until a usage occurs that falsifies at least one of the current feature values. When encountering such a usage, i.e. a verb frame in which a property that is marked obligatory is missing, or a property that is marked forbidden is present (there are no other possibilities), the learner reclassifies the verb by moving down the hierarchy at least one level, replacing the OBLIGATORY or FORBIDDEN value of that feature with OPTIONAL.

Note that, for each verb, the learner's classification moves monotonically lower on this hierarchy, until it eventually remains unchanged because the learner has arrived at the correct value. (Thus this learner embodies a kind of learning in the limit.)

3 THE FEATURE SET AND THE VERB HIERARCHY

As discussed above, our learner describes each verb by means of a vector of features. Some of these features describe syntactic properties of the verb (e.g. "Takes an Object"), others describe aspects of the theta-structure (the predicate/argument structure) of the verb (e.g. "Takes an Agent", "Takes a Theme"), while others describe some key properties of the mapping between theta-structure and syntactic structure (e.g. "Theme Appears As Surface Object").

Most of these features are three-valued; they describe properties that are either always true (e.g. that "devour" always Takes An Object), always false (e.g. that "fall" never Takes An Object) or properties that are optionally true (e.g. that "eat" optionally Takes An Object). Always true values will be indicated as "+" below, always false values as "-" and optional values as "0". All verbs are specified for the first three features mentioned above: "Takes an Object" (OBJ), "Takes an Agent" (AGT), and "Takes a Theme" (THEME). All verbs that allow OBJ and THEME are specified for "Theme Appears As Object" (TAO), otherwise TAO is undefined. At the highest level of the hierarchy is a set of verb classes where all these primary features, where defined, are either obligatory or forbidden. Thus there are at most 10 primary verb types; of the eight for the first three features, only two (+-+ and +++) split for TAO.

The full set of features we assume includes the primary set of features (OBJ, AGT, THEME, and TAO), as described above, and a secondary set of features which play a secondary role in the learning algorithm, as will be discussed below. These secondary features are either thematic properties, or correlations between thematic and syntactic roles. The thematic properties are: LOC - takes a locative; INST - takes an instrument; and DAT - takes a dative. The first thematic-syntactic mapping feature, "Instrument as Subject", is false if no instrument can appear in subject position (or true if the subject is always an instrument, although this is never the case). The second such feature, "Theme as Chomeur" (TAC), is the only non-trinary-valued feature in our learner; it specifies what preposition marks the theme when it is not realized as subject or object.
This feature, if not -, either takes a lexical item (a preposition, actually) as its value, or else the null string. We treat verbs with double objects (e.g. "John gave Mary the ball.") as having a Dative as object, and the theme as either marked by a null preposition or, somewhat alternatively, as a bare NP chomeur. (The facts we deal with here don't decide between these two analyses.) Note that this analysis does not make explicit what can appear as object; it is a claim of the analysis that if the verb is OBJ:+ or OBJ:0 and is TAO:- or TAO:0, then whatever other thematic roles may occur can be realized as the object. This may well be too strong, but we are still seeking a counterexample.

Figure 1 shows our classification of some verb classes of English, given this feature set. (This classification owes much to Levin (1985), as well as to Grimshaw (1983) and Jackendoff (1983).) This is only the beginning of such a classification, clearly; for example, we have concentrated our efforts solely on verbs that take simple NPs as complements. Our intention is merely to provide a rich enough set of verb classes to show that our classification scheme has merit, and that the learning algorithm works. We believe that this set of features is rich enough to describe not only the verb classes covered here but other similar classes. It is also our hope that an analysis of verbs with richer complement structures will extend the set of features without changing the analysis of the classes currently handled.

It is interesting to note that although the partial ordering of verb classes is defined in terms of features defined over syntactic and theta structures, there appears to be at least a very strong semantic reflex to the network. Due to lack of space, we label verb classes in Figure 1 only with exemplars; here we give a list of either typical verbs in the class, and/or a brief description of the class, in semantic terms:

• Spray, load, inscribe, sow: Verbs of physical contact that show the completive/noncompletive[1] alternation. If completive, like "fill".
• Clear, empty: Similar to spray/load, but if completive, like "empty".
• Wipe: Like clear, but no completive pattern.
• Throw: The following four verb classes all involve an object and a trajectory. "Throw" verbs don't require a terminus of the trajectory.
• Present: Like "throw", as far as we can tell.
• Give: Requires a terminus.

[Footnote 1: This is the difference between: I loaded the hay on the truck. and I loaded the truck with hay. In the second case, but not the first, there is an implication that the truck is completely full.]

[Figure 1: Some verb feature descriptions. (The feature table itself, covering classes such as SPRAY/LOAD, EMPTY, SEARCH, BREAK/DESTROY, TOUCH, PUT, DEVOUR, FLY, BREATHE, FILL, GIVE, and FLOWER, is not recoverable from this copy.)]
) "Ik ( 0 + .,,+ O) iqJSH (++00) F j=-++ 1 10+001 ~ t ~ qlmul I Figure 2: The verb hierarchy. 180 • Poke, jab, stick, touch: Some object follows a trajectory, resulting in surface contact. • Hug: Surface contact, no trajectory. • Fill: Inherently ¢ompletive verbs. • Search: Verbs that show a completive/non- completive alternation that doesn't involve physical contact. • Die, flower: Change of state. Inherently non- agentive. • Break: Change of state, undergoing causitive alternation. • Destroy: Verbs of destruction. • Pierce: Verbs of destruction involving a tra- jectory. * Devour, dynamite: Verbs of destruction with incorporated instruments • Put: Simple change of location. • Eat: Verbs of ingesting allowing instruments • Breathe: Verbs of ingesting that incorporate instrument • Fall, swim: Verbs of movement with incorpo- rated theme and incorporated manner. • Push: Exerting force; maybe something moves, maybe not. • Stand: Like "break s, but at a location. • Rain: Verbs which have no agent, and incor- porate their patient. The set of verb classes that we have investigated interacts with our learning algorithm to define the partial order of verb classes illustrated schemati- cally in Figure 2. For simplicity, this diagram is organized by the values of the four principle features of our system. Each subsystem shown in brackets shares the same principle features; the individual verbs within each subsystem differ in secondary features as shown. If one of the primary features is made optional, the learning algorithm will map all verbs in each subsystem into the same subordinate subsystem as shown; of course, secondary feature values are maintained as well. In some cases, a sub-hierarchy within a subsystem shows the learning of a sec- ondary feature. We should note that several of the primary verb classes in Figure 2 are unlabelled because they cor- respond to no English verbs: The class "----" would be the class of rain if it didn't allow forms like ~hail stones rained from the sky", while the class '~+--I--t-" would be the class of verbs like "de- strof' if they only took instruments as subjects. Such classes may be artifacts of our analysis, or they may be somewhat unlikely classes that are filled in languages other than English. Note that sub-patterns in the primary feature subvector seem to signal semantic properties in a straightforward way. So, for example, it appears that verbs have the pattern {OBJ:+, THEME:+, TAO:-} only if they are inherently completive; consider "search" and "fill". Similarly, the rare verbs that have the pattern {OBJ:-, THEME:-}, i.e those that are truly intransitive, appear to in- corporate their theme into their meaning; a typi- cal case here is =swim". Verbs that are {OBJ:-, AGT:-} (e.g. =die") are inherently stative; they allow no agency. Those verbs that are {AGT:+} incorporate the instrument of the operation into their meaning. We will have to say about this be- low. 4 THE LEARNING ALGORITHM AT WORK Let us now see how the learning algorithm works for a few verbs. Our model presupposes that the learner receives as input a parse of the sentence from which to de- rive the subject and object grammatical relations, and a representation of what NPs serve as agent, patient, instrument and location. This may be seen as begging the question of verb acquisition, because, it may be asked, how could an intelligent learner know what entities function as agent, pa- tient, etc. without understanding the meaning of the verb? 
Our model in fact presupposes that a learner can distinguish between such general categories as animate, inanimate, instrument, and locative from direct observation of the environment, without explicit support from verb meaning; i.e. that it will be clear from observation who is acting on what, where. This assumption is not unreasonable; there is strong experimental evidence that children do in fact perceive even something as subtle as the difference between animate and inanimate motion well before the two-word stage (see Golinkoff et al., 1984). This notion that agent, patient and the like can be derived from direct observation (perhaps focussed by what NPs appear in the sentence) is a weak form of what is sometimes called the semantic bootstrapping hypothesis (Pinker (1984)). The theory that we present here is actually a combination of this weak form of semantic bootstrapping with what is called syntactic bootstrapping, the notion that syntactic frames alone offer enough information to classify verbs (see Naigles, Gleitman, and Gleitman (in press) and Fisher, Gleitman and Gleitman (1988)).

With this preliminary out of the way, let's turn to a simple example. Suppose the learner encounters the verb "break", never seen before, in the context

(6) The window broke.

The learner sees that the referent of "the window" is inanimate, and thus is the theme. Given this and the syntactic frame of (6), the learner can see that break (a) does not take an object, in this case, (b) does not take an agent, and (c) takes a patient. By Structured Overcommitment, the learner therefore assumes that break never takes an object, never takes an agent, and always takes a patient. Thus, it classifies break as {OBJ:-, AGT:-, THEME:+, TAO:-} (if TAO is undefined, it is assigned "-"). It also assumes that break is {DAT:-, LOC:-, INST:-, ...} for similar reasons. This is the class of DIE, one of the toplevel verb classes.
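To make the update rule concrete, the following is a minimal sketch (in Python) of the Structured Overcommitment update over the three feature values; it is our illustration rather than the authors' implementation, and the feature list and function names are invented for the example.

    # '+' = always (obligatory), '-' = never (forbidden), '0' = sometimes.
    FEATURES = ["OBJ", "AGT", "THEME", "TAO", "DAT", "LOC", "INST"]

    def initial_entry(observed):
        """First sighting: overcommit.  Every feature observed is taken
        to be obligatory; every feature not observed, forbidden."""
        return {f: ('+' if f in observed else '-') for f in FEATURES}

    def update(entry, observed):
        """Later sightings: on any conflict retreat to '0' (optional);
        a feature never moves directly between '+' and '-'."""
        for f in FEATURES:
            if entry[f] == '+' and f not in observed:
                entry[f] = '0'
            elif entry[f] == '-' and f in observed:
                entry[f] = '0'
        return entry

    # (6) "The window broke." -- only a theme observed:
    break_entry = initial_entry({"THEME"})
    # -> {OBJ: '-', AGT: '-', THEME: '+', TAO: '-', ...}: the class DIE.
    # Each subsequent sighting discussed below weakens conflicting
    # features to '0'.

Note that the monotone retreat from "+" or "-" to "0" is exactly what makes the learner conservative: information is only ever weakened, never invented.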
Next, suppose it sees

(7) John broke the window.

and sees from observation that the referent of "John" is an agent, the referent of "the window" a patient, and from syntax that "John" is subject, and "the window" object. That break takes an object conflicts with the current view that break NEVER takes an object, and therefore this strong assumption is weakened to say that break SOMETIMES takes an object. Similarly, the learner must fall back to the position that break SOMETIMES can have the theme serve as object, and can SOMETIMES have an agent. This takes {OBJ:-, AGT:-, THEME:+, TAO:-} to {OBJ:0, AGT:0, THEME:+, TAO:0}, which is the class of both break and stand. However, since it has never seen a locative for break, it assumes that break falls into exactly the category we have labelled as "break".²

² And how would it distinguish between "The vase stood on the table." and "The vase broke on the table."? This is a problem we discuss at the end of this paper.

There are, of course, many other possible orders in which the learner might encounter the verb break. Suppose the learner first encounters the pattern

(8) John broke the window.

before any other occurrences of this verb. Given only (8), it will assume that break always takes an object, always takes an agent, always has a patient, and always has the patient serving as object. The learner will also assume that break never takes a location, a dative, etc. This will give it the initial description of {OBJ:+, AGT:+, THEME:+, TAO:+, ..., LOC:-}, which causes the learner to classify break as falling into the toplevel verb class of DEVOUR, verbs of destruction with the instrument incorporated into the verb meaning. Next, suppose the learner sees

(9) The hammer broke the window.

where the learner observes that "hammer" is an inanimate object, and therefore must serve as instrument, not agent. This means that the earlier assumption that an agent is necessary was an overcommitment (as was the unmentioned assumption that an instrument was forbidden). The learner therefore weakens the description of break to {OBJ:+, AGT:0, THEME:+, TAO:+, ..., LOC:-, INST:0}, which moves break into the verb class of DESTROY, destruction without incorporated instrument. Finally (as it turns out), suppose the learner sees

(10) The window broke.

Now it discovers that the object is not obligatory, and also that the theme can appear as subject, not object, which means that TAO is optional, not obligatory. This now takes break to {OBJ:0, AGT:0, THEME:+, TAO:0, ...}, which is the verb class of break.

We interposed (9) between (8) and (10) in this sequence just to exercise the learner. If (10) followed (8) directly, the learner would have taken break to verb class BREAK all the more quickly. Although we will not explicitly go through the exercise here, it is important to our claims that any permutation of the potential sentence frames of break will take the learner to BREAK, although some combinations require verb classes not shown on our chart for the sake of simplicity (e.g. the class {OBJ:0, AGT:-, THEME:+, TAO:0} if it hasn't yet seen an agent as subject).

We were somewhat surprised to note that the trajectory of break takes the learner through a sequence of states whose semantics are useful approximations of the meaning of this verb. In the first case above, the learner goes through the class of "change of state without agency" into the class of BREAK, i.e. "change of state involving no location". In the second case, the trajectory takes the learner through "destroy with an incorporated instrument", and then DESTROY, into BREAK. In both of these cases, it happens that the trajectory of break through our hierarchy causes it to have a meaning consistent with its final meaning at each point of the way. While this will not always be true, it seems that it is quite often the case. We find this property of our verb classification very encouraging, particularly given its genesis in our simple learning principle.
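Given the update sketch above, the classes named in this walkthrough can be read off the principal features by table lookup; the table below is a toy fragment of ours, covering only the classes mentioned in the break example.

    # Keyed by the four principal features (OBJ, AGT, THEME, TAO); the
    # real classification also consults secondary features such as
    # DAT, LOC and INST.
    CLASSES = {
        ('-', '-', '+', '-'): "DIE",
        ('+', '+', '+', '+'): "DEVOUR",
        ('+', '0', '+', '+'): "DESTROY",
        ('0', '0', '+', '0'): "BREAK",
    }

    def classify(entry):
        key = tuple(entry[f] for f in ("OBJ", "AGT", "THEME", "TAO"))
        return CLASSES.get(key, "unnamed class %s" % (key,))

    classify(update(break_entry, {"OBJ", "AGT", "THEME", "TAO"}))
    # after (7) -> "BREAK"

Both trajectories just described, DIE -> BREAK and DEVOUR -> DESTROY -> BREAK, fall out of these same two functions.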
We now consider a similar example for a different verb, the verb load, in somewhat terser form. And again, we have chosen a somewhat indirect route to the final derived verb class to demonstrate complex trajectories through the space of verb classes. Assume the learner first encounters

(11) John loads the hay onto the truck.

From (11), the learner builds the representation {OBJ:+, AGT:+, THEME:+, TAO:+, ..., LOC:+, ..., DAT:-}, which lands the learner in the class of PUT, i.e. "simple change of location". We assume that the learner can derive that "the truck" is a locative both from the prepositional marking, and from direct observation. Next the learner encounters

(12) John loads the hay.

From this, the learner discovers that the location is not obligatory, but merely optional, shifting it to {OBJ:+, AGT:+, THEME:+, TAO:+, ..., LOC:0, ..., DAT:-}, the verb class of HUG, with the general meaning of "surface contact with no trajectory." The next sentence encountered is

(13) John loads the truck with hay.

This sentence tells the learner that the theme need only optionally serve as object, that it can be shifted to a non-argument position marked with the preposition with. This gives load the description {OBJ:+, AGT:+, THEME:+, TAO:0, TAC:with, ..., LOC:0, ..., DAT:-}. This new description now takes load into the verb class of POKE/TOUCH, surface contact by an object that has followed some trajectory. (We have explicitly indicated in our description here that {DAT:-} was part of the verb description, rather than leaving this fact implicit, because we knew, of course, that this feature would be needed to distinguish between the verb classes of GIVE and POKE/TOUCH. We should stress that this and many other features are encoded as "-" until encountered by the learner; we have simply suppressed explicitly representing such features in our account here unless needed.)

Finally, the learner encounters the sentence

(14) John loads the truck.

which makes it only optional that the theme must occur, shifting the verb representation to {OBJ:+, AGT:+, THEME:0, TAO:0, TAC:with, ..., LOC:0, ..., DAT:-}. The four principal features of this description put the verb into the general area of WIPE, CLEAR and SPRAY/LOAD, but the optional locative, and the fact that the theme can be marked with with, select for the class of SPRAY/LOAD, verbs of physical contact that show the completive/noncompletive alternation.

Note that in this case again, the semantics of the verb classes along the learning trajectory are reasonable successive approximations to the meaning of the verb.
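Replaying observations (11)-(14) through the update sketch given earlier reproduces this trajectory (TAC is omitted for brevity; each comment names the class the entry has just moved into):

    load = initial_entry({"OBJ", "AGT", "THEME", "TAO", "LOC"})  # (11) PUT
    update(load, {"OBJ", "AGT", "THEME", "TAO"})   # (12) HUG: LOC -> 0
    update(load, {"OBJ", "AGT", "THEME", "LOC"})   # (13) POKE/TOUCH: TAO -> 0
    update(load, {"OBJ", "AGT", "LOC"})            # (14) SPRAY/LOAD: THEME -> 0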
5 FURTHER RESEARCH AND SOME PROBLEMS

One difficulty with this approach which we have not yet confronted is that real data is somewhat noisy. For example, although it is often claimed that Motherese is extremely clean, one researcher has observed that the verb "put", which requires both a location and an object to be fully grammatical, has been observed in Motherese (although extremely infrequently) without a location. We strongly suspect, of course, that the assumption that one instance suffices to change the learner's model is too strong. It would be relatively easy to extend the model we give here with a couple of bits to count the number of counterexamples seen for each obligatory or forbidden feature, with two or three examples needed within some limited time period to shift the feature to optional.

Can the model we describe here be taken as a psychological model? At first glance, clearly not, because this model appears to be deeply conservative, and as Pinker (1987) demonstrates, children freely use verbs in patterns that they have not seen. In our terms, they use verbs as if they had moved them down the hierarchy without evidence. The facts as currently understood can be accounted for by our model given one simple assumption: While children summarize their exposure to verb usages as discussed above, they will use those verbs in highly productive alternations (as if they were in lower categories) for some period after exposure to the verb. The claim is that their usage might be non-conservative, even if their representations of verb class are. By this model, the child would restrict the usage of a given verb to the represented usages only after some period of time. The mechanisms for deriving criteria for productive usage of verb patterns described by Pinker (1987) could also be added to our model without difficulty. In essence, one would then have a non-conservative learner with a conservative core.

REFERENCES

[1] Berwick, R. (1985) The Acquisition of Syntactic Knowledge. Cambridge, MA: MIT Press.
[2] Fisher, C.; Gleitman, H.; and Gleitman, L. (1988) Relations between verb syntax and verb semantics: On the semantic content of subcategorization frames. Submitted for publication.
[3] Golinkoff, R.M.; Harding, C.G.; Carson, V.; and Sexton, M.E. (1984) The infant's perception of causal events: the distinction between animate and inanimate objects. In L.P. Lipsitt and C. Rovee-Collier (Eds.) Advances in Infancy Research 3: 145-65.
[4] Grimshaw, J. (1983) Subcategorization and grammatical relations. In A. Zaenen (Ed.), Subjects and other subjects. Evanston: Indiana University Linguistics Club.
[5] Jackendoff, R. (1983) Semantics and cognition. Cambridge, MA: The MIT Press.
[6] Landau, B. and Gleitman, L.R. (1985) Language and experience: Evidence from the blind child. Cambridge, MA: Harvard University Press.
[7] Levin, B. (1985) Lexical semantics in review: An introduction. In B. Levin (Ed.), Lexical semantics in review. Lexicon Project Working Papers, 1. Cambridge, MA: MIT Center for Cognitive Science.
[8] Naigles, L.; Gleitman, H.; and Gleitman, L.R. (in press) Children acquire word meaning components from syntactic evidence. In E. Dromi (Ed.) Linguistic and conceptual development. Ablex.
[9] Pinker, S. (1984) Language Learnability and Language Development. Cambridge, MA: Harvard University Press.
[10] Pinker, S. (1987) Resolving a learnability paradox in the acquisition of the verb lexicon. Lexicon Project Working Papers 17. Cambridge, MA: MIT Center for Cognitive Science.
COMPUTER AIDED INTERPRETATION OF LEXICAL COOCCURRENCES

Paola Velardi (*), Maria Teresa Pazienza (**)
(*) University of Ancona, Istituto di Informatica, via Brecce Bianche, Ancona
(**) University of Roma, Dip. di Informatica e Sistemistica, via Buonarroti 12, Roma

ABSTRACT

This paper addresses the problem of developing a large semantic lexicon for natural language processing. The increasing availability of machine readable documents offers an opportunity to the field of lexical semantics, by providing experimental evidence of word uses (on-line texts) and word definitions (on-line dictionaries). The system presented hereafter, PETRARCA, detects word cooccurrences from a large sample of press agency releases on finance and economics, and uses these associations to build a case-based semantic lexicon. Syntactically valid cooccurrences including a new word W are detected by a high-coverage morphosyntactic analyzer. Syntactic relations are interpreted, e.g. replaced by case relations, using a catalogue of patterns/interpretation pairs, a concept type hierarchy, and a set of selectional restriction rules on semantic interpretation types.

Introduction

Semantic knowledge codification for language processing requires two important issues to be considered:

1. Meaning representation. Each word is a world: how can we conveniently circumscribe the semantic information associated to a lexical entry?
2. Acquisition. For a language processor to implement a useful application, several thousands of terms must have an entry in the semantic lexicon: how do we cope with such a prohibitive task?

The problem of meaning representation is one which has preoccupied scientists of different disciplines since the early history of human culture. We will not attempt an overall survey of the field of semantics, which has provided material for many fascinating books; rather, we will concentrate on the computer science perspective, i.e. how do we go about representing language expressions on a computer, in a way that can be useful for natural language processing applications, e.g. machine translation, information retrieval, user-friendly interfaces.

In the field of computational linguistics, several approaches were followed for representing semantic knowledge. We are not concerned here with semantic languages, which are relatively well developed; the diversity lies in the meaning representation principles. We will classify the methods of meaning representation in two categories: conceptual (or deep) and collocative (or surface). The terms "conceptual" and "collocative" have been introduced in [8]; we decided to adopt an existing terminology, even though our interpretation of the above two categories is broader than for their inventor.

1. Conceptual meaning. Conceptual meaning is the cognitive content of words; it can be expressed by features or by primitives. Conceptual meaning is "deep" in that it expresses phenomena that are deeply embedded in language.
2. Collocative meaning. What is communicated through associations between words or word classes. Collocative meaning is "superficial" in that it does not seek "the deep sense" of a word, but rather "describes" its uses in everyday language, or in some sub-world language (economy, computers, etc.). It provides more than a simple analysis of cooccurrences, because it attempts an explanation of word associations in terms of conceptual relations between a lexical item and other items or classes.
Both conceptual and collocative meaning representations are based on some subjective, human-produced set of primitives (features, conceptual dependencies, relations, type hierarchies etc.) on which there is no shared agreement at the current state of the art. As far as conceptual meaning is concerned, the quality and quantity of phenomena to be shown in a representation is subjective as well. On the contrary, surface meaning can rely on the solid evidence represented by word associations; the interpretation of an association is subjective, but valid associations are an observable, even though vast, phenomenon. To confirm this, one can notice that different implementations of lexicons based on surface meaning are surprisingly similar, whereas conceptual lexicons are very dishomogeneous.

In principle, the inferential power of collocative, or surface [18], meaning representation is lower than for conceptual meaning. In our previous work on semantic knowledge representation, however [10] [18] [12], we showed that a semantic dictionary in the style of surface meaning is a useful basis for semantic interpretation. The knowledge power provided by the semantic lexicon (limited to about 1000 manually entered definitions) was measured by the capability of the language processor DANTE [2] [18] [11] to answer a variety of questions concerning previously analyzed sentences (press agency releases on finance and economics). It was found that, even though the system was unable to perform complex inferences, it could successfully answer more than 90% of the questions [12].¹ In other terms, surface semantics seems to capture what, at first glance, a human reader understands of a piece of text. In [26], the usefulness of this meaning representation method is demonstrated for TRANSLATOR, a system used for machine translation in the field of computers.

¹ The test was performed over a 6 month period on about 50 occasional visitors and staff members of the IBM Rome scientific center, unaware of the system capabilities and structure. The user would look at 60 different releases, previously analyzed by the system (or re-analyzed during the demo), and freely ask questions about the content of these texts. In the last few months, the test was extended to a different domain, e.g. the Italian Constitution, without significant performance changes. See the referenced papers for examples of sentences and of (answered and not answered) query types (in general wh-questions).

An important advantage of surface meaning is that it makes the acquisition of the semantic lexicon easier. This issue is examined in the next section.

Acquisition of Lexical Semantic Knowledge

Acquiring semantic knowledge on a systematic basis is quite a complex task. One need not look at metaphors or idioms to find this; even the interpretation of apparently simple sentences is riddled with such difficulties that it is hard even to cut out a piece of the problem. A manual codification of the lexicon is a prohibitive task, regardless of the framework adopted for semantic knowledge representation; even when a large team of knowledge enterers is available, consistency and completeness are a major problem. We believe that automatic, or semi-automatic, acquisition of the lexicon is a critical factor in determining how widespread the use of natural language processors will be in the next few years.

Recently a few methods were presented for computer aided semantic knowledge acquisition. A widely used approach is accessing on-line dictionary definitions to solve ambiguity problems [3] or to derive type hierarchies and semantic features [24]. The information presented in a standard dictionary has in our view some intrinsic limitations:

• definitions are often circular, e.g.
the definition of a term A may refer to a term B that in turn points to A;
• definitions are not homogeneous as far as the quality and quantity of provided information: they can be very sketchy, or give detailed structural information, or list examples of use-types, or attempt some conceptual meaning definition;
• a dictionary is the result of a conceptualization effort performed by some human specialist(s); this effort may not be consistent with, or suitable for, the objectives of an application for which a language processor is built.

ex1 (from [8]): boy = +animate -adult +male
ex2 (from [25]): help = Y carrying out Z, X uses his resources W in order for W to help Y to carry out Z; the use of resources by X and the carrying out of Z by Y are simultaneous
ex3 (from [16]): throw = actor PROPELs an object from a source LOCation to a destination LOCation

Figure 1. Examples of conceptual meaning representation in the literature

A second approach is using corpora rather than human-oriented dictionary entries. Corpora provide experimental evidence of word uses, word associations, and language phenomena such as metaphors, idioms, and metonymies. The problem, and at the same time the advantage, of corpora is that they are raw texts, whereas dictionary entries use some formal notation that facilitates the task of linguistic data processing. No computer program may ever be able to derive formatted data from a completely unformatted source. Hence the ability of extracting lexical semantic information from a corpus depends upon a powerful set of mapping rules between phrasal patterns and human-produced semantic primitives and relations. We do not believe that a semantic representation framework is "good" if it mimics a human cognitive model; more realistically, we believe that a set of primitives, relations and mapping rules is "fair" when its coverage over a language subworld is suitable for the purpose of some useful language processing activity. Corpora represent an "objective" description of that subworld, against which it is possible to evaluate the power of a representation scheme; and they are particularly suitable for the acquisition of a collocative meaning based semantic lexicon.

Besides our work [19], the only knowledge acquisition system based on corpora (as far as we know) is described in [7]. In this work, when an unknown word is encountered, the system uses pre-existing knowledge on the context in which the word occurred to derive its conceptual category.
The context is provided by on-line texts in the economic domain. For example, the unknown word merger in "another merger offer" is categorized as merger-transaction using semantic knowledge on the word offer and on pre-analyzed sentences referring to a previous offer event, as suggested by the word another. This method is interesting but relies upon a pre-existing semantic lexicon and contextual knowledge; in our work, the only pre-existing knowledge is the set of conceptual relations and primitives.

ex1 (from [18]): agreement = isa decision_act; participant person, organization; theme transaction; cause communication_exchange; manner interesting, important, effective ...
ex2 (from [26]): person = isa creature; agent_of take, put, find, speech-action, mental-action; consist_of hand, foot ...; source_of speech-action; destination_of speech-action; power human; speed slow; mass human

Figure 2. Examples of collocative meaning representation in the literature

PETRARCA: a method for the acquisition and interpretation of cooccurrences

PETRARCA detects cooccurrences using a powerful morphologic and syntactic analyzer [14] [1]; cooccurrences are interpreted by a set of phrasal-patterns/semantic-interpretation mapping rules. The semantic language is Conceptual Graphs [17]; the adopted type hierarchy and conceptual relations are described in [10]. The following is a summary description of the algorithm. For any word W:

1. (A) Parse every sentence in the corpus that uses W.
   Ex: W = AGREEMENT; "Yesterday an agreement was reached among the companies".
2. (A) Determine all syntactic attachments of W (e.g. syntactically valid cooccurrences).
   Ex: NP_PP(AGREEMENT,AMONG,COMPANY). VP_OBJ(TO_REACH,AGREEMENT).
3. (A) Generate a semantic interpretation for each attachment.
   Ex: [AGREEMENT]->(PARTICIPANT)->[COMPANY].
4. (A) Generalize the interpretations.
   Ex: Given the following examples:
   [AGREEMENT]->(PARTICIPANT)->[COMPANY].
   [AGREEMENT]->(PARTICIPANT)->[COUNTRY_ORGANIZATION].
   [AGREEMENT]->(PARTICIPANT)->[PRESIDENT].
   derive the most general constraint:
   [AGREEMENT]->(PARTICIPANT)->[HUMAN_ENTITY].
   The above is a new case description added to the definition of AGREEMENT.
5. (M) Check the newly derived entry.

Steps marked (A) are automatic; steps marked (M) are manual. The only manual step is the last one; this step is however necessary because of the following:

• step 3 might produce more than one interpretation for a single word pattern, due to the low selectivity of some semantic rule;
• step 3 might fail to produce an interpretation for metonymies and idioms, which violate semantic constraints. Strong syntactic evidence (unambiguous syntactic rules) is used to "signal" the user this type of failure.

Knowledge sources used by PETRARCA

To perform its analysis, PETRARCA uses five knowledge sources:

1. an on-line natural corpus (press agency releases) to select a variety of language expressions including a new word W;
2. a high coverage morphosyntactic analyzer, to derive phrasal patterns centered around W;
3. a catalogue of patterns/interpretation pairs, called Syntax-to-Semantics (SS) rules;
4. a set of rules expressing selectional restrictions on conceptual relation uses (CR rules);
5. a hierarchy of conceptual classes and a catalogue associating concept types to words.

The natural corpus and the parser are used in steps 1 and 2 of the above algorithm; SS rules, CR rules and the word/concept catalogue are used in step 3; the type hierarchy is used in steps 3 and 4.
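As a rough sketch of how the parser output, SS rules, CR rules and the hierarchy cooperate in steps 2 and 3, consider the following miniature (our reconstruction in Python; the rule tables are toy stand-ins holding just the agreement example):

    from typing import NamedTuple

    class Attachment(NamedTuple):      # what step 2 (the parser) emits
        pattern: str                   # e.g. "NP_PP"
        head: str                      # e.g. "AGREEMENT"
        prep: str                      # e.g. "among"
        dep: str                       # e.g. "COMPANY"

    # SS rules: candidate relations indexed by (pattern, preposition).
    SS_RULES = {("NP_PP", "among"): ["PARTICIPANT", "SUBSET", "LOCATION"]}
    # CR rules: selectional restrictions on each relation's two slots.
    CR_RULES = {"PARTICIPANT": {"has": {"MEETING", "AGREEMENT"},
                                "is": {"HUMAN_ENTITY"}}}
    ISA = {"COMPANY": "HUMAN_ENTITY"}  # one edge of the type hierarchy

    def interpret(att):
        """Step 3: keep only the SS candidates whose CR rule admits
        both concepts (looking up the hierarchy for the filler)."""
        out = []
        for rel in SS_RULES.get((att.pattern, att.prep), []):
            slots = CR_RULES.get(rel)
            if slots and att.head in slots["has"] \
                     and ISA.get(att.dep, att.dep) in slots["is"]:
                out.append((att.head, rel, att.dep))
        return out

    # interpret(Attachment("NP_PP", "AGREEMENT", "among", "COMPANY"))
    # -> [("AGREEMENT", "PARTICIPANT", "COMPANY")]

Here SUBSET and LOCATION are proposed by the SS rule but discarded by the CR check, leaving only the participant reading.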
The parser used by PETRARCA is a high coverage morphosyntactic analyzer developed in the context of the DANTE system. The lexical parser is based on a Context Free grammar, the complete set of Italian prefixes and suffixes, and a lexicon of 7000 elementary lemmata (stems without affixes). At present, the morphologic component has 100% coverage over the analyzed corpus (100,000 words) [14] [13]. The syntactic analysis determines syntactic attachments between words by verifying grammar rules and form agreement; the system is based on an Attribute Grammar, augmented with lookahead sets [1]; the coverage is about 80%; when compiled, the parsing time is around 1-2 sec. of CPU time for a sentence with 3-4 prepositional phrases; the CPU is an IBM mainframe.

The syntactic relations detected by the parser are associated to possible semantic interpretations using SS rules. An excerpt of the SS rules is given below for the phrasal pattern noun_phrase (NP) + prepositional_phrase (PP), with the preposition di (of):

NP_PP(*word1,di,*word2) <- rel(POSSESS,di,*word2,*word1). /* il cane di Pietro (the dog of Peter) */
NP_PP(*word1,di,*word2) <- rel(SOC_RELATION,di,*word2,*word1). /* la madre di Pietro (the mother of Peter) */
NP_PP(*word1,di,*word2) <- rel(PARTICIPANT,di,*word1,*word2). /* riunione dei delegati (the meeting of the delegates) */
NP_PP(*word1,di,*word2) <- rel(SUBSET,di,*word2,*word1). /* due di noi (two of us) */
NP_PP(*word1,di,*word2) <- rel(PART_OF,di,*word2,*word1). /* pagine del libro (the pages of the book) */
NP_PP(*word1,di,*word2) <- rel(MATTER,di,*word1,*word2). /* oggetto di legno (an object of wood) */
NP_PP(*word1,di,*word2) <- rel(PRODUCER,di,*word1,*word2). /* ruggito dei leoni (the roar of the lions) */
NP_PP(*word1,di,*word2) <- rel(CHARACTERISTIC,di,*word2,*word1). /* l'intelligenza dell'uomo (the intelligence of the man) */

Overall, we adopted about 50 conceptual relations to describe the set of semantic relations commonly found in language; see [10] for a complete list. The catalogue of SS rules includes about 200 pairs. Given a phrasal pattern produced by the syntactic parser, SS rules select a first set of conceptual relations that are candidate interpretations for the pattern. Selectional restriction rules on conceptual relations are used to select a unique interpretation, when possible. Writing CR rules was a very complex task, which required a process of progressive refinement based on the observation of the results. The following is an example of a CR rule for the conceptual relation PARTICIPANT:

participant =
  has_participant: meeting, agreement, fly, sail
  is_participant: human_entity

Examples of phrasal patterns interpreted by the participant relation are: John flies (to New York); the meeting among parties; the march of the pacifists; a contract between Fiat and Alfa; the assembly of the administrators, etc.

An interesting result of the above algorithm is the following: in general, syntax will also accept semantically invalid cooccurrences. In addition, in step 3, ambiguous words can be replaced by the "wrong" concept names. Despite this, selectional restrictions are able to interpret only valid associations and reject the others. For example, consider the sentence: "The party decided a new strategy". The syntax detects the association SUBJ(DECIDE,PARTY). Now, the word "party" has two concept names associated with it: POL_PARTY and FEAST, hence in step 3 both interpretations are examined. However, no conceptual relation is found to interpret the pattern "FEAST DECIDE". This association is hence rejected.
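The same filtering disambiguates word senses; a minimal sketch of the "party" example follows (ours; the relation table is an invented stand-in for the SS/CR machinery, and the "Similarly" example below continues the prose discussion):

    WORD_CONCEPTS = {"party": ["POL_PARTY", "FEAST"]}
    # Concept pairs for which some conceptual relation (here AGENT)
    # is licensed by the CR rules:
    AGENT_OF = {("POL_PARTY", "DECIDE")}

    def subject_readings(noun, verb_concept):
        """Keep only the senses of the noun that can be the AGENT of
        the verb; senses with no admissible relation are rejected."""
        return [c for c in WORD_CONCEPTS.get(noun, [])
                if (c, verb_concept) in AGENT_OF]

    # subject_readings("party", "DECIDE") -> ["POL_PARTY"];
    # the FEAST reading is rejected, as in the example above.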
Similarly, in the sentence "An agreement is reached among the companies", the syntactic analyzer will submit to the semantic interpreter two associations:

NP_PP(AGREEMENT,AMONG,COMPANY) and
VP_PP(REACH,AMONG,COMPANY)

Now, the preposition among in the SS rules points to such conceptual relations as PARTICIPANT, SUBSET (e.g. "two among all of us"), and LOCATION (e.g. "a pine among the trees"), but none of the above relates a MOVE_ACT with a HUMAN_ORGANIZATION. The second association is hence rejected.

Future experimentation issues

This section highlights the current limitations and experimentation issues with PETRARCA.

Definition of type hierarchies

PETRARCA gets as input not only the word W, but a list of concept labels CWi, corresponding to the possible senses of W. For each of these CWi, the supertype in the hierarchy must be provided. Notice however that the system knows nothing about conceptual classes; the hierarchy is only an ordered set of labels. In order to assign a supertype to a concept, three methods are currently being investigated.

First, a program may "guide" the user towards the choice of the appropriate supertype, visiting the hierarchy top down. This approach is similar to the one described in [26]. Alternatively, the user may give a list of synonymous or near synonymous words. If one of these was already included in the hierarchy, the same supertype is proposed to the user.

A third method lets the system propose the supertype. The system assumes CW = W and proceeds through steps 1, 2 and 3 of the case description derivation procedure. As the supertype of CW is unknown, CR rules are less effective at determining a unique interpretation of syntactic patterns. If in some of these patterns the partner word is already defined in the dictionary, its case descriptions can be used to restrict the analysis. For example, suppose that the word president is unknown in: "The president nominated ..."; "Pertini was a good president". The knowledge on possible AGENTs for NOMINATE lets us infer PRESIDENT < HUMAN_ENTITY; from the second sentence, it is possible to further restrict to PRESIDENT < HUMAN_ROLE.
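A minimal sketch of this third method (our reconstruction; the one-chain hierarchy is invented for the example): each defined partner word contributes a constraint on the unknown word, and the proposal is the most specific constraint encountered.

    PARENT = {"HUMAN_ROLE": "HUMAN_ENTITY", "HUMAN_ENTITY": "TOP"}

    def ancestors(c):
        chain = [c]
        while c in PARENT:
            c = PARENT[c]
            chain.append(c)
        return chain

    def propose_supertype(constraints):
        """Refine TOP by each constraint that lies below the current
        proposal; constraints off the chain are simply ignored here,
        though mixed senses are exactly where the method can fail."""
        best = "TOP"
        for c in constraints:
            if best in ancestors(c):
                best = c
        return best

    # AGENT of NOMINATE forces HUMAN_ENTITY; the predication in
    # "Pertini was a good president" suggests HUMAN_ROLE:
    # propose_supertype(["HUMAN_ENTITY", "HUMAN_ROLE"]) -> "HUMAN_ROLE"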
Accordingly, we are starting a research on semi-automatic word clustering (in some given language subworld described by a natural corpus), based on fuzzy set and conceptual clustering theories. Interpretation of idiomatic expressions In the current version of PETRARCA, in case of idiomatic expressions the user must provide the correct interpretation. In case of metaphors, syntactic evidence is used to detect a metaphor, under the hypothesis that input sentences to the system are syntactically and semantically correct. At the current state of implementation, the system does not provide automatic interpretation of metaphors. However, an interesting method was proposed in 1201. According to this method, when for example a pattern such as "car drinks" is detected, the system uses knowledge of canonical definitions of the concepts "DRINK" and "CAR" to establish whether ~CAR" is used metaplaorically as a HUMANENTITY, or "DRINK" is used metaphorically as 1"O BE FEDBY". An interesting user aided computer program for idiomatic expressions analysis is also described in 1231. Generalization of case descriptions In PERTRARCA, phrasal patterns are first mapped into 'low level" case description; in step 4, "similar" patterns are merged into "high level' case descriptions. In a first implementation, two or three low level case descriptions had to be derived before creating a more general semantic rule. This approach is biased by the availability of example sentences. A word often occurs in dozens of different contexts, and only occasionally two phrasal patterns reflect the same semantic relation. For example, consider the sentences: The company signs a contract for newfimding The ACE stipulates a contract to increase its influence Restricting ourselves to the word "contract', we get the following semantic interpretations of syntactic patterns: 14SIGNI, > frHBlmtl~ > l¢Ol~Crl 2.1COl~t~-~r}. ~ ll~ll~l~- • ll~l~llqO-'l Ms'rII~JI.&TIBI- > crI-IIBMII). > l¢OlCraAc~rl 4.[CONTRA~WI- > (PIJRPOSli). • ll~lll In patterns 1 and 3 "sign" and "stipulate" belong to the same supertype, i.e. INFORMATIONEXCHANGE; hence a new case description can be tentatively created for CONTRACT: ICOl,¢rr~cl+.l. • (TI'llIMI~. > IlI,+F'ORMA'rioI,,I+BXO.IA I~F. ! Indeed, one can tell, talk about, describe etc. a contract. Conversely, patterns 3 and 4 have no common supertype; hence two "low level" case descriptions are added to the definition of CONTRACT. lCONTRAC'rl. • (PURPOSE)- ~ ILmlJNDINGI ICOiCTRACI"I- > (PURPOSE)- • lll'~'ll, ltt.,~IIl Even with a large number of input sentences, the system createsmany of these specific patterns; a human user must review the results and provide for case descriptions generalization when he/she feels this being reasonable. A second approach is to generalize on the basis of a single example, and then retract (split) the rule if a counterexample is found. Currently, we axe ~a'udying different policies and comparing the results; one interesting issue is the exploitation of counterexamples. Concluding remarks Even though PETRARCA is still an experiment and has many unsolved issues, it is, to our knowledge, the first reported system for extensive semantic knowledge acquisition. There is room for many improvements; for example, PETRARCA only detects, but does not interpret idioms; neither it knows what to do with errors; if a wrong interpretation of a phrasal pattern is derived, error correction and refinement of the knowledge base is performed by the programmer. 
Concluding remarks

Even though PETRARCA is still an experiment and has many unsolved issues, it is, to our knowledge, the first reported system for extensive semantic knowledge acquisition. There is room for many improvements; for example, PETRARCA only detects, but does not interpret, idioms; neither does it know what to do with errors; if a wrong interpretation of a phrasal pattern is derived, error correction and refinement of the knowledge base is performed by the programmer. However, PETRARCA is able to process automatically raw language expressions and to perform a first classification and encoding of these data. The rich linguistic material produced by PETRARCA provides a basis for future analysis and refinements. Despite its limitations, we believe this method is a first, useful step towards a more complete system of language learning.

References

[1] F. Antonacci, P. Velardi, M.T. Pazienza, A High Coverage Grammar for the Italian Language, Journal of the Assoc. for Literary and Linguistic Computing, in print, 1988.
[2] F. Antonacci, M.T. Pazienza, M. Russo, P. Velardi, Representation and Control Strategies for Large Knowledge Domains: an Application to NLP, Journal of Applied Artificial Intelligence, in print, 1988.
[3] J.L. Binot and K. Jensen, A Semantic Expert Using an On-line Standard Dictionary, Proceedings of the IJCAI, Milano, 1987.
[4] K. Dahlgren and J. McDowell, Kind Types in Knowledge Representation, Proceedings of Coling-86, 1986.
[5] G.E. Heidorn, Augmented Phrase Structure Grammar, in Theoretical Issues in Natural Language Processing, Nash-Webber and Schank, eds., ACL, 1975.
[6] J. Katz, P. Postal, An Integrated Theory of Linguistic Descriptions, Cambridge, M.I.T. Press, 1964.
[7] P. Jacobs, U. Zernik, Acquiring Lexical Knowledge from Text: a Case Study, Proceedings of AAAI-88, St. Paul, August 1988.
[8] G. Leech, Semantics: The Study of Meaning, second edition, Penguin Books, 1981.
[9] R.S. Michalski, J.G. Carbonell, T.M. Mitchell, Machine Learning, vol. 1, Tioga Publishing Company, Palo Alto, 1983.
[10] M.T. Pazienza and P. Velardi, A Structured Representation of Word Senses for Semantic Analysis, Third Conference of the European Chapter of the ACL, Copenhagen, April 1-3, 1987.
[11] M.T. Pazienza and P. Velardi, Integrating Conceptual Graphs and Logic in a Natural Language Understanding System, in Natural Language Understanding and Logic Programming II, V. Dahl and P. Saint-Dizier, editors, North-Holland, 1988.
[12] M.T. Pazienza, P. Velardi, Using a Semantic Knowledge Base to Support a Natural Language Interface to a Text Database, 7th International Conference on Entity-Relationship Approach, Rome, November 16-18, 1988.
[13] M. Russo, A Rule Based System for the Morphologic and Morphosyntactic Analysis of the Italian Language, in Natural Language Understanding and Logic Programming II, V. Dahl and P. Saint-Dizier, editors, North-Holland, 1988.
[14] M. Russo, A Generative Grammar Approach for the Morphologic and Morphosyntactic Analysis of Italian, Third Conference of the European Chapter of the ACL, Copenhagen, April 1-3, 1987.
[15] R.C. Schank, Conceptual Dependency: a Theory of Natural Language Understanding, Cognitive Psychology, vol. 3, 1972.
[16] R.C. Schank, N. Goldman, C. Rieger, C. Riesbeck, Conceptual Information Processing, North-Holland/American Elsevier, 1975.
[17] J.F. Sowa, Conceptual Structures: Information Processing in Mind and Machine, Addison-Wesley, Reading, 1984.
[18] P. Velardi, M.T. Pazienza and M. DeGiovanetti, Conceptual Graphs for the Analysis and Generation of Sentences, IBM Journal of Research and Development, March 1988.
[19] P. Velardi, M.T. Pazienza, S. Magrini, Acquisition of Semantic Patterns from a Natural Corpus of Texts, ACM-SIGART special issue on knowledge acquisition, in print.
[20] E. Way, Dynamic Type Hierarchies: An Approach to Knowledge Representation through Metaphor, PhD dissertation, Dept. of System Science, State Univ. of NY at Binghamton, 1987.
[21] Y. Wilks, Preference Semantics, Memoranda from the Artificial Intelligence Laboratory, Stanford University, Stanford, 1973.
[22] Y. Wilks, Deep and Superficial Parsing, in Parsing Natural Language, M. King, editor, Academic Press, 1983.
[23] U. Zernik, Strategies in Language Acquisition: Learning Phrases from Examples in Contexts, PhD dissertation, Tech. Rept. UCLA-AI-87-1, University of California, Los Angeles, 1987.
[24] R. Byrd, N. Calzolari, M. Chodorow, J. Klavans, M. Neff, O. Rizk, Large Lexicons for Natural Language Processing: Utilizing the Grammar Coding System of LDOCE, Computational Linguistics, special issue on the Lexicon, D. Walker, A. Zampolli, N. Calzolari, editors, July-December 1987.
[25] I. Mel'cuk, A. Polguere, A Formal Lexicon in Meaning-Text Theory (or How To Do Lexica with Words), Computational Linguistics, special issue on the Lexicon, D. Walker, A. Zampolli, N. Calzolari, editors, July-December 1987.
[26] S. Nirenburg, V. Raskin, The Subworld Concept Lexicon and the Lexicon Management System, Computational Linguistics, special issue on the Lexicon, D. Walker, A. Zampolli, N. Calzolari, editors, July-December 1987.
[27] J. Pustejovsky, Constraints on the Acquisition of Semantic Knowledge, Journal of Intelligent Information Systems, vol. 3, n. 3, fall 1988.
A HYBRID APPROACH TO REPRESENTATION IN THE JANUS NATURAL LANGUAGE PROCESSOR

Ralph M. Weischedel
BBN Systems and Technologies Corporation
10 Moulton St.
Cambridge, MA 02138

Abstract

In BBN's natural language understanding and generation system (Janus), we have used a hybrid approach to representation, employing an intensional logic for the representation of the semantics of utterances and a taxonomic language with formal semantics for specification of descriptive constants and axioms relating them. Remarkably, 99.9% of 7,000 vocabulary items in our natural language applications could be adequately axiomatized in the taxonomic language.

1. Introduction

Hybrid representation systems have been explored before [9, 24, 31], but until now only one has been used in an extensive natural language processing system. KL-TWO [31], based on a propositional logic, was at the core of the mapping from formulae to lexical items in the Penman generation system [28]. In this paper we report some of the design decisions made in creating a hybrid of an intensional logic with a taxonomic language for use in Janus, BBN's natural language system, consisting of the IRUS-II understanding components [5] and the Spokesman generation components. To our knowledge, this is the first hybrid approach using an intensional logic, and the first time a hybrid representation system has been used for understanding.

In Janus, the meaning of an utterance is represented as an expression in WML (World Model Language) [15], which is an intensional logic. However, a logic merely prescribes the framework of semantics and of ontology. The descriptive constants, that is the individual constants (functions with no arguments), the other function symbols, and the predicate symbols, are abstractions without any detailed commitment to ontology. (We will abbreviate descriptive constants throughout the remainder of this paper as constants.) Axioms stating the relationships between the constants are defined in NIKL [8, 22]. We wished to explore whether a language with limited expressive power but fast reasoning procedures is adequate for core problems in natural language processing. The NIKL axioms constrain the set of possible models for the logic in a given domain. Though we have found clear examples that argue for more expressive power than NIKL provides, 99.9% of the examples in our expert system and data base applications have fit well within the constraints of NIKL. Based on our experience and that of others, the axioms and limited inference algorithms can be used for classes of anaphora resolution, interpretation of highly polysemous or vague words such as have and with, finding omitted relations in novel nominal compounds, and selecting modifier attachment based on selection restrictions.

Sections 2 and 3 describe the rationale for our choices in creating this hybrid. Section 4 illustrates how the hybrid is used in Janus. Section 5 briefly summarizes some experience with domain-independent abstractions for organizing constants of the domain. Section 6 identifies related hybrids, and Section 7 summarizes our conclusions.

2. Commitments to Component Representation Formalisms

We chose well-documented representation languages in order to focus on formally specifying domains and using that specification in language processing rather than on defining new domain-independent representation languages. A critical decision was our selection of intensional logic as the semantic representation language.
(Our motivations for that choice are covered in Section 2.1.) Given an intensional logic, the fundamental question was how to support inference for semantic and discourse processing. The novel aspect of the design was selecting a taxonomic language and associated inference techniques for that purpose.

2.1. Why an Intensional Logic

First and foremost, though we had found first-order representations adequate (and desirable) for NL interfaces to relational data bases, we felt a richer semantic representation was important for future applications. The following classes of representation challenges motivated our choice.

• Explicit representations of time and world. Object-oriented simulation systems were an application that involved these, as were expert systems supporting hypothetical worlds. The underlying application systems involved a tree of possible worlds. Typical questions about these included What if the stop time were 20 hours? to set up a possible world and run a simulation, and In which situations is blue attrition greater than 50%?, where the whole tree of worlds is to be examined. The potential of time-varying entities existed in some of the applications as well, whether attribute values (as in How often has USS Enterprise been C3?) or entities (When was CV22 decommissioned?). The time and world indices of WML provided the opportunity to address such semantic phenomena (though a modal temporal logic or other logics might serve this purpose).

• Distributive/collective quantification. Collective readings could arise, though they appear rare, e.g., Do USS Frederick's capabilities include anti-submarine warfare? or When did the ships collide? See [25] for a computational treatment of distributive/collective readings in WML.

• Generics and Mass Terms. Mass terms and generally true statements arise in these applications, such as in Do nuclear carriers carry JP5?, where JP5 is a kind of jet fuel. Term-forming operators and operators on predicates are one approach and can be accommodated in intensional logics.

• Propositional Attitudes. Statements of user preference, e.g., I want to leave in the afternoon, should be accommodated in interfaces to expert systems, as should statements of belief, I believe I must fly with a U.S. carrier. Since intensional logics allow operators on predicates and on propositions, such statements may be conveniently represented.

Our second motivation for choosing intensional logic was our desire to capitalize on other advantages we perceived for applying it to natural language processing (NLP), such as the potential simplicity and compositionality of mapping from syntactic form to semantic representation and the many studies in linguistic semantics that assume some form of intensional logic.

However, the disadvantages of intensional logic for NLP include:

• The complexity of logical expressions is great even for relatively straightforward utterances using Montague grammar [21]. However, by adopting intensional logic while rejecting Montague grammar, we have made some inroads toward matching the complexity of the proposition to the complexity of the utterance; that simplicity is at the expense of using a more powerful semantic interpreter and of sacrificing compositionality in those cases where language itself appears non-compositional.

• Real-time inference strategies are a challenge for so rich a logic.
However, our hypothesis is that large classes of the linguistic examples requiring common sense reasoning can be handled using limited inference algorithms on a taxonomic language. Arguments supporting this hypothesis appear in [2, 13] for interpreting nominal compounds; in [6, 7, 29] for common sense reasoning about modifier attachment; and in [32] for phenomena in definite reference resolution. This second disadvantage, the goal of tractable, real-time inference strategies, is the basis for adding taxonomic reasoning to WML, giving a hybrid representation.

2.2. Why a Taxonomic Language

Our hypothesis is that much of the reasoning needed in semantic processing can be supported by a taxonomy. The ability to pre-compile pre-specified inferential chains, to index them via concept name and role name, and to employ taxonomic inheritance for organizing knowledge were critical in selecting taxonomic representation to supplement WML.

The well-defined semantics of NIKL was the basis for choosing it over other taxonomic systems. A further benefit in choosing NIKL is the availability of KREME [1], which can be used as a sophisticated browsing, editing, and maintenance environment for taxonomies such as those written in NIKL; KREME has proven effective in a number of BBN expert system efforts other than NLP and having a taxonomic knowledge base.

In choosing NIKL to axiomatize the constants, one could use its built-in, incomplete inference algorithm, the classifier [27]. In Janus, the classifier is used only for consistency checking when modifying or loading the taxonomic network; any concepts or roles identified by the classifier as identical are candidates for further axiomatization. Our semantic procedures do not need even as sophisticated an algorithm as the NIKL classifier; pre-compiled, pre-defined inference chains in the network are simpler, faster, and have proven adequate for NLP in our applications.

2.3. Two Critical Choices in the Hybrid

2.3.1. Representing Predicates of Arbitrary Arity

Choosing a taxonomic language, at least in current implementations, means that one is restricted to unary and binary predicates. However, this is not a limitation in expressive power. One can represent a predicate P of n arguments via a unary predicate P' and n binary predicates, which is what we have done. (P r1 ... rn) will be true iff the following expression is:

(3 b)(^ (P' b) (R1 b r1) (R2 b r2) ... (Rn b rn))

Davidson [5] has argued for such a representation of processes on semantic grounds, since many event descriptors appear with a variable number of arguments.
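Mechanically, the rewrite can be pictured as below (our sketch in Python; b stands for the introduced individual, and the role names R1 ... Rn are the schematic ones of the formula above):

    import itertools

    _fresh = itertools.count()

    def reify(pred, args):
        """Encode (P r1 ... rn) as a unary predicate P' on a fresh
        individual b plus the binary assertions (Ri b ri)."""
        b = "b%d" % next(_fresh)
        facts = [("%s'" % pred, b)]
        facts += [("R%d" % i, b, r) for i, r in enumerate(args, start=1)]
        return facts

    # reify("GIVE", ["John", "Mary", "ball"]) ->
    # [("GIVE'", "b0"), ("R1", "b0", "John"),
    #  ("R2", "b0", "Mary"), ("R3", "b0", "ball")]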
Thus in the diagram in Figure 1, (A z), (B z), (C z), and (R x y) are intensions, i.e., functions with arguments of time and world [t, w] to extensions. Rewriting the axioms above by quantifying over all times and worlds, the axioms for the diagram in Figure 1 in the hybrid representation are (V x)(V t)(V w)((B x)(t..,] ~ (A x)[t.w]) (v x)(V O(V w)((B x)[t,w] (3 y)(^ (C y)[t.w] (R x y)[t.w])). Though this handles the overwhelming majority of constants we need to axiomatize, it does not allow for representing constants taking intensional arguments because the axioms above allow for quantification over extensions only)The semantics of predicates which should have intensions as arguments are unfor- tunately specified separately. Examples that have arisen in our applications involve changes in a reading on a scale, e.g., USS Stark's readiness downgraded from C1 to C4. 2 We would like to treat that sentence as: (^ (DOWNGRADE a) (SCALE a ([NTENS[ON Stark-readiness)) (PREVIOUS a C1) (NEW a C4)). That is, for the example we would like to treat the scale as intensional, but have no way to do so in NIKL. Therefore, we had to annotate the definition of downgrade outside of the formal semantics of NIKL. Only 0.1% of the 7,000 (root) word vocabulary in our applications could not be handled with NIKL. (The additional problematic vocabulary were upgrade, project, report, change, and expect.) 3. Example Representational Decisions Here we mention some of the issues we focussed on in developing Janus. The specification of WML appears in [15]; specifications for NIKL appear in [22, 26]. Few constants. One decision was to use as few constants as possible, deriving as many entities as possible using operators in the intensionai logic. In this section we illustrate this point by showing how definitely referenced sets, information about kinds, in- definitely identified sets, and generic information can be stated by derivation from a single constant whose extension is the set of all individuals of a particular class. Some of the expressive power of the hybrid is illustrated below as it pertains to minimizing the con- stants needed From the constants BLACK-ENTITIES, GRAY-ENTITIES, CATS and MICE, the operators THE, POWER, KIND, and SAMPLE are used to derive the entities corresponding to definite sets, generic classes, and indefinite sets. In a semantic network without the hybrid, one might choose (or need) to represent each of our derived entities by a node in the network. Our use of the operator THE, and the operator POWER for definite plurals follows Scha [25]. The operators KIND and SAMPLE follow Cad.son's analysis [10] of the semantics of bare plurals. THE, as an operator, takes three arguments: a variable, a sort (unary predicate), and a proposition. Its denotation is the unique salient object in context such that it is in the sort and such that if the variable is bound to it, the proposition is true. POWER takes a sort as argument and produces the predicate cor- responding to the power set of the set denoted by the sort. These operators are useful for representing definite plurals; the black cats would be represented as (THE x (POWER CATS) (BLACK-ENTITIES x)). vlt is possible that one could extend NIKL semantics to allow for inter~sional aK3uments . but this has not been done. 2An analogy in more common terminology would be His tempera- ture dropped from 104 degrees to 99 degrees. 
195 SAMPLE takes the same arguments as THE, but indicates some set of entities satisfying the sort and proposition, not necessarily the largest set. KIND takes a sort as argument, and produces an individual representing the sort; its only use is for bare plurals that are surface subjects of a generic statement. If we are predicating something of a bare plural, KIND is used; for instance, cats as in cats are ferocious is represented as (KIND CATS). An indefinite set aris- ing as a bare plural in a VP is represented using SAMPLE; for instance, gray mice as in Cats eat gray mice is represented as (SAMPLE x MICE (GRAY- ENTITIES x)). The examples above demonstrate that an inten- sional logic enables derivation of many entities from fewer constants than would be needed in NIKL or other frame-based systems. The next example il- lustrates how the intensional logic lets us express some propositions that can be stated in many seman- tic network systems, but not in NIKL. Generic assertions. Generic statements such as Cats eat mice are often encoded in a semantic net- work or frame system. This is not possible in the semantics of NIKL, but is possible in the hybrid. The structure in Figure 2 would not give the desired generic meaning, but rather would mean (ignoring time and world) that (V x) ((CATS x) = (3 y)(^ (MICE y)(EAT x y))), i.e., every cat eats some mouse. EAT (1,oo) Figure 2: Illustration Distinguishing NIKL Networks from other Semantic Nets Again, following Carlson's linguistic analysis [10], in the hybrid we would have a generic statement about the kind corresponding to cats, that these eat in- definitely specified sets of mice. GENERIC is an operator which produces a predicate on kinds, intui- tively meaning that the resulting predicate is typically true of individuals of the kind that is its argument. Our formal representation (ignoring tense for simplicity) is (GENERIC (LAMBDA (x) (EAT x(SAMPLE y MICE)))) (KIND CATS). Next we illustrate a potential powerful feature of the hybrid which we have chosen not to exploit. Derivable definitions. The hybrid gives a powerful means of defining lexical items. To define pi/o~ one wants a predicate defining the set of people that typi- cally are the actors in a flight, i.e., (LAMBDA (x') { ^ (PERSON x') (GENERIC (LAMBDA (x) (3 y)(^ (FLYING-EVENT y) (ACTOR y x)))) x') }) Though the hybrid gives us the representational capacity to make such definitions, we have chosen as part of our design no_._tt to use it. For to use it, would mean stepping outside of NIKL to specify constants, and therefore, that the reasoning algorithms based on taxonomic semantics would not be the simple, ef- ficient strategies, but rather might require arbitrarily complex theorem proving for expressions in inten- sional logic. 3 4. Use of the Taxonomy in Janus By domain mode/we mean the set of axioms en- coded in NIKL regarding the constants. The domain model serves several purposes in Janus. Of course, in defining the constants of our semantic represen- tation language, it provides the constants that can ap- pear in formulae that lexical items map to. For in- stance, vessel and ship map to VESSEL. In the ex- ample above regarding pilot, the constants were PER- SON, FLYING-EVENT, and ACTOR; in the formula • above stating that cats eat mice, the constants were EAT, MICE, and CATS, In this section, we divide the discussion in three parts: current uses of the domain model in Janus; a plausible, but rejected use; and proposals for its use, but not yet implemented. 4.1. 
4.1. Current Uses

4.1.1. Selection Restrictions

The domain model provides the semantic classes (or sorts of a sorted logic) that form the primitives for selection restrictions. Its use for this purpose is neither novel nor surprising, merely illustrative. In the case of deploy, a MILITARY-UNIT can be the logical subject, and the object of a phrase marked by to must be a LOCATION. Almost all selection restrictions are based on the semantic class of the entities described by a noun phrase. That is, almost all may be checked by using taxonomic knowledge regarding constants. A table of semantic classes for the operators discussed earlier is provided in Figure 3. Though the logical form for the carriers, all carriers, some carriers, a carrier, and carriers (both in the KIND and SAMPLE case) varies, the selection restriction must check the NIKL network for consistency between the constant CARRIERS and the constraint of the selection restriction. To see this, consider the case of command (in the sense of a military command), which requires that its direct object in active clauses be a MILITARY-UNIT and that its surface subject in passive clauses be a MILITARY-UNIT, i.e., its logical object must be a MILITARY-UNIT. Suppose USS Enterprise, carrier, and aircraft carrier all have semantic class CARRIER. Since an ancestor of CARRIER in the taxonomy is MILITARY-UNIT, each of those phrases satisfies the aforementioned selection restriction on the verb command. Phrases whose class does not have MILITARY-UNIT as an ancestor or as a descendent⁴ will not satisfy the selection restriction. That is, definite evidence of consistency with the selection restriction is normally required.

    Expression           Semantic Class
    (THE x P (R x))      P
    (POWER P)            P
    (KIND P)             P
    (SAMPLE x P (R x))   P
    (LAMBDA x P (R x))   P

    Figure 3: Relating Expressions to Classes⁵

There are three cases where more must be done. For pronouns, Janus saves selection restrictions that would apply to the pronoun's referent, later applying those constraints to eliminate candidate referents. Metonymy is an exception, discussed in Section 4.3.2. There are cases of selection restrictions requiring information additional to the semantic class, but these are checked against the type of the logical expression⁶ for a noun phrase, rather than its semantic class only. Collide requires a set of agents. The type of a plural, for instance, is (SET P), where P is its semantic class. The selection restriction on collide could be represented as (SET PHYSICAL-OBJECT).

4.1.2. Highly Polysemous Words

Have, with, and of are highly polysemous. Some of their senses are very specific, frozen, and predictable, e.g., to have a cold; these senses may be itemized in the lexicon. However, other senses are vague, if considered in a domain-independent way; nevertheless, they must be resolved to precise meanings if accessing a data base, expert system, etc. USS Frederick has a speed of 30 knots has this flavor, for the general sense is associating an attribute with an entity. To handle such cases, we look for a relation R in the domain model which could be the domain-dependent interpretation.

³USC/ISI [19] has proposed a first-order formula defining the set of items that have ever been the actor in a flight. Their definition is solely within NIKL using the QUA link [14], which is exactly the set of fillers of a slot. While having ever flown could be a sense of pilot, it seems less useful than the sense of normally flying a plane.
If A has B, the B of A, or A with B are input, the semantic interpreter looks for a role R from the class associated with A to the class associated with B. If no such role exists, the search is for a role relating the nearest ancestor of the class of A to any ancestor of the class of B. The implicit assumption is that items structured closely together in the domain model can be related with such vague words, and that items that can be related via such vague words will naturally have been organized closely together in the domain model.

While we describe the procedure as a search, in fact an explicit run-time search may not be necessary. All SUPERCs (ancestors) of a concept are compiled and stored when the taxonomy is loaded. All roles from one concept to another are also pre-compiled and stored, maintaining the distinction between roles that are explicit locally versus those that are compiled. Furthermore, the ancestors and role relations are indexed. One need only walk up the chain of ancestors if no locally defined role relates the two concepts, but some inherited (not locally defined) role does; then one walks up the ancestor chain(s) only to find the closest applicable role. Thus, in many cases, "semantic reasoning" is reduced to efficient table lookup (an illustrative sketch appears below).

4.1.3. Relation to Underlying System

Adopting WML offers the potential of simplifying the mapping from surface form to semantic representation, although it does increase the complexity of mapping from WML to executable code, such as SQL or expert system function calls. The mapping from intensional logic to executable code is beyond the scope of this paper; our first implementation was reported in [30]; the current implementation will be described elsewhere.

This process makes use of a model of underlying system capabilities in which each element relates a set of domain model constants to a method for accessing the related information in the database, expert system, simulation program, etc. For example, the constant HARPOON-CAPABLE, which defines a set of vessels equipped with harpoon missiles, is associated with an underlying system model element which states how to select the subset of exactly those vessels. In a Navy relational data base that we have dealt with, the relevant code selects just those records of a table of unit characteristics with a "Y" in the HARP field.

⁴We check whether the constraint is a descendent of the class of the noun phrase to determine whether consistency is possible. For instance, if decommission requires a VESSEL as the object of the decommissioning, those units and they satisfy the selection constraint.
⁵The rules may need to be used recursively to get to a constant.
⁶Every expression in WML has a type.

4.1.4. Knowledge Acquisition

We have developed two complementary tools to greatly increase our productivity in porting BBN's Janus NL understanding and generation system to new domains. IRACQ [3] supports learning lexical semantics from examples with only one unknown word. IRACQ is used for acquiring the diverse, complex patterns of syntax and semantics arising from verbs, by providing examples of the verb's usage. Since IRACQ assumes that a large vocabulary is available for use in the training examples, a way to rapidly infer the knowledge bases for the overwhelming majority of words is an invaluable complement. KNACQ [33] serves that purpose. The domain model is used to organize, guide, and assist in acquiring the syntax and semantics of domain-specific vocabulary.
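Before turning to the acquisition tools in detail, here is a small Common Lisp sketch of the pre-compiled lookup described in Section 4.1.2 above. All names are invented for illustration; this is not the Janus implementation, only the shape of the idea: ancestor chains and roles sit in indexed tables built at taxonomy-load time, so interpreting a vague have/with/of is a table walk rather than reasoning.

    ;; Invented names throughout; a sketch of the idea, not Janus code.
    ;; *ancestor-table*: concept -> ordered list of its SUPERCs (nearest
    ;; first), compiled once when the taxonomy is loaded.
    ;; *role-table*: (domain . range) -> role relating the two concepts.
    (defparameter *ancestor-table* (make-hash-table :test #'eq))
    (defparameter *role-table* (make-hash-table :test #'equal))

    (defun find-vague-relation (class-a class-b)
      "Interpret `A has B' / `the B of A' / `A with B': look for a role
    from CLASS-A, or failing that its nearest ancestor, to CLASS-B or
    any ancestor of CLASS-B."
      (dolist (a (cons class-a (gethash class-a *ancestor-table*)))
        (dolist (b (cons class-b (gethash class-b *ancestor-table*)))
          (let ((role (gethash (cons a b) *role-table*)))
            (when role
              (return-from find-vague-relation role))))))

    ;; E.g., with a CURRENT-SPEED role stored under (VESSEL . SPEED),
    ;; "USS Frederick has a speed of 30 knots" finds it immediately:
    ;; (setf (gethash '(vessel . speed) *role-table*) 'current-speed)
    ;; (find-vague-relation 'vessel 'speed)   ; => CURRENT-SPEED

Because the outer loop starts from the class itself and proceeds nearest-ancestor first, the closest applicable role wins, matching the preference described above.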
Using the browsing facilities, graphical views, and consistency checker of KREME [1] on NIKL taxonomies, one may select any concept or role for knowledge acquisition. KNACQ presents the user with a few questions and menus to elicit the English expressions used to refer to that concept or role. To illustrate the kinds of information that must be acquired, consider the examples in Figure 4.

    The vessel speed of Vinson
    The vessels with speed above 20 knots
    The vessel's speed is 5 knots
    Vinson has speed less than 20 knots
    Its speed
    Which vessels have a CROVL of C3?
    Which vessels are deployed C3?

    Figure 4: Examples for Knowledge Acquisition

To handle these one would have to acquire information on lexical syntax, lexical semantics, and mapping to expert system structure for all words not in the domain-independent dictionary. For purposes of this exposition, assume that the words vessel, speed, Vinson, CROVL, C3, and deploy are to be defined. A vessel has a speed of 20 knots or a vessel's speed is 20 knots would be understood from domain-independent semantic rules regarding have and be, once lexical information for vessel and speed is acquired. In acquiring the definitions of vessel and speed, the system should infer interpretations for phrases such as the speed of a vessel, the vessel's speed, and the vessel speed. Given the current implementation, the required knowledge for the words vessel, speed, and CROVL is most efficiently acquired using KNACQ; names of instances of classes, such as Vinson and C3, are automatically inferred from instances; and knowledge about deploy and its derivatives would be acquired via IRACQ.

To illustrate this acquisition centered around the domain model, consider acquisition centered around roles. Attributes are binary relations on classes that can be phrased as the <relation> of a <class>. For instance, suppose CURRENT-SPEED is a binary relation relating VESSELS to SPEED, a subclass of ONE-D-MEASUREMENT. An attribute treatment is the most appropriate, for the speed of a vessel makes perfect sense. KNACQ asks the user for one or more English phrases associated with this functional role; the user response in this case is speed. That answer is sufficient to enable the system to understand the kernel noun-phrases listed in Figure 5. Since ONE-D-MEASUREMENT is the range of the relation, the software knows that statistical operations such as average and maximum apply to speed. The lexical information inferred is used compositionally with the syntactic rules, domain-independent semantic rules, and other lexical semantic rules. Therefore, the generative capacity of the lexical semantic and syntactic information is linguistically very great, as one would require. A small subset of the examples illustrating this without introducing new domain-specific lexical items appears in Figure 5.

    KERNEL NOUN PHRASES
    the speed of a vessel
    the vessel's speed
    the vessel speed

    RESULTS from COMPOSITIONALITY
    The vessel speed of Vinson
    Vinson has speed 1
    The vessels with a speed of 20 knots
    The vessel's speed is 5 knots
    Vinson has speed less than 20 knots
    Their greatest speed
    Its speed
    Which vessels have speed above 20 knots
    Which vessels have speeds
    Eisenhower has Vinson's speed
    Carriers with speed 20 knots
    Their average speeds

    Figure 5: Attribute Examples

Some lexicalizations of roles do not fall within the attribute category. For these, a more general class of regularities is captured by the notion of caseframe rules.
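Before caseframe rules are taken up, the attribute treatment just illustrated can be summarized in a small sketch. The record layout and all names below are our own invention for illustration; they are not KNACQ's actual data structures:

    ;; Invented representation for illustration; not KNACQ's own format.
    (defstruct attribute-rule
      role      ; the functional role, e.g. CURRENT-SPEED
      domain    ; its domain class,   e.g. VESSEL
      range     ; its range class,    e.g. SPEED (a ONE-D-MEASUREMENT)
      phrases)  ; English phrases elicited from the user, e.g. ("speed")

    (defparameter *speed-attribute*
      (make-attribute-rule :role    'current-speed
                           :domain  'vessel
                           :range   'speed
                           :phrases '("speed")))
    ;; Because the range falls under ONE-D-MEASUREMENT in the taxonomy,
    ;; operators such as "average" and "maximum" apply with no further
    ;; acquisition, which is how one answer licenses all of Figure 5.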
Suppose we have a role UNIT-OF, relating CASREP and MILITARY-UNIT. KNACQ asks the user which subset of the following six patterns in Figure 6 are appropriate plus the prepositions that are appropriate. 1. <CASREP> is <PREP> <MILITARY-UNIT> 2. <CASREP> <PREP> <MILITARY-UNIT> 3. <MILITARY-UNIT> <CASREP> 4. <MILITARY-UNIT> is <PREP> <CASREP> 5. <MILITARY-UNIT> <PREP> <CASREP> 6. <CASREP> <MILITARY-UNIT> Figure 6: Patterns for the Caseframe Rules For this example, the user would select patterns (1), 198 (2), and (3) and select for, on. and of as prepositions. 7 The information acquired through KNACQ is used both by the understanding components and by BBN's Spokesman generation components for paraphrasing, for providing clarification responses, and for answers in English. Mapping from the WML structures to lex- ical items is accomplished using rules acquired with KNACQ, as well as handcrafted mapping rules for lexical items not directly associated with concepts or roles. 4.2. Where an Alternative Mechanism was Selected Though the domain model is central to the seman- tic processing of Janus, we have not used it in all possible ways, but only where there seems to be clear benefit. In telegraphic language, omitted prepositions, as in List the creation date file B, may arise. Alter- natively, if the NLP system is part of a speech under- standing system, prepositions are among the most difficult words to recognize reliably. Omitted preposi- tions could be treated with the same heuristic as im- plemented for interpreting the meaning of have, with, and of. However, we have chosen a different in- ference technique for omitted prepositions. Though one could represent selection restrictions directly in a taxonomy (as reported in [7, 29]), selec- tion restrictions in Janus are stored separately, in- dexed by the semantic class of the head word. We believe it more likely that Janus will have the selec- tional pattern involving the omitted preposition, than that the omitted preposition corresponds to a usage unknown to Janus and inferable from the domain model relations. Consequently, Janus applies the selection restrictions corresponding to all senses of the known head, to find what senses are consistent with the proposed phrase and with what prepositions. In practice, this gives rise to far fewer possibilities than considering all relations possible whether or not they can be expressed with a preposition. 4.3. Proposals not yet Implemented (Possible Future Directions) In this section, we speculate regarding some pos- sible future work based on further exploiting the domain model and hybrid representation system described in this paper. 7Normally, if pattern (1) is valid, pattern (2) will be as well and vice versa. Similarly, if pattern (4) is valid, pattern (5) will normally be also. As a result, the menu items are coupled by default (selecting (1) automatically selects (2) and vice versa), but this default may be simply overridden by selecting either and then decelecting the other. The most frequent examples where one does not have the coupling of these patterns is the preposition of. 4.3.1. An Approach to Bridging It has long been observed [11 ] that mention of one class of entities in a communication can bring into the foreground other classes of entities which can be referred to though not explicitly introduced. The process of inferring the referent when such a refer- ence occurs has been called bridging [12]. 
Some ex- amples, taken from [12], appear below, where the ref- erence requiring bridging is underlined. 1. I looked into the room. The ceilinq was very high. 2. I walked into the room. The chandeliers sparkled brightly. 3. I went shopping yesterday. The time I started was 3 PM. We believe a taxonomic domain model provides the basis for an efficient algorithm for a broad class of examples of bridging, though we do not believe that it will cover all cases. If A is the class of a discourse entity arising from previous utterances, then any entity of class B, such that the NIKL domain model has a role from A to B (or from B to A) can be referred to by a definite NP. This has not yet been integrated into the Janus model of reference processing [4]. 4.3.2. Metonymy Unstated relations in a communication must be inferred for full understanding of nominal compounds and metonymy. Those that can be anticipated can be built into the lexicon; the challenge is to deal with those that are novel to Janus. Finding the omitted relation in novel nominal compounds using a taxonomy has been explored and reported elsewhere [13]. We propose treating many novel cases of metonymy in the following way: 1. Wherepatterns of metonymy can be identified,, such as using a description of a part to refer to the whole (and other patterns identified in [17]), pro-compile chains of relations between classes in the domain model, e.g., (PART-OF A B) where A and B are concepts. 2. In processing an input, when a selection restriction on an NP fails, record the failed restriction with the partial interpretation for possible future processing, after all attempts at a literal interpretation of the input have failed. 3. If no literal interpretation of the input can be found, look among the precompiled relations of step 1 above for any class that could be so related to the class of the NP that appears. 4. If a relation is applicable, attempt to resume interpretation assuming the referent of the NP is in the related class. This has not been implemented, but offers an efficient 199 alternative to the abductive theorem-proving approach described in [16]. 5. Top-Level Abstractions in the NIKL Taxonomy WML and NIKL together provide a framework for representation. The highest concepts and relations in the NIKL network provide a representational style in which more concrete constantsmust fit. The first abstraction structure used in Janus was the USC/ISI "upper structure" [19]. Because it seemed tied to sys- temic linguistics in critical ways, rather than to a more general ontological style, we have replaced it with another domain-independent set of concepts and roles. For any application domain, all domain- dependent constants must fit underneath the domain- independent structure. The domain-independent taxonomy consists of 70 concepts and 24 roles cur- rently, but certainly could be further expanded as one attempts to further axiomatize and model notions use- ful in a broad class of application domains. During the evolution of Janus, we explored whether the domain-independent taxonomy could be greatly expanded by a broad set of primitives used in the Longman Dictionary of Contemporary English [18] (LDOCE) to define domain-independent con- stants. LDOCE defines approximately 56,000 words in terms of a base vocabulary of roughly 2,000 items, s We estimate that about 20,000 concepts and roles should be defined corresponding to the 2,000 multi- way ambiguous words in the base vocabulary. 
The appeal, of course, is that if these basic notions were sufficient to define 56,000 words, they are generally applicable, providing a candidate for general-purpose primitives. The course of action we followed was to build a taxonomy for all of the definitions of approximately 200 items from the base vocabulary using the defini. tJons of those vocabulary items themselves in the dictionary. In this attempt, we encountered the follow- ing difficulties: • Definitions of the base vocabulary often in- volved circularity. • Definitions included assertional information and/or knowledge appropriate in defeasible reasoning, which are not fully supported by NIKL. For example, the first definition of cat is "a small four-legged animal with soft fur and sharp claws, often kept as a pet or for catching mice or rats." • Multiple views and/or vague definitions and usage arose in LDOCE. For instance, the e'rhough the authors of LDOCE definitions try to stay within the base vocabulary, exceptions do arise such as diagrams and proper nouns, e.g., Catholic Church. second definition of cat (p. 150) is "an animal related to this such as the lion or tiger" (italics added). Such a vague definition helped us little in axiomatizing the notion. Thus, we decided that hand-crafted abstractions would be needed to axiomatize by hand the LDOCE base vocabulary if general-purpose primitives were to result. On the other hand, concrete concepts cor- responding to a lower level of abstraction seem ob- tainable from LDOCE. In particular the LDOCE defini- tions of units of measurement for the avoirdupois and metric systems were very useful. A more detailed analysis of our experience is presented in [23]. 6. Related Work Several hybrid representation schemes have been created, although only ours seems to have explored a hybrid of intensional logic with an axiomatizable frame system. The most directly related efforts are the fol- lowing: • KL-TWO[31], which marries a frame system (NIKL) with propositional logic (RUP[20]), Limited inference in propositional logic is the goal of KL-'FWO. Limited aspects of universal" quantification are achieved via allowing demons in the inference process. KL-TWO and its clas- sification algorithm [27] are at the heart of the lexicalization process of the text generator Pen- man [28]. • KRYPTON [9], which marries a frame system with first-order logic. The frame system is designed to be less expressive than NIKL to allow rapid checking for disjointness of two class concepts in order to support efficient resolution theorem proving. KRYPTON has not as yet been used in any natural language processor. 7. Conclusions Our conclusions regarding the hybrid represen- tation approach of intensional logic plus NIKL-based axioms to define constants are based on three kinds of efforts: • Bringing Janus up on two large expert system and data base applications within DARPA's Battle Management Programs. The combined lexicon in the effort is approximately 7,000 words (not counting morphological variations). • The efforts synopsized in Section 5 towards general purpose domain notions. • Experience in developing IRACQ and KNACQ, acquisition tools integrated with the domain model acquisition and maintenance facility KREME, 200 First, a taxonomic language with a formal seman- tics can supplement a higher order logic in support of efficient, limited inferences needed in a naturaJ lan- guage processor. 
Based on our experience and that of others, the axioms and limited inference algorithms can be used for classes of anaphora resolution, inter- pretation of have, with, and of, finding omitted rela- tions in novel nominal compounds, applying selection restrictions, and mapping from the semantic represen- tation of the input to code to carry out the user's re- quest. Second, an intensional logic can supplement a taxonomic language in trying to define word senses formally. Our effort with LDOCE definitions showed how little support is provided for defining word senses in a taxonomic language. A positive contribution of intensional logic is the ability to distinguish universal statements from generic ones from existential ones; definite sets from unspecified ones; and necessary and sufficient information from assertional information, allowing for a representation closer to the semantics of English. Third, the hybridization of axioms for taxonomic knowledge with an intensional logic does not allow us to represent all that we would like to, but does provide a very effective engineering approach. Out of 7,000 lexical entries (not counting morphological variations), only 0.1% represented concepts inappropriate for the formal semantics of NIKL. The ability to pre-compile pre-specified, inferential chains, to index them via concept name and role name, and to employ taxonomic inheritance for or- ganizing knowledge were critical in selecting taxor~omic representation to supplement WML. These techniques of pre-compiling pre-specified inferential chains and of indexing them should also be applicable to other knowledge representations than taxonomies. At a later date, we hope to quantify the effec- tiveness of the semantic heuristics described in this paper. Acknowledgements This research was supported by the Advanced Research Projects Agency of the Department of Defense and was monitored by ONR under Contracts N00014-85-C-0079 and N00014-85-C-0016. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies, either ex- pressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government. This brief report represents a total team effort. Significant contributions were made by Damaris Ayuso, Rusty Bobrow, Ira Haimowitz, Erhard Hinrichs, Thomas Reinhardt, Remko Scha, David Stallard, and Cynthia Whipple. We also wish to acknowledge many discussions with William Mann and Norman Sondheimer in the early phases of the project. References 1. Abrett, G. and Burstein, M. ~l'he KREME Knowledge Editing Environment'. /nt. J. Man-Machine Studies 27 (1987), 103-126. 2. Ayuso Planes, D. The Logical Interpretation of Noun Compounds. Master Th., Massachusetts In- stitute of Technology,June 1985. 3. Ayuso, D.M., Shaked, V., and Weischedel, R.M. An Environment for Acquiring Semantic Information. Proceedings of the 25th Annual Meeting of the As- sociation for Computational Linguistics, ACL, 1987, pp. 32-40. 4. Ayuso, Damaris. Discourse Entities in Janus. Proceedings of the 27th Annual Meeting of the As- sociation for Computational Linguistics, 1989. 5. BBN Systems and Technologies Corp. A Guide to IRUS-II Application Development in the FCCBMP. BBN Report 6859, BBN Systems and Technologies Corp., Cambridge, MA, 1988. 6. Bobrow, R. and Webber, B. PSI-KLONE: Parsing and Semantic Interpretation in the BBN Natural Lan- guage Understanding System. 
Proceedings of the 1980 Conference of the Canadian Society for Com- putational Studies of Intelligence, CSCSVSCEIO, May, 1980. 7. Bobrow, R. and Webber, B. Knowledge Represen- tation for Syntactic/Semantic Processing. Proceed- ings of the National Conference on Artificial Intel- ligence, AAAI, August, 1980. 8. Brachman, R.J. and Schmolze, J.G. "An Overview of the KL-ONE Knowledge Representation System". Cognitive Science 9, 2 (April 1985). 9. Brachman, R.J., Gilbert, V.P., and Levesque, H.J. An Essential Hybrid Reasoning System: Knowledge and Symbol Level Accounts of Krypton. Proceedings of UCAI85, International Joint Conferences on Artifi- cial Intelligence, Inc., Los Angeles, CA, August, 1985, pp. 532-539. 10. Cad.son, G.. Reference to Kinds in English. Gar- land Press, New York, 1979. 11. Chafe, W. Discourse Structure and Human Knowledge. In Language Comprehension and the Acquisition of Knowledge, Winston and Sons, Washington, 1972. 12. Clark, H.H. Bridging. Theoretical Issues in Natural Language Processing, 1975, pp. 169-174. 13. Finin, T.W. The Semantic Interpretation of Nominal Compounds. Proceedings of The First An- nual National Conference on Artificial Intelligence, 201 The American Association for Artificial Intelligence, August, 1980, pp. 310-312. 14. Freeman, M. The QUA Link. Proceedings of the 1981 KL-ONE Workshop, Bolt Beranek and Newman Inc., 1982, pp. 55-65. 15. Hinrichs, E.W., Ayuso, D.M., and Scha, R. The Syntax and Semantics of the JANUS Semantic Inter- pretation Language. In Research and Development in Natural Language Understanding as Part of the Strategic Computing Program, Annual Technical Report December 1985. December 1986, BBN Laboratories, Report No. 6522, 1987, pp. 27-31. 16. Hobbs, et. al. Interpretation as Abduction. Proceedings of the 26th Annual Meeting of the As- sociation for Computational Linguistics, 1988, pp. 95-103. 17. Lakoff, G. and Johnson, M.. Metaphors We Live By. The University of Chicago Press, Chicago, 1980. 18. Longman Dictionary of Contemporary English. Essex, England, 1987. 19. Mann, W.C., Arens, Y., Matthiessen, C., Naberschnig, S., and Sondheimer, N.K. Janus Abstraction Structure -- Draft 2. USC/Information Sciences Institute, 1985. 20. David A. McAIlester. Reasoning Utility Package User's Manual. AI Memo 667, Massachusetts In- stitute of Technology, Artificial Intelligence Laboratory, April, 1982. 21. Montague, Richard. The Proper Treatment of Quantification in Ordinary English. In Approaches to Natural Language, J. Hintikka, J. Moravcsik and P. Suppes, Eds., Reidel, Dordrecht, 1973, pp. 221-242. 22. Moser, M.G. An Overview of NIKL, the New Im- plementation of KL-ONE. In Research in Knowledge Representation for NaturaJ Language Understanding - AnnuaJ Report, I September 1982 - 31 August 1983, Sidner, C. L., et al., Eds., BBN Laboratories Report No. 5421, 1983, pp. 7-26. 23. Reinhardt, T. and Whipple, C. Summary of Con- clusions from the Longman's Taxonomy Experiment. In Goodman, B., Ed.,, BBN Systems and Tech- nologies Corporation, Cambridge, MA, 1988, pp.. 24. Rich, C. Knowledge Representation languages and the Predicate Calculus: How to Have Your Cake and Eat It Too. Proceedings of the Second National Conference on Artificial Intelligence, AAAI, August, 1982, pp. 193-196. 25. Scha, R. and Stallard, D. Multi-level Plurals and Distributivity. 26th Annual Meeting of the Association for Computational Linguistics, Association for Com- putational Linguistics, June, 1988, pp. 17-24. 26. Schmolze, J. G., and Israel, D.J. 
KL-ONE: Semantics and Classification. In Research in Knowledge Representation for Natural Language Un- derstanding - Annual Report, 1 September 1982 - 31 August 1983, Sidner, C.L., et al., Eds., BBN Laboratories Report No. 5421, 1983, pp. 27-39. 27. Schmolze, J.G., Lipkis, T.A. Classification in the KL-ONE Knowledge Representation System. Proceedings of the Eighth International Joint Con- ference on Artificial Intelligence, 1983. 28. Sondheimer, N. K. and Nebel, B. A Logical-form and Knowledge-base Design for Natural Language Generation. Proceedings AAAI-86 Fifth National Con- ference on Artificial Intelligence, The American As- sociation for Artificial Intelligence, Los Altos, CA, Aug, 1986, pp. 612-618. 29. Sondheimer, N.K., Weischedel, R.M., and Bobrow, R.J. Semantic Interpretation Using KL-ONE. Proceedings of COLING-84 and the 22nd Annual Meeting of the Association for Computational Linguis- tics, Association for Computational Linguistics, Stan- ford, CA, July, 1984, pp. 101-107. 30. Stallard, David. Answering Questions Posed in an Intensional Logic: A Multilevel Semantics Ap- proach. In Research and Development in Natural Language Understanding as Part of the Strategic Computing Program, R. Weischedel, D.Ayuso, A. Haas, E. Hinrichs, R. Scha, V. Shaked, D. Stallard, Eds., BBN Laboratories, Cambridge, Mass., 1987, ch. 4, pp. 35-47. Report No. 6522. 31. Vilain, M. The Restricted Language Architecture of a Hybrid Representation System. Proceedings of IJCAI85, International Joint Conferences on Artificial Intelligence, Inc., Los Angeles, CA, August, 1985, pp. 547-551. 32. Weischedel, R.M. "Knowledge Representation and Natural Language Processing". Proceedings of the/EEE 74, 7 (July 1986), 905-920. 33. Weischedel, R.M., Bobrow, R., Ayuso, D.M., and Ramshaw, L. Portability in the Janus Natural Lan- guage Interface. Notebook of Speech and Natural Language Workshop, 1989. To be reprinted by Mor- gan Kaufmann Publishers. 202
PLANNING TEXT FOR ADVISORY DIALOGUES*

Johanna D. Moore
UCLA Department of Computer Science and USC/Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292-6695, USA

Cécile L. Paris
USC/Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292-6695, USA

ABSTRACT

Explanation is an interactive process requiring a dialogue between advice-giver and advice-seeker. In this paper, we argue that in order to participate in a dialogue with its users, a generation system must be capable of reasoning about its own utterances and therefore must maintain a rich representation of the responses it produces. We present a text planner that constructs a detailed text plan, containing the intentional, attentional, and rhetorical structures of the text it generates.

INTRODUCTION

Providing explanations in an advisory situation is a highly interactive process, requiring a dialogue between advice-giver and advice-seeker (Pollack et al., 1982). Participating in a dialogue requires the ability to reason about previous responses, e.g., to interpret the user's follow-up questions in the context of the ongoing conversation and to determine how to clarify a response when necessary. To provide these capabilities, an explanation facility must understand what it was trying to convey and how that information was conveyed, i.e., the intentional structure behind the explanation, including the goal of the explanation as a whole, the subgoal(s) of individual parts of the explanation, and the rhetorical means used to achieve them.

Researchers in natural language understanding have recognized the need for such information. In their work on discourse analysis, Grosz and Sidner (1986) argue that it is necessary to represent the intentional structure, the attentional structure (knowledge about which aspects of a dialogue are in focus at each point), and the linguistic structure of the discourse. In contrast, most text generation systems (with the notable exception of KAMP (Appelt, 1985)) have used only rhetorical and attentional information to produce coherent text (McKeown, 1985, McCoy, 1985, Paris, 1988b), omitting intentional information, or conflating intentional and rhetorical information (Hovy, 1988b). No text generation system records or reasons about the rhetorical, the attentional, as well as the intentional structures of the texts it produces.

In this paper, we argue that to successfully participate in an explanation dialogue, a generation system must maintain the kinds of information outlined by Grosz and Sidner as well as an explicit representation of the rhetorical structure of the texts it generates. We present a text planner that builds a detailed text plan, containing the intentional, attentional, and rhetorical structures of the responses it produces. The main focus of this paper is the plan language and the plan structure built by our system. Examples of how this structure is used in answering follow-up questions appear in (Moore and Swartout, 1989).

*The research described in this paper was supported by the Defense Advanced Research Projects Agency (DARPA) under a NASA Ames cooperative agreement number NCC 2-520. The authors would like to thank William Swartout for comments on earlier versions of this paper.

WHY A DETAILED TEXT PLAN?
In order to handle follow-up questions that may arise if the user does not fully understand a response given by the system, a generation facility must be able to determine what portion of the text failed to achieve its purpose. If the generation system only knows the top-level discourse goal that was being achieved by the text (e.g., persuade the hearer to perform an action), and not what effect the individual parts of the text were intended to have on the hearer and how they fit together to achieve this top-level goal, its only recourse is to use a different strategy to achieve the top-level goal. It is not able to re-explain or clarify any part of the explanation. There is thus a need for a text plan to contain a specification of the intended effect of individual parts of the text on the hearer and how the parts relate to one another.

We have developed a text planner that records the following information about the responses it produces:

• the information that Grosz and Sidner (1986) have presented as the basics of a discourse structure:
  - intentional structure: a representation of the effect each part of the text is intended to have on the hearer and how the complete text achieves the overall discourse purpose (e.g., describe entity, persuade hearer to perform an action).
  - attentional structure: information about which objects, properties and events are salient at each point in the discourse. Users' follow-up questions are often ambiguous. Information about the attentional state of the discourse can be used to disambiguate them (cf. (Moore and Swartout, 1989)).

• in addition, for generation we require the following:
  - rhetorical structure: an agent must understand how each part of the text relates rhetorically to the others. This is necessary for linguistic reasons (e.g., to generate the appropriate clausal connectives in multi-sentential responses) and for responding to requests for elaboration/clarification.

• assumption information: advice-giving systems must take knowledge about their users into account. However, since we cannot rely on having complete user models, these systems may have to make assumptions about the hearer in order to use a particular explanation strategy. Whenever such assumptions are made, they must be recorded.

The next sections describe this new text planner and show how it records the information needed to engage in a dialogue. Finally, a brief comparison with other approaches to text generation is presented.

TEXT PLANNER

The text planner has been developed as part of an explanation facility for an expert system built using the Explainable Expert Systems (EES) framework (Swartout and Smoliar, 1987). The text planner has been used in two applications. In this paper, we draw our examples from one of them, the Program Enhancement Advisor (PEA) (Neches et al., 1985). PEA is an advice-giving system intended to aid users in improving their Common Lisp programs by recommending transformations that enhance the user's code.¹ The user supplies PEA with a program and indicates which characteristics of the program should be enhanced (any combination of readability, maintainability, and efficiency). PEA then recommends transformations. After each recommendation is made, the user is free to ask questions about the recommendation.
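As a concrete instance of the kind of transformation PEA recommends (and the one used in the sample dialogue later in the paper), consider replacing setq with setf. The snippet below is our own illustration of why the replacement is proposed: setf subsumes setq on simple variables and also assigns to generalized variables, i.e., storage locations named by accessor functions.

    ;; Our own illustration of the setq -> setf replacement PEA recommends.
    (let ((x 0)
          (c (cons nil nil))
          (a (make-array 3 :initial-element 0)))
      (setq x 1)            ; setq assigns only to simple variables
      (setf x 2)            ; setf subsumes that use ...
      (setf (car c) 'hi)    ; ... and also assigns to generalized variables:
      (setf (aref a 0) 7)   ; locations named by accessor functions
      (list x c a))         ; => (2 (HI) #(7 0 0))

Using setf uniformly means a maintainer can later change a simple variable into, say, a structure slot or array element without rewriting every assignment, which is the sense in which the transformation enhances maintainability.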
We have implemented a top-down hierarchical expansion planner (à la Sacerdoti (1975)) that plans utterances to achieve discourse goals, building (and recording) the intentional, attentional, and rhetorical structure of the generated text. In addition, since the expert system explanation facility is intended to be used by many different users, the text planner takes knowledge about the user into account. In our system, the user model contains the user's domain goals and the knowledge he is assumed to have about the domain.

THE PLAN LANGUAGE

In our plan language, intentional goals are represented in terms of the effects the speaker intends his utterance to have on the hearer. Following Hovy (1988a), we use the terminology for expressing beliefs developed by Cohen and Levesque (1985) in their theory of rational interaction, but have found the need to extend the terminology to represent the types of intentional goals necessary for the kinds of responses desired in an advisory setting. Although Cohen and Levesque have subsequently retracted some aspects of their theory of rational interaction (Cohen and Levesque, 1987), the utility of their notation for our purposes remains unaffected, as argued in (Hovy, 1989).

¹PEA recommends transformations that improve the 'style' of the user's code. It does not attempt to understand the content of the user's program.
²Space limitations prohibit an exposition of their terminology in this paper. We provide English paraphrases where necessary for clarity. (BMB S H x) should be read as 'the speaker believes the speaker and hearer mutually believe x.'
Constraints may refer to facts in the system's knowledge base or in the user model. • a nucleus: the main topic to be ex- pressed. The nucleus is either a prim- itive operator (i.e., speech acts such as inform, recommend and ask) or a goal intentional or rhetorical) which must be ther expanded. All operators must contain a nucleus. • satellites: subgoal(s)that express addi- tional information which may be needed to achieve the effect of the operator. When present, satellites may be specified as required or optional. Examples of our plan operators are shown in Figures 1 and 2. The operator shown in Figure 1 can be used if the speaker (S) intends to persuade the hearer (H) to intend to do some act. This plan operator states that if an act is a step in achieving some domain goal(s) that the hearer shares, one way to persuade the hearer to do the act is to motivate the act in terms of those domain goals. Note that this plan operator takes into account not only the system's knowledge of itself, but also the sys- tem's knowledge about the user's goals, as em- bodied in a user model. If any domain goals that satisfy the constraints are found, this op- erator will cause the planner to post one or more MOTIVATION subgoals. This plan opera- tor thus indicates that one way to achieve the intentional goal of persuading the hearer to perform an action is by using the rhetorical means MOTIVATION. 205 EFFECT: (BMB S H ?x) CONSTRAINTS: nil NUCLEUS: (INFORM S H ?x) SATELLITES: (((PERSUADE S H 7x) *optional*)) Figure 3: Plan Operator for Achieving Mutual Belief of a Proposition SYSTEM USER SYSTEM " USER SYSTEM What characteristics of the program would you like to enhance? Maintainability. You should replace (setq x I) with (serf x I). Serf can be used to assign a value to any generalized-variable. Serq can only be used to assign a value to a simple-variable. A generalized-variable is a storage location that can be named by any accessor function. What is a generalized variable? For example, the car and cdr of a cons are generalized-variables, named by the accessor functions car and cdr. Other examples are an element of an array or a component of a structure. Figure 4: Sample Dialogue [11 P-] [31 [4] [51 Plans that achieve intentional goals and those that achieve rhetorical relations are dis- tinguished for two reasons: (1) so that the completed plan structure contains both the in- tentional goals of the speaker and the rhetor- ical means used to achieve them; (2) because there are many different rhetorical strategies for achieving any given intentional goal. For example, the system has several plan opera- tors for achieving the intentional goal of de- scribing a concept. It may describe a concept by stating its class membership and describ- ing its attributes and its parts, by drawing an analogy to a similar concept, or by giving examples of the concept. There may also be many different plan operators for achieving a particular rhetorical strategy. (The plan- ner employs selection heuristics for choosing among applicable operators in a given situa- tion (Moore and Swartout, 1989).) Our plan language allows both general and specific plans to be represented. For ex- ample, Figure 2 shows a plan operator for achieving the rhetorical relation MOTIVATION. This is a very specific operator that can be used only when the act to be motivated is a replacement (e.g., replace sezq with sezf). 
In this case, one strategy for motivating the act is to compare the object being replaced and the object that replaces it with respect to the domain goal being achieved. On the other hand, the operator shown in Figure 3 is general and can be used to achieve mu- tual belief of any assertion by first inform- ing the hearer of the assertion and then, op- tionaUy, by persuading him of that fact. Be- cause we allow very general operators as well as very specific ones, we can include both domain-independent and domain-dependent strategies. A DETAILED EXAMPLE Consider the sample dialogue with our sys- tem shown in Figure 4, in which the user in- dicates that he wishes to enhance the main- tainability of his program. While enhanc- ing maintainability, the system recommends that the user perform the act replace-I, namely 'replace setq with serf', and thus posts the intentional goal (BMB S H (GOAL H Evenzually(DONE H replace-I))). This discourse goal says that the speaker would like to achieve the state where the speaker believes that the hearer and speaker mutually believe that it is a goal of the hearer that the replace- ment eventually be done by the hearer. The planner then identifies all the opera- tors whose effect field matches the discourse goal to be achieved. For each operator found, the planner checks to see if all of its con- straints are satisfied. In doing so, the text planner attempts to find variable bindings in the expert system's knowledge base or the user model that satisfy all the constraints in 206 EFFECT: (BMB S H (GOAL H Eventually(DONE H ?act))) CONSTRAINTS: none NUCLEUS: (RECOMMEND S H ?act) SATELLITES: (((BMB S H (COMPETENT H (DONE H ?act))) *optional*) ((PERSUADE S H (GOAL H Eventually(DONE H 7act))) *optional*) ) Figure 5: High-level Plan Operator for Recommending an Act apply-SETQ-t o-SETF-~rans formal; ion apply-lo cal-1;ransf ormat ions-whos e-rhs-us e-is-mor e-general-1:han-lhs-us • apply-local-1;rans f orma1~ions-thal;-enhance-mainl;ainability apply-1~ransforma¢ ions-1~hal;-enhanc e-mainl; ainabili~y enhanc e-mainl; ainabili1: y enhance-program Figure 6: System goals leading to replace setq wil;h sel;f the constraint list. Those operators whose constraints are satisfied become candidates for achieving the goal, and the planner chooses one based on: the user model, the dialogue history, the specificity of the plan operator, and whether or not assumptions about the user's beliefs must be made in order to satisfy the operator's constraints. Continuing the example, the current dis- course goal is to achieve the state where it is mutually believed by the speaker and hearer that the hearer has the goal of even- tually executing the replacement. This dis- course goal can be achieved by the plan op- erator in Figure 5. This operator has no constraints. Assume it is chosen in this case. The nucleus is expanded first, 3 causing (RECOMMEND S H replace-l) to be posted as a subgoal. RECOMMEND is a primitive operator, and so expansion of this branch of the plan is complete. 4 Next, the planner must expand the satel- lites. Since both satellites are optional in this case, the planner must decide which, if any, are to be posted as subgoals. In this example, the first satellite will not be expanded because the user model indicates that the user is ca- 31n some cases, such as a satellite posting the rhetorical relation background, the satellite is ex- panded first. 
+At this point, (RECOMMEND S H replace-l) must be translated into a form appropriate as input.to the realization component, the Penman system (Mann, 1983, Kasper, 1989). Based on the type of speech act, its arguments, and the context in which it occurs, the planner builds the appropriate structure. Bateman and Paxis (1989) have begun to investigate the prob- lem of phrasing utterances for different types of users. pable of performing replacement acts. The second satellite is expanded, s posting the in- tentional subgoal to persuade the user to per- form the replacement. A plan operator for acldeving this goal using the rhetorical rela- tion MOTIVATION was shown in Figure i. When attempting to satisfy the con- straints of the operator in Figure 1, the system first checks the constraints (GOAL S ?domain-goal) and (STEP replace-1 ?domain-goal). These constraints state that, in order to use this operator, the system must find an expert system goal, ?domain-goal, that replace-I is a step in achieving. This results in several possible bindings for the variable ?domain-goal. In this case, the applicable system goals, listed in order from most specific to the top-level goal of the system, are shown in Figure 6. The last constraint of this plan opera- tor, (BMB S H (GOAL H ?domain-goal)), is a constraint on the user model stating that the speaker and hearer should mutu~IIy believe that ?domain-goal is a goal of the hearer. Not all of the bindings found so far will sat- isfy this constraint. Those which do not will not be rejected immediately, however, as we do not assume that the user model is com- plete. Instead, they will be noted as possible bindings, and each will be marked to indicate that, if this binding is used, an assumption is being made, namely that the binding of Sin other situations, the system could choose not to expand this satellite and await feedback from the user instead (Moore and Swartout, 1989). 207 (BMB S H (GOAL H Eventually (DONE H replace-I))) NI (MOTIVATION replace1 enhance-maintainability) (RECOMMEND S H replace-I) (PERSUADE S H (GOAL H Eventually (DONE H replace-I))) NI (MOTIVATION replace-1 enhance-maintainability) .I (BMB S H (DIFFERENCES setq serf enhance-maintainability)) NI N (BMB S H (DIFFERENCE setq serf use)) S (INFORM S H (IDENTITY (VALUE-OF use serf) S assign-value.to-generalized-variableJJ (BMR S H (KNOW H generalized-variable)) (CONTRAST (IDENTITY (VALUE-OF use setq))) N N I (ELABORATION general zed-variable) (INFORM S H (IDENTITY (VALU E-OF use setq) ~ ~ S assign-value-to-sim pie-variable)) ~ , (INFORM S H (CLASS-ASCRIPTION (ELABORATION-OBJECT-ATTRIBUTE generalized-variable storage-location)) generalized-variable named-by) repla(el = replm:eSETQwithSETF N [ N • Nucleus S = Satellite (INFORM S H (IDENTrI"Y (VALUE-OF named-by accessor-function ))) Figure 7: Completed Text Plan for Recommending Replace SETQ with SETF ?domain-goal is assumed to be a goal of the user. In this example, since the user is using the system to enhance a program and has in- dicated that he wishes to enhance the main- tainability of the program, the system infers the user shares the top-level goal of the system (enhance-program), as well as the more spe- cific goal enhance-mainZainabilizy. There- fore, these are the two goals that satisfy the constraints of the operator shown in Figure I. The text planner prefers choosing binding environments that require no assumptions to be made. 
In addition, in order to avoid ex- plaining parts of the reasoning chain that the user is familiar with, the most specific goal is chosen. The plan operator is thus instanti- ated with enhance-mainzainability as the binding for the variable ?domain-goal. The selected plan operator is recorded as such, and all other candidate operators are recorded as untried alternatives. The nucleus of the chosen plan op- erator is now posted, resulting in the subgoal (MOTIVATION replace-1 enhance- mainZainability). The plan operator cho- sen for achieving this goal is the one that 208 was shown in Figure 2. This operator mo- tivates the replacement by describing differ- ences between the object being replaced and the object replacing it. Although there are many differences between sezq and serf, only the differences relevant to the domain goal at hand (enhance-mainzainabilizy) should be expressed. The relevant differ- ences are determined in the following way. From the expert system's problem-solving knowledge, the planner determines what roles eezq and eezf play in achieving the goal enhance-maintainabilizy. In this case, the system is enhancing maintainability by ap- plying transformations that replace a specific construct with one that has a more general usage. SeZq has a more specific usage than sezf, and thus the comparison between sezq and sezf should be based on the generality of their usage. Finally, since the term generalized- variable has been introduced, and the user model indicates that the user does not know this term, an intentional goal to define it is posted: (BMB S H (KNOW H generalized-variable)). This goal is achieved with a plan operator that describes concepts by stating their class membership and describing their attributes. Once com- pleted, the text plan is recorded in the dia- logue history. The completed text plan for response (3) of the sample dialogue is shown in Figure 7. ADVANTAGES As illustrated in Figure 7, a text plan pro- duced by our planner provides a detailed rep- resentation of the text generated by the sys- tem, indicating which purposes different parts of the text serve, the rhetorical means used to achieve them, and how parts of the plan are related to each other. The text plan also contains the assumptions that were made dur- ing planning. This text plan thus contains both the intentional structure and the rhetor- ical structure of the generated text. From this tree, the dominance and saris/action- precedence relationships as defined by Grosz and Sidner can be inferred. Intentional goals higher up in the tree dominate those lower down and a left to right traversal of the tree provides satisfaction-precedence ordering. The attentional structure of the generated text can also be derived from the text plan. The text plan records the order in which top- ics appear in the explanation. The global vari- able *local-contezt ~ always points to the plan node that is currently in focus, and previously focused topics can be derived by an upward traversal of the plan tree. The information contained in the text plan is necessary for a generation system to be able to answer follow-up questions in context. Follow-up questions are likely to refer to the previously generated text, and, in addition, they often refer to part of the generated text, as opposed to the whole text. Without an ex- plicit representation of the intentional struc- ture of the text, a system cannot recognize that a follow-up question refers to a portion of the text already generated. 
Even if the system realizes that the follow-up question refers back to the original text, it cannot plan a text to clarify a part of the text, as it no longer knows what were the intentions behind various pieces of the text. Consider again the dialogue in Figure 4. When the user asks 'What is a gener- alized variable?' (utterance (4) in Fig- ure 4), the query analyzer interprets this ques- tion and posts the goal: (BMB S H (KNOW H generalized-variable) ). At this point, the explainer must recognize that this discourse goal was attempted and not achieved by the 209 last sentence of the previous explanation. 6 Failure to do so would lead to simply repeat- ing the description of a generalized variable that the user did not understand. By exam- ining the text plan of the previous explanation recorded in the dialogue history, the explainer is able to determine whether the current goal (resulting from the follow-up question) is a goal that was attempted and failed, as it is in this case. This time, when attempting to achieve the goal, the planner must select an al- ternative strategy. Moore (1989b) has devised recovery heuristics for selecting an alternative strategy when responding to such follow-up questions. Providing an alternative explana- tion would not be possible without the explicit representation of the intentional structure of the generated text. Note that it is important to record the rhetorical structure as well, so that the text planner can choose an alterna- tive rhetorical strategy for achieving the goal. In the example under consideration, the re- covery heuristics indicate that the rhetorical strategy of giving examples should be chosen. RELATED WORK Schemata (McKeown, 1985) encode standard patterns of discourse structure, but do not in- dude knowledge of how the various parts of a schema relate to one another or what their intended effect on the hearer is. A schema can be viewed as a compiled version of one of our text plans in which all of the non- terminal nodes have been pruned out and only the leaves (the speech acts) remain. While schemata can produce the same initial behav- ior as one of our text plans, all of the ratio- nale for that behavior has been compiled out. Thus schemata cannot be used to participate in dialogues. If the user indicates that he has not understood the explanation, the system cannot know which part of the schema failed to achieve its effect on the hearer or which rhetorical strategy failed to achieve this ef- fect. Planning a text using our approach is essentially planning a: schema from more fine- grained plan operators. From a library of such plan operators, many varied schemata can re- sult, improving the flexibility of the system. In an approach taken by Cohen and Ap- pelt (1979) and Appelt (1985), text is planned by reasoning about the beliefs of the hearer and speaker and the effects of surface speech aWe are also currently implementing another in- terface which allows users to use a mouse to point at the noun phrases or clauses in the text that were not understood {Moore, 1989b). acts on these beliefs (i.e., the intentional ef- fect). This approach does not include rhetori- cal knowledge about how clausal units may be combined into larger bodies of coherent text to achieve a speaker's goals. It assumes that appropriate axioms could be added to gen- erate large (more than one- or two-sentence) bodies of text and that the text produced will be coherent as a by-product of the planning process. However, this has not been demon- strated. 
Recently, Hovy (1988b) built a text structurer which produces a coherent text when given a set of inputs to express. Hovy uses an opportunistic planning approach that orders the inputs according to the constraints on the rhetorical relations defined in Rhetorical Structure Theory. His approach provides a description of what can be said when, but does not include information about why this information can or should be included at a particular point. Hovy's approach conflates intentional and rhetorical structure and, therefore, a system using his approach could not later reason about which rhetorical strategies were used to achieve intentional goals.

STATUS AND FUTURE WORK

The text planner presented is implemented in Common Lisp and can produce the text plans necessary to participate in the sample dialogue described in this paper and several others (see (Moore, 1989a; Paris, 1988a)). We currently have over 60 plan operators and the system can answer the following types of (follow-up) questions:

- Why?
- Why conclusion?
- Why are you trying to achieve goal?
- Why are you using method to achieve goal?
- Why are you doing act?
- How do you achieve goal?
- How did you achieve goal (in this case)?
- What is a concept?
- What is the difference between concept1 and concept2?
- Huh?

The text planning system described in this paper is being incorporated into two expert systems currently under development. These systems will be installed and used in the field. This will give us an opportunity to evaluate the techniques proposed here.

We are currently studying how the attentional structure inherent in our text plans can be used to guide the realization process, for example in the planning of referring expressions and the use of cue phrases and pronouns. We are also investigating criteria for the expansion and ordering of optional satellites in our plan operators. Currently we use information from the user model to dictate whether or not optional satellites are expanded, and their ordering is specified in each plan operator. We wish to extend our criteria for satellite expansion to include other factors such as pragmatic and stylistic goals (Hovy, 1988a) (e.g., brevity) and the conversation that has occurred so far. We are also investigating the use of attentional information to control the ordering of these satellites (McKeown, 1985).

We also believe that the detailed text plan constructed by our planner will allow a system to modify its strategies based on experience (feedback from the user). In (Paris, 1988a), we outline our preliminary ideas on this issue. We have also begun to study how our planner can be used to handle incremental generation of texts. In (Moore, 1988), we argue that the detailed representation provided by our text plans is necessary for execution monitoring and to indicate points in the planning process where feedback from the user may be helpful in incremental text planning.

CONCLUSIONS

In this paper, we have presented a text planner that builds a detailed text plan, containing the intentional, attentional, and rhetorical structures of the responses it produces. We argued that, in order to participate in a dialogue with its users, a generation system must be capable of reasoning about its past utterances. The text plans built by our text planner provide a generator with the information needed to reason about its responses. We illustrated these points with a sample dialogue.

REFERENCES

Douglas E. Appelt. 1985.
Planning Natural Language Utterances. Cambridge University Press, Cambridge, England.

John A. Bateman and Cécile L. Paris. 1989. Phrasing a text in terms the user can understand. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, Detroit, MI, August 20-25.

Philip R. Cohen and Hector J. Levesque. 1985. Speech Acts and Rationality. In Proceedings of the Twenty-Third Annual Meeting of the Association for Computational Linguistics, pages 49-60, University of Chicago, Chicago, Illinois, July 8-12.

Philip R. Cohen and Hector J. Levesque. 1987. Intention is Choice with Commitment, November.

Philip R. Cohen and C. Raymond Perrault. 1979. Elements of a Plan-based Theory of Speech Acts. Cognitive Science, 3:177-212.

Joseph E. Grimes. 1975. The Thread of Discourse. Mouton, The Hague, Paris.

Barbara J. Grosz and Candace L. Sidner. 1986. Attention, Intention, and the Structure of Discourse. Computational Linguistics, 12(3):175-204.

Jerry Hobbs. 1978. Why is a Discourse Coherent? Technical Report 176, SRI International.

Eduard H. Hovy. 1988a. Generating Natural Language Under Pragmatic Constraints. Lawrence Erlbaum, Hillsdale, New Jersey.

Eduard H. Hovy. 1988b. Planning Coherent Multisentential Text. In Proceedings of the Twenty-Sixth Annual Meeting of the Association for Computational Linguistics, State University of New York, Buffalo, New York, June 7-10.

Eduard H. Hovy. 1989. Unresolved Issues in Paragraph Planning, April 6-8. Presented at the Second European Workshop on Natural Language Generation.

Robert Kasper. 1989. SPL: A Sentence Plan Language for Text Generation. Technical report, USC/ISI.

William C. Mann and Sandra A. Thompson. 1987. Rhetorical Structure Theory: A Theory of Text Organization. In Livia Polanyi, Editor, The Structure of Discourse. Ablex Publishing Corporation, Norwood, N.J.

William Mann. 1983. An Overview of the Penman Text Generation System. Technical report, USC/ISI.

Kathleen F. McCoy. 1985. Correcting Object-Related Misconceptions. PhD thesis, University of Pennsylvania, December. Published by University of Pennsylvania as Technical Report MS-CIS-85-57.

Kathleen R. McKeown. 1985. Text Generation: Using Discourse Strategies and Focus Constraints to Generate Natural Language Text. Cambridge University Press, Cambridge, England.

Johanna D. Moore and William R. Swartout. 1989. A Reactive Approach to Explanation. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, Detroit, MI, August 20-25.

Johanna D. Moore. 1988. Planning and Reacting. In Proceedings of the AAAI Workshop on Text Planning and Generation, St. Paul, Minnesota, August 25.

Johanna D. Moore. 1989a. Responding to "Huh?": Answering Vaguely Articulated Follow-up Questions. In Proceedings of the Conference on Human Factors in Computing Systems, Austin, Texas, April 30 - May 4.

Johanna D. Moore. 1989b. A Reactive Approach to Explanation in Expert and Advice-Giving Systems. PhD thesis, University of California, Los Angeles, forthcoming.

Robert Neches, William R. Swartout, and Johanna D. Moore. 1985. Enhanced Maintenance and Explanation of Expert Systems through Explicit Models of their Development. IEEE Transactions on Software Engineering, SE-11(11), November.

Cécile L. Paris. 1988a. Generation and Explanation: Building an Explanation Facility for the Explainable Expert Systems Framework, July 17-21. Presented at the Fourth International Workshop on Natural Language Generation.

Cécile L.
Paris. 1988b. Tailoring Object Descriptions to the User's Level of Expertise. Computational Linguistics Journal, 14(3), September.

Martha E. Pollack, Julia Hirschberg, and Bonnie Lynn Webber. 1982. User Participation in the Reasoning Processes of Expert Systems. In Proceedings of the Second National Conference on Artificial Intelligence, Pittsburgh, Pennsylvania, August 18-20.

Earl D. Sacerdoti. 1975. A Structure for Plans and Behavior. Technical Report TN-109, SRI.

William R. Swartout and Stephen W. Smoliar. 1987. On Making Expert Systems more like Experts. Expert Systems, 4(3), August.
Two Constraints on Speech Act Ambiguity

Elizabeth A. Hinkelman and James F. Allen
Computer Science Department
The University of Rochester
Rochester, New York 14627

ABSTRACT

Existing plan-based theories of speech act interpretation do not account for the conventional aspect of speech acts. We use patterns of linguistic features (e.g. mood, verb form, sentence adverbials, thematic roles) to suggest a range of speech act interpretations for the utterance. These are filtered using plan-based conversational implicatures to eliminate inappropriate ones. Extended plan reasoning is available but not necessary for familiar forms. Taking speech act ambiguity seriously, with these two constraints, explains how "Can you pass the salt?" is a typical indirect request while "Are you able to pass the salt?" is not.

1. The Problem

Full natural language systems must recognize speakers' intentions in an utterance. They must know when the speaker is asserting, asking, or making a social or official gesture [Searle 69, Searle 75], in addition to its content. For instance, the ordinary sentence

(1) Can you open the door?

might in context be a question, a request, or even an offer. Several kinds of information complicate the recognition process. Literal meaning, lexical and syntactic choices, agents' beliefs, the immediate situation, and general knowledge about human behavior all clarify what the ordinary speaker is after. Given an utterance and context, we model how the utterance changes the hearer's state.

Previous work falls roughly into three approaches, each with characteristic weaknesses: the idiom approach, the plan based approach, and the descriptive approach. The idiom approach is motivated by pat phrases like:

(2) a: Can you please X?
    b: Would you kindly X?
    c: I'd like X.
    d: May I X?
    e: How about X?

They are literally questions or statements, but often used as requests or, in (e), suggestions. The system could look for these particular strings, and build the corresponding speech act using the complement as a parameter value. But such sentences are not true idioms, because the literal meaning is also possible in many contexts. Also, one can respond to both the literal and nonliteral acts: "Sure, it's the 9th." The idiom approaches are too inflexible to choose the literal reading or to accommodate ambiguity. They lack a theory connecting the nonliteral and literal readings. Another problem is that some classic examples are not even pat phrases:

(3) a: It's cold in here.
    b: Do you have a watch on?

In context, (a) may be a request to close the window. Sentence (b) may be asking what time it is or requesting to borrow the watch. The idiom approach allows neither for context nor the reasoning connecting utterance and desired action.

The plan based approach [Allen 83, McCafferty 86, Perrault 80, Sidner 81] presumes a mechanism modelling human problem solving abilities, including reasoning about other agents and inferring their intentions. The system has a model of the current situation and the ability to choose a course of action. It can relate uttered propositions to the current situation: being cold in here is a bad state, and so you probably want me to do something about it; the obvious solution is for me to close the window; so, I understand, you mean for me to close the window. The plan based approach provides a tidy, independently motivated theory for speech act interpretation. It does not use language-specific information, however. Consider

(4) a: Can you speak Spanish?
    b: Can you speak Spanish, please?
Here the addition of a word alters an utterance which is a yes/no question in typical circumstances to a request. This is not peculiar to "please":

(5) a: Can you open the door?
    b: Are you able to open the door?

Here two sentences, generally considered to have the same semantics, differ in force: the first may be a question, an offer, or a request; the second, only a question. Further, different languages realize speech acts in different ways: greetings, for example (or see [Horn 84]).

(6) a: You want to cook dinner.
    b: You wanna toss your coats in there?

The declarative sentence (a) can be a request, idiomatic to Hebrew, while the nearest American expression is the interrogative (b). Neither is a request in British English. The plan based approach has nothing to say about these differences. Neither does it explain the psycholinguistic finding [Gibbs 84] that people access idiomatic interpretations in context more quickly than literal ones. Psycholinguistically plausible models cannot derive idiomatic meanings from literal meanings.

Descriptive approaches cover large amounts of data. [Brown 80] recognized the diversity of speech act phenomena and included the first computational model with wide coverage, but lacked theoretical claims and did not handle the language-specific cases well. [Gordon 75] expresses some very nice generalizations, but lacks motivation and sufficient detail. It does not account for examples like numbers 3, 4, 6 or 7. In number 3, for example, one asks a question by asking literally whether the hearer knows the answer. A plan-based approach would argue that knowing the answer is a precondition for stating it, and this logical connection enables identification of the real question. But Gordon and Lakoff write off this one, because their sincerity conditions are inadequate.

We augment the plan-based approach with a linguistic component: compositional rules associating linguistic features with partial speech act descriptions. The rules express linguistic conventions that are often motivated by planning theory, but they also allow for an element of arbitrariness in just which forms are idiomatic to a language, and just which words and features mark it. For this reason, conventions of use cannot be handled directly by the plan reasoning mechanism. They require an interpretation process paralleling syntactic and semantic interpretation, with the same provisions for merging of partial interpretations and postponement of ambiguity resolution. The compositionality of partial speech act interpretations and the use of ambiguity are both original to our approach.

Once the utterances have been interpreted by our conventional rules to produce a set of candidate conventional interpretations, these interpretations are filtered by the plan reasoner. Plan reasoning processes unconventional forms in the same spirit as earlier plan-based models, finding non-conventional interpretations and motivating many conventional ones. We propose a limited version of plan reasoning, based on an original claim about conversational implicature, which is adequate for filtering conventional interpretations.

Section 2 will explain the linguistic computation which interprets linguistic features as speech act descriptions. Section 3 describes plan reasoning techniques that are useful for speech act interpretation and presents our view of plan reasoning. Section 4 presents the overall process combining these two parts.
2. Linguistic Constraints

Speech act interpretation has many similarities to the plan recognition problem. Its goal is, given a situation and an utterance, to understand what the speaker was doing with that utterance, and to find a basis for an appropriate response. In our case this will mean identifying a set of plan structures representing speech acts, which are possible interpretations of the utterance. In this section we show how to use compositional, language-specific rules to provide evidence for a set of partial speech act interpretations, and how to merge them. Later, we use plan reasoning to constrain, supplement, and decide among this set.

2.1. Notational Aside

Our notation is based on that of [Allen 87]. Its essential form is (category <slot filler> <slot filler> ...). Categories may be syntactic, semantic, or from the knowledge base. A filler may be a word, a feature, a knowledge-base object (referent) or another (category ...) structure. Two slots associated with syntactic categories may seem unusual: SEM and REF. They contain the unit's semantic interpretation, divided into two components. The SEM slot contains a structural-semantic representation of this instance, based on a small, finite set of thematic roles for verbs and noun phrases. It captures the linguistic generalities of verb subcategorization and noun phrase structure. Selectional restrictions, identification of referents, and other phenomena involving world knowledge are captured in the REF slot. It contains a translation of the SEM slot's logical form into a framelike knowledge representation language, in which richer and more specific role assignments can be made. SEM thematic roles correspond to different knowledge base roles according to the event class being described, and in REF the corresponding event and argument instances are identified if possible. Distinguishing logical form from knowledge representation is an experiment intended to clarify the notion of semantic roles in logical form, and to reduce the complexity of the interpretation process. The sentence "Can you speak Spanish?" is shown below.

(S MOOD YES-NO-Q
   VOICE ACT
   SUBJ (NP HEAD you
            SEM (HUMAN h1)
            REF Suzanne)
   AUXS can
   MAIN-V speak
   TENSE PRES
   OBJ (NP HEAD Spanish
           SEM (LANG ID s1)
           REF ls1)
   SEM (CAPABLE TENSE PRES
                AGENT h1
                THEME (SPEAK OBJECT s1))
   REF (ABLE-STATE AGENT Suzanne
                   ACTION (USE-LANGUAGE AGENT Suzanne
                                        LANG ls1)))

The outermost category is the syntactic category, sentence. It has many ordinary syntactic features: subject, object, and verbs. The subject is a noun phrase that describes a human and refers to a person named Suzanne; the object, a language, Spanish. The semantic structure concerns the capability of the person to speak a language. In the knowledge base, this becomes Suzanne's ability to use Spanish as a language.

2.2. Evidence for Interpretations

The utterance provides clues to the hearer, but we have already seen that its relation to its purpose may be complex. We need to make use of lexical and syntactic as well as semantic and referential information. In this section we will look at rules using all of these kinds of information, introducing the notation for rules as we go. Rules consist of a set of features on the left-hand side, and a set of partial speech act descriptions on the other. The rule should be interpreted as saying that any structure matching the left-hand side must be interpreted as one of the speech acts indicated on the right-hand side.
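As a rough illustration of this matching regime, consider the sketch below (a minimal Python rendering, not the implemented system; the slot names are taken from the example above). Patterns and parsed structures are both nested attribute-value tables; a pattern matches when every slot it mentions matches, and slots it omits are ignored.

    # Sketch only: categories and structures as nested dicts.
    def match(pattern, structure):
        # "?" matches anything; a set encodes a {can could ...} disjunction;
        # a dict must have all its slots present and matching, extras ignored
        if pattern == "?":
            return True
        if isinstance(pattern, dict):
            return (isinstance(structure, dict) and
                    all(slot in structure and match(val, structure[slot])
                        for slot, val in pattern.items()))
        if isinstance(pattern, set):
            return structure in pattern
        return pattern == structure

    sentence = {"cat": "S", "MOOD": "YES-NO-Q", "VOICE": "ACT",
                "SUBJ": {"cat": "NP", "PRO": "you"},
                "AUXS": "can", "MAIN-V": "speak"}
    request_pattern = {"cat": "S", "MOOD": "YES-NO-Q", "VOICE": "ACT",
                       "SUBJ": {"PRO": "you"},
                       "AUXS": {"can", "could", "will", "would", "might"}}
    assert match(request_pattern, sentence)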
The speech act descriptions themselves are also in (category <slot filler> ... <slot filler>) notation. Their categories are simply their types in the knowledge base's abstraction hierarchy, in which the category SPEECH-ACT abstracts all speech act types. Slot names and filler types also are defined by the abstraction hierarchy, but a given rule need not specify all slot values. Here is a lexical rule: the adverb "please" occurring in any syntactic unit signals a request, command, or other act in the directive class.

(? ADV please) =(1)=> (DIRECTIVE-ACT)

Although this is a very simple rule, its correctness has been established by examination of some 43 million words of Associated Press news stories. This corpus contains several hundred occurrences of "please", the most common form being the preverbal adverb in a directive utterance.

A number of useful generalizations are based on the syntactic mood of sentences. As we use the term, mood is an aggregate of several syntactic features taking the values DECLARATIVE, IMPERATIVE, YES-NO-Q, WH-Q. Many different speech act types occur with each of these values, but in the absence of other evidence an imperative is likely to be a command and a declarative, an inform. An interrogative sentence may be a question or possibly another speech act.

(S MOOD YES-NO-Q) =(2)=> ((ASK-ACT PROP v(REF))
                          (SPEECH-ACT))

The value function v returns the value of the specified slot of the sentence. Thus rule 2 has the proposition slot PROP filled with the value of the REF slot of the sentence. It matches sentences whose mood is that of a yes/no question, and interprets them as asking for the truth value of their explicit propositional content. Thus matching this rule against the structure for "Can you speak Spanish?" would produce the interpretations

((ASK-ACT PROP (ABLE-STATE AGENT Suzanne
                           ACTION (USE-LANGUAGE AGENT Suzanne
                                                LANG ls1)))
 (SPEECH-ACT))

Interrogative sentences with modal verbs and a subject "you" are typically requests, but may be some other act:

(S MOOD YES-NO-Q
   VOICE ACT
   SUBJ (NP PRO you)
   AUXS {can could will would might}
   MAIN-V +action) =(3)=> ((REQUEST-ACT ACTION v(ACTION REF))
                           (SPEECH-ACT))

Rule 3 interprets "Can you ...?" questions as requests, looking for the subject "you" and any of these modal verbs. Lists in curly brackets (e.g. {can could will would might}) signify disjunctions; one of the members must be matched. In this rule, the value function v follows a chain of slots to find a value. Thus v(ACTION REF) is the value of the ACTION slot in the structure that is the value of the REF slot. Note that an unspecified speech act is also included as a possibility in both rules. This is because it is also possible that the utterance might have a different interpretation, not suggested by the mood.

Some rules are based in the semantic level. For example, the presence of a benefactive case may mark a request, or it may simply occur in a statement or question.

(S MAIN-V +action
   SEM (? BENEF ?)) =(4)=> ((DIRECTIVE-ACT ACT v(REF))
                            (SPEECH-ACT))

Recall that we distinguish the semantic level from the reference level, inasmuch as the semantic level is simplified by a strong theory of thematic roles, or cases, a small standard set of which may prove adequate to explain verb subcategorization phenomena [Jackendoff 72]. The reference level, by contrast, is the language of the knowledge base, in which very specific domain roles are possible. To the extent that referents can be identified in the knowledge base (often as skolem functions), they appear at the reference level.
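Continuing the earlier sketch, the rules themselves can be held as data, a left-hand-side pattern paired with a list of partial speech act templates, with a callable standing in for the value function v. This encoding is again our own illustrative assumption, not the system's.

    # Continues the sketch above (assumes the match function defined there).
    RULES = [
        # rule 1: "please" anywhere signals an act in the directive class
        ({"ADV": "please"},
         [{"type": "DIRECTIVE-ACT"}]),
        # rule 2: a yes/no question asks for the truth value of its REF
        ({"cat": "S", "MOOD": "YES-NO-Q"},
         [{"type": "ASK-ACT", "PROP": lambda s: s.get("REF")},
          {"type": "SPEECH-ACT"}]),
    ]

    def apply_rules(structure):
        # each matching rule contributes one set of alternative partial acts
        alternatives = []
        for pattern, templates in RULES:
            if match(pattern, structure):
                acts = [{slot: (val(structure) if callable(val) else val)
                         for slot, val in template.items()}
                        for template in templates]
                alternatives.append(acts)
        return alternatives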
The next rule says that any way of stating a desire may be a request for the object of the want.

(S MOOD DECL
   VOICE ACT
   TENSE PRES
   REF (WANT-ACT ACTOR !s)) =(5)=> (REQUEST-ACT ACT v(DESID WANT-ACT REF))

It will match any sentence that can be interpreted as asserting a want or desire of the agent, such as

(7) a: I need a napkin.
    b: I would like two ice creams.

The object of the request is the WANT-ACT's desideratum. (The desideratum is already filled by reference processing.) One may prefer an account that handles generalizations from the REF level by plan reasoning; we will discuss this point later. For now, it is sufficient to note that rules of this type are capable of representing the conventions of language use that we are after.

2.3. Applying the Rules

We now consider in detail how to apply the rules. For now, assume that the utterance is completely parsed and semantically interpreted, unambiguously, like the sentence "Can you speak Spanish?" as it appeared in Sect. 2.1. Interpretation of this sentence begins by finding rules that match with it. The matching algorithm is a standard unification or graph matcher. It requires that the category in the rule match the syntactic structure given. All slots present in the rule must be found on the category, and have equal values, and so on recursively. Slots not present in the rule are ignored. If the rule matches, the structures on the right-hand side are filled out and become partial interpretations.

We need a few general rules to fill in information about the conversation:

(?) =(6)=> ((SPEECH-ACT AGENT !s))

Rule 6 says that an utterance of any syntactic category maps to a speech act with agent specified by the global variable !s. (The processes of identifying speaker and hearer are assumed to be contextually defined.) The partial interpretation it yields for the Spanish sentence is a speech act with agent Mrs. de Prado:

((SPEECH-ACT AGENT Mrs. de Prado))

The second rule is analogous, filling in the hearer.

(?) =(7)=> ((SPEECH-ACT HEARER !h))

For our example sentence, it yields a speech act with hearer Suzanne.

((SPEECH-ACT HEARER Suzanne))

Rule no. 2 given earlier, for yes/no questions, produces these interpretations:

((ASK-ACT PROP (ABLE-STATE AGENT Suzanne
                           ACTION (USE-LANGUAGE AGENT Suzanne
                                                LANG ls1)))
 (SPEECH-ACT))

The indirect request comes from rule no. 3 above. To apply it, we match the subject "you" and the modal auxiliary "can", and the features of yes/no mood and active voice.

((REQUEST-ACT ACTION (USE-LANGUAGE AGENT Suzanne
                                   LANG ls1))
 (SPEECH-ACT))

We now have four sets of partial descriptions, which must be merged.

2.4. Combining Partial Descriptions

The combining operation can be thought of as taking the cross product of the sets, merging partial interpretations within each resulting set, and returning those combinations that are consistent internally. We expect that psycholinguistic studies will provide additional constraints on this set, e.g. commitment to interpretations triggered early in the sentence. The operation of merging partial interpretations is again unification or graph matching; when the operation succeeds the result contains all the information from the contributing partial interpretations. The cross product of our first two sets is simple; it is the pair consisting of the interpretations for speaker and hearer.
These two can be merged to form a set containing the single speech act with speaker Mrs. de Prado and hearer Suzanne. The cross product of this with the results of the mood rule contains two pairs. Within the first pair, the ASK-ACT is a subtype of SPEECH-ACT and therefore matches, resulting in a request with the proper speaker and hearer. The second pair results in no new information, just the SPEECH-ACT with speaker and hearer. (Recall that the mood rule must allow for other interpretations of yes/no questions, and here we simply propagate that fact.)

Now we must take the cross product of two sets of two interpretations, yielding four pairs. One pair is inconsistent because REQUEST-ACT and ASK-ACT do not unify. The REQUEST-ACT gets speaker and hearer by merging with the SPEECH-ACT, and the ASK-ACT slides through by merging with the other SPEECH-ACT. Likewise the two SPEECH-ACTs match, so in the end we have an ASK-ACT, a REQUEST-ACT, and the simple SPEECH-ACT.

((REQUEST-ACT AGENT Mrs. de Prado
              HEARER Suzanne
              ACTION (USE-LANGUAGE AGENT Suzanne
                                   LANG ls1))
 (ASK-ACT AGENT Mrs. de Prado
          HEARER Suzanne
          PROP (ABLE-STATE AGENT Suzanne
                           ACTION (USE AGENT Suzanne
                                       OBJECT ls1)))
 (SPEECH-ACT AGENT Mrs. de Prado
             HEARER Suzanne))

At this stage, the utterance is ambiguous among these three interpretations. Consider their classifications in the speech act hierarchy. The third abstracts the other two, and signals that there may be other possibilities, those it also abstracts. Its significance is that it allows the plan reasoner to suggest these further interpretations, and it will be discussed later. If there are any expectations generated by top-down plan recognition mechanisms, say, the answer in a question/answer pair, they can be merged in here.

2.5. Further Linguistic Considerations

We have used a set of compositional rules to build up multiple interpretations of an utterance, based on linguistic features. They can incorporate lexical, syntactic, semantic and referential distinctions. Why does the yes/no question interpretation seem to be favored in the Spanish example? We hypothesize that for utterances taken out of context, people make pure frequency judgements. And questions about one's language ability are much more common than requests to speak one. Such a single-utterance request is possible only in contexts where the intended content of the Spanish-speaking is clear or clearly irrelevant, since "speak" doesn't subcategorize for this crucial information. (cf. "Can you read Spanish? I have this great article ....")

The statistical base can be overridden by lexical information. Recall 4(b), "Can you speak Spanish, please?" The "please" rule (above) yields only the request interpretation, and fails to merge with the ASK-ACT. It also merges with the SPEECH-ACT, but the result is again a request, merely adding the possibility that the request could be for some other action. No such action is likely to be identified. The "please" rule is very strong, because it can override our expectations. The final interpretations for "Can you speak Spanish, please?" do not include the literal interpretation:

((REQUEST-ACT AGENT Mrs. de Prado
              HEARER Suzanne
              ACTION (USE-LANGUAGE AGENT Suzanne
                                   LANG ls1))
 (REQUEST-ACT AGENT Mrs. de Prado
              HEARER Suzanne))

Here Suzanne is probably being asked to continue the present dialogue in Spanish. Some linguistic features are as powerful as "please".
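The way the "please" rule overrides the ASK-ACT reading can be made concrete with a small sketch of the combining operation (a toy rendering of the abstraction hierarchy and of unification, not the implemented graph matcher):

    # Sketch: merging alternative sets of partial interpretations.
    PARENT = {"ASK-ACT": "SPEECH-ACT", "DIRECTIVE-ACT": "SPEECH-ACT",
              "REQUEST-ACT": "DIRECTIVE-ACT"}

    def subsumes_or_equal(specific, general):
        while specific is not None:
            if specific == general:
                return True
            specific = PARENT.get(specific)
        return False

    def unify(x, y):
        # pick the more specific type; fail (None) on incomparable types
        if subsumes_or_equal(x["type"], y["type"]):
            t = x["type"]
        elif subsumes_or_equal(y["type"], x["type"]):
            t = y["type"]
        else:
            return None                      # e.g. ASK-ACT vs DIRECTIVE-ACT
        merged = {"type": t}
        for d in (x, y):
            for slot, val in d.items():
                if slot != "type":
                    if slot in merged and merged[slot] != val:
                        return None          # conflicting slot values
                    merged[slot] = val
        return merged

    def combine(set1, set2):
        # cross product, keeping only internally consistent merges
        return [m for a in set1 for b in set2
                if (m := unify(a, b)) is not None]

    mood = [{"type": "ASK-ACT"}, {"type": "SPEECH-ACT"}]
    please = [{"type": "DIRECTIVE-ACT"}]
    print(combine(mood, please))   # only the directive reading survives

Run on the two alternative sets for "Can you speak Spanish, please?", the cross product eliminates the literal question, just as described above.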
Their strength can be seen in the incoherence of the following sentences, each of which contains contradictory features:

(8) a: *Should you go home, please?
    b: *Shouldn't you go home, please?
    c: *Why not go home, please?

Modal verbs can be quite strong, and intonation as well. Other features are more suggestive than definitive. The presence of a benefactive case (rule above) may be evidence for an offer or request, or just happen to appear in an inform or question. Sentence mood is weak evidence: it is often overridden, but in the absence of other evidence it becomes important. The adverbs "kindly" and "possibly" are also weak evidence for a request, and a large class of sentential adverbs is associated primarily with inform acts.

(9) a: *Unfortunately, I promise to obey orders.
    b: Surprisingly, I'm leaving next week.
    c: Actually, I'm pleased to see you.

Explicit performative utterances [Austin 62] are declarative, active utterances whose main verb identifies the action explicitly. The sentence meaning corresponds exactly to the action performed.

(S MOOD DECL
   VOICE ACT
   SUBJ (NP HEAD I)
   MAIN-V +performative
   TENSE PRES) =(8)=> v(REF)

Note that the rule is not merely triggering off a keyword. Presence of a performative verb without the accompanying syntactic features will not satisfy the performative rule.

2.6. The Limits of Conventionality

We do not claim that all speech acts are conventional. There are variations in convention across languages, of course, and dialects, but idiolects also vary greatly. Some people, even very cooperative ones, do not recognize many types of indirect requests. Too, there is a form of request for which the generalization is obvious but only special cases seem idiomatic:

(10) a: Got a light?
     b: Got a dime?
     c: Got a donut? (odd request)
     d: Do you have the time?
     e: Do you have a watch on?

There are other forms for which the generalization is obvious but no instance seems idiomatic: if someone was assigned a task, asking whether it's done is as good as a request.

(11) Did you wash the dishes?

In the next examples, there is a clear logical connection between the utterance and the requested action. We could write a rule for the surface pattern, but the rule is useless because it cannot verify the logical connection. This must be done by plan reasoning, because it depends on world knowledge. The first sentences can request the actions they are preconditions of. The second set can request actions they are effects of. Because these requests operate via the conditions on the domain plan rather than the speech act itself, they are beyond the reach of theories like Gordon & Lakoff's, which have very simple notions of what a sincerity condition can be.

(12) a: Is the garage open?
     b: Did the dryer stop?
     c: The mailman came.
     d: Are you planning to take out the garbage?

(13) a: Is the car fixed?
     b: Have you fixed the car?
     c: Did you fix the car?

Plan reasoning provides an account for all of these examples. The fact that certain examples can be handled by either mechanism we regard as a strength of the theory: it leads to robust natural language processing systems, and explains why "Can you X?" is such a successful construction. Both mechanisms work well for such utterances, so the hearer has two ways to understand it correctly. These last examples, along with "It's cold in here", really require plan reasoning.

3. Role of Plan Reasoning

Plan reasoning constitutes our second constraint on speech act recognition. There are four roles for plan reasoning in the recognition process.
Specifically, plan reasoning:

1) eliminates speech act interpretations proposed by the linguistic mechanism, if they contradict known intentions and beliefs of the agent;

2) elaborates and makes inferences based on the remaining interpretations, allowing for non-conventional speech act interpretations;

3) can propose interpretations of its own, when there is enough context information to guess what the speaker might do next;

4) provides a competence theory motivating many of the conventions we have described.

Plan reasoning rules are based on the causal and structural links used in plan construction. For instance, in planning, start with a desired goal proposition, plan an action with that effect, and then plan for its preconditions. There are also recognition schemas for attributing plans: having observed that an agent wants an effect, believe that they may plan an action with that effect, and so on. For modelling communication, it is necessary to complicate these rules by embedding the antecedent and consequent in one-sided mutual belief operators [Allen 83].

In the Allen approach, our Spanish example hinges on the acts' preconditions: Suzanne will not attribute a question to Mrs. de Prado if she believes she already knows the answer, but this knowledge could be the basis for a request. Sentences like "It's cold in here" are also interpreted by extended reasoning about the intentions an agent could plausibly have. We use extended reasoning for difficult cases, and the more restricted plan-based conversational implicature heuristic [Hinkelman 87], [Hinkelman forthcoming] as a plausibility filter adequate for most common cases.

4. Two Constraints Integrated

Section 2 showed how to compute a set of possible speech act interpretations compositionally, from conventions of language use. Section 3 showed how plan reasoning, which motivates the conventions, can be used to further develop and restrict the interpretations. The time has come to integrate the two into a complete system.

4.1. Interaction of the Constraints

The plan reasoning phase constrains the results of the linguistic computation by eliminating interpretations, and reinterpreting others. The linguistic computation constrains plan reasoning by providing the input; the final interpretation must be in the range specified, and only if there is no plausible interpretation is extended inference explicitly invoked. Recall that the linguistic rules control ambiguity: because the right-hand side of the rule must express all the possibilities for this pattern, a single rule can limit the range of interpretations sharply. Consider

(14) a: I hereby inform you that it's cold in here.
     b: It's cold in here.

The explicit performative rules, triggered by "hereby" and by a performative verb in the appropriate syntactic context, each allow for only an explicit performative interpretation. (a) is unambiguous, and if it is consistent with context no extended reasoning is needed for speech act identification purposes. (In fact the hearer will probably find the formality implausible, and try to explain that.) By contrast, the declarative rule proposes two speech acts for (b), the inform and the generic speech act. The ambiguity allows the plan reasoner to identify other interpretations, particularly if in context the inform interpretation is implausible.

The entire speech act interpretation process is now as follows. Along with the usual compositional linguistic processes, we build up and merge hypotheses about speech act interpretations.
The resulting interpretations are passed to the implicature module. The conversational implicatures are computed, discounting interpretations if they are in conflict with contextual knowledge. If a plausible, non-contradictory interpretation results, it can be accepted. Allen-style plan reasoning is invoked to identify the speech act only if remaining ambiguity interferes with planning or if no completely plausible interpretations remain. After that, plan reasoning may proceed to elaborate the interpretation or to plan a response.

Consider the central example of this paper. Three interpretations were proposed for "Can you speak Spanish?" in Section 2. As they become available, the next step in processing is to check plausibility by attempting to verify the act's conversational implicatures. We showed how the ask act is ruled out by its implicatures, when the answer is known. Likewise, in circumstances where Suzanne is known not to speak Spanish, the request is eliminated. The generic speech act is present under most circumstances, but adds little information except to allow for other possibilities. Because in any of these contexts a specific interpretation is acceptable, no further inference is necessary for identifying the speech act. If it is merely somewhat likely that Suzanne speaks Spanish, both specific interpretations are possible and both may even be intended by Mrs. de Prado. Further plan reasoning may elaborate or eliminate possibilities, or plan a response. But it is not required for the main effort of speech act identification.

4.2. The Role of Ambiguity

If no interpretations remain after the plausibility check, then extended plan reasoning may be invoked to resolve a possible misunderstanding or mistaken belief. If several remain, it may not be necessary to disambiguate. Genuine ambiguity of intentions is quite common in speech and often not a problem. For instance, the speaker may mention plans to go to the store, and leave unclear whether this constitutes a promise. In cases of genuine ambiguity, it is possible for the hearer to respond to each of the proposed interpretations, and indeed, politeness may even require it. Consider (b)-(g) as responses to (a).

(15) a: Do you have our grades yet?
     b: No, not yet.
     c: Yes, I'm going to announce them in class.
     d: Sure, here's your paper. (hands paper.)
     e: Here you go. (hands paper.)
     f: *No.
     g: *Yes.

The main thing to note is that it is infelicitous to ignore the request interpretation; the polite responses acknowledge that the speaker wants the grades. Note that within the framework of "speaker-based" meaning, we emphasize the role of the hearer in the final understanding of an utterance. An important point is that while the speech act attempted depends on the speaker's intentions, the speech act accomplished also depends on the hearer's ability to recognize the intentions, and to some extent their own desires in the matter. Consider an example from [Clark 88]:

(16) a: Have some asparagus.
     b: No, thanks.

(17) a: Have some asparagus.
     b: OK, if I have to ....

The first hearer treats the sentence as an offer, the second as a command. If the speaker intended otherwise, it must be corrected quickly or be lost.

4.3. The Implementation

Our system is implemented using Common Lisp and the Rhetorical knowledge representation system [Miller 87], which provides among other things a hierarchy of belief spaces.
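Schematically, the control structure just described can be summarized as follows. This is an illustrative stub with placeholder predicates; it is not the Rhetorical system's belief-space machinery.

    # Schematic control loop: conventional interpretation first, implicature
    # filtering second, extended plan reasoning only as a last resort.
    def implicatures_consistent(act, context):
        # stub: an interpretation is discounted when context conflicts with
        # its conversational implicatures (e.g. an ASK-ACT whose answer the
        # speaker is known to know already)
        return act["type"] not in context.get("implausible_types", set())

    def extended_plan_reasoning(candidates, context):
        # stub for Allen-style inference, invoked only as a last resort
        return [{"type": "SPEECH-ACT", "note": "extended reasoning required"}]

    def interpret(candidates, context):
        plausible = [c for c in candidates if implicatures_consistent(c, context)]
        return plausible or extended_plan_reasoning(candidates, context)

    candidates = [{"type": "ASK-ACT"}, {"type": "REQUEST-ACT"}]
    # if the speaker already knows the answer, only the request survives:
    print(interpret(candidates, {"implausible_types": {"ASK-ACT"}}))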
The linguistic speech act interpretation module has been implemented, with merging, as well as the implicature calculation and checking module. So given the appropriate contexts, the Spanish example runs. Extended plan reasoning will eventually be added.

There are of course open problems. One would like to experiment with large interpretation rule sets, and with the constraints from other modules. The projection problem, both for conversational implicature and for speech act interpretation, has not been examined directly. If a property like conversational implicature or presupposition is computed at the clause level, one wants to know whether the property survives negation, conjunction, or any other syntactic embedding. [Horton 87] has a result for projection of presuppositions, which may be generalizable. The other relevant work is [Hirschberg 85] and [Gazdar 79]. Plan recognition for discourse, and the processing of cue words, are related areas.

5. Conclusion

To determine what an agent is doing by making an utterance, we must make use of not only general reasoning about actions in context, but also the linguistic features which by convention are associated with specific speech act types. To do this, we match patterns of linguistic features as part of the standard linguistic processing. The resulting partial interpretations are merged, and then filtered by determining the plausibility of their conversational implicatures. Assuming no errors on the part of the speaker, the final interpretation is constrained to lie within the range so specified. If there is not a plausible interpretation, full plan reasoning is called to determine the speaker's intentions. Remaining ambiguity is not a problem but simply a more complex basis for the hearer's planning processes. Linguistic patterns and plan reasoning together constrain speech act interpretation sufficiently for discourse purposes.

Acknowledgements

This work was supported in part by NSF research grants no. DCR-8502481, IST-8504726, and US Army Engineering Topographic Laboratories research contract no. DACA76-85-C-0001.

References

[Allen 83] Allen, J., "Recognizing Intentions From Natural Language Utterances," in Computational Models of Discourse, Brady, M. and Berwick, B. (ed.), MIT Press, Cambridge, MA, 1983, 107-166.

[Allen 87] Allen, J., Natural Language Understanding, Benjamin Cummings Publishing Co., 1987.

[Austin 62] Austin, J. L., How to Do Things with Words, Harvard University Press, Cambridge, MA, 1962.

[Brown 80] Brown, G. P., "Characterizing Indirect Speech Acts," American Journal of Computational Linguistics 6:3-4, July-December 1980, 150-166.

[Clark 88] Clark, H., Collective Actions in Language Use, Invited Talk, September 21, 1988.

[Gazdar 79] Gazdar, G., Pragmatics: Implicature, Presupposition and Logical Form, Academic Press, New York, 1979.

[Gibbs 84] Gibbs, R., "Literal Meaning and Psychological Theory," Cognitive Science 8, 1984, 275-304.

[Gordon 75] Gordon, D. and Lakoff, G., "Conversational Postulates," in Syntax and Semantics V. 3, Cole, P. and Morgan, J. L. (ed.), Academic Press, New York, 1975.

[Hinkelman 87] Hinkelman, E., "Thesis Proposal: A Plan-Based Approach to Conversational Implicature," TR 202, Dept. Computer Science, University of Rochester, June 1987.

[Hirschberg 85] Hirschberg, J., "A Theory of Scalar Implicature," MS-CIS-85-56, PhD Thesis, Department of Computer and Information Science, University of Pennsylvania, December 1985.

[Horn 84] Horn, L. R.
and Bayer, S., Short-Circuited Implicature: A Negative Contribution, Vol. 7, 1984.

[Horton 87] Horton, D. L., "Incorporating Agents' Beliefs in a Model of Presupposition," Technical Report CSRI-201, Computer Systems Research Institute, University of Toronto, Toronto, Canada, August 1987.

[Jackendoff 72] Jackendoff, R. S., Semantic Interpretation in Generative Grammar, MIT Press, Cambridge, 1972.

[McCafferty 86] McCafferty, A. S., Explaining Implicatures, 23 October 1986.

[Miller 87] Miller, B. and Allen, J., The Rhetorical Knowledge Representation System: A User's Manual, forthcoming technical report, Department of Computer Science, University of Rochester, 1987.

[Perrault 80] Perrault, C. R. and Allen, J. F., "A Plan-Based Analysis of Indirect Speech Acts," American Journal of Computational Linguistics 6:3-4, July-December 1980, 167-82.

[Searle 69] Searle, J., Speech Acts, Cambridge University Press, New York, 1969.

[Searle 75] Searle, J., "Indirect Speech Acts," in Syntax and Semantics, v3: Speech Acts, Cole and Morgan (ed.), Academic Press, New York, NY, 1975.

[Sidner 81] Sidner, C. L. and Israel, D. J., "Recognizing Intended Meaning and Speakers' Plans," Proc. IJCAI '81, 1981, 203-208.
TREATMENT OF LONG DISTANCE DEPENDENCIES IN LFG AND TAG: FUNCTIONAL UNCERTAINTY IN LFG IS A COROLLARY IN TAG*

Aravind K. Joshi
Dept. of Computer & Information Science
University of Pennsylvania
Philadelphia, PA 19104
[email protected]

K. Vijay-Shanker
Dept. of Computer & Information Science
University of Delaware
Newark, DE 19716
[email protected]

ABSTRACT

In this paper the functional uncertainty machinery in LFG is compared with the treatment of long distance dependencies in TAG. It is shown that the functional uncertainty machinery is redundant in TAG, i.e., what functional uncertainty accomplishes for LFG follows from the TAG formalism itself and some aspects of the linguistic theory instantiated in TAG. It is also shown that the analyses provided by the functional uncertainty machinery can be obtained without requiring power beyond mildly context-sensitive grammars. Some linguistic and computational aspects of these results are also briefly discussed.

1 INTRODUCTION

The so-called long distance dependencies are characterized in Lexical Functional Grammars (LFG) by the use of the formal device of functional uncertainty, as defined by Kaplan and Zaenen [3] and Kaplan and Maxwell [2]. In this paper, we relate this characterization to that provided by Tree Adjoining Grammars (TAG), showing a direct correspondence between the functional uncertainty equations in LFG analyses and the elementary trees in TAGs that give analyses for "long distance" dependencies. We show that the functional uncertainty machinery is redundant in TAG, i.e., what functional uncertainty accomplishes for LFG follows from the TAG formalism itself and some fundamental aspects of the linguistic theory instantiated in TAG. We thus show that these analyses can be obtained without requiring power beyond mildly context-sensitive grammars. We also briefly discuss the linguistic and computational significance of these results.

* This work was partially supported (for the first author) by the DARPA grant N00014-85-K0018, ARO grant DAA29-84-9-0027, and NSF grant IRI84-10413-A02. The first author also benefited from some discussion with Mark Johnson and Ron Kaplan at the Titisee Workshop on Unification Grammars, March, 1988.

Long distance phenomena are associated with the so-called movement. The following examples,

1. Mary Henry telephoned.
2. Mary Bill said that Henry telephoned.
3. Mary John claimed that Bill said that Henry telephoned.

illustrate the long distance dependencies due to topicalization, where the verb telephoned and its object Mary can be arbitrarily far apart. It is difficult to state generalizations about these phenomena if one relies entirely on the surface structure (as defined in CFG based frameworks) since these phenomena cannot be localized at this level. Kaplan and Zaenen [3] note that, in LFG, rather than stating the generalizations on the c-structure, they must be stated on f-structures, since long distance dependencies are predicate argument dependencies, and such functional dependencies are represented in the f-structures. Thus, as stated in [2, 3], in the sentences (1), (2), and (3) above, the dependencies are captured by the equations (in the LFG notation¹)

↑ TOPIC = ↑ OBJ,
↑ TOPIC = ↑ COMP OBJ, and
↑ TOPIC = ↑ COMP COMP OBJ,

respectively, which state that the topic Mary is also the object of telephoned. In general, since any number of additional complement predicates may be introduced, these equations will have the general form

↑ TOPIC = ↑ COMP COMP ... OBJ

Kaplan and Zaenen [3] introduced the formal device of functional uncertainty, in which this general case is stated by the equation

¹ Because of lack of space, we will not define the LFG notation. We assume that the reader is familiar with it.
In general, since any number of additional complement predicates may be introduced, these equations will have the general form "f TOPIC =T COMP COMP ... OBJ Kaplan and Zaenen [3] introduced the formal device of functional unc'ertainty, in which this gen- eral case is stated by the equation 1 Because of lack of space, we will not define the LFG notation. We assume that the reader is familiar with it. 220 T TOPIC -T COMP°OBJ The functional uncertainty device restricts the labels (such as COMP °) to be drawn from the class of regular expressions. The definition of f- structures is extended to allow such equations [2, 3]. Informally, this definition states that if f is a f-structure and a is a regular set, then (fa) = v holds if the value of f for the attribute s is a f- structure fl such that (flY) -- v holds, where sy is a string in a, or f = v and e E a. The functional uncertainty approach may be characterized as a localization of the long dis- tance dependencies; a localization at the level of f- structures rather than at the level of c-structures. This illustrates the fact that if we use CFG-like rules to produce the surface structures, it is hard to state some generalizations directly; on the other hand, f-structures or elementary trees in TAGs (since they localize the predicate argument depen- dencies) are appropriate domains in which to state these generalizations. We show that there is a di- rect link between the regular expressions used in LFG and the elementary trees of TAG. I.I OUTLINE OF THE PAPER In Section 2, we will define briefly the TAG for- malism, describing some of the key points of the linguistic theory underlying it. We will also de- scribe briefly Feature Structure Based Tree Ad- joining Grammars (FTAG), and show how some elementary trees (auxiliary trees) behave as func: tions over feature structures. We will then show how regular sets over labels (such as COMP °) can also be denoted by functions over feature struc- tures. In Section 3, we will consider the example of topicalization as it appears in Section 1 and show that the same statements are made by the two formalisms when we represent both the elemen- tary trees of FTAG and functional uncertainties in LFG as functions over feature structures. We also point out some differences in the two analy- ses which arise due to the differences in the for- malisms. In Section 4, we point out how these similar statements are stated differently in the two formalisms. The equations that capture the lin- guistic generalizations are still associated with in- dividual rules (for the c-structure) of the grammar in LFG. Thus, in order to state generalizations for a phenomenon that is not localized in the c- structure, extra machinery such as functional un- certainty is needed. We show that what this extra machinery achieves for CFG based systems follows as a corollary of the TAG framework. This results from the fact that the elementary trees in a TAG provide an extended domain of locality, and factor out recursion and dependencies. A computational consequence of this result is that we can obtain these analyses without going outside the power of TAG and thus staying within the class of con- strained grammatical formalisms characterized as mildly context.sensitive (Joshi [1]). Another con- sequence of the differences in the representations (and localization) in the two formalisms is as fol- lows. In a TAG, once an elementary tree is picked, there is no uncertainty about the functionality in long distance dependencies. 
Because LFG relies on a CFG framework, interactions between uncer- tainty equations can arise; the lack of such interac- tions in TAG can lead to simpler processing of long distance dependencies. Finally, we make some re- marks as to the linguistic significance of restrict- ing the use of regular sets in the functional uncer- tainty machinery by showing that the linguistic theory instantiated in TAG can predict that the path depicting the "movement" in long distance dependencies can be characterized by regular sets. 2 INTRODUCTION TO TAG Tree Adjoining Grammars (TAGs) are tree rewrit- ing systems that are specified by a finite set of elementary trees. An operation called adjoining ~ is used to compose trees. The key property of the linguistic theory of TAGs is that TAGs allow factoring of recursion from the domain of depen- dencies, which are defined by the set of elemen- tary trees. Thus, the elementary trees in a TAG correspond to minimal linguistic structures that localize the dependencies such as agreement, sub- categorization, and filler-gap. There are two kinds of elementary trees: the initial trees and auxiliary trees. The initial trees (Figure 1) roughly corre- spond to "simple sentences". Thus, the root of an initial tree is labeled by S or ~. The frontier is all terminals. The auxiliary trees (Figure 1) correspond roughly to minimal recursive constructions. Thus, if the root of an auxiliary tree is labeled by a non- terminal symbol, X, then there is a node (called the foot node) in the frontier which is labeled by X. The rest of the nodes in the frontier are labeled by terminal symbols. 2We do not consider lexicalized TAGs (defined by Sch- abes, Abeille, and Joshi [7]) which allow both adjoining and sub6titution. The ~uhs of this paper apply directly to them. Besides, they are formally equivalent to TAGs. 221 ~ U p: WP ' A I I P, V Ag~m~ A~am~tm 2. The relation of T/to its descendants, i.e., the view from below. This feature structure is called b,. troo¢ S X brooc "-...~. ....... v J A a m . ~ p mat • Figure 1: Elementary Trees in a TAG We will now define the operation of adjoining. Consider the adjoining of/~ at the node marked with * in a. The subtree of a under the node marked with * is excised, and/3 is inserted in its place. Finally, the excised subtree is inserted be- low the foot node of w, as shown in Figure 1. A more detailed description of TAGs and their linguistic relevance may be found in (Kroch and ao hi [51). 2.1 FEATURE STRUCTURE BASED TREE ADJOINING GRAMMARS (FTAG) In unification grammars, a feature structure is as- sociated with a node in a derivation tree in order to describe that node and its relation to features of other nodes in the derivation tree. In a FTAG, with each internal node, T/, we associate two fea- ture structures (for details, see [9]). These two feature structures capture the following relations (Figure 2) 1. The relation ofT/to its supertree, i.e., the view of the node from the top. The feature struc- ture that describes this relationship is called ~. Figure 2: Feature Structures and Adjoining Note that both the t, and b, feature structures hold for the node 7. On the other hand, with each leaf node (either a terminal node or a foot node), 7, we associate only one feature structure (let us call it t,3). Let us now consider the case when adjoining takes place as shown in the Figure 2. The notation we use is to write alongside each node, the t and b statements, with the t statement written above the b statement. 
Let us say that t_root, b_root and t_foot, b_foot are the t and b statements of the root and foot nodes of the auxiliary tree used for adjoining at the node η. Based on what t and b stand for, it is obvious that on adjoining the statements t_η and t_root hold for the node corresponding to the root of the auxiliary tree. Similarly, the statements b_η and b_foot hold for the node corresponding to the foot of the auxiliary tree. Thus, on adjoining, we unify t_η with t_root, and b_η with b_foot. In fact, this adjoining is permissible only if t_root and t_η are compatible, and so are b_foot and b_η. If we do not adjoin at the node η, then we unify t_η with b_η. More details of the definition of FTAG may be found in [8, 9].

We now give an example of an initial tree and an auxiliary tree in Figure 3. We have shown only the necessary top and bottom feature structures for the relevant nodes, and in each feature structure shown we have included only those feature-value pairs that are relevant. For the auxiliary tree, we have labeled the root node S̄; we could have labeled it S̄ with COMP and S as daughter nodes, but these details are not relevant to the main point of the paper. We note that, just as in a TAG, the elementary trees which are the domains of dependencies are available as a single unit during each step of the derivation. For example, in α1 the topic and the object of the verb belong to the same tree (since this dependency has been factored into α1) and are coindexed to specify the "movement" due to topicalization. In such cases, the dependencies between these nodes can be stated directly, avoiding the percolation of features during the derivation process as in string rewriting systems. Thus, these dependencies can be checked locally, and this checking need not be linked to the derivation process in an unbounded manner.

(3) The linguistic relevance of this restriction has been discussed elsewhere (Kroch and Joshi [5]). The general framework does not necessarily require it.

[Figure 3: Example of Feature Structures Associated with Elementary Trees]

2.2 A CALCULUS TO REPRESENT FTAG

In [8, 9], we have described a calculus, extending the logic developed by Rounds and Kasper [4, 6], to encode the trees in an FTAG. We will very briefly describe this representation here. To understand the representation of adjoining, consider the trees given in Figure 2, and in particular the node η. The feature structures associated with the node where adjoining takes place should reflect the feature structure after adjoining as well as without adjoining. Further, the feature structure (corresponding to the tree structure below it) to be associated with the foot node is not known prior to adjoining, but becomes specified upon adjoining. Thus, the bottom feature structure associated with the foot node, which is b_foot before adjoining, is instantiated on adjoining by unifying it with a feature structure for the tree that will finally appear below this node. Prior to adjoining, since this feature structure is not known, we will treat it as a variable that gets instantiated on adjoining. This treatment can be formalized by treating the auxiliary trees as functions over feature structures (by λ-abstracting the variable corresponding to the feature structure for the tree that will appear below the foot node). Adjoining corresponds to applying this function to the feature structure corresponding to the subtree below the node where adjoining takes place. Treating adjoining as function application, where we consider auxiliary trees as functions, the representation of β is a function, say f_β, of the form (see Figure 2)

    λf.(t_root ∧ ... (b_foot ∧ f))

If we now consider the tree γ and the node η, then to allow the adjoining of β at the node η we must represent γ by

    (... t_η ∧ f_β(b_η) ∧ ...)

Note that if we do not adjoin at η, since t_η and b_η have to be unified, we must represent γ by the formula (... t_η ∧ b_η ∧ ...), which can be obtained by representing γ by

    (... t_η ∧ I(b_η) ∧ ...)

where I is the identity function. Similarly, we must allow adjoining by any auxiliary tree adjoinable at η (admissibility of adjoining is determined by the success or failure of unification). Thus, if β1, ..., βn form the set of auxiliary trees, then to allow for the possibility of adjoining by any auxiliary tree, as well as the possibility of no adjoining at a node, we must have a function F given by

    F = λf.(f_β1(f) ∨ ... ∨ f_βn(f) ∨ f)

and we then represent γ by (... t_η ∧ F(b_η) ∧ ...). In this way, we can represent the elementary trees (and hence the grammar) in an extended version of the Rounds-Kasper logic (the extension consists of adding λ-abstraction and application).

3 LFG AND TAG ANALYSES FOR LONG DISTANCE DEPENDENCIES

We will now relate the analyses of long distance dependencies in LFG and TAG. For this purpose, we will focus our attention only on the dependencies due to topicalization, as illustrated by sentences 1, 2, and 3 in Section 1. To facilitate our discussion, we will consider regular sets over labels (as used by the functional uncertainty machinery) as functions over feature structures (as we did for auxiliary trees in FTAG). In order to describe the representation of regular sets, we will treat all labels (attributes) as functions over feature structures. Thus, the label COMP, for example, is a function which, given a value feature structure (say v), returns a feature structure denoted by COMP : v. Therefore, we can denote it by λv.COMP : v. In order to describe the representation of arbitrary regular sets, we have to consider only their associated regular expressions. For example, COMP* can be represented by the function C* which is the fixed point(4) of

    F = λv.(F(COMP : v) ∨ v)(5)

Thus, the equation ↑TOPIC = ↑COMP* OBJ is satisfied by a feature structure that satisfies TOPIC : v ∧ C*(OBJ : v). This feature structure will have the general form described by TOPIC : v ∧ COMP : COMP : ... OBJ : v.

Consider the FTAG fragment (as shown in Figure 3) which can be used to generate sentences 1, 2, and 3 in Section 1. The initial tree α1 will be represented by cat : S̄ ∧ F(topic : v ∧ F(pred : telephoned ∧ obj : v)). Ignoring some irrelevant details (such as the possibility of adjoining at nodes other than the S node), we can represent α1 as

    α1 = topic : v ∧ F(obj : v)

Turning our attention to β1, let us consider the bottom feature structure of the root of β1. Since its COMP is the feature structure associated with the foot node (notice that no adjoining is allowed at the foot node and hence it has only one feature structure), and since adjoining can take place at the root node, we have the representation of β1 as

    λf.F(comp : f ∧ subj : (...) ∧ ...)

where F is the function described in Section 2.2.

(4) In [8], we have established that the fixed point exists.

(5) We use the fact that R* = R*R ∪ {ε}.
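Since this passage treats auxiliary trees as functions over feature structures, an executable rendering may be helpful. The sketch below is our own illustration, not the paper's calculus: feature structures are modelled as nested dictionaries, unify is ordinary recursive unification, make_aux builds a closure playing the role of f_β (flattened so that the root and foot contributions are joined along a single path of labels), and F forms the finite disjunction over a set of auxiliary trees.

    # Sketch: auxiliary trees as functions over feature structures.
    def unify(a, b):
        # Recursive unification of nested dicts; False signals failure.
        if a is None: return b
        if b is None: return a
        if not (isinstance(a, dict) and isinstance(b, dict)):
            return a if a == b else False   # atomic clash
        out = dict(a)
        for key, val in b.items():
            merged = unify(out.get(key), val)
            if merged is False:
                return False
            out[key] = merged
        return out

    def make_aux(t_root, b_foot, path):
        # f_beta = lambda f: t_root ∧ (path ... (b_foot ∧ f))
        def f_beta(f):
            inner = unify(b_foot, f)
            if inner is False:
                return False
            for label in reversed(path):    # wrap `inner` inside the path
                inner = {label: inner}
            return unify(t_root, inner)
        return f_beta

    def F(aux_functions):
        # F = lambda f: f_b1(f) v ... v f_bn(f) v f  (finite disjunction)
        def apply(f):
            results = [g(f) for g in aux_functions] + [f]
            return [r for r in results if r is not False]
        return apply

For instance, with beta1 = make_aux({"cat": "Sbar"}, {}, ["comp"]), the call F([beta1])({"obj": "v"}) yields both the adjoined reading (the object wrapped under comp) and the unadjoined one, mirroring the disjunction in the formula above.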
From the point of view of the path from the root to the complement, the NP and VP nodes are irrelevant, and so are any adjoinings on these nodes. So once again, if we discard the irrelevant information (from the point of view of comparing this analysis with the one in LFG), we can simplify the representation of β1 to

    λf.F(comp : f)

As explained in Section 2.2, since β1 is the only auxiliary tree of interest, F would be defined as F = λf.f_β1(f) ∨ f. Using the definition of β1 above, and making some reductions, we have

    F = λf.F(comp : f) ∨ f

This is exactly the same analysis as in LFG using the functional uncertainty machinery. Note that the fixed point of F is C*. Now consider α1. Obviously any structure derived from it can now be represented as

    topic : v ∧ C*(obj : v)

This is the same analysis as given by LFG.

In a TAG, the dependent items are part of the same elementary tree. Features of these nodes can be related locally within this elementary tree (as in α1). This relation is unaffected by any adjoinings on nodes of the elementary tree. Although the paths from the root to these dependent items are elaborated by the adjoinings, no external device (such as the functional uncertainty machinery) needs to be used to restrict the possible paths between the dependent nodes. For instance, in the example we have considered, the fact that TOPIC = COMP : COMP : ... : OBJ follows from the TAG framework itself. The regular path restriction made in functional uncertainty statements such as ↑TOPIC = ↑COMP* OBJ is redundant within the TAG framework.

4 COMPARISON OF THE TWO FORMALISMS

We have compared LFG and TAG analyses of long distance dependencies, and have shown that what functional uncertainty does for LFG comes out as a corollary in TAG, without going beyond the power of mildly context-sensitive grammars. Both approaches aim to localize long distance dependencies; the difference between TAG and LFG arises from the domain of locality that the formalisms provide (i.e., the domain over which statements of dependencies can be made within the formalisms). In the LFG framework, CFG-like productions are used to build the c-structure. Equations are associated with these productions in order to build the f-structure. Since the long distance dependencies are localized at the functional level, additional machinery (functional uncertainty) is provided to capture this localization. In a TAG, the elementary trees, though used to build the "phrase structure" tree, also form the domain for localizing the functional dependencies. As a result, the long distance dependencies can be localized in the elementary trees. Therefore, such elementary trees tell us exactly where the filler "moves" (even in the case of such unbounded dependencies), and the functional uncertainty machinery is not necessary in the TAG framework. However, the functional uncertainty machinery makes explicit the predictions about the path between the "moved" argument (filler) and the predicate (which is close to the gap). In a TAG, this prediction is not explicit. Hence, as we have shown in the case of topicalization, the nature of the elementary trees determines the derivation sequences allowed, and we can confirm (as we have done in Section 3) that this prediction is the same as that made by the functional uncertainty machinery.

4.1 INTERACTIONS AMONG UNCERTAINTY EQUATIONS

The functional uncertainty machinery is a means by which infinite disjunctions can be specified in a finite manner.
The reason that an infinite number of disjunctions appears is that they correspond to an infinite number of possible derivations. In a CFG-based formalism, the checking of the dependency cannot be separated from the derivation process. On the other hand, as shown in [9], since this separation is possible in TAG, only finite disjunctions are needed. In each elementary tree, there is no uncertainty about the kind of dependency between a filler and the position of the corresponding gap. Different dependencies correspond to different elementary trees. In this sense there is disjunction, but it is still only finite. Having picked one tree, there is no uncertainty about the grammatical function of the filler, no matter how many COMPs come in between due to adjoining. This fact may have important consequences for the relative efficiency of processing long distance dependencies in LFG and TAG. Consider, for example, the problem of interactions between two or more uncertainty equations in LFG, as stated in [2]. Certain strings in COMP* cannot be solutions for

    (f TOPIC) = (f COMP* GF)

when this equation is conjoined with (i.e., when it interacts with)

    (f COMP SUBJ NUM) = SING and (f TOPIC NUM) = PL

In this case, the shorter string COMP SUBJ cannot be used for COMP* GF because of the interaction, although the strings COMP^i SUBJ, i >= 2, can satisfy the above set of equations. In general, in LFG, extra work has to be done to account for interactions. On the other hand, in TAG, as we noted above, since there is no uncertainty about the grammatical function of the filler, such interactions do not arise at all.

4.2 REGULAR SETS IN FUNCTIONAL UNCERTAINTY

From the definition of TAGs, it can be shown that the paths are always context-free sets [11]. If there are linguistic phenomena where the uncertainty machinery with regular sets is not enough, then the question arises whether TAG can provide an adequate analysis, given that paths are context-free sets in TAGs. On the other hand, if regular sets are enough, we would like to explore whether the regularity requirement has a linguistic significance by itself. As far as we are aware, Kaplan and Zaenen [3] do not claim that the regularity requirement follows from linguistic considerations. Rather, they have illustrated the adequacy of regular sets for the linguistic phenomena they have described. However, it appears that an appropriate linguistic theory instantiated in the TAG framework will justify the use of regular sets for the long distance phenomena considered here.

To illustrate our claim, let us consider the elementary trees that are used in the TAG analysis of long distance dependencies. The elementary trees α1 and β1 (given in Figure 3) are good representative examples of such trees. In the initial tree α1, the topic node is coindexed with the empty NP node that plays the grammatical role of object. At the functional level, this NP node is the object of the S node of α1 (which is captured in the bottom feature structure associated with the S node). Hence, our representation of α1 (i.e., looking at it from the top) is given by topic : v ∧ F(obj : v), capturing the "movement" due to topicalization. Thus, the path in the functional structure between the topic and the object is entirely determined by the function F, which in turn depends on the auxiliary trees that can be adjoined at the S node.
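The interaction just described can be made concrete. The sketch below is illustrative only (the depth bound and the function names are our own assumptions): it enumerates instantiations COMP^i GF of the uncertainty string and filters them against the conjoined NUM equations, which is exactly the extra filtering work that does not arise in TAG.

    # Sketch: interaction between (f TOPIC) = (f COMP* GF) and NUM equations.
    def instantiations(gf, max_depth):
        # Concrete candidate paths COMP^i GF, i = 1 .. max_depth.
        return [["COMP"] * i + [gf] for i in range(1, max_depth + 1)]

    def survives_interaction(path):
        # COMP SUBJ would identify the topic (whose NUM is PL) with the
        # value whose NUM is fixed to SING, a clash; longer chains do not.
        return path != ["COMP", "SUBJ"]

    ok = [p for p in instantiations("SUBJ", 5) if survives_interaction(p)]
    # ok contains COMP^i SUBJ for i = 2..5 and excludes COMP SUBJ,
    # mirroring the filtering an LFG processor must perform.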
These auxiliary trees (the ones adjoinable at the S node), such as β1, are those that introduce complementizer predicates. Auxiliary trees in general introduce modifiers or complementizer predicates as in β1; for our present discussion we can ignore the modifier-type auxiliary trees. Auxiliary trees, upon adjoining, do not disturb the predicate-argument structure of the tree to which they are adjoined. If we consider trees such as β1, the complement is given by the tree that appears below the foot node. A principle of a linguistic theory instantiated in TAG (see [5]), similar to the projection principle, predicts that the complement of the root (looking at it from below) is the feature structure associated with the foot node and, more importantly, that this relation cannot be disrupted by any adjoinings. Thus, if we are given the feature structure f for the foot node (known only after adjoining), the bottom feature structure of the root can be specified as comp : f, and the top feature structure of the root is F(comp : f), where F, as in α1, is used to account for adjoinings at the root.

To summarize, in α1, the functional dependency between the topic and object nodes is entirely determined by the root and foot nodes of the auxiliary trees that can be adjoined at the S node (the effect of using the function F). By examining such auxiliary trees, we have characterized the latter path as λf.F(comp : f). In grammatical terms, the path depicted by F can be specified by the right-linear productions

    F → comp : F | f

Since right-linear grammars generate only regular sets, and TAGs predict the use of such right-linear rules for the description of the paths, as just shown above, we can state that TAGs give a justification for the use of regular expressions in the functional uncertainty machinery.

4.3 GENERATIVE CAPACITY AND LONG DISTANCE DEPENDENCY

We will now show that what functional uncertainty accomplishes for LFG can be achieved within the FTAG framework without requiring power beyond that of TAGs. FTAG, as described in this paper, is unlimited in its generative capacity. By placing no restrictions on the feature structures associated with the nodes of elementary trees, it is possible to generate any recursively enumerable language. In [9], we have defined a restricted version of FTAG, called RFTAG, that can generate only TALs (the languages generated by TAGs). In RFTAG, we insist that the feature structures associated with nodes are bounded in size, a requirement similar to the finite closure membership restriction in GPSG. This restricted system will not allow us to give the analysis of the long distance dependencies due to topicalization (as given in the earlier sections), since we use the COMP attribute, whose value cannot be bounded in size. However, it is possible to extend RFTAG in a certain way such that such an analysis can be given. This extension of RFTAG still does not go beyond TAG and thus remains within the class of mildly context-sensitive grammar formalisms defined by Joshi [1]. The extension is discussed in [10].

To give an informal idea of this extension and a justification for the above argument, let us consider the auxiliary tree β1 in Figure 3. Although we coindex the value of the comp feature in the feature structure of the root node of β1 with the feature structure associated with the foot node, we should note that this coindexing does not affect the context-freeness of the derivation.
Stated differently, the adjoining sequence at the root is independent of other nodes in the tree in spite of the coindexing. This is due to the fact that as the feature structure of the foot of β1 gets instantiated on adjoining, this value is simply substituted (and not unified) for the value of the comp feature of the root node. Thus, the comp feature is being used just like any other feature that can be used to give tree addresses (except that comp indicates dominance at the functional level rather than at the tree structure level). In [10], we have formalized this notion by introducing graph adjoining grammars, which generate exactly the same languages as TAGs. In a graph adjoining grammar, β1 is represented as shown in Figure 4. Notice that in this representation the comp feature is like the features 1 and 2 (which indicate the left and right daughters of a node) and is therefore not used explicitly.

[Figure 4: An Elementary DAG]

5 CONCLUSION

We have shown that for the treatment of long distance dependencies in TAG, the functional uncertainty machinery in LFG is redundant. We have also shown that the analyses provided by the functional uncertainty machinery can be obtained without going beyond the power of mildly context-sensitive grammars. We have briefly discussed some linguistic and computational aspects of these results. We believe that the results described in this paper can be extended to other formalisms, such as Combinatory Categorial Grammars (CCG), which also provide an extended domain of locality. It is of particular interest to carry out this investigation in the context of CCG because of their weak equivalence to TAG (Weir and Joshi [12]). This exploration will help us view this equivalence from the structural point of view.

REFERENCES

[1] A. K. Joshi. How much context-sensitivity is necessary for characterizing structural descriptions -- Tree Adjoining Grammars. In D. Dowty, L. Karttunen, and A. Zwicky, editors, Natural Language Processing: Theoretical, Computational and Psychological Perspectives, Cambridge University Press, New York, NY, 1985. Originally presented in 1983.

[2] R. M. Kaplan and J. T. Maxwell. An algorithm for functional uncertainty. In 12th International Conference on Comput. Ling., 1988.

[3] R. M. Kaplan and A. Zaenen. Long distance dependencies, constituent structure, and functional uncertainty. In M. Baltin and A. Kroch, editors, Alternative Conceptions of Phrase Structure, Chicago University Press, Chicago, IL, 1988.

[4] R. Kasper and W. C. Rounds. A logical semantics for feature structures. In 24th meeting Assoc. Comput. Ling., 1986.

[5] A. Kroch and A. K. Joshi. Linguistic Relevance of Tree Adjoining Grammars. Technical Report MS-CIS-85-18, Department of Computer and Information Science, University of Pennsylvania, Philadelphia, 1985. To appear in Linguistics and Philosophy, 1989.

[6] W. C. Rounds and R. Kasper. A complete logical calculus for record structures representing linguistic information. In IEEE Symposium on Logic and Computer Science, 1986.

[7] Y. Schabes, A. Abeille, and A. K. Joshi. New parsing strategies for tree adjoining grammars. In 12th International Conference on Comput. Ling., 1988.

[8] K. Vijay-Shanker. A Study of Tree Adjoining Grammars. PhD thesis, University of Pennsylvania, Philadelphia, PA, 1987.

[9] K. Vijay-Shanker and A. K. Joshi. Feature structure based tree adjoining grammars. In 12th International Conference on Comput. Ling., 1988.

[10] K. Vijay-Shanker and A. K. Joshi. Unification based approach to tree adjoining grammar. 1989. Forthcoming.

[11] K. Vijay-Shanker, D. J. Weir, and A. K. Joshi. Characterizing structural descriptions produced by various grammatical formalisms. In 25th meeting Assoc. Comput. Ling., 1987.

[12] D. J. Weir and A. K. Joshi. Combinatory categorial grammars: generative power and relationship to linear context-free rewriting systems. In 26th meeting Assoc. Comput. Ling., 1988.
TREE UNIFICATION GRAMMAR

Fred Popowich
School of Computing Science
Simon Fraser University
Burnaby, B.C. CANADA V5A 1S6

ABSTRACT

Tree Unification Grammar is a declarative unification-based linguistic framework. The basic grammar structures of this framework are partial descriptions of trees, and the framework requires only a single grammar rule to combine these partial descriptions. Using this framework, constraints associated with various linguistic phenomena (reflexivisation in particular) can be stated succinctly in the lexicon.

INTRODUCTION

There is a trend in unification-based grammar formalisms towards using a single grammar structure to contain the phonological, syntactic and semantic information associated with a linguistic expression. Adopting the terminology used by Pollard and Sag (1987), this grammar structure is called a sign. Grammar rules, guided by the syntactic information contained in signs, are used to derive signs associated with complex expressions from those of their constituent expressions. The relationship between the signs and the complex signs derived from grammar rule application can be expressed in derivational structures. These structures both explicitly illustrate relations that are implicit in the syntax of the signs and express relations that are present in the grammar rules.

Tree unification grammar (TUG) is a formalism which uses function-argument (FA) specifications as its primary grammar structures. These specifications resemble partially specified derivational structures of sign-based formalisms like head-driven phrase structure grammar (HPSG) (Pollard and Sag, 1987) and unification categorial grammar (UCG) (Zeevat, Klein and Calder, 1987). TUG uses FA specifications as lexical entries and possesses a single grammar rule which combines these specifications to obtain a specification for the complex expression being analysed. The use of FA specifications allows generalisations that are often captured in grammar rules to be captured in the lexicon.

MOTIVATION

The development of TUG was a consequence of investigating extensions to the UCG framework. As described by Zeevat, Klein and Calder (1987), UCG is a grammar formalism which combines some of the notions of categorial grammar with those of unification-based formalisms like HPSG and PATR-II (Shieber et al., 1983).

(*) The research described in this paper was carried out at the University of Edinburgh under the support of a British Commonwealth Scholarship, and at Simon Fraser University under an Advanced Systems Institute [...] Fellowship. Special thanks to the Centre for Systems Science and the Laboratory for Computer and Communications Research at Simon Fraser University for additional support. I would also like to thank [...] and the ACL reviewers for their comments.

Like HPSG, the fundamental construction used in UCG is the sign. A UCG sign has attributes for phonology, category, semantics and order. Consider the sign for the expression Mary walks shown in (1).

(1) Mary-walks : sent[fin] : [e1] [[f1]mary(f1), [e1]walk(e1,f1)] : _

The phonology attribute of this sign (i.e., Mary-walks) represents a phonological specification of the linguistic expression associated with the sign. For our needs we will use a simple sequence of words separated by hyphens. The category structure of a sign is very similar to that used by categorial grammar. There are three primitive categories, namely sent, np, and noun. Complex categories are of the form A / B, where B is a sign and A is a category (either primitive or complex).
The semantic representation uses a language called InL (Zeevat, Klein and Calder, 1987) which incorporates many of the features of discourse representation theory (Kamp, 1981). An InL formula is of the form [a]Condition, where Condition consists of a predicate name followed by its argument list. Each element of the argument list is either a variable (i.e., a discourse marker) or an InL formula. The variable a preceding Condition is the index of the formula.

The order attribute of a sign contains information which is used to determine the ordering of the phonology of components during rule application. If an argument possesses pre as its order, then the phonology of the functor precedes that of the argument in that of the result. The value post describes the opposite situation. There is no restriction on the order of (1), as indicated by the appearance of the 'don't care' variable '_' in the order attribute.

InL variables are assigned sorts. A sort can be thought of as a collection of features based on factors like gender and number. Unification of variables of incompatible sorts will fail, thus providing a mechanism by which semantic information can restrict possible derivations. There are different sorts for events, states and objects. Variables of the object sort may be further specified with respect to gender (masculine, feminine, or neuter) and number. Unsorted variables will be denoted by the letter a, events by e, states by s, and genderless objects by x, y, and z. The letter m will be used to represent variables corresponding to a masculine object, f for feminine, and n for neuter. Unique identifiers which will be used to distinguish variables will appear as numbers following the variable names (i.e., n1, m1, s2).

Signs may be underspecified, and through the application of the grammar rules they may become increasingly specified by the merging of information. Only two grammar rules are proposed in (Zeevat, Klein and Calder, 1987):

(2) W1-W2 : C : S : _ → W1 : C/(W2:C2:S2:pre) : S : _ , W2 : C2 : S2 : pre

(3) W2-W1 : C : S : _ → W2 : C2 : S2 : post , W1 : C/(W2:C2:S2:post) : S : _

They correspond to forward (2) and backward (3) functional application, the two rules in basic categorial grammar. Capital letters are used to denote variables that are associated with unspecified values which will be instantiated during a derivation. Colons are used to separate the different attributes of the sign when the sign is displayed in a horizontal rather than vertical manner. Consider the result of applying rule (3) to the two signs associated with Mary and walks, which are shown below.

(4) Mary : np : mary(f1) : _

(5) walks : sent[fin]/(_ : np[nom] : [x]S : post) : [e1] [[x]S, walk(e1,x)] : _

The result of rule application is the sign that was introduced in (1). Rule application builds up the semantics of an expression by instantiating unspecified components, like S in the lexical entry for walks (5), that have been placed into the semantic structure. Associated with every linguistic expression is a derivation tree, which describes how the sign corresponding to the complete expression is derived from grammar rules operating over signs associated with lexical entries. The leaves of this binary tree are labelled with signs for individual words, the root is labelled by the sign for the complete expression, and the other nonterminal nodes are associated with intermediate expressions. Each nonterminal node is labelled with the result obtained by applying a grammar rule to the signs which are referred to by its two daughter nodes.
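To make rules (2) and (3) and the derivation of (1) concrete, here is a small sketch of backward functional application over UCG-style signs. The encoding (a dataclass with phonology, category, semantics and order) and the shortcut of instantiating the functor's placeholders by direct substitution, rather than by full unification, are our own simplifications.

    # Sketch: UCG-style signs and backward functional application (rule (3)).
    from dataclasses import dataclass

    @dataclass
    class Sign:
        phon: str
        cat: object      # primitive name, or ("slash", result_cat, arg_sign)
        sem: object
        order: str       # "pre", "post", or "_" (don't care)

    def backward_apply(arg, functor):
        # W2-W1 : C : S : _ -> W2:C2:S2:post , W1 : C/(W2:C2:S2:post) : S : _
        kind, result_cat, wanted = functor.cat
        assert kind == "slash" and wanted.order in ("post", "_")
        # Crude category compatibility check (real UCG unifies the signs).
        assert wanted.cat.split("[")[0] == arg.cat.split("[")[0]
        # The functor's semantics is a function of the argument's semantics.
        sem = functor.sem(arg.sem)
        return Sign(arg.phon + "-" + functor.phon, result_cat, sem, "_")

    mary = Sign("Mary", "np", ("f1", [("mary", "f1")]), "_")
    walks = Sign("walks",
                 ("slash", "sent[fin]", Sign("W", "np[nom]", None, "post")),
                 lambda s: ("e1", [s, ("walk", "e1")]), "_")
    print(backward_apply(mary, walks).phon)   # Mary-walks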
The edges to the daughters of a nonterminal node are designated functor and argument, depending on the role that the sign at the daughter node plays during grammar rule application. As an example, the derivation tree provided in Figure 1 illustrates how backward functional application (BFA) (3) relates the signs for Mary (4) and walks (5) to the sign associated with Mary-walks (1). The functor edge of a nonterminal node is represented by a line darker than that of the argument edge.

Rule application combines signs and builds derivation trees as a side effect. A more general form of this operation would be to combine trees to yield trees directly. Partial descriptions of a complete derivation tree could be combined to yield an increasingly further specified derivation tree. The principal advantage of combining partial descriptions lies in the ease with which certain dependencies between different constituents can be described. Consider the general case in UCG where a functor is applied to an argument to produce a result. Each of these three constituents possesses its own set of features which describes the phonological, syntactic and semantic information associated with it (Bouma, Koenig and Uszkoreit, 1988). The relationship between these constituents is outlined in Figure 2. The information F associated with the functor can be dependent on the information G associated with the argument; the dependency relation is shown by the arc labelled φ in Figure 2. Such a dependency can be captured in the lexical entry for the functor, since the functor contains the information associated with the argument in its own category name (as highlighted in bold in Figure 2). We have already seen an example of such a dependency in Figure 1: the semantic information of the functor is dependent on that of the argument. While the dependency marked by φ can be captured in the lexicon in UCG, the dependency marked by ρ must be captured by the grammar rule; the grammar rule must state how the information F' associated with the result is obtained from that of the functor and that of the argument. If we adopt the premise that F = F', then ρ becomes an identity relation and there is no need for introducing additional grammar rules to capture a more complicated relation ρ. Unfortunately, there are cases where the condition F = F' does not apply. For instance, Bouma (1988) argues for the need of a lex feature which would distinguish lexical elements from phrases; a lexical functor and its result would have different values for this feature (+lex and -lex respectively). Similarly, if one wanted to encode bar level information (Jackendoff, 1977) into the different constituents, then there would be numerous cases where the bar level of a functor and that of its argument would not be the same. Most importantly though, we can provide a straightforward account of reflexivisation if we are not subject to the requirement that F = F', as we shall see shortly.

[Figure 1: Derivation Tree]

[Figure 2: Dependencies Between Constituents]

By using a partial description of a derivation tree as a lexical entry, dependencies corresponding to ρ in Figure 2 are captured in the lexicon instead of in the grammar rules. For instance, the BFA grammar rule states that the phonology of the resulting constituent consists of the phonology of the argument followed by that of the functor. The lexical entry for walks (5) implicitly describes such a relationship through the presence of the post feature. This feature is interpreted by the grammar rule, with the relation being explicitly represented in the result. If a partial description like the one introduced for walks in Figure 3 is used as a lexical entry, this relation is explicitly represented and the presence of a post feature is actually not necessary. Furthermore, local relationships other than those corresponding to φ and ρ can be captured explicitly in the lexical entry. For instance, the features associated with an argument can be dependent on those of its functor, and information associated with the result can be directly related to that of the argument. One could even have a more long-distance dependency, say between an argument and a subconstituent of its functor, stated directly in the lexical entry. Most importantly, the use of FA specifications similar to those introduced in Figure 3 allows us to capture the restrictions associated with reflexivisation in the lexicon, without requiring the introduction of additional grammar rules or principles.

FUNCTION ARGUMENT SPECIFICATIONS

Although the grammar rules operate over trees in TUG, signs still have a role to play in the organisation of information. The signs of TUG differ from those of UCG in several respects. First,
For instance, the BFA grammar role states that the phonology of the resulting coostitmmt consists of the phonology of the argument followed by that of the functor. The lexicel entry for walks (5) implicitly describes such a relationship through the presence of the post feature. This fcamre is interpreted by the grammar role, with the relation being explicitly represented in the result. If a partial description like the one introduced for wa/ks in Figure 3 is used as a lexical entry, this reladon is explicitly represented and the presence of a post fcstum is actually not necessary. Furthermore, local relationships other than those corresponding to ¢~ and p can be captured explicidy in the lexical entry. For instance, the features associated with an argmnent can be dependent on those of its functor and information associated with the result can be directly related to that of the argument. One could even have a more long distance dependency, say between an argument and a subconstitoent of its funetor, stated dimctiy in the lexical entry. Most importantly, the use of FA specifications similar to those introduced in Figure 3 allows us to capture the restrictions associated with reflexivisation in the lexicon, without requiring the introduction of additional grammar rules or principles. FUNCTION ARGUMENT SPECIFICATIONS Although the grammar rules operate over trees in TUG, signs still have a role to play in the organisation of information. The signs of TUG differ from those of UCG in several respects. First, 229 order information is not an explicit part of the TUG sign. The subcategorisation information that is contained in the UCG sign is not present in the TUG sign; it is represented in the tree structures of the framework instead. On a point of terminology, the second attribute of the TUG sign is referred to as the syntax instead of the category, since it contains more than just categorial information. Finally. the TUG sign will also contain an attribute for binding information. For now, however, we will restrict our discussion to only the fh'st three attributes of a TUG sign. <a> [sl] _ every-W [np,C] [sl] impl([x]S) -- every o.: W [det] [noun,C] [sl] impi [x]S • > man [noun,_] man(ml) <~> p: W-walks [u~*,fin) LJ P([x]S) (walk(el,x)) W walks [np,nom] {v,fin] {_] P([xIS) walk(el,x) wa/k~ Figure 3: Lexical Entries In TUG, a binary tree called an FA specification is associated with every linguistic expression. These specifications resemble parl~l descriptions of derivation trees. Each node of this binary tree is labelled with a sign. The root node possesses a sign corresponding to the complete expression, while the leaves are labelled with signs for the component words or morphemes. Each nonterm/nal node dominates a functos node aud an argument node. The terms functor-sign and argument-sign will be used to refer to the signs associated with the functor and argument nodes respectively. The left-to-right ordering of functor and argument edges is not relevantl To refer to the sign of the root node of s tree, the term root.sign will be used. The tees rooted at nonterminal nodes of an FA specification will be called subtrees. An FA specification contains an auxiliary list which specifies subtmes of the FA spe~:ification with which other FA specifications must be unified. It is represented as a list of labels contained in angle brackets appearing to the left of the FA specification as illustrated in the lexical entries introduced in Figure 3. 
Observe that there are two edges leading from the functor-sign of the FA specification for every which do not lead to any nodes. These hanging edges are associated with nodes whose terminal or nonterminal status has not yet been established. So an FA specification may state that a constituent has no subconstituents (terminal node sign), state that it has subconstituents (nonterminal node sign), or say nothing about whether or not a constituent possesses subconstituents (node with hanging edges).

The single grammar rule of TUG is introduced in (6), where H_α denotes an FA specification with auxiliary list α. It describes how the FA specification for a complex linguistic expression is obtained from unification of the FA specifications associated with component expressions. This rule states that an FA specification C (which will be called the auxiliary tree), possessing an empty auxiliary list [], is unified with the subtree of H described by the first element of the auxiliary list of H. [C|α] denotes the list formed by adding C to the front of the list α. The result of this rule is a more fully instantiated version of the primary tree H, whose auxiliary list consists of all but the first element of the auxiliary list of the primary tree. Viewed procedurally, this rule states how to construct a new FA specification from two pre-existing FA specifications. Declaratively, the rule merely states a relationship between FA specifications.

To illustrate how FA specifications are manipulated by this single grammar rule, we will trace the construction of the FA specification associated with the sentence Every man walks, using the lexical entries introduced in Figure 3. The lexical entry for every requires an auxiliary tree to be unified at the location marked by α. For the moment, let us examine the subtree associated with the argument of the lexical entry. This subtree describes a functor-argument relation between two linguistic expressions. One is a functor noun of unspecified case C possessing an index compatible with the 'entity' sort, as designated by the presence of x, while the other is an argument determiner with phonology every. Alternatively, one could view the determiner as a functor over the noun, as suggested in (Popowich, 1988). However, treating the noun as the functor allows a uniform treatment of nouns with possessive determiners and those with 'regular' determiners. This is the same treatment that has been adopted in HPSG (Pollard and Sag, 1987). We will propose that for any subtree the functor-sign and the root-sign will generally possess the same syntactic category information, except for bar-level information (Popowich, 1988), in a manner reminiscent of the head feature convention of GPSG (Gazdar et al., 1985). Observe that the phonology of the root-sign of this subtree is that of the argument-sign followed by that of the functor-sign. The argument-sign introduces a semantic index of the 'state' sort, which will also be the index of the InL formula of any constituent which possesses a universally quantified noun phrase as its argument. This means that sentences like Every man walks will describe a state, even though the word walks describes an event. This argument-sign also introduces the semantic connective impl, which is associated with the universal quantifier.

[Figure 4: Intermediate FA Specification (for every man)]

When the FA specification for man is treated as a (depth zero) auxiliary tree which is unified with α from the lexical entry for every, we get a more instantiated FA specification which is associated with every man. This specification, which is introduced in Figure 4, is similar to the lexical entry for every except that x has been instantiated to m1, S to man(m1), and W to man. It also differs from the lexical entry for every in that it does not possess any labelled subtrees with which an auxiliary tree could be unified. As an abbreviatory convention, the index preceding a predicate which contains the index as its first argument will be omitted. So man(m1) is actually an abbreviation for [m1]man(m1), and walk(e1,x) is an abbreviation for [e1]walk(e1,x).

The FA specification for every man can act as an auxiliary tree to be unified with β from the lexical entry for walks shown in Figure 3. Any potential auxiliary tree must have an argument-sign whose syntax is compatible with the 'nominative noun phrase' specification. No restrictions are placed on the indices of the root and argument signs; these indices will be specified by the auxiliary tree. The lexical entry for walks states how the semantics of the root-sign is formed from that of its functor and argument signs. When the FA specification for every man is combined with this primary tree, P of the primary tree is unified with impl of the auxiliary tree, x is instantiated to m1, and S is unified with man(m1). C of the auxiliary tree is instantiated to nom. The resulting FA specification is shown in Figure 5.

[Figure 5: Final FA Specification (for Every man walks)]

The FA specification for the complete sentence describes exactly one FA structure. While FA specifications may contain variables and partially instantiated attributes, FA structures do not. The lexical entries of TUG can be viewed as contributing constraints to the FA structure that is associated with a complex linguistic expression, with the single grammar rule being used to combine these constraints. During the analysis of an expression, constraints are continually proposed and never rescinded. Eventually, these constraints will describe the final FA structure(s). Thus we distinguish between information structures and the descriptions of those structures, in a manner similar to the approach proposed by Kaplan and Bresnan (1982) and discussed in detail by Johnson (1987). An FA specification can be interpreted as describing a set of FA structures. Grammar rule application then corresponds to the intersection of the sets associated with the component FA specifications. The resulting set is associated with a new FA specification. If the resulting set contains no FA structures, then there is no FA specification associated with the resulting set: grammar rule application fails. An ungrammatical sentence (i.e., one without an FA structure) will not be assigned an FA specification. The result of the grammatical analysis of a sentence is the set of FA structures described by the final FA specification. Grammatical sentences can have one or more FA specifications, each of which will describe at least one FA structure. We are requiring a wellformed FA specification to describe at least one FA structure.
In this respect (the requirement that a wellformed description actually describe at least one structure), FA specifications differ from the description languages introduced in (Kasper and Rounds, 1986) and in (Johnson, 1987). These languages allow descriptions for which there may not be associated structures. FA specifications are actually higher order descriptions which may be defined in terms of these description languages. They are intended to (transparently) describe structures associated with linguistic expressions; they are not intended to be a powerful language for describing feature structures in general. Instead of using FA specifications to describe FA structures, we could use one of these lower level description languages in conjunction with a restriction requiring a wellformed description to describe at least one structure.

In TUG, many local dependencies between grammatical constituents, and some other bounded relationships, can be stipulated explicitly in lexical entries. This is because the FA specification for one lexical entry can directly access information contained in the sign associated with a different linguistic expression. For instance, we have already seen how the lexical entry for a quantifier can directly specify semantic information (the index) for a sentence in which it is contained. It is possible to incorporate the constraints on reflexivisation perspicuously in the lexicon without causing unnecessarily complicated lexical entries and without requiring the introduction of additional principles or grammar rules.

REFLEXIVE ANTECEDENT INFORMATION

The TUG treatment of reflexives will be based on the concept of reflexive antecedent information, henceforth R-antecedent information. R-antecedent information, which will be distinct from the semantic information contained in a sign, will be responsible for determining the antecedents of reflexive pronouns. The constraints on reflexivisation will determine how the R-antecedent information of one sign is related to the information contained in other signs of an FA structure. Since the signs corresponding to the reflexive and its antecedent need not both be present in the FA specification for a verb (as illustrated in sentences like John wrote a book about a picture of himself), we will introduce a reflexive attribute into the TUG sign. This 'binding' attribute will contain the R-antecedent information needed for establishing an anaphoric relationship between the reflexive and its antecedent. Since we have already seen the type of information contained in the first three attributes of the sign, let us consider the information contained in the fourth attribute. The antecedent information is responsible for determining the discourse marker that can be the antecedent of the pronoun. Based on a proposal for the treatment of personal pronouns described in (Johnson and Klein, 1986), we will propose that the R-antecedent information explicitly describes the set of potential discourse markers available as antecedents for reflexives. This is the information that will be contained in the reflexive attribute of a sign. The lexical entry for the reflexive will only need to state that its antecedent marker is an element from this store. Unlike the Cooper storage mechanism described in (Cooper, 1983), which has been adopted in various proposals for anaphora (Bach and Partee, 1980; Gazdar et al., 1985), our reflexive attribute contains a set of antecedents, not a set of anaphors.
The R-antecedent information will be represented as an ordered list of discourse markers (sorted variables) corresponding to potential antecedents. Lists will be displayed in square brackets with the different elements separated by commas. The notation [...x|_] will be used to designate x as an arbitrary element from a list, with [x|A] denoting the list resulting from the addition of an element x to a list A. The sign associated with a reflexive pronoun will resemble the one shown in (7).

(7) himself : [np,obj] : true(m) : [...m|_]

The discourse marker appearing in the semantic formula associated with the reflexive pronoun is an arbitrary element (of the masculine sort) of the reflexive attribute of the pronoun. The condition true introduced in the semantic attribute is always satisfiable for any discourse marker. We will discuss the semantics of the reflexive pronoun in more detail shortly. The operation of selecting an arbitrary element from a list of arbitrary length is a fairly powerful operation. Nevertheless, it seems to be a sufficiently primitive operation to be included in a framework. It cannot be expressed in the PATR-II framework (Shieber et al., 1983), which is often used to implement grammars. If functional uncertainty (Kaplan, Maxwell and Zaenen, 1987) were included as a primitive in PATR-II, then this arbitrary element selection operation could be implemented.

The constraints on reflexivisation, which affect the distribution of R-antecedent information and its interaction with other forms of information, are incorporated directly into the TUG lexical entries. One constraint is derived from Keenan's (1974) proposal whereby the antecedent for a pronoun is an argument of the functor containing the pronoun. This can be incorporated into TUG by having the R-antecedent information of a functor consist of the R-antecedent information of its parent sign augmented with the semantic index(1) of its argument. To illustrate this 'flow' of R-antecedent information, consider an analysis of the simple sentence Mary loves herself. A series of FA specifications corresponding to different stages of an analysis for this sentence are shown in Figure 6. To highlight the relevant information, much of the information contained in the signs of these FA specifications has not been displayed. The first FA specification corresponds to the lexical entry for loves. Observe that the R-antecedent information of the functor-sign consists of the semantic index of the argument-sign; the reflexive attribute of the sign associated with the object noun phrase is the same as that of the constituent which contains it. Also note that the InL formula from the sign associated with the verb references the semantic indices of the signs for the two noun phrases. The second FA specification from Figure 6 illustrates the effect of unifying a sign (actually a depth zero tree) corresponding to the noun phrase Mary with the argument-sign of the initial FA specification. Note that the semantic index f1 of Mary is introduced into the reflexive attribute of the functor over Mary. It also appears as the second argument of the semantic predicate love (underlined in the FA specification). Since the lexical entry for the verb also embodies the relation requiring the reflexive attribute of an argument-sign to contain the same information as its parent sign, f1 is also introduced into the sign associated with the object noun phrase. This 'flow' of R-antecedent information is highlighted by the dark arrows in Figure 6.
In the final FA specification from this figure, a sign corresponding to the reflexive pronoun is unified with the sign of the object noun phrase in the FA specification. The reflexive pronoun obtains its semantic index from the information contained in its reflexive attribute, as highlighted by the small arrow. This semantic index is used as the final argument in the InL formula associated with the verb (which is underlined in the FA specification). By incorporating Keenan's (1974) proposed dependency into FA specifications in this manner, we obtain a relationship much like predication-command (Hellan, 1988) and F-command (Chierchia, 1988).

Although these 'command' restrictions on reflexivisation can account for much of the data concerning the distribution of reflexive pronouns, additional restrictions are necessary (Popowich, 1988). Just as the syntactic c-command relation needs to be used in conjunction with a locality restriction (e.g., the syntactic 'clause-mate' restriction), the distribution of R-antecedent information is restricted by a semantic locality restriction. Such a restriction, which is proposed in Pollard and Sag (1983), essentially states that reflexive 'information' cannot pass through categories of a generalised predicative type. A generalised predicative takes an NP denotation as its argument, and returns either an NP denotation or a 'proposition.' Adopting the notation used in (Dowty, Wall and Peters, 1981), the semantic type of a functor that takes expressions of semantic type α as arguments to produce resulting expressions of type β is <α,β>. This means that the semantic type of a generalised predicative is either <NP',NP'> or <NP',S'>, where NP' and S' are the semantic types associated with noun phrases and sentences respectively. Conventional categories that are associated with generalised predicatives include possessed nominals (like picture of himself in the phrase John's picture of himself) and verb phrases.

(1) The treatment of reflexivisation described in (Popowich, 1988) uses the anaphoric index rather than the semantic index of the argument. Since the two indices are identical in most cases, we will simplify our discussion by using the semantic index.

[Figure 6: Distribution of R-Antecedent Information, showing the stages (i) W-loves-W', (ii) Mary-loves-W' and (iii) Mary-loves-herself]
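The 'arbitrary element' selection that the reflexive performs against this flow of R-antecedent information can be pictured as follows. This is an illustrative sketch with assumed encodings (discourse markers as strings whose first letter carries the sort, as in the paper's f1, m1, n1 naming), not the framework's implementation.

    # Sketch: resolving a reflexive against its R-antecedent list.
    def sort_compatible(marker, wanted_gender):
        # f1 is feminine, m1 masculine, n1 neuter, e1 event, s1 state.
        return marker.startswith(wanted_gender)

    def resolve_reflexive(r_antecedents, wanted_gender):
        # `herself` requires some member of the reflexive attribute whose
        # sort is feminine; unification fails if there is none.
        return [m for m in r_antecedents if sort_compatible(m, wanted_gender)]

    # Mary loves herself: the functor over the object NP carries [f1]
    # (the subject's index), so the reflexive can only resolve to f1.
    print(resolve_reflexive(["f1"], "f"))   # ['f1']
    print(resolve_reflexive(["m1"], "f"))   # []  -> the analysis fails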
These fcamres are not actually necess•ry for our account of the dism'butiun of reflexive pronouns; our restrictions on reflexivisation can be defined in terms of other basic features. The use of these features will allow the behsvionr of R-antecedent information to be observed more easily, as illustrated in Figure 7. 2 Foe predicative functors, the R- antecedent information of the funotor-sign is composed of the semantic index of the argument-sign and the R-antecedent information from the root-sign. Note that the R-antecedent information of the sign labelled a is not included in that of the generaliscd predicative, but the semantic index of the argument- sign of a is included in that of the functor. For nun-predicative functors, the R-ante¢~lent information of the root-sign will be the same as that of the functor-sign. AN EXAMPLE Now that we have seen bow R-antecedent information can be incorporated into FA specifications, we can exmnine how this infonnatiun interacu with other forms of infonnatiun during the analysis of a more complex sentence. We shall consider the analysis of the smtence Mary Iove~ a picture of herself. After introducing various lexical entries, we shall see how they arc combined with lexical entries introduced earlier in this paper to form more complex FA specifications. shmcsd ot u..~p~ them thee di~m~t ~ dlmcdy iutl~ vmlmm I~iod ran'ms, tl~y c~m bo mn~d~l in L.-;~t to~c.,~ whlch cm tm us0d in lask:e/ ca~ (Sbmbmoud~ 19~.Popowlch, 19~), All otthotazi~/mm~ ~dBmd m ~ i~l~ cm I~ s~plifizd tlm~lh tl~ m of Imld~. In the lexical enu 7 for herself in Figure 8, it is the argument- sign that is assoc~ted with the linguistic expression herself. This sign contains • restriction [...f/_] which specifies that the semantic index f associated with herself is • member of the reflexive attribute of the sign. This arbitrary element of the reflexive store is required to be • variable of the feminine sort. The s~tex of this sign states that herself can act only as a noun phrase of the objective case. Thus it cannot appear in any positions in an FA specification which require the noun phrase to possess some other case. like no,~ive. ~e other noun phrases, the argument-sign contains the semantic connective and which will be used in determining the semantics of the font-sign. Unlike lexical entries for proper names and quantified noun phrases, the semantics of the argument-sign does not associate my restrictive condition on the index it introduces; the condition truc is always rafsfiable for any discourse marker. This ties in with the view of pronotms being semantically underspecified linguistic items. Viewed in terms of DRT (Kamp, 1981), the fonnule tru~(.O (which is an abbreviation for [f]true(/~) merely introduces a discourse marker into the universe but does not introduce any condition on that marker. Since the syntax of our ~antic notation requires a formula to consist of an index- condition pair, we need to introduce a condition like true along with the discourse marker. <> [a] -- herself [np,obj] [t] and(u~e~O) ~ __ [...ft_] Figure 8: Lexical Entry for herself The Icxical entry for the 'depicfive' preposition of. which is used in picmre-nonn constructions, is introduced in Figure 9. Of takes an object noun phrase argument to form a constituent which modifies a common noun. Additional restrictions would be required to ensure that it modifies only depictive nouns like picture and portrait. 
Tim lexical entry requires an auxiliary tree corresponding to an object noun phrase to be unified with 0t and one for a noun to be unified with [~. It also introduces a semantic formula of(x,y) which requires the entity denoted by x to be of the entity denoted by y. Semantic formulae of the form [aI[A,B] are sbbreviatiuns for formulae of the form [a]and(A)(B). The functor-sign of a has been specified as • generalised predicative - it takes • noun phrase as an argmnent and results in another noun phrase. According to our restrictions on R-antecedent information, the R-antecedem information A of the root-sign of a is not included in that of the generalised predicative but it is included in that of the argument-sign. In this way, the same R-antecedent information that is associated with the root-sign of 0t is also available to the embedded noun phrase (ie. the argument of ot) as highlighted in bold in Figure 9. The functor-sign of the lexical entry for of possesses the feature +prd since it takes a noun phrase as its argument to produce a noun. Since an argument sign always inherils its R-antecedent information from the root-sign, the same R-antecedent infomaation is associated with both the root-sign of the lexical entry and the embedded phrase. In order to obtain the FA specification for picture of herself shown in Fignrc I0, the lexical enU 7 for herself acts as the 233 <~> W-of-W' [hoLm] [x][[x]S, [alP([y]S')(of(x,y))] A ~: w [noun,+prd] [xlS [xlA] c~ of-W" {np,of] [a] P([y]S')(of(x,y)) A W' of [np,obj] [np,of, gprd] (_]P([y]$') of(x,y) A [y] Figure 9: Lexical Entry for of auxiliary tree which is unified with cz of the lexical enu 7 for of, and the lexical entry for picture is unified with [3. Since [f]and(tru~O~ ) is an abbreviation for [j~and([f]tru~O~ ) in Figure 8, the unification of this formula with [_]P([y]S') from the primary tree will result in P becoming instantiated to and, y to~ and 5" to true(/). Note that in this example, P is a variable over our (finite) set of semantic connectives. The FA specification for herself introduces a restriction on the reflexive auribote of the sign associated with herself This restriction requiresfto be a member of the list A which is still uninstantiated. To represent that the restriction [ ...f/_] was unified with A, we will introduce A as a subscrila on this restriction in the FA specifications that we are discussing. This will make it easier to examine the behaviour of R-antecedent information. The lexical entry for the noun picture introduces a marker of the neuter sort, n/, and includes a condition which requires this marker to be a picture pie(M). When this lexical entry is combined with the FA specification for of herself, x from the primary tree gets instantiated to the variable associated with the picture nl. Note that [nl]and(true(jO)(of(nld~) is equivalent to [ni]of(nldO. < • picture-of-herself [noun] [nl][pic(nl), of(nl,f)] A of-herself picture [np,of] [noun,+prd] [nl| and(true( f))(o f(n I ,f)) pic(nl) A [nl I A] herself of [np,obj] [~,of, gprd] [t']end(true(O) of(nl.O [...fw_] A [tl Figure I0: FA Specification for a picture-noun The FA specification for the determiner a is very similar to the one for the universal quantifier introduced in Figure 3. We will not discuss it in detail here. Instead we will just note that it is constructed so that the reflexive attribute of the mot-sign of the FA specification for the phrase a picture of herself will be the same as that of the sign associated with the complex noun picture of herself. 
Since the reflexive attribute of the sign associated with this complex noun is the same as that of the embedded reflexive noun phrase (see Figure I0), this means that the R-antecedent information, A, of the complex noun phrase a picture of herself is the same as that of the embedded noun phrase associated with the reflexive pronoun. So, any antecedents available to the complex noun phrase will also be available to the embedded reflexive. This will result in the appropriate distribution of R-antecedent when the FA specification associated with a picture of herself acts as an auxiliary tree to be combined with the primary tree corresponding to the lexical entry for love~. The lexical entry for the transitive verb loves (Figure 11) requires two auxiliary trees corresponding to its ohjea and subject noon phrases to be unified with suhtrees a and [3 respectively. It is structured in much the same way as the lexical entry for walks discussed earlier. Note that for a, the functor-sign is not a generalised predicative and so the R-antecedent information of the functor sign is made up of the semantic index y of the argument-sign and the R-antecedent information [x] of the root- sign. [3 does have a generalised predicative functor-sign, so the R-antecedent information A' of the root sign is not included in that of the generalised predicative, [x]. < o., ~• [3: W-Ioves-W' [sengfin] [_] P( [x]S)([a']P'([y]S')(Iove~ s 1,x,y))) A" W a: loves-W' [rip, nom] [v,fin, gprd] [_]P([xlS) [a']P'([ylS ")(love( s l,x,y)) A' [x] W' loves [np,obj] [v,fin,+prd] [1P'([y}S') Iove(s l,x,y) [x] [y,x} / \ Figure II: Lexical Entry forloves When the lexical entry for loves takes the FA specification for a picture of herself as an auxiliary tree to be unified with a, the reflexive attribute A from the auxiliary tree becomes instantiated to [x]. But recall that there is still an additional restriction placed on the A which requires f to be an arbitrary member of A. This means that f must be unified with x; the subject of the verb is stipulated to be an entity possessing a marker of the feminine sort as illustrated in Figure 12. Unification of the auxiliary tree with a also results in y being instantiated to the variable associated with the picture hi. The semantic formula PIC(nld~ in Figure 12 is an abbreviation for the somewhat lengthy formula trill [pie(M), oj~nl J)]. When the FA specification from Figure 12 is combined with the auxiliary tree corresponding to the lexical entry for Mary, the variable f from the primary tree becomes insmntiated to the discourse marker associated with Mary. An attempt to unify an FA specification for a 'masculine' noun phrase with [3 of the primary tree would fail since the nominative noun phrase is required to possess a semantic index of the feminine son (as shown in bold). Thus, for a sentence like John loves a picture of herself there would be no FA spedfication and consequently no FA structure (unless there were some female entity named John). COMPARISON The name "Tree Unification Grammar" suggests that TUG might be related to other unification-based frameworks as well as to other tree-based frameworks. We shall briefly compare TUG with some of the beuer known of these related frameworks. 
A more detailed discussion can be found in (Popowich, 1988).

Uszkoreit (1986) introduces Categorial Unification Grammar (CUG) as a class of grammars which combine the features of categorial grammars with those of unification grammars. In CUG, directed acyclic graphs (DAGs) are used as the basic grammar structures. Grammatical constituents possess attributes for phonology, syntax, and semantics. These constituents are essentially the signs of CUG. Two grammar rules, for forward and backward functional application, are used to form new constituents. CUG is similar to PATR-II in that it could serve as a language into which TUGs could be translated. A potential disadvantage of CUG is that it might be too unrestricted in the type of operations that it allows (van Benthem, 1987). In addition, the type of structures allowed in TUG is very restricted (binary trees containing only a fixed number of attributes) while those allowed in CUG are much less restricted.

The structures used by TUG, UCG and other formalisms can be translated into a low-level format consisting of CUG DAGs. A major shortcoming of using CUG or PATR-II as a linguistic formalism is that the dependencies that are necessary for determining anaphoric relationships are 'hidden' in the DAG describing the linguistic expression; information is distributed in a flat graph structure with no higher-order grouping expressed. Although this may be beneficial with respect to implementing grammars, it can make it difficult to work with the structures. The advantage of the FA structure is that it is an explicitly hierarchical representation structure, a tree with structured nodes instead of a graph of simple nodes. This hierarchical structure allows many linguistic generalisations, particularly those associated with reflexivisation, to be stated easily and transparently.

Tree adjoining grammars (TAGs) (Joshi, Levy and Takahashi, 1975; Vijay-Shanker and Joshi, 1988) possess trees as basic grammar structures, and grammar rules are used to alter the structure of these trees. The relationship between TUG and TAG is very superficial, as will be illustrated after a short description of the framework. A TAG contains initial trees and auxiliary trees. Initial trees are defined as n-ary trees possessing only terminal symbols as leaves. The leaves of an auxiliary tree are all terminal symbols except for a single nonterminal, the foot, which is of the same category as the root of the tree. These two types of trees comprise the class of elementary trees. There is a tree adjoining operation which is used to form derived trees. Application of this rule results in the insertion of auxiliary trees into the middle of initial trees or other derived trees, subject to specific restrictions. TAGs are fundamentally different from TUGs since the adjoining operation alters the structure of the tree instead of merely further instantiating it. Adjoining involves the insertion of trees at internal nodes, while the TUG operation can be viewed as the overlaying of trees to form larger structures. The TAG framework has fully specified trees that are modified by other fully specified trees in order to obtain more complex fully specified trees. In TUG, partially specified trees are combined (not modified) in order to obtain a more fully specified complex tree.
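To make the contrast concrete, the following toy Python sketch (entirely our own simplification; real TUG signs carry phonological, syntactic, semantic and R-antecedent information, and all names here are ours) treats a partially specified binary tree as a node with a feature dictionary and optional left/right subtrees, with None standing for an unspecified subtree (a hanging edge). Combining two such trees is overlaying, i.e. node-by-node unification, never insertion at internal nodes:

FAIL = object()  # distinct sentinel for unification failure (None means "unspecified")

def unify_signs(s1, s2):
    # signs idealised as flat feature dictionaries that must agree pointwise
    if s1 is None: return s2
    if s2 is None: return s1
    merged = dict(s1)
    for k, v in s2.items():
        if k in merged and merged[k] != v:
            return FAIL
        merged[k] = v
    return merged

def overlay(t1, t2):
    # a tree is None (unspecified) or {'sign': ..., 'left': ..., 'right': ...}
    if t1 is None: return t2
    if t2 is None: return t1
    sign = unify_signs(t1.get('sign'), t2.get('sign'))
    left = overlay(t1.get('left'), t2.get('left'))
    right = overlay(t1.get('right'), t2.get('right'))
    if FAIL in (sign, left, right):
        return FAIL
    return {'sign': sign, 'left': left, 'right': right}

t1 = {'sign': {'cat': 'np'}, 'left': None, 'right': None}
t2 = {'sign': {'case': 'nom'}, 'left': None, 'right': None}
print(overlay(t1, t2))  # {'sign': {'cat': 'np', 'case': 'nom'}, 'left': None, 'right': None}

The point of the sketch is purely structural: the result is a further instantiation of both inputs, exactly as in the overlay picture above, whereas TAG adjoining would splice new material between a node and its descendants.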
Feature structure based TAGs (FTAGs) (Vijay-Shanker and Joshi, 1988) are more closely related to TUG than traditional TAGs. The adjoining operation of FTAG amounts to combining a description of the auxiliary tree with that of the tree into which it is adjoined. In this way, a more complete description of the final tree is gradually constructed. However, in FTAG tree descriptions the internal tree structure is not fixed. The descriptions are organised so that additional trees may be adjoined at specific locations. After all the required adjoining operations have been performed, these gaps in the tree structure are closed via unification. In TUG tree descriptions (FA specifications) the internal tree structure is fixed; the fringe nodes of the FA specification are the only ones for which tree structure information may not be specified (as designated by the hanging edges described earlier).

The most closely related grammar formalism to TUG is HPSG as described in (Pollard and Sag, 1987). The phrasal signs of HPSG are almost notational variants of the FA specifications of TUG; phrasal signs were not present in the early forms of HPSG (Pollard, 1985) from which UCG and TUG evolved. Aside from the slightly different appearance of these different structures, FA specifications are slightly more restrictive in that a node may only have two descendents instead of the unlimited number allowed in HPSG. TUG also differs from HPSG in that it requires only one (instead of two) grammar rules. This is a consequence of TUG having essentially phrasal signs as lexical entries. In this way, a lexical entry can directly access information other than that associated with its sister signs in a derivation tree (or phrasal sign). This allows interesting proposals for the treatment of reflexives in controlled complements and unbounded dependency constructions, which are discussed in detail in (Popowich, 1988).

SUMMARY

In TUG, the phonological, syntactic, semantic and antecedent information describing linguistic expressions is contained in signs which are organised into FA structures. These FA structures are binary trees which encode the functor-argument dependencies between the signs corresponding to components of a complex expression. Partial specifications of FA structures are associated with individual lexical entries, and these FA specifications are combined by a single grammar rule. Dependencies between information associated with different linguistic constituents that are traditionally captured by grammar rules are captured explicitly in the TUG lexical entries. TUG can in some sense be viewed as a 'lexicalised' UCG, where 'lexicalised' is used in the sense discussed in (Schabes, Abeille and Joshi, 1988). However, the FA structures described by a TUG analysis of a sentence are difficult to obtain as derivation trees in UCG. As discussed earlier, the UCG grammar rules require the semantic attributes of the root-sign and functor-sign of any subtree to be the same. Additional grammar rules would be needed by UCG to allow the different relationships between semantic information and to allow the three different relations between the R-antecedent information of a root-sign and functor-sign.
The R-antecedent information of a functor-sign can either be the same as that of the root-sign (non-predicative functors), or it can consist of the semantic index of its argument in addition to the R-antecedent information of the root-sign (predicative functors), or it can contain only the semantic index of its argument (generalised predicative functors). The R-antecedent information contained in FA specifications is treated on a level equal to the other forms of information; there is no need to invoke special mechanisms for passing this information. Its distribution is governed by the predication command and generalised predicative constraints.

The reflexive attribute of the sign contains information that might be needed by a reflexive pronoun. So if a sign for a reflexive pronoun appears in an FA specification, the possible antecedents for the reflexive are easily accessible. During tree unification, if the sign associated with a reflexive pronoun contains no variables of the appropriate sort in its reflexive store, then the use of the pronoun is ungrammatical and tree unification fails. Since an FA specification is associated with each potential antecedent of a reflexive pronoun, failure of anaphora resolution can constrain possible analyses; if there is no possible antecedent for a reflexive, there will not be an FA specification.

REFERENCES

Bach, Emmon, and Barbara Partee. (1980). Anaphora and Semantic Structure. In C. Masek, P. Hendrick and M. Miller (Eds.), Papers from the Parasession on Language and Behavior at the 17th Regional Meeting of the Chicago Linguistics Society. Chicago, IL.

Bouma, Gosse. (1988). Modifiers and Specifiers in Categorial Unification Grammar. Linguistics, 26(1), 21-46.

Bouma, Gosse, Esther König, and Hans Uszkoreit. (1988). A Flexible Graph-Unification Formalism and its Application to Natural Language Processing. IBM Journal of Research and Development, Special Issue on Computational Linguistics.

Chierchia, Gennaro. (1988). Aspects of a Categorial Theory of Binding. In R. Oehrle, E. Bach, and D. Wheeler (Eds.), Categorial Grammars and Natural Language Structures. D. Reidel, Dordrecht, Holland.

Cooper, Robin. (1983). Quantification and Syntactic Theory. D. Reidel, Dordrecht, Holland.

Dowty, David, Robert Wall, and Stanley Peters. (1981). Introduction to Montague Semantics. D. Reidel, Dordrecht, Holland.

Gazdar, Gerald, Ewan Klein, Geoffrey Pullum, and Ivan Sag. (1985). Generalized Phrase Structure Grammar. Basil Blackwell, London.

Hellan, Lars. (1988). Anaphora in Norwegian and the Theory of Grammar. Foris Publications, Dordrecht, Holland.

Jackendoff, Ray. (1977). X-bar Syntax: A Study of Phrase Structure. MIT Press, Cambridge, MA.

Johnson, Mark. (1987). Attribute-Value Logic and the Theory of Grammar. Doctoral dissertation, Department of Linguistics, Stanford University, CA.

Johnson, Mark, and Ewan Klein. (1986). Discourse, Anaphora and Parsing. In: 11th International Conference on Computational Linguistics. Bonn University, West Germany.

Joshi, Aravind, Leon Levy, and M. Takahashi. (1975). Tree Adjunct Grammars. Journal of Computer and System Sciences, Vol. 10(1).

Kamp, Hans. (1981). A Theory of Truth and Semantic Representation. In J. Groenendijk, T. Janssen, and M. Stokhof (Eds.), Formal Methods in the Study of Language. Mathematical Centre Tracts, Amsterdam.

Kaplan, Ron, and Joan Bresnan. (1982). Lexical-Functional Grammar: A Formal System for Grammatical Representation. In J. Bresnan (Ed.), The Mental Representation of Grammatical Relations. MIT Press, Cambridge, MA.

Kaplan, Ron, John Maxwell, and Annie Zaenen. (January 1987). Functional Uncertainty. In: The CSLI Monthly. Centre for the Study of Language and Information, Stanford University, CA.

Kasper, Robert, and William Rounds. (1986). A Logical Semantics for Feature Structures. In: 24th Meeting of the Association for Computational Linguistics. Columbia University, New York, NY.

Keenan, Edward. (1974). The Functional Principle: Generalizing the Notion of 'Subject of'. In M. La Galy, R. Fox, and A. Bruck (Eds.), Papers from the 10th Regional Meeting of the Chicago Linguistics Society. Chicago, IL.

Pollard, Carl. (1985). Lectures on HPSG. Unpublished lecture notes, CSLI, Stanford University, CA.

Pollard, Carl, and Ivan Sag. (1983). Reflexives and Reciprocals in English: An Alternative to the Binding Theory. In M. Barlow, D. Flickinger, and M. Westcoat (Eds.), Proceedings of the 2nd West Coast Conference on Formal Linguistics. Stanford Linguistics Association, Stanford, CA.

Pollard, Carl, and Ivan Sag. (1987). Information-Based Syntax and Semantics, Volume 1: Fundamentals. Centre for the Study of Language and Information, Stanford University, CA.

Popowich, Fred. (1988). Reflexives and Tree Unification Grammar. Doctoral dissertation, Centre for Cognitive Science, University of Edinburgh, Edinburgh, Scotland.

Schabes, Yves, Anne Abeillé, and Aravind Joshi. (1988). Parsing Strategies with 'Lexicalized' Grammars: Application to Tree Adjoining Grammars. In: 12th International Conference on Computational Linguistics. Budapest, Hungary.

Shieber, Stuart, Hans Uszkoreit, Fernando Pereira, Jane Robinson, and M. Tyson. (1983). The Formalism and Implementation of PATR-II. In B. Grosz and M. Stickel (Eds.), Research on Interactive Acquisition and Use of Knowledge. SRI International, Menlo Park, CA.

Uszkoreit, Hans. (1986). Categorial Unification Grammars. In: 11th International Conference on Computational Linguistics. Bonn University, West Germany.

van Benthem, Johan. (1987). Categorial Equations. In E. Klein and J. van Benthem (Eds.), Categories, Polymorphism and Unification. Centre for Cognitive Science, University of Edinburgh, and Institute for Language, Logic and Information, University of Amsterdam.

Vijay-Shanker, K., and Aravind Joshi. (1988). Feature Structures Based Tree Adjoining Grammars. In: 12th International Conference on Computational Linguistics. Budapest, Hungary.

Zeevat, Henk, Ewan Klein, and Jo Calder. (1987). An Introduction to Unification Categorial Grammar. In N. Haddock, E. Klein, and G. Morrill (Eds.), Edinburgh Working Papers in Cognitive Science, Vol. 1: Categorial Grammar, Unification Grammar, and Parsing. Centre for Cognitive Science, University of Edinburgh, Scotland.
A GENERALIZATION OF THE OFFLINE PARSABLE GRAMMARS

Andrew Haas
BBN Systems and Technologies, 10 Moulton St., Cambridge MA. 02138

ABSTRACT

The offline parsable grammars apparently have enough formal power to describe human language, yet the parsing problem for these grammars is solvable. Unfortunately they exclude grammars that use x-bar theory, and these grammars have strong linguistic justification. We define a more general class of unification grammars, which admits x-bar grammars while preserving the desirable properties of offline parsable grammars.

Consider a unification grammar based on term unification. A typical rule has the form t0 → t1 ... tn, where t0 is a term of first-order logic and t1...tn are either terms or terminal symbols. Those ti which are terms are called the top-level terms of the rule. Suppose that no top-level term is a variable. Then erasing the arguments of the top-level terms gives a new rule c0 → c1...cn, where each ci is either a function letter or a terminal symbol. Erasing all the arguments of each top-level term in a unification grammar G produces a context-free grammar called the context-free backbone of G. If the context-free backbone is finitely ambiguous then G is offline parsable (Pereira and Warren, 1983; Kaplan and Bresnan, 1982). The parsing problem for offline parsable grammars is solvable. Yet these grammars apparently have enough formal power to describe natural language; at least, they can describe the crossed-serial dependencies of Dutch and Swiss German, which are presently the most widely accepted example of a construction that goes beyond context-free grammar (Shieber 1985a).

Suppose that the variable M ranges over integers, and the function letter "s" denotes the successor function. Consider the rule

(1) p(M) → p(s(M))

A grammar containing this rule cannot be offline parsable, because erasing the arguments of the top-level terms in the rule gives

(2) p → p

which immediately leads to infinite ambiguity. One's intuition is that rule (1) could not occur in a natural language, because it allows arbitrarily long derivations that end with a single symbol:

p(s(0)) ⇒ p(0)
p(s(s(0))) ⇒ p(s(0)) ⇒ p(0)
p(s(s(s(0)))) ⇒ p(s(s(0))) ⇒ p(s(0)) ⇒ p(0)
...

Derivations ending in a single symbol can occur in natural language, but their length is apparently restricted to at most a few steps. In this case the offline parsable grammars exclude a rule that seems to have no place in natural language.

Unfortunately the offline parsable grammars also exclude rules that do have a place in natural language. The excluded rules use x-bar theory. In x-bar theory the major categories (noun phrase, verb phrase, noun, verb, etc.) are not primitive. The theory analyzes them in terms of two features: the phrase types noun, verb, adjective, preposition, and the bar levels 1, 2 and 3. Thus a noun phrase is major-cat(n,2) and a noun is major-cat(n,1). This is a very simplified account, but it is enough for the present purpose. See (Gazdar, Klein, Pullum, and Sag 1985) for more detail. Since a noun phrase often consists of a single noun we need the rule

(3) major-cat(n,2) → major-cat(n,1)

Erasing the arguments of the category symbols gives

(4) major-cat → major-cat

and any grammar that contains this rule is infinitely ambiguous. Thus the offline parsable grammars exclude rule (3), which has strong linguistic justification.
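The erasure step itself is mechanical. As a toy illustration (our own encoding, not from the paper), represent a term as a tuple whose head is its function letter, a variable as ('var', name), and a terminal as a plain string; the context-free backbone of a rule then keeps only the heads of the top-level terms:

def erase(t):
    """Map a top-level term to its backbone symbol (terminals pass through)."""
    return t if isinstance(t, str) else t[0]

def backbone_rule(rule):
    lhs, rhs = rule
    return (erase(lhs), [erase(t) for t in rhs])

# Rule (1), p(M) -> p(s(M)), collapses to the loop p -> p of rule (2):
rule1 = (('p', ('var', 'M')), [('p', ('s', ('var', 'M')))])
print(backbone_rule(rule1))        # ('p', ['p'])

# Rule (3) likewise collapses to the loop major-cat -> major-cat of rule (4):
rule3 = (('major-cat', 'n', 2), [('major-cat', 'n', 1)])
print(backbone_rule(rule3))        # ('major-cat', ['major-cat'])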
One would like a class of grammars that excludes the bad rule p(s(Y)) → p(Y) and allows the useful rule major-cat(n,2) → major-cat(n,1). Offline parsable grammars exclude the second rule because in forming the context-free backbone they erase too much information: they erase the bar levels and phrase types, which are needed to guarantee finite ambiguity. To include x-bar grammars in the class of offline parsable grammars we must find a different way to form the backbone, one that does not require us to erase the bar levels and phrase types. One approach is to let the grammar writer choose a finite set of features that will appear in the backbone, and erase everything else. This resembles Shieber's method of restriction (Shieber 1985b). Or, following Sato et al. (1984), we could allow the grammar writer to choose a maximum depth for the terms in the backbone, and erase every symbol beyond that depth. Either method might be satisfactory in practice, but for theoretical purposes one cannot just rely on the ingenuity of grammar writers. One would like a theory that decides for every grammar what information is to appear in the backbone.

Our solution is very close to the ideas of Xu and Warren (1988). We add a simple sort system to the grammar. It is then easy to distinguish those sorts S that are recursive, in the sense that a term of sort S can contain a proper subterm of sort S. For example, the sort "list" is recursive because every non-empty list contains at least one sublist, while the sorts "bar level" and "phrase type" are not recursive. We form the acyclic backbone by erasing every term whose sort is recursive. This preserves the information about bar levels and phrase types by using a general criterion, without requiring the grammar writer to mark these features as special. We then use the acyclic backbone to define a class of grammars for which the parsing problem is solvable, and this class includes x-bar grammars.

Let us review the offline parsable grammars. Let G be a unification grammar with a set of rules R, a set of terminals T, and a start symbol S. S must be a ground term. The ground grammar for G is the four-tuple (L, T, R', S), where L is the set of ground terms of G and R' is the set of ground instances of rules in R. If the ground grammar is finite it is simply a context-free grammar. Even if the ground grammar is infinite, we can define the set of derivation trees and the language that it generates just as we do for a context-free grammar. The language and the derivation trees generated by a unification grammar are the ones generated by its ground grammar. Thus one can consider a unification grammar as an abbreviation for a ground grammar. The present paper excludes grammars with rules whose right side is empty; one can remove this restriction by a straightforward extension.

A ground grammar is depth-bounded if for every L > 0 there is a D > 0 such that every parse tree for a string of length L has a depth < D. In other words, the depth of a parse tree is bounded by the length of the string it derives. By definition, a unification grammar is depth-bounded iff its ground grammar is depth-bounded. One can prove that a context-free grammar is depth-bounded iff it is finitely ambiguous (the grammar has a finite set of symbols, so there is only a finite number of strings of given length L, and it has a finite number of rules, so there is only a finite number of possible parse trees of given depth D).
Depth-bounded grammars are important because the parsing problem is solvable for any depth-bounded unification grammar. Consider a bottom-up chart parser that generates partial parse trees in order of depth. If the input α is of length L, there is a depth D such that all parse trees for any substring of α have depth less than D. The parser will eventually reach depth D; at this depth there are no parse trees, and then the parser will halt.

The essential properties of offline parsable grammars are these:

Theorem 1. It is decidable whether a given unification grammar is offline parsable.

Proof: It is straightforward to construct the context-free backbone. To decide whether the backbone is finitely ambiguous, we need only decide whether it is depth-bounded. We present an algorithm for this problem. Let Cn be the set of pairs [A,B] such that A derives B by a tree of depth n. Clearly C1 is the set of pairs [A,B] such that (A → B) is a rule of G. Also, Cn+1 is the set of pairs [A,C] such that for some B, [A,B] ∈ Cn and [B,C] ∈ C1. Then if G is depth-bounded, Cn is empty for some n > 0. If G is not depth-bounded, then some non-terminal A derives itself. The following algorithm decides whether a cfg is depth-bounded or not by generating Cn for successive values of n until either Cn is empty, proving that the grammar is depth-bounded, or Cn contains a pair of the form [A,A], proving that the grammar is not depth-bounded. The algorithm always halts, because the grammar is either depth-bounded or it is not; in the first case Cn = ∅ for some n, and in the second case [A,A] ∈ Cn for some n.

Algorithm 1.
  n := 1;
  C1 := {[A,B] | (A → B) is a rule of G};
  while true do [
    if Cn = ∅ then return true;
    if (∃A. [A,A] ∈ Cn) then return false;
    Cn+1 := {[A,C] | (∃B. [A,B] ∈ Cn ∧ [B,C] ∈ C1)};
    n := n + 1; ]
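Algorithm 1 transcribes directly into executable form. A minimal Python sketch (our rendering, under the assumption that the backbone's chain rules A → B, i.e. those whose right side is a single symbol, are given as a set of (A, B) pairs):

def depth_bounded(chain_rules):
    """Algorithm 1: True iff no symbol derives itself through the chain rules.

    Termination is guaranteed for a context-free backbone because the set
    of symbols is finite: either some Cn becomes empty or a pair [A,A]
    eventually appears.
    """
    one_step = set(chain_rules)          # C1
    C = set(chain_rules)                 # Cn, starting with n = 1
    while True:
        if not C:
            return True                  # Cn empty: depth-bounded
        if any(a == b for (a, b) in C):
            return False                 # some [A,A]: a symbol derives itself
        # Cn+1 = { [A,C] : [A,B] in Cn and [B,C] in C1 }
        C = {(a, c) for (a, b) in C for (b2, c) in one_step if b == b2}

# The backbone of rule (1) contains the chain rule p -> p of rule (2):
print(depth_bounded({('p', 'p')}))       # False
print(depth_bounded({('np', 'n')}))      # True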
Theorem 2. If a unification grammar G is offline parsable, it is depth-bounded.

Proof: The context-free backbone of G is depth-bounded because it is finitely ambiguous. Suppose that the unification grammar G is not depth-bounded; then there is a string α of symbols in G such that α has arbitrarily deep parse trees in G. If t is a parse tree for α in G, let t' be formed by replacing each non-terminal f(x1...xn) in t with the symbol f. t' is a parse tree for α in the context-free backbone, and it has the same depth as t. Therefore α has arbitrarily deep parse trees in the context-free backbone, so the context-free backbone is not depth-bounded. This contradiction shows that the unification grammar must be depth-bounded.

Theorem 2 at once implies that the parsing problem is solvable for offline parsable grammars.

We define a new kind of backbone for a unification grammar, called the acyclic backbone. The acyclic backbone is like the context-free backbone in two ways: there is an algorithm to decide whether the acyclic backbone is depth-bounded, and if the acyclic backbone is depth-bounded then the original grammar is depth-bounded. The key difference between the acyclic backbone and the context-free backbone is that in forming the acyclic backbone for an x-bar grammar, we do not erase the phrase type and bar level features. We consider the class of unification grammars whose acyclic backbone is depth-bounded. This class has the desirable properties of offline parsable grammars, and it includes x-bar grammars that are not offline parsable. For this purpose we augment our grammar formalism with a sort system, as defined in (Gallier 1986).

Let S be a finite, non-empty set of sorts. An S-ranked alphabet is a pair (Σ, r) consisting of a set Σ together with a function r : Σ → S* × S assigning a rank (u, s) to each symbol f in Σ. The string u in S* is the arity of f and s is the sort of f. Terms are defined in the usual way, and we require that every sort includes at least one ground term.

As an illustration, let S = {phrase, person, number}. Let the function letters of Σ be {np, vp, s, 1st, 2nd, 3rd, singular, plural}. Let ranks be assigned to the function letters as follows, omitting the variables.

r(np) = ([person, number], phrase)
r(vp) = ([person, number], phrase)
r(s) = (e, phrase)
r(1st) = (e, person)
r(2nd) = (e, person)
r(3rd) = (e, person)
r(singular) = (e, number)
r(plural) = (e, number)

We have used the notation [a,b,c] for the string of a, b and c, and e for the empty string. Typical terms of this ranked alphabet are np(1st, singular) and vp(2nd, plural).

A sort s is cyclic if there exists a term of sort s containing a proper subterm of sort s. If not, s is called acyclic. A function letter, variable, or term is called cyclic if its sort is cyclic, and acyclic if its sort is acyclic. In the previous example, the sorts "person", "number", and "phrase" are acyclic. Here is an example of a cyclic sort. Let S = {list, atom} and let the function letters of Σ be {cons, nil, a, b, c}. Let

r(a) = (e, atom)
r(b) = (e, atom)
r(c) = (e, atom)
r(nil) = (e, list)
r(cons) = ([atom, list], list)

The term cons(a, nil) is of sort "list", and it contains the proper subterm nil, also of sort "list". Therefore "list" is a cyclic sort. The sort "list" includes an infinite number of terms, and it is easy to see that every cyclic sort includes an infinite number of ground terms.

If G is a unification grammar, we form the acyclic backbone of G by replacing all cyclic terms in the rules of G with distinct new variables. More exactly, we apply the following recursive transformation to each top-level term in the rules of G.

transform(f(t1...tn)) =
  if the sort of f is cyclic
  then new-variable()
  else f(transform(t1)...transform(tn))

where "new-variable" is a function that returns a new variable each time it is called (this new variable must be of the same sort as the function letter f). Obviously the rules of the acyclic backbone subsume the original rules, and they contain no cyclic function letters. Since the acyclic backbone allows all the rules that the original grammar allowed, if it is depth-bounded, certainly the original grammar must be depth-bounded.

Applying this transformation to rule (1) gives p(X) → p(Y), because the sort that contains the integers must be cyclic. Applying the transformation to rule (3) leaves the rule unchanged, because the sorts "phrase type" and "bar level" are acyclic. In any x-bar grammar, the sorts "phrase type" and "bar level" will each contain a finite set of terms; therefore they are not cyclic sorts, and in forming the acyclic backbone we will preserve the phrase types and bar levels. In order to get this result we need not make any special provision for x-bar grammars; it follows from the general principle that if any sort s contains a finite number of ground terms, then each term of sort s will appear unchanged in the acyclic backbone.
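Both the cyclicity test and the transformation are easy to mechanize. The sketch below (our encoding, continuing the tuple representation used earlier; only "transform" is the paper's name, the rest are ours) first computes the cyclic sorts from the ranked alphabet, using the fact that a sort s is cyclic iff s can reach itself in the graph that links each sort to the sorts occurring in the arities of its function letters, and then applies the transform:

import itertools

def cyclic_sorts(ranks):
    """ranks maps each function letter to (arity, sort), e.g.
    {'cons': (('atom', 'list'), 'list'), 'nil': ((), 'list'), ...}."""
    edges = {}
    for arity, s in ranks.values():
        edges.setdefault(s, set()).update(arity)
    def reachable(start):
        seen, stack = set(), list(edges.get(start, ()))
        while stack:
            x = stack.pop()
            if x not in seen:
                seen.add(x)
                stack.extend(edges.get(x, ()))
        return seen
    return {s for s in edges if s in reachable(s)}

_fresh = itertools.count()

def transform(t, ranks, cyclic):
    """Replace every term headed by a cyclic function letter with a distinct
    new variable (the sort annotation on the variable is omitted here)."""
    if isinstance(t, tuple) and t[0] == 'var':
        return t
    f, args = t[0], t[1:]
    if ranks[f][1] in cyclic:
        return ('var', 'X%d' % next(_fresh))
    return (f,) + tuple(transform(a, ranks, cyclic) for a in args)

ranks = {'p': (('int',), 'prop'), 's': (('int',), 'int'), '0': ((), 'int')}
cyc = cyclic_sorts(ranks)                          # {'int'}
print(transform(('p', ('s', ('var', 'M'))), ranks, cyc))
# ('p', ('var', 'X0')): the right side of rule (1) loses its cyclic argument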
We must show that it is decidable whether a given unification grammar has a depth-bounded acyclic backbone. We will generalize Algorithm 1 so that, given the acyclic backbone G' of a unification grammar G, it decides whether G' is depth-bounded.

The idea of the generalization is to use a set S of pairs of terms with variables as a representation for the set of ground instances of pairs in S. Given this representation, one can use unification to compute the functions and predicates that the algorithm requires. First one must build a representation for the set of pairs of ground terms [A,B] such that (A → B) is a rule in the ground grammar of G'. Clearly this representation is just the set of pairs of terms [C,D] such that (C → D) is a rule of G'.

Next there is the function that takes sets S1 and S2 and finds the set link(S1,S2) of all pairs [A,C] such that for some B, [A,B] ∈ S1 and [B,C] ∈ S2. Let T1 be a representation for S1 and T2 a representation for S2, and assume that T1 and T2 share no variables. Then the following set of terms is a representation for link(S1,S2):

{ s([A,C]) | (∃B,B'. [A,B] ∈ T1 ∧ [B',C] ∈ T2 ∧ s is the most general unifier of B and B') }

One can prove this from the basic properties of unification. It is easy to check whether a set of pairs of terms represents the empty set or not: since every sort includes at least one ground term, a set of pairs represents the empty set iff it is empty. It is also easy to decide whether a set T of pairs with variables represents a set S of ground pairs that includes a pair of the form [A,A]: merely check whether A unifies with B for some pair [A,B] in T. In this case there is no need for renaming, and once again the reader can show that the test is correct using the basic properties of unification. Thus we can "lift" the algorithm for checking depth-boundedness from a context-free grammar to a unification grammar.

Of course the new algorithm enters an infinite loop for some unification grammars, for example a grammar containing only the rule (1) p(M) → p(s(M)). In the context-free case the algorithm halts because if there are arbitrarily long chains, some symbol derives itself, and the algorithm will eventually detect this. In a grammar with rules like (1), there are arbitrarily long chains and yet no symbol ever derives itself. This is possible because a ground grammar can have infinitely many non-terminals. Yet we can show that if the unification grammar G contains no cyclic function letters, the result that holds for cfgs will still hold: if there are arbitrarily long chain derivations, some symbol derives itself. This means that when operating on an acyclic backbone, the algorithm is guaranteed to halt. Thus we can decide for any unification grammar whether its acyclic backbone is depth-bounded or not.
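The lifted step needs nothing beyond standard first-order unification. A compact sketch (ours; terms as tuples as before, variables as ('var', name), a substitution represented as a dictionary, and renaming to keep the two representations variable-disjoint; the occurs check is omitted for brevity):

def is_var(t):
    return isinstance(t, tuple) and len(t) == 2 and t[0] == 'var'

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(t1, t2, s):
    """Extend substitution s to unify t1 and t2, or return None."""
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if is_var(t1):
        s2 = dict(s); s2[t1] = t2; return s2
    if is_var(t2):
        s2 = dict(s); s2[t2] = t1; return s2
    if t1[0] != t2[0] or len(t1) != len(t2):
        return None
    for a, b in zip(t1[1:], t2[1:]):
        s = unify(a, b, s)
        if s is None:
            return None
    return s

def apply(t, s):
    t = walk(t, s)
    return t if is_var(t) else (t[0],) + tuple(apply(a, s) for a in t[1:])

def rename(t, tag):
    if is_var(t):
        return ('var', t[1] + tag)
    return (t[0],) + tuple(rename(a, tag) for a in t[1:])

def link(T1, T2):
    """{ s([A,C]) : [A,B] in T1, [B',C] in T2, s = mgu(B, B') }"""
    out = set()
    for i, (A, B) in enumerate(T1):
        for j, (B2, C) in enumerate(T2):
            A1, B1 = rename(A, '/l%d' % i), rename(B, '/l%d' % i)
            B2r, C2 = rename(B2, '/r%d' % j), rename(C, '/r%d' % j)
            s = unify(B1, B2r, {})
            if s is not None:
                out.add((apply(A1, s), apply(C2, s)))
    return out

# The single chain rule of rule (1), p(M) -> p(s(M)):
T = {(('p', ('var', 'M')), ('p', ('s', ('var', 'M'))))}
print(link(T, T))
# {(('p', ('var', 'M/l0')), ('p', ('s', ('s', ('var', 'M/l0')))))}

On this example the chains grow without bound and no pair of the form [A,A] ever appears, which is exactly the non-termination that restricting attention to acyclic backbones rules out.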
The following is the central result of this paper:

Theorem 3. Let G' be a unification grammar without cyclic function letters. If the ground grammar of G' allows arbitrarily long chain derivations, then some symbol in the ground grammar derives itself.

Proof: In any S-ranked alphabet, the number of terms that contain no cyclic function letters is finite (up to alphabetic variance). To see this, let C be the number of acyclic sorts in the language. Then the maximum depth of a term that contains no cyclic function letters is C+1. For consider a term as a labeled tree, and consider any path from the root of such a tree to one of its leaves. The path can contain at most one variable or function letter of each non-cyclic sort (if it contained two of the same sort s, a term of sort s would properly contain a subterm of sort s, and s would be cyclic), plus one variable of a cyclic sort. Then its length is at most C+1. Furthermore, there is only a finite number of function letters, each taking a fixed number of arguments, so there is a finite bound on the number of arguments of a function letter in any term. These two observations imply that the number of terms without cyclic function letters is finite (up to alphabetic variance).

Unification never introduces a function letter that did not appear in the input; therefore performing unifications on the acyclic backbone will always produce terms that contain no cyclic function letters. Since the number of such terms is finite, unification on the acyclic backbone can produce only a finite number of distinct terms.

Let D1 be the set of lists (A,B) such that (A → B) is a rule of G'. For n > 0 let Dn+1 be the set of lists s((A0,...,An,B)) such that (A0,...,An) ∈ Dn, (A',B) ∈ D1, and s is the most general unifier of An and A' (after suitable renaming of variables). Then the set of ground instances of lists in Dn is the set of chain derivations of length n in the ground grammar for G'. Once again, the proof is from basic properties of unification. The lists in Dn contain no cyclic function letters, because they were constructed by unification from D1, which contains no cyclic function letters.

Let N be the number of distinct terms without cyclic function letters in G', or more exactly the number of equivalence classes under alphabetic variance. Since the ground grammar for G' allows arbitrarily long chain derivations, DN+1 must contain at least one element, say (A0,...,AN+1). This list contains two terms that belong to the same equivalence class; let Ai be the first one and Aj the second. Since these terms are alphabetic variants they can be unified by some substitution s. Thus the list s((A0,...,AN+1)) contains two identical terms, s(Ai) and s(Aj). Let s' be any substitution that maps s((A0,...,AN+1)) to a ground expression. Then s'(s((A0,...,AN+1))) is a chain derivation in the ground grammar for G'. It contains a sub-list s'(s((Ai,...,Aj))), which is also a chain derivation in the ground grammar for G'. This derivation begins and ends with the symbol s'(s(Ai)) = s'(s(Aj)). So this symbol derives itself in the ground grammar for G', which is what we set out to prove.

Finally, we can show that the new class of grammars is a superset of the offline parsable grammars.

Theorem 4. If G is a typed unification grammar and its context-free backbone is finitely ambiguous, then its acyclic backbone is depth-bounded.

Proof: Assume without loss of generality that the top-level function letters in the rules of G are acyclic. Consider a "backbone" G' formed by replacing the arguments of top-level terms in G with new variables. If the context-free backbone of G is finitely ambiguous, it is depth-bounded, and G' must also be depth-bounded (the intuition here is that replacing the arguments with new variables is equivalent to erasing them altogether). G' is weaker than the acyclic backbone of G, so if G' is depth-bounded the acyclic backbone is also depth-bounded.

The author conjectures that grammars whose acyclic backbone is depth-bounded in fact generate the same languages as the offline parsable grammars.

Conclusion

The offline parsable grammars apparently have enough formal power to describe natural language syntax, but they exclude linguistically desirable grammars that use x-bar theory. This happens because in forming the backbone one erases too much information.
Shieber's restriction method can solve this problem in many practical cases, but it offers no general solution: it is up to the grammar writer to decide what to erase in each case. We have shown that by using a simple sort system one can automatically choose the features to be erased, and this choice will allow the x-bar grammars.

The sort system has independent motivation. For example, it allows us to assert that the feature "person" takes only the values 1st, 2nd and 3rd. This important fact is not expressed in an unsorted definite clause grammar. Sort-checking will then allow us to catch errors in a grammar, for example arguments in the wrong order. Robert Ingria and the author have used a sort system of this kind in the grammar of the BBN Spoken Language System (Boisen et al., 1989). This grammar now has about 700 rules and considerable syntactic coverage, so it represents a serious test of our sort system. We have found that the sort system is a natural way to express syntactic facts, and a considerable help in detecting errors. Thus we have solved the problem about offline parsable grammars using a mechanism that is already needed for other purposes.

These ideas can be generalized to other forms of unification. Consider dag unification as in Shieber (1985b). Given a set S of sorts, assign a sort to each label and to each atomic dag. The arity of a label is a set of sorts (not a sequence of sorts as in term unification). A dag is well-formed iff whenever an arc labeled l leads to a node n, either n is atomic and its sort is in the arity of l, or n has outgoing arcs labeled l1...ln, and the sorts of l1...ln are in the arity of l. One can go on to develop the theory for dags much as the present paper has developed it for terms.

This work is a step toward the goal of formally defining the class of possible grammars of human languages. Here is an example of a plausible grammar that our definition does not allow. Shieber (1986) proposed to make the list of arguments of a verb a feature of that verb, leading to a grammar roughly like this:

vp → v(Args) arglist(Args)
v(cons(np,nil)) → [eat]
arglist(nil) → e
arglist(cons(X,L)) → X arglist(L)

Such a grammar is desirable because it allows us to assert once that an English VP consists of a verb followed by a suitable list of arguments. The list of arguments must be a cyclic sort, so it will be erased in forming the acyclic backbone. This will lead to loops of the form

arglist(X) → arglist(Y)

Therefore a grammar of this kind will not have a depth-bounded acyclic backbone. This type of grammar is not as strongly motivated as the x-bar grammars, but it suggests that the class of grammars proposed here is still too narrow to capture the generalizations of human language.

ACKNOWLEDGEMENTS

The author wishes to acknowledge the support of the Office of Naval Research under contract number N00014-85-C-0279.

REFERENCES

Boisen, Sean; Chow, Yen-lu; Haas, Andrew; Ingria, Robert; Roucos, Salim; Stallard, David; and Vilain, Marc. (1989) Integration of Speech and Natural Language Final Report. Report No. 6991, BBN Systems and Technologies Corporation, Cambridge, Massachusetts.

Bresnan, Joan, and Kaplan, Ronald. (1982) LFG: A Formal System for Grammatical Representation. In The Mental Representation of Grammatical Relations. MIT Press.

Gallier, Jean H. (1986) Logic for Computer Science. Harper and Row, New York, New York.

Gazdar, Gerald; Klein, Ewan; Pullum, Geoffrey; and Sag, Ivan. (1985) Generalized Phrase Structure Grammar. Oxford: Basil Blackwell.

Pereira, Fernando, and Warren, David H. D. (1983) Parsing as Deduction. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, Cambridge, Massachusetts.

Sato, Taisuke, and Tamaki, Hisao. (1984) Enumeration of Success Patterns in Logic Programs. Theoretical Computer Science 34, 227-240.

Shieber, Stuart. (1985a) Evidence against the Context-freeness of Natural Language. Linguistics and Philosophy 8(3), 333-343.

Shieber, Stuart. (1985b) Using Restriction to Extend Parsing Algorithms for Complex-Feature-Based Formalisms. In Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, 145-152. University of Chicago, Chicago, Illinois.

Shieber, Stuart. (1986) An Introduction to Unification-Based Approaches to Grammar. Center for the Study of Language and Information.

Xu, Jiyang, and Warren, David S. (1988) A Type System for Prolog. In Logic Programming: Proceedings of the Fifth International Conference and Symposium, 604-619. MIT Press.
A THREE-VALUED INTERPRETATION OF NEGATION IN FEATURE STRUCTURE DESCRIPTIONS

Anuj Dawar
Dept. of Comp. and Info. Science, University of Pennsylvania, Philadelphia, PA 19104

K. Vijay-Shanker
Dept. of Comp. and Info. Science, University of Delaware, Newark, DE 19718

April 20, 1989

ABSTRACT

Feature structures are informational elements that have been used in several linguistic theories and in computational systems for natural-language processing. A logical calculus has been developed and used as a description language for feature structures. In the present work, a framework in three-valued logic is suggested for defining the semantics of a feature structure description language, allowing for a more complete set of logical operators. In particular, the semantics of the negation and implication operators are examined. Various proposed interpretations of negation and implication are compared within the suggested framework. One particular interpretation of the description language with a negation operator is described and its computational aspects studied.

1 Introduction and Background

A number of linguistic theories and computational approaches to parsing natural language have employed the notion of associating informational elements, called feature structures, consisting of features and their values, with phrases. Rounds and Kasper [KR86, RK86] developed a logical calculus that serves as a description language for these structures.

Several researchers have expressed a need for extending this logic to include the operators of negation and implication. Various interpretations have been suggested that define a semantics for these operators (see Section 1.2), but none has gained universal acceptance. In [Per87], Pereira set forth certain properties that any such interpretation should satisfy.

In this paper we present an extended logical calculus, with a semantics in three-valued logic (based on Kleene's three-valued logic [Kle52]), that includes an interpretation of negation motivated by the approach given by Karttunen [Kar84]. We show that our logic meets the conditions stated by Pereira. We also show that the three-valued framework is powerful enough to express most of the proposed definitions of negation and implication. It therefore makes it possible to compare these different approaches.

1.1 Rounds-Kasper Logic

In [Kas87] and [RK86], Rounds and Kasper introduced a logical formalism to describe feature structures with disjunctive specification. The language is a form of modal propositional logic (with modal operator ':'). In order to define the semantics of this language, feature structures are formally defined in terms of acyclic finite automata. These are finite-state automata whose transition graphs are acyclic. The formal definition may be found in [RK86].

A fundamental property of the semantics is that the set of automata satisfying a given formula is upward-closed under the operation of subsumption. This is important, because we consider a formula to be only a partial description of a feature structure. The property is stated in the following theorem [RK86]:

Theorem 1.1 A ⊑ B if and only if for every formula φ, if A ⊨ φ then B ⊨ φ.

1.2 The Problem of Adding Negation

Several researchers in the area have suggested that the logic described above should be extended to include negation and implication. Karttunen [Kar84] provides examples of feature structures where a negation operator might be useful.
For instance, the most natural way to represent the number and person attributes of a verb such as sleep would be to say that it is not third person singular rather than expressing it as a disjunction of the other five possibilities. Karttunen also suggests an implementation technique to handle negative information.

Johnson [Joh87] defined an Attribute Value Logic (AVL), similar to the Rounds-Kasper logic, that included a classical form of negation. Kasper [Kas88] discusses an interpretation of negation and implication in an implementation of Functional Unification Grammar [Kay79] that includes conditionals. Kasper's semantics is classical, but his unification procedure uses notions similar to those of three-valued logic (see Section 3.4).

One aspect of the classical approach is that the property of upward-closure under subsumption is lost. Thus the evaluation of negation may not be freely interleaved with unification (see Pereira [Per87], p. 1006). In [Kas88], Kasper localized the effects of negation by disallowing path expressions within the scope of a negation. This restriction may not be linguistically warranted, as can be seen by the following example from Pereira [Per87], which expresses the semantic constraint that the subject and object of a clause cannot be coreferential unless the object is a reflexive pronoun:

obj : type : reflexive ∨ ¬(subj : ref ≐ obj : ref)

Moshier and Rounds [MR87] proposed an intuitionistic interpretation of negation that preserves upward-closure. They replace the notion of satisfaction with one of model-theoretic forcing as described in Fitting [Fit69]. They also provide a complete proof system for their logic. The satisfiability problem for this logic was shown to be PSPACE-complete.

1.3 Outline of this Paper

In the following section we will present our proposed solution, in a three-valued framework, for defining the semantics of feature structure descriptions including negation. (We shall concentrate only on the problem of extending the logic to include the negation operator; implication is discussed in Section 3.4.) In Section 3 we will show that the framework of three-valued logic is flexible enough to express most of the different interpretations of negation mentioned above. In Section 4 we will show that the satisfiability problem for the logic we propose is NP-complete.

2 Feature Structure Descriptions with Negation

We will now present our extended version of the Rounds-Kasper logic including negation. We do this by giving the semantics of the logic in a three-valued setting. This provides an interpretation of negation that is intuitively appealing, formally simple and computationally no harder than the original Rounds-Kasper logic.

With each formula we associate the set (Tset) of automata that satisfy the formula, a set (Fset) of automata that contradict it, and a set (Uset) of automata which neither satisfy nor contradict it. (A similar notion was used by Kasper [Kas88], who introduces the notion of compatibility; we compare this approach with ours in greater detail in Section 3.4.) Different interpretations of negation are obtained by varying definitions of what constitutes "contradiction." In the semantics we will define, we choose a definition in which contradiction is equivalent to essential incompatibility. (In general, a feature structure is incompatible with a formula if the information it contains is inconsistent with that in the formula. We distinguish two kinds of incompatibility: a feature structure is essentially incompatible with a formula if the information in it contradicts the information in the formula; it is trivially incompatible if the inconsistency is due to an excess of information within the formula itself.) We will define the Tset and the Fset so that they are upward-closed with respect to subsumption for all formulae. Thus, we avoid the problems associated with the classical interpretation of negation.
In our logic, negation is defined so that an automaton A satisfies ¬φ if and only if it contradicts φ.

2.1 The Syntax

The symbols in the descriptive language, other than the connectives :, ∨, ∧, ¬ and ≐, are taken from two primitive domains: Atoms (A) and Labels (L). The set of well-formed formulae (W) is given by: NIL; TOP; a; l : φ; φ ∧ ψ; φ ∨ ψ; ¬φ; and p1 ≐ p2, where a ∈ A, l ∈ L, φ, ψ ∈ W and p1, p2 ∈ L*.

2.2 The Semantics

Formally, the semantics is defined over the domain of partial functions from acyclic finite automata to boolean values. (In this paper we will not consider cyclic feature structures.)

Definition 2.1 An acyclic finite automaton is a 7-tuple A = <Q, Σ, Γ, δ, q0, F, λ>, where:

1. Q is a non-empty finite set (of states),
2. Σ is a countable set (the alphabet),
3. Γ is a countable set (the output alphabet),
4. δ : Q × Σ → Q is a finite partial function (the transition function),
5. q0 ∈ Q (the initial state),
6. F ⊆ Q (the set of final states),
7. λ : F → Γ is a total function (the output function),
8. the directed graph (Q, E) is acyclic, where pEq iff for some l ∈ Σ, δ(p,l) = q,
9. for every q ∈ Q, there exists a directed path from q0 to q in (Q, E), and
10. for every q ∈ F, δ(q,l) is not defined for any l.

A formula φ over the set of labels L and the set of atoms A is characterized by a partial function

Fφ : {A | A = <Q, L, A, δ, q0, F, λ>} → {True, False}.

Fφ(A) is True iff A satisfies φ. It is False if A contradicts φ (and therefore satisfies the formula ¬φ) and is undefined otherwise. The formal definition is given below.

Definition 2.2 For any formula φ, the partial function Fφ over the set of acyclic finite automata A = <Q, L, A, δ, q0, F, λ> is defined as follows:

1. if φ = NIL then Fφ(A) = True for all A;

2. if φ = TOP then Fφ(A) = False for all A;

3. if φ = a for some a ∈ A then
   Fφ(A) = True if A is atomic and λ(q0) = a;
   Fφ(A) = False if A is atomic and λ(q0) = b for some b ≠ a (see Note 2);
   Fφ(A) is undefined otherwise;

4. if φ = l : φ1 for some l ∈ L and φ1 ∈ W then
   Fφ(A) = Fφ1(A/l) if A/l is defined (see Note 3);
   Fφ(A) is undefined otherwise;
We have not included an implication operator in the formal language, since we find that defining im- pllcation in terms of negation and disjunction (i.e ~b =~ ~b ~ -~@ V ~b) yields a semantics for implica- tion that corresponds exactly to our intuitive un- derstanding of implication. 2. As one would expect, an atomic formula is satisfied by the corresponding atomic feature structure. On the other hand, only atomic feature structures are defined as contradicting an atomic formula. Though a complex feature structure is clearly incompatible with an atomic formula we do not view it as being essentially incompatible with it. An interpretation of negation that defines a complex feature structure as contradicting a (and hence satisfying -,a) is also possible. However, our definition is motivated by the linguistic intention of the negation operator as given by Karttunen [Kar84]. Thus, for instance, we require that an automaton satisfying the formula case : ".dative have an atomic value for the case feature. 3. In J. above, we state that: ~'~('4) = jr', ('4/1) if.Aft is defined. When "4/l is defined, ~t ('4/I) may still 20 4. be True, False or undefined. In any of these cases, ~#(.A) -- ~I(.A/I) s. ~r~(.A) is not defined if .All is not defined. Not only is this condition required to preserve upward-closure, it is also linguistically motivated. Here again, we could have said that a formula of the form I : ~bz is contradicted by any atomic feature structure, but we have chosen not to do so for the reasons outlined in the previous note. We have chosen to state that the set of automata that are incompatible with the formula pz ~ p2 is not the set of automata for which 6(qo,pl) and 6(qo,p~) axe defined and 8(q0,pz) ~ 6(q0,p2), since such an automaton could subsume one in which 6(qo,px) = 6(q0,p~). Thus, we would lose the property of upward-closure under subsumption. However, an automaton, .4, in which 6(q0,pl) and 8(qo,p2) are defined and .A/p1 is not unifiable 9 with ~4/p2 can- not subsume one in which 6(q0,pa) = 6(q0,p2). 2.2.1 Upward-Closure As has been stated before, the set of automata that satisfy a given formula in the logic defined above is upward-closed under subsumption. This property is formally stated be- low. Theorem 2.1 Given a formula ~b and two acyclie finite automata .4 and IJ, if ~(.A) is defined and .4 C B then y.(B) ~, defined and ;%(B) = 7.(~4). Proof: The proof is by induction on the structure of the formula. The details may be found in Dawar [Daw88]. 2.3 Examples We now take a look at the examples mentioned earlier and see how they are interpreted in the logic just defined. The first example expressed the agreement attribute of the verb sleep by the following formula: agreement : "~(person : third A number : singular) (1) This formula is satisfied by any structure that has an agree- ment feature which, in turn, either has a person feature with a value other than third or a number feature with a value other than singular. Thus, for instance, the following two structures satisfy the given formula: agreement: [person: second] SEquality here is strong equality (i.e. if .g,x(A]l) is undefined then so is .~',(.4).) 9Two automata are not unifiable if and only if they do not have a least upper bound [ [p r,on ] ] agreement : number : plural On the other hand, for a structure to contradict formula(1) it must have an agreement feature defined for both person and number with values third and singular respectively. All other automata would have an undefined truth value for formula(1). 
Turning to the other example mentioned earlier, the formula: obj : type : reflexive x/"~(subj : ref ~ obj : re f) (2) is satisfied by the first two of the following structures, but is contradicted by the third (here co-index boxes are used to indicate co-reference or path-equivalence). [obj. [type-reflexive ]] [ obj: [ ref:[] ] ] subj : [ ref :[] ] j] type : reflezive subj: [ re1: [] ] 3 Comparison with Other Interpreta- tions of Negation As we have stated before, the semantics for negation de- scribed in the previous section is motivated by the dis- cussion of negation in Karttunen [Kar84], and that it is closely related to the interpretation of Kssper [Kas88]. In this section, we take a look at the interpretations of nega- tion that have been suggested and how they may be related to interpretations in a three-valued framework. 3.1 Classical Negation By classical negation, we mean an interpretation in which an automaton .4 satisfies a formula -~b if and only if it does not satisfy ~b. This is, of course, a two-valued logic. Such an interpretation is used by Johnson in his Attribute-Value Language [Joh87]. We can express it in our framework by making ~'~ a total function such that wherever 9re(A) was undefined, it is now defined to be False. Returning to our earlier example, we can observe that for formula(1) the structure [ agreement: [ person: third] ] has a truth value of .false in the classical semantics but has an undefined truth value in the semantics we define. This illustrates the problem of non-monotonicity in the classical semantics since this structure does subsume one that satisfies formula (1). 21 3.2 Intultionistic Logic In [MR87], Moshier and Rounds describe an extension of the Rounds-Kasper logic, including an implication opera- tor and hence, by extension, negation. The semantics is based on intnitionistic techniques. The notion of satisfying is replaced by one of forcing. Given a set of automata/C, a formula ~b, and .A such that .4 ~ /C, .A forces in IC "~b (,4 hn -~b) if and only if for all B ~/C such that A ~ B, B does not force ~b in/~. Thus, in order to find if a formula, ~b, is satisfiable, we have to find a set ]C and an automaton ~4 such that forces in IC ~. Moshier and Rounds consider a version in which forcing is always done with respect to the set of all automata, i.e. IC*. This means that the set of feature structures that satisfy --~b is the largest upward-closed set of feature structures that do not satisfy @ (i.e. the set of feature structures incompatible with ~b). We can capture this in the three-valued framework described above by modifying the definition of ~r¢ to make it False for all automata that are incompatible (trivially or essentially) with ~b (we call this new function ~r~). The definition of ~'~ differs from that of ~r+ in the following cases: • ~b=a ~r¢(A) = True if A is atomic and A(q0) = a ~r~(A) = False otherwise ~'~(~t) = True if ~'~(.A) ---- True :~(A) = False if All is defined and vs(wl/! ~_ B =~ ~,,(B) = False) ~r~(.A) is undefined otherwise. 
3.2 Intuitionistic Logic

In [MR87], Moshier and Rounds describe an extension of the Rounds-Kasper logic, including an implication operator and hence, by extension, negation. The semantics is based on intuitionistic techniques. The notion of satisfying is replaced by one of forcing. Given a set of automata K, a formula φ, and A such that A ∈ K, A forces ¬φ in K if and only if for all B ∈ K such that A ⊑ B, B does not force φ in K. Thus, in order to find if a formula φ is satisfiable, we have to find a set K and an automaton A such that A forces φ in K.

Moshier and Rounds consider a version in which forcing is always done with respect to the set of all automata, i.e. K*. This means that the set of feature structures that satisfy ¬φ is the largest upward-closed set of feature structures that do not satisfy φ (i.e. the set of feature structures incompatible with φ). We can capture this in the three-valued framework described above by modifying the definition of Fφ to make it False for all automata that are incompatible (trivially or essentially) with φ (we call this new function F'φ). The definition of F'φ differs from that of Fφ in the following cases:

- φ = a
  F'φ(A) = True if A is atomic and λ(q0) = a;
  F'φ(A) = False otherwise;

- φ = l : φ1
  F'φ(A) = True if F'φ1(A/l) = True;
  F'φ(A) = False if A/l is defined and ∀B(A/l ⊑ B ⇒ F'φ1(B) = False);
  F'φ(A) is undefined otherwise;

- φ = φ1 ∧ φ2
  F'φ(A) = True if F'φ1(A) = True and F'φ2(A) = True;
  F'φ(A) = False if ∀B(A ⊑ B ⇒ F'φ1(B) ≠ True or F'φ2(B) ≠ True);
  F'φ(A) is undefined otherwise;

- φ = φ1 ∨ φ2
  F'φ(A) = True if F'φ1(A) = True or F'φ2(A) = True;
  F'φ(A) = False if ∀B(A ⊑ B ⇒ F'φ1(B) ≠ True and F'φ2(B) ≠ True);
  F'φ(A) is undefined otherwise;

- φ = p1 ≐ p2
  F'φ(A) = True if δ(q0,p1) and δ(q0,p2) are defined and δ(q0,p1) = δ(q0,p2);
  F'φ(A) = False if A/p1 and A/p2 are both defined and are not unifiable, or if A is atomic;
  F'φ(A) is undefined otherwise.

In the other cases, the definition of F'φ parallels that of Fφ. To illustrate the difference between Fφ and F'φ, we define the following (somewhat contrived) formula:

φ = (l1 : a ∨ l2 : a) ∧ l2 : b

We also define the automaton A = [l1 : b]. We can now observe that Fφ(A) is undefined but F'φ(A) = False. To see how this arises, note that in either system, the truth value of A is undefined with respect to each of the conjuncts of φ. This is so because A can certainly be extended to satisfy either one of the conjuncts, just as it can be extended to contradict either one of them. But for Fφ(A) to be False, A must have a truth value of False for one of the conjuncts, and therefore Fφ(A) is undefined. On the other hand, since A can never be extended to satisfy both conjuncts of φ simultaneously, it can never be extended to satisfy φ. Hence A is certainly incompatible with φ, but because this incompatibility is a result of the excess of information in the formula itself, we say that it is only trivially incompatible with φ.

To see more clearly what is going on in the above example, consider the formula ¬φ and apply distributivity and DeMorgan's law (which is a valid equivalence in the logic described in the previous section, but not in the intuitionistic logic of this section), which gives us:

¬φ = (¬l1 : a ∧ ¬l2 : a) ∨ ¬l2 : b

We can now see why we do not wish A to satisfy ¬φ, which would be the case if F'φ(A) were False.

One justification given for the use of forcing sets other than K* is the interpretation of formulae such as ¬h : NIL. It is argued that since h : NIL denotes all feature structures that have a feature labeled h, ¬h : NIL should denote those structures that do not have such a feature. However, the formula ¬h : NIL is unsatisfiable both in the interpretation given in the last section and in the K* version of intuitionistic logic. It is our opinion that the use of negation to assert the non-existence of features is an operation distinct from the use of negation to describe values and should be described by a distinct operator. The present work attempts to deal only with the latter notion of negation. The authors expect to present in a forthcoming paper a simple extension to the current semantics that will deal with issues of existence of features.

3.3 Karttunen's Implementation of Negation

As mentioned earlier, our approach was motivated by Karttunen's implementation as described in [Kar84]. In the unification algorithm given, negative constraints are attached to feature structures or automata (which themselves do not have any negative values). When the feature structure is extended to have enough information to determine whether it satisfies or falsifies the formula, the constraints may be dropped. (It is not clear whether falsification is equivalent to incompatibility or only essential incompatibility, but from the examples involving case and agreement, we believe that only essential incompatibility is intended.) We feel that our definition of the Uset elegantly captures the notion of associating constraints with automata that do not have sufficient information to determine whether they satisfy or contradict a given formula.
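The deferral idea can be made concrete with a toy sketch (entirely ours, not Karttunen's actual algorithm; the formula encoding and all names are our own): pending negative constraints ride along with a feature structure, here a nested dictionary, and are re-evaluated three-valuedly whenever the structure grows. Constraints that come out True are dropped, False ones signal unification failure, and undefined ones are kept:

def eval3(phi, fs):
    """Three-valued evaluation of a toy formula over nested-dict structures.
    Formulae: ('atom', a), ('feat', l, phi1), ('not', phi1), ('and', p, q).
    Returns True, False, or None (undefined), following Definition 2.2."""
    op = phi[0]
    if op == 'atom':                       # case 3: undefined on complex values
        return None if isinstance(fs, dict) else fs == phi[1]
    if op == 'feat':                       # case 4: undefined if l is missing
        if isinstance(fs, dict) and phi[1] in fs:
            return eval3(phi[2], fs[phi[1]])
        return None
    if op == 'not':                        # case 7
        v = eval3(phi[1], fs)
        return None if v is None else not v
    if op == 'and':                        # case 5 (strong Kleene)
        p, q = eval3(phi[1], fs), eval3(phi[2], fs)
        if p is False or q is False:
            return False
        return True if (p is True and q is True) else None
    raise ValueError('unknown connective: %r' % op)

def filter_constraints(fs, constraints):
    """Keep only the still-undecided constraints; fail on a contradiction."""
    pending = []
    for c in constraints:
        v = eval3(c, fs)
        if v is False:
            raise ValueError('unification fails: negative constraint falsified')
        if v is None:
            pending.append(c)              # not yet decidable: carry it along
    return pending

# Formula (1): agreement : not(person : third  and  number : singular)
f1 = ('feat', 'agreement',
      ('not', ('and', ('feat', 'person', ('atom', 'third')),
                      ('feat', 'number', ('atom', 'singular')))))

print(filter_constraints({'agreement': {'person': 'third'}}, [f1]))   # kept: undefined
print(filter_constraints({'agreement': {'person': 'second'}}, [f1]))  # dropped: []

The structures for which the constraint stays pending are precisely those in the Uset of the formula, which is the correspondence claimed in Section 3.3.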
3.4 Kasper's Interpretation of Negation and Conditionals As mentioned earlier, Kasper ~Kas88] used the operations of negation mad implication in extending Functional Unifi- cation Grammar. Though the semantics defined for these operators is a classical one, for the purposes of the algo. rithm Kasper identified three chases of automata associ- ated with any formula: those that satisfy it, those that are incompatible with it and those that are merely compatible with it. We can observe that these are closely related to our Tact, Fset and User respectively. For instance, Kasper states that an automaton .A satisfies a formula f : v if it is defined for f with value v; it is incompatible with f : v if it is defined for f with value z (z ~ v) and it is merely compatible with f : v if it is not defined for f. In three- valued logic, we incorporate these notions into the formal semantics, thus providing a formal basis for the unification procedure given by Kasper. Our logic also gives a more uniform treatment to the negation operator since we have removed the restriction that disallowed path equivalences in the scope of a negation. 4 Computational Issues In this section, we will discuss some computational as- pects related to determining whether a formula is satisfi- able or not. We will Show that the satisfiability problem is NP-complete, which is not surprising considering that the problem is NP-complete for the logic not involving nega- tion (Rounds-Kasper logic). The NP-hardness of this problem is trivially shown if we observe that for any formula, ~b, without negation, Tset(¢) is exactly the set of automata that satisfy ~ ac- cording to the definition of satisfaction given by Rounds l°It is not clear whether falsification is equivalent to incomp~- ibility or only essential incompatibility, but from the examples in- volvin~ ease and agreement, we believe that only emJential incom- patibihty is intended. and Kasper [KR86, RK86] in their original logic. Since the satisfiabllity problem in that logic is NP-complete, the given problem is NP-haxd. In order to see that the given problem is in NP, we observe that a simple nondeterministic algorithm 11 can be given that is linear in the length of the input formula ~b and that returns a minimal automaton which satisfies ~b, provided it is satisfiable. To see this, note that the size (in terms of the number of states) of a minimal automa- ton satisfying ~b is linear in the length of ¢ and verifying whether a given automaton satisfies ~b is a problem linear in the length of ~b and the size of the automaton. The details of the algorithm can be found in Dawar [DawS8]. 5 Conclusions A logical formalism with a complete set of logical operators has come to be accepted as a means of describing feature structures. While the intended semantics of most of these operators is well understood, the negation and implication operators have raised some problems, leading to a vari- ety of approaches in their interpretation. In the present work, we have presented an interpretation that combines the following advantages: it is formally simple as well as uniform (it places no special restriction on the negation operator); it is motivated by the linguistic applications of feature structures; it takes into account the partial na- ture of feature structures by preserving the property of monotonicity under unification and it is computationally no harder than the Rounds-Kasper logic. 
More significantly, perhaps, we have shown that most existing interpretations of negation can also be expressed within three-valued logic. This framework therefore provides a means for comparing and evaluating various interpretations.

References

[Daw88] Anuj Dawar. The Semantics of Negation in Feature Structure Descriptions. Master's thesis, University of Delaware, 1988.

[Fit69] Melvin Fitting. Intuitionistic Logic and Model Theoretic Forcing. North-Holland, Amsterdam, 1969.

[Joh87] Mark Johnson. Attribute Value Logic and the Theory of Grammar. PhD thesis, Stanford University, August 1987.

[Kar84] Lauri Karttunen. Features and values. In Proceedings of the Tenth International Conference on Computational Linguistics, July 1984.

[Kas87] Robert T. Kasper. Feature Structures: A Logical Theory with Application to Language Analysis. PhD thesis, University of Michigan, 1987.

[Kas88] Robert T. Kasper. Conditional descriptions in Functional Unification Grammar. In Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics, pages 233-240, June 1988.

[Kay79] M. Kay. Functional grammar. In Proceedings of the Fifth Annual Meeting of the Berkeley Linguistics Society, 1979.

[Kle52] S.C. Kleene. Introduction to Metamathematics. Van Nostrand, New York, 1952.

[KR86] Robert T. Kasper and William C. Rounds. A logical semantics for feature structures. In Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics, 1986.

[MR87] M. Drew Moshier and William C. Rounds. A logic for partially specified data structures. In ACM Symposium on the Principles of Programming Languages, pages 156-167, ACM, 1987.

[Per87] Fernando C. N. Pereira. Grammars and logics of partial information. In Jean-Louis Lassez, editor, Proceedings of the 4th International Conference on Logic Programming, pages 989-1013, May 1987.

[RK86] William C. Rounds and Robert T. Kasper. A complete logical calculus for record structures representing linguistic information. In IEEE Symposium on Logic in Computer Science, pages 34-43, IEEE Computer Society, June 1986.
1989
3
DISCOURSE ENTITIES IN JANUS

Damaris M. Ayuso
BBN Systems and Technologies Corporation
10 Moulton Street
Cambridge, Massachusetts 02138
[email protected]

Abstract

This paper addresses issues that arose in applying the model for discourse entity (DE) generation in B. Webber's work (1978, 1983) to an interactive multi-modal interface. Her treatment was extended in 4 areas: (1) the notion of context dependence of DEs was formalized in an intensional logic, (2) the treatment of DEs for indefinite NPs was modified to use skolem functions, (3) the treatment of dependent quantifiers was generalized, and (4) DEs originating from non-linguistic sources, such as pointing actions, were taken into account. The discourse entities are used in intra- and extra-sentential pronoun resolution in BBN Janus.

1 Introduction

Discourse entities (DEs) are descriptions of objects, groups of objects, events, etc. from the real world or from hypothesized or possible worlds that are evoked in a discourse. Any communicative act, be it spoken, written, gestured, or system-initiated, can give rise to DEs. As a discourse progresses, an adequate discourse model must represent the relevant entities, and the relationships between them (Grosz and Sidner, 1986). A speaker may then felicitously refer anaphorically to an object (subject to focusing or centering constraints (Grosz et al., 1983; Sidner 1981, 1983; Brennan et al. 1987)) if there is an existing DE representing it, or if a corresponding DE may be directly inferred from an existing DE. For example, the utterance "Every senior in Milford High School has a car" gives rise to at least 3 entities, describable in English as "the seniors in Milford High School", "Milford High School", and "the set of cars each of which is owned by some senior in Milford High School". These entities may then be accessed by the following next utterances, respectively: "They graduate in June." "It's a good school." "They completely fill the parking lot."

Webber (1978, 1983) addressed the question of determining what discourse entities are introduced by a text. She defined rules which produce "initial descriptions" (IDs) of new entities stemming from noun phrases, given a meaning representation of a text. An ID is a logical expression that denotes the corresponding object and uses only information from the text's meaning representation. The declarative nature of Webber's rules, and the fact that they relied solely on the structure of the meaning representation, made her approach well suited for implementation.

The present work recasts her rules in Janus's intensional logic framework (described in section 2). Two goals guided our approach: (1) that our DE representations be semantically clear and correct according to the formal definitions of our language, and (2) that these representations be amenable to the processing required in an interactive environment such as ours, where each reference needs to be fully resolved against the current context. In the following sections, we first present the representational requirements for this approach, and introduce our logical language (section 2). Then we discuss issues that arose in trying to formalize the logical representation of DEs with respect to (1) the context dependence of their denotations, and (2) the indeterminacy of denotation that arises with indefinite NPs. For context dependence, we use an intensional logic expression indexed by time and world indices (discussed in section 3).
This required us to extend Webber's rules to detect modal and other index-binding contexts. In representing DEs for indefinites (appearing as existential formulae in our meaning representation), we replaced Webber's EVOKE predicate with skolem constants for the independent case, where it does not contain a variable bound by a higher FORALL quantifier (section 4), and do not use EVOKE at all in the dependent case. In section 5 we introduce a generalized version of the rules for generating DEs for dependent quantifiers stemming from indefinite and definite NPs which overcomes some difficulties in capturing dependencies between discourse entities. In our multi-modal interface environment, it is important to represent the information on the computer screen as part of the discourse context, and allow references to screen entities that are not explicitly introduced via the text input. Section 6 briefly discusses some of these issues and shows how pointing actions are handled in Janus by generating appropriate discourse entities that are then used like other DEs. Finally, section 7 concludes and presents plans for future work.

This is, to our knowledge, the first implementation of Webber's DE generation ideas. We designed the algorithms and structures necessary to generate discourse entities from our logical representation of the meaning of utterances, and from pointing gestures, and currently use them in Janus's (Weischedel et al., 1987; BBN, 1988) pronoun resolution component, which applies centering techniques (Grosz et al., 1983; Sidner 1981, 1983; Brennan et al. 1987) to track and constrain references. Janus has been demonstrated in the Navy domain for DARPA's Fleet Command Center Battle Management Program (FCCBMP), and in the Army domain for the Air Land Battle Management Program (ALBM).

2 Meaning Representation for DE Generation

Webber found that appropriate discourse entities could be generated from the meaning representation of a sentence by applying rules to the representation that are strictly structural in nature, as long as the representation reflects certain crucial aspects of the sentence. This has the attractive feature that any syntactic formalism may be used if an appropriate semantic representation is produced. Some of the requirements (described in (Webber 1978, 1983)) on the representation are: (1) it must distinguish between definite and indefinite NPs and between singular and plural NPs, (2) it must specify quantifier scope, (3) it must distinguish between distributive and collective readings, (4) it must have resolved elided verb phrases, and (5) it must reflect the modifier structure of the NPs (e.g., via restricted quantification). An important implied constraint is that the representation must show one recognizable construct (a quantifier, for example) per DE-invoking noun phrase. These constructs are what trigger the DE generation rules. Insofar as a semantic representation reflects all of the above in its structure, structural rules will suffice for generating appropriate DEs, but otherwise information from syntax or other sources may be necessary. There is a trade-off between using a level of representation that shows the required distinctions, and the need to stay relatively close to the English structure in order to only generate DEs that are justified by the text.
For example, in Janus, in addition to quantifiers from NPs, the semantic representation has quantifiers for verbs (events), and possibly extra quantifiers introduced in representing deeper meaning or by the collective/distributive processing. Therefore, we check the syntactic source of the quantifiers to ensure that we only generate entities for quantifiers that arose from NPs (using the bound variable as an index into the parse tree).

Other than the caveat just discussed, the Janus meaning representation language WML (for World Model Language) (Hinrichs et al., 1987) meets all the other constraints for DE generation. WML is a higher-order intensional language that is based on a synthesis between the kind of language used in PHLIQA (Scha, 1976) and Montague's Intensional Logic (Montague, 1973). A newer version of WML (Stallard, 1988) is used in the BBN Spoken Language System (Boisen et al., 1989). The intensionality of WML makes it more powerful than the sample language Webber used in developing her structural rules. The scoping expressions in WML have a sort field (which restricts the range of the variable) and have the form:

  (B x S (P x))

where B is a quantifier such as FORALL or EXISTS, a term-forming operator like IOTA or SET, or the lambda abstraction operator LAMBDA. S is the sort, a set-denoting expression of arbitrary complexity specifying the range of x, and (P x) is a predication in terms of x. The formal semantics of WML assigns a type to each well-formed expression which is a function of the types of its parts. If expression E has type T, the denotation of E, given a model M and a time t and world w, is a member of the set which is T's domain. One use of types in our system is for enforcing selectional restrictions. The formation rules of WML, its type system, and its recursive denotation definition provide a formal syntax and semantics for WML.

3 Context Dependence of Discourse Entities

A formal semantics was assumed though not given for the sample logical language used by Webber. The initial descriptions (IDs) of DEs produced by her rules were stated in this language too, and thus are meant to denote the object the DE represents. For example, the rule which applies to the representation for independent definite NPs assigns to the resulting DE an ID which is the representation itself:

  (ι x S (P x)) => ID: (ι x S (P x))

where ι is Russell's iota operator. Thus, the ID for "the cat" in "I saw the cat" is (ι x cats T). (Since the body of the ι in this example has no additional predication on x, it is merely T, for TRUE.) However, because IDs are solely drawn from the meaning representation of the isolated text, they may not suffice to denote a unique object. Connection to prior discourse knowledge or information from further discourse may be necessary to establish a unique referent, or determining the referent may not even be necessary. For example, the ID for "the cat" would need to be evaluated in a context where there is only one salient cat in order to obtain a denotation.

Our system's representation of a DE is a structure containing several fields. The "logical-form" field contains a WML expression which denotes the object the DE describes (this corresponds roughly to Webber's ID). Given that WML is intensional, we are able to explicitly represent context dependence by having the logical form include an intensional core, plus tense, time, and world information (which includes discourse context) that grounds the intension so that it may be evaluated.
For example, the logical form for the DE corresponding to "the cat" in our system is

  ((INTENSION (IOTA x cats T)) time world)

where time, if unfilled, defaults to the present, and world defaults to the real world and current discourse state. The semantics of our IOTA operator makes it denotationless if there is not exactly one salient object that fits the description in the context; otherwise its denotation is that unique object. In our interactive system each reference needs to be fully resolved to be used successfully. If unknown information is necessary to obtain a unique denotation for an IOTA term, a simple clarification dialogue should ensue. (Clarification is not implemented yet; currently the set of all values fitting the IOTA is used.) An example using the time index is the noun phrase "the ships that were combat ready on 12/1/88", which would generate a DE with logical form:

  ((INTENSION (PAST (INTENSION (IOTA x (SETS ships) (COMBAT-READY x))))) 12/1/88 world)

Representing this time index in the logical form is crucial, since a later reference to it, made in a different time context, must still denote the original object. For example, "Are they deployed?" must have "they" refer to the ships that were combat ready on 12/1/88, not at the time of the latter utterance. In order to derive the proper time and world context for the discourse entities, we added structural rules that recognize intensional and index-binding logical contexts. Our DE generation algorithm uses these rules to gather the necessary information as it recurses into the logical representation (applying rules as it goes) so that when a regular rule fires on a language construct, the appropriate outer-scoping time/world bindings will get used for the generated DEs.

It should be noted that, as the discussion above suggests, a definite NP always gives rise to a new discourse entity in our system. If it is determined to be anaphoric, then a pointer to the DE it co-refers with (when found) will be added to its "refers-to" field, indicating they both denote the same object.

4 DEs for Independent Indefinite NPs

In Webber's work, the initial description (ID) for a DE stemming from an independent existential (i.e., with no dependencies on an outer FORALL quantifier) contained an EVOKE predicate. "I saw a cat":

  (EXISTS x cats (saw I x))

would generate a DE with ID:

  (ι x cats (& (saw I x) (EVOKE Sent x)))

"The cat I saw that was evoked by sentence Sent", where Sent is the parsed clause for "I saw a cat". The purpose of EVOKE was to make clear that although more than one cat may have been seen, the "a" picks out one in particular (which one we do not know except that it is the one mentioned in the utterance), and this is the cat which makes the EVOKE true. Any subsequent reference then picks out the same cat because it will access this DE. The semantics of the EVOKE predicate and the type of the Sent argument (which is syntactic in nature) were unclear, so we looked for a different formulation with better understood semantics.

Predicate logic already provides us with a mechanism for selecting arbitrary individuals from the domain via skolem functions (used as a mechanism for removing existentials from a formula while preserving satisfiability). Skolem functions have been used in computational linguistics to indicate quantifier scope, for example (VanLehn, 1978). Following a suggestion by R.
Scha, we use skolem functions in the logical form of the DE for the "indefinite individuals" introduced by independent existentials (Scha et al., 1987). For clarity and consistency with the rest of the language, we use a sorted skolem form, where the range of the function is specified. Since we use this for representing existentials that are independent, the function has no arguments and is thus equivalent to a sorted constant whose denotation is undetermined when introduced. (In this sense it is consistent with Karttunen's (1976) and Kamp's (1984) view of the indefinite's role as a referential constant, but unlike Kamp, here the sentence's meaning representation is separate from the representation of the evoked entity.) Thus we introduced a new operator to WML named SKOLEM, for expressions of the form (SKOLEM n <sort>), where n is an integer that gets incremented for each new skolem created, as a way of naming the skolem function. For the example above, the core logical form (stripping the outer intension and indices) for the DE of "a cat" would be:

  (SKOLEM 1 (SET x cats (saw I x)))

denoting a particular cat from the set of all the cats I saw. The type of a SKOLEM expression is well-defined and is given by the following type rule:

  TYPEOF (SKOLEM INTEGERS (SETS a)) = a

where INTEGERS is the type for integers, and (SETS a) is the type of sets whose members have type a. This type rule says that when the first argument of SKOLEM is of type INTEGERS, and the second is a set with elements of type a, then the type of the SKOLEM expression is a. Therefore, the type of the above example is cats.

The explicit connection to the originating sentence which the EVOKE predicate provided is found in our scheme outside of the logical representation, by having a pointer in the DE's structure to the parse tree NP constituent, and to the structure representing the communicative act performed by the utterance (in the fields "corresponding-constituent" and "originating-communicative-act", respectively). These connections are used by the pronoun resolution algorithms which make use of syntactic information.

Does the denotation of a skolem constant ever get determined? In narrative, and even in conversation, identifying the individual referred to by the indefinite NP frequently doesn't occur. However, in our interactive system, each reference must be fully resolved. When the evaluation component of Janus determines a successful value to use for the existential in the text's logical form, the appropriate function denotation for SKOLEM n gets defined, and the "extension" field is set for the discourse entity.

Note that many interesting issues come up in the treatment of reference to these indefinite entities in a real system. For example, cooperative responses by the system introduce new entities that must be taken into account. If the user asks "Is there a carrier within 50 miles of Hawaii?", a cooperative "There are two: Constellation and Kennedy" (as opposed to just "Yes") must add those two carriers as entities, which now overshadow the singular skolem entity for "a carrier within 50 miles of Hawaii". On the other hand, a "No" answer should block any further reference to the carrier skolem, since its denotation is null, while still allowing a reference to a class entity derived from it, as in "Is there one near San Diego?" where one refers to the class carriers. The treatment presented works for straightforward cases of independent indefinites.
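The SKOLEM type rule composes mechanically with the rest of the type system. The following is a minimal sketch (ours, not Janus code) of a TYPEOF function over WML terms written as nested Python tuples; sort names stand for their own types, and the typing of predications inside SET bodies is omitted.

    def typeof(expr):
        # Toy WML type assignment for the SET and SKOLEM constructs.
        if isinstance(expr, str):
            return expr                      # a sort name, e.g. "cats", names its type
        if expr[0] == "SET":                 # (SET x cats (saw I x))
            _, _var, sort, _body = expr
            return ("SETS", typeof(sort))    # a set whose members have the sort's type
        if expr[0] == "SKOLEM":              # (SKOLEM 1 (SET x cats (saw I x)))
            _, _n, s = expr
            t = typeof(s)
            assert t[0] == "SETS"            # TYPEOF (SKOLEM n S) = a,  where
            return t[1]                      #   TYPEOF (S) = (SETS a)
        raise ValueError(expr)

    a_cat = ("SKOLEM", 1, ("SET", "x", "cats", ("saw", "I", "x")))
    print(typeof(a_cat))                     # -> "cats"

So the skolem term for "a cat" is typed exactly like an individual cat, which is what lets it appear wherever a cats-denoting term is expected.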
Trickier cases like donkey sentences (Kamp, 1984; Webber, 1981) and interactions with negation have not yet been addressed.

5 Dependent NPs

5.1 Dependent Indefinite NPs

Our work uncovered a need for modifications in Webber's structural rules for quantifiers from indefinite and definite NPs which have dependencies on variables bound directly or indirectly by an outer FORALL quantifier. In this section we address the case of dependent existentials arising from indefinite NPs. We first argue that the predicate EVOKE is not needed in this context. Then we point out the need for generalizing the rule to take into account not just FORALL, but all scoping operators that intervene between the outer FORALL and the inner EXISTS. Finally, we show that the dependencies between discourse entities must be explicitly maintained in the logical forms of newly created DEs that depend on them.

Webber's rules are designed to apply from the outermost quantifier in; each time a rule is applied, the remaining logical form is modified to be in terms of the just created DE. For example, "Every boy saw a girl he knows" has logical form (for the bound pronoun reading):

  (FORALL x boys
    (EXISTS y (SET y' girls (knows x y'))
      (saw x y)))

The first step is to apply the rule for an independent universal quantifier:

  R0: (FORALL x S (P x)) => de: S

This application yields the entity for "the set of all boys":

  DE1: boys

and we rewrite the logical form to be:

  (FORALL x DE1
    (EXISTS y (SET y' girls (knows x y'))
      (saw x y)))

The steps shown so far are consistent with both Webber's and our approach. Now we want to apply the general rule for existentials within the body of a distributive, in order to generate an entity for the relevant set of girls. Webber uses Rule 3 in (Webber, 1983) (here corrected to position the existential's sort S inside the scope of the outer quantifiers in the generated DE):

  R3: (FORALL y1...yk (EXISTS x S (P x))) =>
      de: (SET x things (EXISTS y1...yk (& (member x S) (P x) (EVOKE Sent x))))

where (FORALL y1...yk ...) is shorthand for (FORALL y1 de1 (... (FORALL yk dek ...) ...)), analogously for EXISTS, and S or P depends directly or indirectly on y1...yk. Now the first DE we want to generate with this rule is for "the set of girls, each of which is known by some boy in DE1, and was seen by him". Does each girl in the set also have to satisfy an EVOKE predicate? It seems that any future reference back to the set formed by the existential seeks to obtain all items fitting the description, not some subset constrained by EVOKE. For example, if the example above is followed by "the girls tried to hide", taking "the girls" anaphorically, one wants all the girls seen by some boy in DE1 that knows them, no less. Our core logical representation for the set of girls is thus:

  DE2: (SET y girls (EXISTS x DE1 (& (knows x y) (saw x y))))

So the modified rule used in producing DE2 is:
As long as the outermost quantifier is a FORALL, any other depend- ent scoping expression within it will generate a set- denoting DE and will behave as a distributive environ- ment as far as any more deeply embedded expres- sions are concerned. In other words, the distribu- tiveness chains along the dependent quantifiers. To see this, consider the more embedded example "Every boy gave a girl he knew a peach she wanted", where there is an intervening existential between the outer FORALL and innermost EXISTS. The core logi- cal form for this sentence is: (FORALL x boye (EXISTS y (SET y' girls (knowe x ¥' ) ) (EXISTS z (SET z' ~aohea (wan*:a y z' ) ) (gave z y z)))) DE 1 would be as above. Using rule R3' DF_. 2 be- comes: DE 2 : (SET y girle (EXISTS x DE I (a (knowe x y) (EXISTS z (SET z' peaches (wants Y =') ) (gave x y =))))) "The set of girls, each of which is known by some boy in DE 1, and got a peach she wanted from that boy." Now the peach quantifier should generate a set DE in terms of DE 1 and DE 2. Applying R3' gives us: DE3: (SET z peachee (EXISTS x DE I (EXISTS y DE 2 (a (wanta y z) (gave x y z))))) "The set of peaches z such that there is a girl in DE 2 (who is known by some boy in DE I, and who got some peach she wan.tpd from the boy), who wants z, and who got it from some boy in DE 1''. Now a third and final problem becomes apparent: for the general case of arbitrary embedding of de- pendent quantifiers we generate a DE (e.g., DF_,3) de- pendent on other DEs from the outer quantifiers, but the dependencies between those DEs (e.g., DE 1 and DE2) are not maintained. This is counter-intuitive, and also leads to an under-specified set DE. In the peaches example above, envision the situation where a boy b I gave out two peaches Pl and P2 : one to a girl gl he knew, and one to a girl g2 he didn't know, who also got a peach P3 from another boy b 2 who did know her. These are the facts of interest in this scenario: I. (& (gava b I gl p1) (know b I gl) (want= gl Pl)) 2. (& (gave blg2P2) (NOT (know bl gE) ) (wanta gEPE) ) 3. (& (gave bEgEp 3) (know bEgE) (wants g2 P3 ) ) Since b 1 and b 2 are in DE 1 (due to facts 1 and 3), and g2 is in DE 2 (due to fact 3), then P2 is in DE 3 (due to fact 2 and according to the DF_. 3 logical form above). But P2 should notbe in DE 3, since P2 was NOT given to a girl by a boy she knew. The set of peaches obtained for DE 3 is too large. The problem would not arise if in the DE 3 logical form, the variables ranging over DF-- 2 were appropriately connected to DE 1 using the dependent restriction present in the original for- mula (knows xy). A correct DE 3 is: DE 3 : (SET z ~:Hmache,= (EXISTS x DE z (EXISTS y (SET y' DE 2 (knows x y' ) ) (& (want= y =) (gave x y z))))) To be able to do this, the rule-application algorithm must be modified to include the restriction information (for dependent restrictions) when the formula gets rewritten in terms of a newly created DE. Therefore the final generalized rule, which includes other scop- ing operators and works on properly connected DEs is as follows: R3'' : (¥ORALL v I S I (Q2 v2 S2 "'" Q. v S= (EXISTS x S (P x)))) => de: (SET x S t (EXISTS v I S I ...v= S (~ (mem~r x S) (~ x)))) where S or P depend directly or indirectly on v 1...v n, Qi may be FORALL, EXISTS, or IOTA, and the scop- ing operators outside the inner EXISTS have already been processed by any appropriate rules that have replaced their original sorts by the Sis, which are in terms of generated DEs and explicitly show any DE dependencies. 
The right hand side is as before, with existentials picking out elements from each outer quantifier. 247 act. Since "them" and *it" have different number re- quirements, there is no ambiguity and the anaphor resolution module resolves "them" to the DE cor- responding to "the C1 carriers in the Indian Ocean" and "it" to the DE for Kennedy. We are currently working on having system-initiated actions also generate entities. 7 Conclusions and Further Work Webber's general approach to discourse entity generation from a logical representation proved very useful in our efforts. We were able to recast her basic ideas in our logical framework, and currently use the generated DEs extensively. The fact that the generation of DEs is done via structural rules operating on a semantic represen- tation provided a degree of modularity that allowed our pronoun resolution component to work automatically when we combined a new syntactic component with our semantic and discourse com- ponent (replacing an ATN by a unification grammar, in an independently motivated experiment). We are cur- rently starting to port the DE generation component to the BBN Spoken Language System (Boisen et al., 1989), and plan to integrate it with the intra-sentential mechanisms in (Ingria and Stallard, 1989). The fact that entity representations are mostly semantic in na- ture, not syntactic, also facilitated the addition and use of non-linguistic entities in a uniform way. There are several areas that we would like to study to extend our current treatment. We want to address the interactions between centering phenomena and non-linguistic events that affect dis- course focus, such as changing contexts via a menu selection in an expert system. Our paraphrasing component (Meteer and Shaked, 1988) already uses the discourse entities to a limited extent. One area of future work is to have the language generator make more extensive use of them, so it can smoothly refer to focused objects. Finally, although quantified expressions are al- ready generated in Janus for events implicit in many verbs, they are not being used for DEs. We would like to address the problem of event reference and its interaction with temporal information, using ideas such as those in (Webber, 1988) and in the special issue of ComputationaJ Linguistics on tense and aspect (Vol. 14, Number 2 June 1988). 8 Acknowledgments The work presented here was supported under DARPA contract #N00014-85-C-0016. The views and conclusions contained in this document are those of the author and should not be interpreted as neces- sarily representing the official policies, either ex- pressed or implied, of the Defense Advanced Research Projects Agency or of the United States Government. The author would like to thank Dave Stallard for invaluable discussions during the writing of this paper. Thanks also to Remko Scha, Lance Ramshaw, Ralph Weischedel, and Candy Sidner. References BBN Systems and Technologies Corp. (1988). A Guide to IRUS-II Application Development in the FCCBMP (BBN Report 6859). Cambridge, MA: Bolt Beranek and Newman Inc. Boisen, S., Chow Y., Haas, A, Ingria, R., Roucos, S., Scha, R., Stallard, D., and Vilain, M. (1989). Integration of Speech and Natural Language: Final Report (BBN Report 6991 ). BBN Systems and Technologies Corp. Brennan, Susan E., Friedman, Marilyn W., and Pol- lard, Carl J. (1987). A Centering Approach to Pronouns. Proceedings of the 25th Annual Meeting of the ACL. ACL. Grosz, Barbara J., and Sidner, Candace L. (1986). 
Attention, Intentions, and the Structure of Dis- course. Computational Linguistics, 12(3), 175-204. Grosz, Barbara J., Joshi, Aravind K., Weinstein, Scott. (1983). Providing a Unified Account of Definite Noun Phrases in Discourse, Proceedings of the 21st Annual Meeting of the ACL. Cambridge, MA: ACL. Hinrichs, E.W., Ayuso, D.M., and Scha, R. (1987). The Syntax and Semantics of the JANUS Semantic Interpretation Language. In Research and Development in Natural Lan- guage Understanding as Part of the Strategic Computing Program, Annual Technical Report December 1985 . December 1986. BBN Laboratories, Report No. 6522. Ingria, Robert J.P., and Stallard, David. (1989). A Computational Mechanism for Pronominal Ref- erence. Proceedings of the 27th Annual Meet- ing of the ACL. ACL. Kamp, Hans. (1984). A Theory of Truth and Seman- tic Representation. In J. Groenendijk. T.M.V. Janssen, and M. Stokhof (Eds.), Truth, Inter- pretation and Information, Selected Papers from the Third Amsterdam Colloquium. Dordrecht: Foris Publications. Karttunen, Laud. (1976). Discourse Referents. In J. D. McCawley (Ed.), Syntax and Semantics, Volume 7. New York: Academic Press. Meteer, Marie and Shaked. Varda. (1988). Strategies for Effective Paraphrasing. Proceedings of COLING-88, Budapest, Hungary, August 22-27. COLING. 248 5.2 Dependent Definite NPs Some of the problems described in the previous section also arise for the rule to handle dependent definite NPs. Definite NPs are treated as IOTA terms in WML. (Webber's logical language in (Webber, 1978) used a similar t. The treatment was later changed (Webber, 1983) to use the definite existential quantifier "Existsl', but this difference is not relevant for the following.) Replacing IOTA for t in Webber's (1978) rule 5: R5: (FOt~,,L Y~.'''Yk (P (IOTA x S (~ x)))) => de: (SET z things (EXISTS YI"" "Yk (m z (IOTA x S (R =))))) where Yl'"Yk are universal quantifiers over DEs as in R3 above, and S or R depend directly or indirectly on Yl"'Yk" The second and third extensions discussed in the previous section are needed here too: generalizing the quantifiers that outscope the inner existential, and keeping the dependencies among the DEs explicit to avoid under-specified sets. An example of an under- specified set arises when the dependent IOTA depends jointly on more than one outer variable; for example, in "Every boy gave a girl he knew the peach they selected", each peach depends on the selection by a boy and a girl together. Take a scenario analogous to that in the previous section, with the facts now as follows (replacing "selected" for "wants*): 1. (& (gave by gl P~) (know b r gl) (8ele,=ted (SETOF bl gl) pr ) ) 2. (& (gave b t g2P2) (NOT (know b I g2) ) (=elected (SETO¥ b 1 g2) P2) ) " 3. (& (gave b292P3) (know b292) (=ele¢ted (SETOF b292) P3)) By an analogous argument as before, using R5, the set of peaches will incorrectly contain P2' given by a boy to a girl who selected it with him, but whom he did not know. The modified rule is analogous to R3" in the previous section: RS' : (FORALL v I S I (Q= v z s= ... O~ v= s= (p (IOTA x s (R x))))) => de: (SET z S t (EXISTS v I S I ...v S (= z (IOTA x S (R x)}))) Note that this problem of under-specified sets does not arise when the dependency inside the IOTA is on one variable, because the definite "the" forces a one-to-one mapping from the possible assignments of the single outer variable represented in the IOTA to the IOTA denotations. 
If we use the example, "Every boy gave a girl he knew the peach she wanted", with logical form: (FORALL x boys (EXISTS y (SET y' gi=is (know= x y' ) ) (gave x y (IOTA z pea=hem (want,, y =) ) ) ) ) there is such a mapping between the set of girls in the appropriate DE 2 (those who got the peach they wanted from a boy they knew) and the peaches in DE 3 obtained via R5' (the peaches that some girl in DE 2 wanted). Each gid wants exactly one peach, so facts 2 and 3, where the same girl receives two dif- ferent peaches, cannot occur. So the definite ensures that no scenario can be constructed containing extra items, as long as there is only one outer variable in the inner iota. However in the joint dependency ex- ample above using "selected", the one-to-one map- ping is between boy-girl pairs and peaches, so the relationship between the boys and the girls becomes an integral part of determining the correct DE 3. 6 Non-Linguistic Discourse Entities In a dialogue between persons, references can be made not only to linguistically-introduced objects, but also to objects (or events, etc.) that become salient in the environment through some non-linguistic means. For example, a loud noise may prompt a question "What was that ?", or one may look at or point to an object and refer to it, "What's wrong with it ?". It seems an attention-drawing event normally precedes such a reference. In the Janus human-computer environment, non- linguistic attention-drawing mechanisms that we have identified so far include pointing actions by the user, and highlighting (by the system) of changes on the screen as a response to a request (or for other reasons). The appearance of answers to questions also draws the user's attention. We incorporated these into generalized notion of a "communicative act" which may be linguistic in nature (English input or generated English output), a pointing gesture by the user, or some other system-initiated action. Any com- municative act may give rise to DEs and affect the focused entities in the discourse. We have implemented procedures to handle pointing actions by generating discourse entities which are then used in the pronoun resolution com- ponent uniformly with the others. For example, after the request *Show me the C1 carriers in the Indian Ocean" the system will display icons on the color monitor representing the carriers. The user can then say "Which of them are within 200 miles of it? <point with mouse to Kennedy>*. Before the sentence gets processed, a discourse entity with the logical form (IOTA x carriers (nameof x "Kennedy")) • will be created and added to the list of entities currently in focus (the "forward looking centers* of the last linguis- tic act); the DE's "originating-communicative-act" field will point to a newly created "pointing" communicative 249 Montague, Richard. (1973). The Proper Treatment of Quantification in Ordinary English. In J. Hintikka, J. Moravcsik and P. Suppes (Eds.), Approaches to Natural Language. Dordrecht: Reidel. Scha, Remko J.H. (1976). Semantic Types in PHLIQAI. Coling 76 Preprints. Ottawa, Canada. Scha, Remko J.H., Bruce, Bertram C., and Polanyi, Livia. (1987). Discourse Understanding. In Encyclopedia of Artificial Intelligence. John Wiley & Sons, Inc. Sidner, Candace L. (1981). Focusing for the Inter- pretation of Pronouns. American Journal of Computational Linguistics, 7(4), 217-231. Sidner, Candace L. (1983). Focusing in the Com- prehension of Definite Anaphora. In M. Brady and R. C. Berwick (Eds.), Computational Models of Discourse. 
Cambridge, MA: MIT Press. Stallard, David G. (1988). A Manual for the Logical Language of the BBN Spoken Language Sys- tem. Unpublished. Kurt VanLehn. (1978). Determining the Scope of English Quantifiers (Tech. Rep. 483). MIT Ar- tificial Intelligence Laboratory. Webber, Bonnie L. (1978). A Formal Approach to Discourse Anaphora (BBN Report 3761). Cambridge, MA: Bolt Beranek and Newman. Webber, Bonnie L. (1981). Discourse Model Syn- thesis: Preliminaries to Reference. In Joshi, Webber, and Sag (Eds.), Elements of Dis. course Understanding. Cambridge University Press. Webber, Bonnie L. (1983). So What Can We Talk About Now? In Brady and Berwick (Eds.), Computational Models of Discourse. MIT Press. Webber, Bonnie L. (1988). Discourse Deixis: Refer- ence to Discourse Segments. Proceedings of the 26th Annual Meeting of the ACL. ACL. Weischedel, R., Ayuso, D., Haas, A., Hinrichs, E., Scha, R., Shaked, V., and Stallard, D. (1987). Research and Development in Natural Lan- guage Understanding as Part of the Strategic Computing Program, Annual Technical Report December 1985- December 1986 (BBN Report 6522). Cambridge, MA: Bolt Beranek and Newman. 250
1989
30
EVALUATING DISCOURSE PROCESSING ALGORITHMS

Marilyn A. Walker
Hewlett Packard Laboratories
Filton Rd., Bristol, England BS12 6QZ, U.K.
& University of Pennsylvania
lyn%lwalker@hplb.hpl.hp.com

Abstract

In order to take steps towards establishing a methodology for evaluating Natural Language systems, we conducted a case study. We attempt to evaluate two different approaches to anaphoric processing in discourse by comparing the accuracy and coverage of two published algorithms for finding the co-specifiers of pronouns in naturally occurring texts and dialogues. We present the quantitative results of hand-simulating these algorithms, but this analysis naturally gives rise to both a qualitative evaluation and recommendations for performing such evaluations in general. We illustrate the general difficulties encountered with quantitative evaluation. These are problems with: (a) allowing for underlying assumptions, (b) determining how to handle underspecifications, and (c) evaluating the contribution of false positives and error chaining.

1 Introduction

In the course of developing natural language interfaces, computational linguists are often in the position of evaluating different theoretical approaches to the analysis of natural language (NL). They might want to (a) evaluate and improve on a current system, (b) add a capability to a system that it didn't previously have, (c) combine modules from different systems. Consider the goal of adding a discourse component to a system, or evaluating and improving one that is already in place. A discourse module might combine theories on, e.g., centering or local focusing [GJW83, Sid79], global focus [Gro77], coherence relations [Hob85], event reference [Web86], intonational structure [PH87], system vs. user beliefs [Pol86], plan or intent recognition or production [Coh78, AP86, Sid81], control [WS88], or complex syntactic structures [Pri85]. How might one evaluate the relative contributions of each of these factors or compare two approaches to the same problem? In order to take steps towards establishing a methodology for doing this type of comparison, we conducted a case study. We attempt to evaluate two different approaches to anaphoric processing in discourse by comparing the accuracy and coverage of two published algorithms for finding the co-specifiers of pronouns in naturally occurring texts and dialogues [Hob76b, BFP87]. Thus there are two parts to this paper: we present the quantitative results of hand-simulating these algorithms (henceforth Hobbs algorithm and BFP algorithm), but this analysis naturally gives rise to both a qualitative evaluation and recommendations for performing such evaluations in general. We illustrate the general difficulties encountered with quantitative evaluation. These are problems with: (a) allowing for underlying assumptions, (b) determining how to handle underspecifications, and (c) evaluating the contribution of false positives and error chaining.

Although both algorithms are part of theories of discourse that posit the interaction of the algorithm with an inference or intentional component, we will not use reasoning in tandem with the algorithm's operation. We have made this choice because we want to be able to analyse the performance of the algorithms across different domains. We focus on the linguistic basis of these approaches, using only selectional restrictions, so that our analysis is independent of the vagaries of a particular knowledge representation. Thus what we are evaluating is the extent to which these algorithms suffice to narrow the search of an inference component¹. This analysis gives us

¹But note the definition of success in section 2.1.
Thus what we are evaluating is the extent to which these algorithms suffice to narrow the search of an inference component I. This analysis gives us l But note the definition of success in section 2.1. 251 some indication of the contribution of syntactic con- straints, task structure and global focus to anaphoric processing. The data on which we compare the algorithms are important if we are to evaluate claims of general- ity. If we look at types of NL input, one clear di- vision is between textual and interactive input. A related, though not identical factor is whether the language being analysed is produced by more than one person, although this distinction may be con- fluted in textual material such as novels that contain reported conversations. Within two-person interac- tive dialogues, there are the task-oriented master- slave type, where all the expertise and hence much of the initiative, rests with one person. In other two- person dialogues, both parties may contribute dis- course entities to the conversation on a more equal basis. Other factors of interest are whether the di- alogues are human-to-human or human-to-computer, as well as the modality of communication, e.g. spoken or typed, since some researchers have indicated that dialogues, and particularly uses of reference within them, vary along these dimensions [Coh84, Tho80, GSBC86, D J89, WS89]. We analyse the performance of the algorithms on three types of data. Two of the samples are those that Hobbs used when developing his algorithm. One is an excerpt from a novel and the other a sample of jour- nalistic writing. The remaining sample is a set of 5 human-human, keyboard-mediated, task-oriented di- alogues about the assembly of a plastic water pump [Coh84]. This covers only a subset of the above types. Obviously it would be instructive to conduct a similar analysis on other textual types. 2 Quantitative Evaluati0n-Black Box 2.1 The Algorithms When embarking on such a comparison, it would be convenient to assume that the inputs to the algo- rithms are identical and compare their outputs. Un- fortunately since researchers do not even agree on which phenomena can be explained syntactically and which semantically, the boundaries between two mod- ules are rarely the same in NL systems. In this case the BFP centering algorithm and Hobbs algorithm both make ASSUMPTIONS about other system com- ponents. These are, in some sense, a further specifi- cation of the operation of tile algorithms that must be made in order to hand-simulate the algorithms. There are two major sets of assumptions, based on discourse segmentation and syntactic representation. We attempt to make these explicit for each algorithm and pinpoint where the algorithms might behave dif- ferently were these assumptions not well-founded. In addition, there may be a number of UNDER- SPECIFICATIONS in the descriptions of the algorithms. These often arise because theories that attempt to categorize naturally occurring data and algorithms based On them will always be prey to previously un- encountered examples. For example, since the BFP salience hierarchy for discourse entities is based on grammatical relation, an implicit assumption is that an utterance only has one subject. However the novel Wheels has many examples of reported dialogue such as She continued, unperturbed, ~Mr. Vale quotes the Bible about air pollution." One might wonder whether the subject is She or Mr. Vale. 
In some cases, the algorithm might need to be further specified in order to be able to process any of the data, whereas in others they may just highlight where the algorithm needs to be modified (see section 3.2). In general we count underspecifications as failures.

Finally, it may not be clear what the DEFINITION OF SUCCESS is. In particular, it is not clear what to do in those cases where an algorithm produces multiple or partial interpretations. In this situation a system might flag the utterance as ambiguous and draw in support from other discourse components. This arises in the present analysis for two reasons: (1) the constraints given by [GJW86] do not always allow one to choose a preferred interpretation, (2) the BFP algorithm proposes equally ranked interpretations in parallel. This doesn't happen with the Hobbs algorithm because it proposes interpretations in a sequential manner, one at a time. We chose to count as a failure those situations in which the BFP algorithm only reduces the number of possible interpretations, but the Hobbs algorithm stops with a correct interpretation. This ignores the fact that Hobbs may have rejected a number of interpretations before stopping. We also have not needed to make a decision on how to score an algorithm that only finds one interpretation for an utterance that humans find ambiguous.

2.1.1 Centering algorithm

The centering algorithm as defined by Brennan, Friedman and Pollard (the BFP algorithm) is derived from a set of rules and constraints put forth by Grosz,
The present concern is not with whether there might be a grammar of discourse that determines this struc- ture, or whether it is derived from the cues that cooperative speakers give hearers to aid in process- ing. Since centering is a local phenomenon and is intended to operate within a segment, we needed to deduce a segmental structure in order to analyse the data. Speaker's intentions, task structure, cue words like O.K. now.., intonational properties of utterances, coherence relations, the scoping of modal, operators, and mechanisms for shift'ing control between dis- course participants have all been proposed as ways of determining discourse segmentation [Gro77, GS86, Rei85, PH87, HL87, Hob78, Hob85, Rob88, WS88]. Here, we use a combination of orthography, anaphora distribution, cue words and task structure. The rules are" • In published texts, a paragraph is a new seg- ment unless the first sentence has a pronoun in subject position or a pronoun where none of the preceding sentence-internal noun phrases match its syntactic features. • In the task-oriented dialogues, the action PICK- UP marks task boundaries hence segment bound- aries. Cue words like nezt, then, and now also mark segment boundaries. These will usually co- occur but either one is sufficient for marking a segment boundary. BFP never state that cospecifiers for pronouns within the same segment are preferred over those in previous segments, but this is an implicit assump- tion, since this line of research is derived from Sid- ner's work on local focusing. Segment initial utter- ances therefore are the only situation where the BFP algorithm will prefer a within-sentence noun phrase as the cospecifier of a pronoun. 2.1.2 Hobbs ~ algorithm The Hobbs algorithm is based on searching for a pronoun's co-specifier in the syntactic parse tree of input sentences [Hob76b]. We reproduce this algo- rithm in full in the appendix along with an example. Hobbs algorithm operates on one sentence at a time, but the structure of previous sentences in the dis- course is available. It is stated in terms of searches on parse trees. When looking for an intrasentential antecedent, these searches are conducted in a left-to- right, breadth-first manner. However, when looking for a pronoun's antecedent within a sentence, it will go sequentially further and further up the tree to the left of the pronoun, and that failing will look in the previous sentence. Hobbs does not assume a segmen- tation of discourse structure in this algorithm; the algorithm will go back arbitrarily far in the text to find an antecedent. In more recent work, Hobbs uses the notion of COHERENCE RELATIONS to structure the discourse [HM87]. The order by which Hobbs' algorithm traverses the parse tree is the closest thing in his framework to pre- dictions about which discourse entities are salient. In the main it prefers co-specifiers for pronouns that 253 are within the same sentence, and also ones that are closer to the pronoun in tile sentence. This amounts to a claim that different discourse entities are salient, depending on the position of a pronoun in a sentence. When seeking an intersentential co- specification, Hobbs algorithm searches the parse tree of the previous utterance breadth-first, from left to right. This predicts that entities realized in subject position are more salient, since even if an adjunct clause linearly precedes the main subject, any noun phrases within it will be deeper in the parse tree. 
This also means that objects and indirect objects will be among the first possible antecedents found, and in general that the depth of syntactic embedding is an important determiner of discourse prominence. Turning to the assumptions about syntax, we note that Hobbs assumes that one can produce the cor- rect syntactic structure for an utterance, with all ad- junct phrases attached at the proper point of the parse tree. In addition, in order to obey linguistic constraints on coreference, the algorithm depends on the existence of a N parse tree node, which denotes a noun phrase without its determiner (See the ex- ample in the Appendix). Hobbs algorithm procedu- rally encodes contra-indexing constraints by skipping over NP nodes whose N node dominates the part of the parse tree in which the pronoun is found, which means that he cannot guarantee that two contra- indexed pronouns will not choose the same NP as a co-specifier. Hobbs also assumes that his algorithm can some- how collect discourse entities mentioned alone into sets as co-specifiers of plural anaphors. Hobbs dis- cusses at length other assumptions that he makes about the capabilities of an interpretive process that operates before the algorithm [Hob76b]. This in- cludes such things as being able to recover syntac- tically recoverable omitted text, such as elided verb phrases, and the identities of the speakers and hearers in a dialogue. 2.1.3 Summary A major component of any discourse algorithm is the prediction of which entities are salient, even though all the factors that contribute to the salience of a dis- course entity have not been identified [Pri81, Pri85, BF83, HTD86]. So an obvious question is when the two algorithms actually make different predictions. The main difference is that the choice of a co-specifier for a pronoun in the Hobbs algorithm depends in part on the position of that pronoun in the sentence. In the centering framework, no matter what criteria one uses to order the forward-centers list, pronouns take the most salient entities as antecedents, irrespective of that pronoun's position. Hobbs ordering of enti- ties from a previous utterance varies from BFP in that possessors come before case-marked objects and indirect objects, and there may be some other differ- ences as well but none of them were relevant to the analysis that follows. The effects ot" some of the assumptions are mea- surable and we will attempt to specify exactly what these effects are, however some are not, e.g. we can- not measure the effect of Hobbs' syntax assumption since it is difficult to say how likely one is to get the wrong parse. We adopt the set collection assumption for both algorithms as well as the ability to recover the identity of speakers and hearers in dialogue. 2.2 Quantitative Results of the Algo- rithms The texts on which the algorithms are analysed are the first chapter of Arthur Hailey's novel Wheels, and the July 7, 1975 edition of Newsweek. The sentences in Wheels are short and simple with long sequences consisting of reported conversation, so it is similar to a conversational text. The articles from Newsweek are typical of journalistic writing. For each text, the first 100 occurrences of singular and plural third- person pronouns were used to test the performance of the algorithms. The task-dialogues contain a total of 81 uses of it and no other pronouns except for I and you. 
In the figures below, note that possessives like his are counted along with he, and that accusatives like him and her are counted as he and she.2

              N     Hobbs   BFP
  Wheels      100     88     90
  Newsweek    100     89     79
  Tasks        81     51     49

Figure 1: Number correct for both algorithms for Wheels, Newsweek and Task Dialogues

2Hobbs reports his algorithm's performance and the examples it fails on in [Hob76b, Hob76a]. The numbers reported here vary slightly from those. This is probably due to a discrepancy in exactly what the data set consisted of.

We performed three analyses on the quantitative results. A comparison of the two algorithms on each data set individually, and an overall analysis on the three data sets combined, revealed no significant differences in the performance of the two algorithms (X² = 3.25, not significant). In addition, for each algorithm alone we tested whether there were significant differences in performance for different textual types. Both of the algorithms performed significantly worse on the task dialogues (X² = 22.05 for Hobbs, X² = 21.55 for BFP, p < 0.05).

We might wonder with what confidence we should view these numbers. A significant factor that must be considered is the contribution of FALSE POSITIVES and ERROR CHAINING. A FALSE POSITIVE occurs when an algorithm gets the right answer for the wrong reason. A very simple example of this phenomenon is illustrated by this sequence from one of the task dialogues:

  Exp1: Now put IT in the pan of water.
  Exp2: Stand IT up.
  Exp3: Pump the little handle with the red cap on IT.
  Cli1: ok
  Exp4: Does IT work??

The first it in Exp1 refers to the pump. Hobbs' algorithm gets the right antecedent for it in Exp3, which is the little handle, but then fails on it in Exp4, whereas the BFP algorithm has the pump centered at Exp1 and continues to select that as the antecedent for it throughout the text. This means BFP gets the wrong co-specifier in Exp3, but this error allows it to get the correct co-specifier in Exp4.

Another type of false positive example is "Everybody and HIS brother suddenly wants to be the President's friend," said one aide. Hobbs gets this correct as long as one is willing to accept that Everybody is really the antecedent of his. It seems to me that this might be an idiomatic use.

ERROR CHAINING refers to the fact that once an algorithm makes an error, other errors can result. Consider:

  Cli1: Sorry no luck.
  Exp1: I bet IT's the stupid red thing.
  Exp2: Take IT out.
  Cli2: Ok. IT is stuck.

In this example, once an algorithm fails at Exp1 it will fail on Exp2 and Cli2 as well, since the choices of a cospecifier in the following examples are dependent on the choice in Exp1.

It isn't possible to measure the effect of false positives, since in some sense they are subjective judgements. However, one can and should measure the effects of error chaining: simply reporting numbers corrected for error chaining would be misleading, but if the error that produced an error chain can be corrected, then the algorithm might show a significant improvement. In this analysis, error chains contributed 22 failures to Hobbs' algorithm and 19 failures to BFP.
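The chi-square statistic used above is worth making concrete. A sketch of ours (not the paper's code) for a 2x2 correct/incorrect table follows; the text does not say exactly how its contingency tables were constructed, so the function is only illustrative:

  (defun chi-square-2x2 (a b c d)
    ;; Pearson chi-square for the table ((a b) (c d)), e.g. correct
    ;; and incorrect counts for two algorithms.
    (let ((n (+ a b c d)))
      (/ (* n (expt (- (* a d) (* b c)) 2))
         (* (+ a b) (+ c d) (+ a c) (+ b d)))))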
3 Qualitative Evaluation -- Glass Box

The numbers presented in the previous section are intuitively unsatisfying. They tell us nothing about what makes the algorithms more or less general, or how they might be improved. In addition, given the assumptions that we needed to make in order to produce them, one might wonder to what extent the data is a result of these assumptions. Figure 1 also fails to indicate whether the two algorithms missed the same examples or are covering a different set of phenomena, i.e. what the relative distribution of the successes and failures is. But having done the hand simulation in order to produce such numbers, all of this information is available. In this section we will first discuss the relative importance of various factors that go into producing the numbers above, then discuss whether the algorithms can be modified, since the flexibility of a framework in allowing one to make modifications is an important dimension of evaluation.

3.1 Distributions

Figures 2, 3 and 4 show, for each pronominal category, the distribution of successes and failures for both algorithms.

           Both   Neither   Hobbs only   BFP only
  HE        66       1          1            6
  SHE        6       3          3
  THEY       5       1          1
  Total     83       5          5            7

Figure 2: Distribution on Wheels

Since the main purpose of evaluation must be to improve the theory that we are evaluating, the most interesting cases are the ones on which the algorithms' performance varies and those that neither algorithm gets correct. We discuss these below.

           Both   Neither   Hobbs only   BFP only
  HE        53                  8            2
  IT        11       5          4            1
  THEY      13       3
  Total     77       8         12            3

Figure 3: Distribution on Newsweek

           Both   Neither   Hobbs only   BFP only
  IT        48      29          3            1

Figure 4: Distribution on Task Dialogues

3.1.1 Both

In the Wheels data, 4 examples rest on the assumption that the identities of speakers and hearers are recoverable. For example, in The GM president smiled. "Except Henry will be damned forceful and the papers won't print all HIS language.", getting the his correct here depends on knowing that it is the GM president speaking. Only 4 examples rest on being able to produce collections of discourse entities, and 2 of these occurred with an explicit instruction to the hearer to produce such a collection by using the phrase them both.

3.1.2 Hobbs only

There are 21 cases that Hobbs gets that BFP don't, and of these a few classes stand out. In every case the relevant factor is Hobbs' preference for intrasentential co-specifiers.

One class (n = 3) is exemplified by Put the little black ring into the large blue CAP with the hole in IT. All three involved using the preposition with in a descriptive adjunct on a noun phrase. It may be that with-adjuncts are common in visual descriptions, since they were only found in our data in the task dialogues, and a quick inspection of Grosz's task-oriented dialogues revealed some as well [Deu74].

Another class (n = 7) are possessives. In some cases the possessive co-specified with the subject of the sentence, e.g. The SENATE took time from ITS paralyzing New Hampshire election debate to vote agreement, and in others it was within a relative clause and co-specified with the subject of that clause, e.g. The auto industry should be able to produce a totally safe, defect-free CAR that doesn't pollute ITS environment.

Other cases seem to be syntactically marked subject matching with constructions that link two S clauses (n = 8). These are uses of more-than, as in but Chamberlain grossed about $8.3 million more than HE could have made by selling on the home front. There also are S-if-S cases, as in Mondale said: "I think THE MAFIA would be broke if IT conducted all its business that way." We also have subject matching in AS-AS examples, as in ...
and the resulting EXPOSURE to daylight has become as uncomfortable as IT was unaccustomed, as well as in sentential complements, such as But another liberal, Minnesota's Walter MONDALE, said HE had found a lot of incompetence in the agency's operations. The fact that quite a few of these are also marked with But may be significant.

In terms of the possible effects that we noted earlier, the DEFINITION OF SUCCESS (see Section 2.1) favors Hobbs (n = 2). Consider:

  K: Next take the red piece that is the smallest and insert it into the hole in the side of the large plastic tube. IT goes in the hole nearest the end with the engravings on IT.

The Hobbs algorithm will correctly choose the end as the antecedent for the second it. The BFP algorithm, on the other hand, will get two interpretations, one in which the second it co-specifies the red piece and one in which it co-specifies the end. They are both CONTINUING interpretations, since the first it co-specifies the CB, but the constraints don't make a choice.

3.1.3 BFP only

All of the examples on which BFP succeeds and Hobbs fails have to do with extended discussion of one discourse entity. For instance:

  Exp1: Now take the blue cap with the two prongs sticking out (CB = blue cap)
  Exp2: and fit the little piece of pink plastic on IT. Ok? (CB = blue cap)
  Cli1: ok.
  Exp3: Insert the rubber ring into that blue cap. (CB = blue cap)
  Exp4: Now screw IT onto the cylinder.

On this example, Hobbs fails by choosing the co-specifier of it in Exp4 to be the rubber ring, even though the whole segment has been about the blue cap. Another example, from the novel Wheels, is given below. On this one Hobbs gets the first use of he but then misses the next four, as a result of missing the second one by choosing a housekeeper as the co-specifier for HIS.

  ... An executive vice-president of Ford was preparing to leave for Detroit Metropolitan Airport. HE had already breakfasted, alone. A housekeeper had brought a tray to HIS desk in the softly lighted study where, since 5 a.m., HE had been alternately reading memoranda (mostly on special blue stationery which Ford vice-presidents used in implementing policy) and dictating crisp instructions into a recording machine. HE had scarcely looked up, either as the mail arrived, or while eating, as HE accomplished in an hour what would have taken ...

Since an executive vice-president is centered in the first sentence, and continued in each following sentence, the BFP algorithm will correctly choose the cospecifier.

3.1.4 Neither

Among the examples that neither algorithm gets correctly are 20 examples from the task dialogues of it referring to the global focus, the pump. In 15 cases, these shifts to global focus are marked syntactically with a cue word such as Now, and in 5 cases they are not marked. Presumably they are felicitous since the pump is visually salient. Besides the global focus cases, pronominal references to entities that were not linguistically introduced are rare. The only other example is an implicit reference to 'the problem' of the pump not working:

  Cli1: Sorry no luck.
  Exp1: I bet IT's the stupid red thing.

We have only two examples of sentential or VP anaphora altogether, such as Madam Chairwoman, said Colby at last, I am trying to run a secret intelligence service. IT was a forlorn hope. Neither Hobbs' algorithm nor BFP attempts to cover these examples. Three of the examples are uses of it that seem to be lexicalized with certain verbs, e.g. They hit IT off real well.
One can imagine these being treated as phrasal lexical items, and therefore not handled by an anaphoric processing component [AS89].

Most of the interchanges in the task dialogues consist of the client responding to commands with cues such as O.K. or Ready, to let the expert know when they have completed a task. When both parties contribute discourse entities to the common ground, both algorithms may fail (n = 4). Consider:

  Exp1: Now we have a little red piece left
  Exp2: and I don't know what to do with IT.
  Cli1: Well, there is a hole in the green plunger inside the cylinder.
  Exp3: I don't think IT goes in THERE.
  Exp4: I think IT may belong in the blue cap onto which you put the pink piece of plastic.

In Exp3, one might claim that it and there are contra-indexed, and that there can be properly resolved to a hole, so that it cannot be any of the noun phrases in the prepositional phrases that modify a hole, but whether any theory of contra-indexing actually gives us this is questionable. The main factor seems to be that even though Exp1 is not syntactically a question, the little red piece is the focus of a question, and as such is in focus despite the fact that the there-construction supposedly focuses a hole in the green plunger ... [Sid79]. These examples suggest that a questioned entity stays focused until the point in the dialogue at which the question is resolved. The fact that well has been noted as a marker of response to questions supports this analysis [Sch87]. Thus the relevant factor here may be the switching of control among discourse participants [WS88]. These mixed-initiative features make these sequences inherently different from text.

3.2 Modifiability

Task structure in the pump dialogues is an important factor, especially as it relates to the use of global focus. Twenty of the cases on which both algorithms fail are references to the pump, which is the global focus. We can include a global focus in the centering framework, as a separate notion from the current CB. This means that in the 15 out of 20 cases where the shift to global focus is identifiably marked with a cue word such as now, the segment rules will allow BFP to get the global focus examples.

BFP can add the VP and the S onto the end of the forward-centers list, as Sidner does in her algorithm for local focusing [Sid79]. This lets BFP get the two examples of event anaphora. Hobbs discusses the fact that his algorithm cannot be modified to get event anaphora in [Hob76b].

Another interesting fact is that in every case in which Hobbs' algorithm gets the correct co-specifier and BFP didn't, the relevant factor is Hobbs' preference for intrasentential co-specifiers. One view on these cases may be that these are not discourse anaphora, but there seems to be no principled way to make this distinction. However, Carter has proposed some extensions to Sidner's algorithm for local focusing that seem to be relevant here (chap. 6, [Car87]). He argues that intra-sentential candidates (ISCs) should be preferred over candidates from the previous utterance ONLY in the cases where no discourse center has been established or the discourse center is rejected for syntactic or selectional reasons. He then uses Hobbs' algorithm to produce an ordering of these ISCs. This is compatible with the centering framework, since centering is underspecified as to whether one should always choose to establish a discourse center with a co-specifier from a previous utterance.
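A rough sketch (ours) of Carter's rule as just described, with CB, REJECTED-P, and HOBBS-ORDERED-ISCS as assumed helpers standing for the current discourse center, the syntactic/selectional rejection test, and Hobbs' ordering of intra-sentential candidates:

  (defun choose-cospecifier (pronoun utterance discourse)
    ;; Keep an established, compatible discourse center; otherwise
    ;; fall back to the intra-sentential candidates in Hobbs' order.
    (let ((center (cb discourse)))
      (if (and center (not (rejected-p center pronoun)))
          center
          (first (hobbs-ordered-iscs pronoun utterance)))))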
If we adopt Carter's rule into the centering framework, we find that of the 21 cases that Hobbs gets that BFP don't, in 7 cases there is no discourse center established, and in another 4 the current center can be rejected on the basis of syntactic or sortal information. Of these, Carter's rule clearly gets 5, and another 3 seem to rest on whether one might want to establish a discourse entity from a previous utterance. Since the addition of this constraint does not allow BFP to get any examples that neither algorithm got, it seems that this combination is a way of making the best out of both algorithms.

The addition of these modifications changes the quantitative results. See Figure 5.

              N     Hobbs   BFP
  Wheels      100     88     93
  Newsweek    100     89     84
  Tasks        81     51     64

Figure 5: Number correct for both algorithms after modifications, for Wheels, Newsweek and Task Dialogues

However, the statistical analyses still show that there is no significant difference in the performance of the algorithms in general. It is also still the case that the performance of each algorithm varies significantly depending on the data. The only significant difference as a result of the modifications is that the BFP algorithm now performs significantly better on the pump dialogues alone (X² = 4.31, p < .05).

4 Conclusion

We can benefit in two ways from performing such evaluations: (a) we get general results on a methodology for doing evaluation, and (b) we discover ways we can improve current theories. A split of evaluation efforts into quantitative versus qualitative is incoherent. We cannot trust the results of a quantitative evaluation without doing a considerable amount of qualitative analysis, and we should perform our qualitative analyses on those components that make a significant contribution to the quantitative results; we need to be able to measure the effect of various factors. These measurements must be made by doing comparisons at the data level.

In terms of general results, we have identified some factors that make evaluations of this type more complicated and which might lead us to evaluate solely quantitative results with care. These are: (a) deciding how to evaluate UNDERSPECIFICATIONS and the contribution of ASSUMPTIONS, and (b) determining the effects of FALSE POSITIVES and ERROR CHAINING. We advocate an approach in which the contribution of each underspecification and assumption is tabulated, as well as the effect of error chains. If a principled way could be found to identify false positives, their effect should be reported as part of any quantitative evaluation as well.

In addition, we have taken a few steps towards determining the relative importance of different factors to the successful operation of discourse modules. The percentage of successes that both algorithms get indicates that syntax has a strong influence, and that at the very least we can reduce the amount of inference required. In 59% to 82% of the cases both algorithms get the correct result. This probably means that in a large number of cases there was no potential conflict of co-specifiers. In addition, this analysis has shown that, at least for task-oriented dialogues, global focus is a significant factor, and in general discourse structure is more important in the task dialogues. However, simple devices such as cue words may go a long way toward determining this structure.

Finally, we should note that doing evaluations such as this allows us to determine the GENERALITY of our approaches.
Since the performance of both Hobbs and BFP varies according to the type of the text, and in fact was significantly worse on the task dialogues than on the texts, we might question how their performance would vary on other inputs. An annotated corpus comprising some of the various NL input types, such as those I discussed in the introduction, would go a long way towards giving us a basis against which we could evaluate the generality of our theories.

5 Acknowledgements

David Carter, Phil Cohen, Nick Haddock, Jerry Hobbs, Aravind Joshi, Don Knuth, Candy Sidner, Phil Stenton, Bonnie Webber, and Steve Whittaker have provided valuable insights toward this endeavor and critical comments on a multiplicity of earlier versions of this paper. Steve Whittaker advised me on the statistical analyses. I would like to thank Jerry Hobbs for encouraging me to do this in the first place.

References

[AP86] James F. Allen and C. Raymond Perrault. Analyzing intention in utterances. In Barbara J. Grosz, Karen Sparck Jones, and Bonnie Lynn Webber, editors, Readings in Natural Language Processing, pages 419-422, Morgan Kaufmann, Los Altos, Ca., 1986.

[AS89] Anne Abeille and Yves Schabes. Parsing idioms in lexicalized TAGs. In Proc. 27th Annual Meeting of the ACL, Association of Computational Linguistics, pages 161-165, 1989.

[BF83] Roger Brown and Deborah Fish. The psychological causality implicit in language. Cognition, 14:237-273, 1983.

[BFP87] Susan E. Brennan, Marilyn Walker Friedman, and Carl J. Pollard. A centering approach to pronouns. In Proc. 25th Annual Meeting of the ACL, Association of Computational Linguistics, pages 155-162, Stanford University, Stanford, Ca., 1987.

[Car87] David M. Carter. Interpreting Anaphors in Natural Language Texts. Ellis Horwood, 1987.

[Coh78] Phillip R. Cohen. On Knowing What to Say: Planning Speech Acts. Technical Report 118, University of Toronto, Department of Computer Science, 1978.

[Coh84] Phillip R. Cohen. The pragmatics of referring and the modality of communication. Computational Linguistics, 10:97-146, 1984.

[Deu74] Barbara Grosz Deutsch. Typescripts of task oriented dialogs. August 1974.

[DJ89] Nils Dahlback and Arne Jonsson. Empirical studies of discourse representations for natural language interfaces. In Proc. 27th Annual Meeting of the ACL, Association of Computational Linguistics, pages 291-298, 1989.

[GJW83] Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. Providing a unified account of definite noun phrases in discourse. In Proc. 21st Annual Meeting of the ACL, Association of Computational Linguistics, pages 44-50, Cambridge, MA, 1983.

[GJW86] Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. Towards a computational theory of discourse interpretation. 1986. Preliminary draft.

[Gro77] Barbara J. Grosz. The Representation and Use of Focus in Dialogue Understanding. Technical Report 151, SRI International, 333 Ravenswood Ave., Menlo Park, Ca. 94025, 1977.

[GS86] Barbara J. Grosz and Candace L. Sidner. Attentions, intentions and the structure of discourse. Computational Linguistics, 12:175-204, 1986.

[GSBC86] Raymonde Guindon, P. Sladky, H. Brunner, and J. Conner. The structure of user-adviser dialogues: is there method in their madness? In Proc. 24th Annual Meeting of the ACL, Association of Computational Linguistics, pages 224-230, 1986.

[HL87] Julia Hirschberg and Diane Litman. Now lets talk about now: identifying cue phrases intonationally. In Proc.
25th Annual Meeting of the ACL, Association of Computational Linguistics, pages 163-171, Stanford University, Stanford, Ca., 1987.

[HM87] Jerry R. Hobbs and Paul Martin. Local Pragmatics. Technical Report, SRI International, 333 Ravenswood Ave., Menlo Park, Ca 94025, 1987.

[Hob76a] Jerry R. Hobbs. A Computational Approach to Discourse Analysis. Technical Report 76-2, Department of Computer Science, City College, City University of New York, 1976.

[Hob76b] Jerry R. Hobbs. Pronoun Resolution. Technical Report 76-1, Department of Computer Science, City College, City University of New York, 1976.

[Hob78] Jerry R. Hobbs. Why is Discourse Coherent? Technical Report 176, SRI International, 333 Ravenswood Ave., Menlo Park, Ca 94025, 1978.

[Hob85] Jerry R. Hobbs. On the Coherence and Structure of Discourse. Technical Report CSLI-85-37, Center for the Study of Language and Information, Ventura Hall, Stanford University, Stanford, CA 94305, 1985.

[HTD86] Susan B. Hudson, Michael K. Tanenhaus, and Gary S. Dell. The effect of the discourse center on the local coherence of a discourse. Technical Report, University of Rochester, 1986.

[PH87] Janet Pierrehumbert and Julia Hirschberg. The meaning of intonational contours in the interpretation of discourse. In Proc. Symposium on Intentions and Plans in Communication and Discourse, Monterey, Ca., 1987.

[Pol86] Martha Pollack. A model of plan inference that distinguishes between the beliefs of actors and observers. In Proc. 24th Annual Meeting of the ACL, Association of Computational Linguistics, pages 207-214, Columbia University, New York, N.Y., 1986.

[Pri81] Ellen F. Prince. Toward a taxonomy of given-new information. In Radical Pragmatics, Academic Press, 1981.

[Pri85] Ellen F. Prince. Fancy syntax and shared knowledge. Journal of Pragmatics, pages 65-81, 1985.

[Rei76] T. Reinhart. The Syntactic Domain of Anaphora. PhD thesis, MIT, Cambridge, Mass., 1976.

[Rei85] Rachel Reichman. Getting Computers to Talk Like You and Me. MIT Press, Cambridge, MA, 1985.

[Rob88] Craige Roberts. Modal Subordination and Pronominal Anaphora in Discourse. Technical Report No. 127, CSLI, May 1988. Also to appear in Linguistics and Philosophy.

[Sch87] Deborah Schiffrin. Discourse Markers. Cambridge University Press, 1987.

[SI81] Candace Sidner and David Israel. Recognizing intended meaning and speakers' plans. In Proc. International Joint Conference on Artificial Intelligence, pages 203-208, Vancouver, BC, Canada, 1981.

[Sid79] Candace L. Sidner. Toward a computational theory of definite anaphora comprehension in English. Technical Report AI-TR-537, MIT, 1979.

[Tho80] Bozena Henisz Thompson. Linguistic analysis of natural language communication with computers. In COLING80: Proc. 8th International Conference on Computational Linguistics, Tokyo, pages 190-201, 1980.

[Web86] Bonnie Lynn Webber. Two Steps Closer to Event Reference. Technical Report MS-CIS-86-74, Linc Lab 42, Department of Computer and Information Science, University of Pennsylvania, 1986.

[WS88] Steve Whittaker and Phil Stenton. Cues and control in expert client dialogues. In Proc. 26th Annual Meeting of the ACL, Association of Computational Linguistics, 1988.

[WS89] Steve Whittaker and Phil Stenton. User studies and the design of natural language systems. In Proc. 27th Annual Meeting of the ACL, Association of Computational Linguistics, pages 116-123, 1989.

A The Hobbs algorithm

The algorithm and an example are reproduced below.
In it, NP denotes NOUN PHRASE and S denotes SENTENCE.

1. Begin at the NP node immediately dominating the pronoun in the parse tree of S.

2. Go up the tree until you encounter an NP or S node. Call this node X, and call the path used to reach it p.

3. Traverse all branches below node X to the left of path p in a left-to-right, breadth-first fashion. Propose as the antecedent any NP node encountered that has an NP or S node on the path from it to X.

4. If X is not the highest S node in the sentence, continue to step 5. Otherwise traverse the surface parse trees of previous sentences in the text in reverse chronological order until an acceptable antecedent is found; each tree is traversed in a left-to-right, breadth-first manner, and when an NP node is encountered, it is proposed as the antecedent.

5. From node X, go up the tree to the first NP or S node encountered. Call this new node X, and call the path traversed to reach it p.

6. If X is an NP node and if the path p to X did not pass through the N node that X immediately dominates, propose X as the antecedent.

7. Traverse all branches below node X to the left of path p in a left-to-right, breadth-first manner, but do not go below any NP or S node encountered. Propose any NP or S node encountered as the antecedent.

8. Go to step 4.

The purpose of steps 2 and 3 is to observe the contra-indexing constraints. Let us consider a simple conversational sequence.

  U1: Lyn's mom is a gardener.
  U2: Craige likes her.

We are trying to find the antecedent for her in the second utterance. Let us go through the algorithm step by step, using the parse trees for U1 and U2 in the figure.

  U1: [S1 [NP1 [NP2 Lyn] [Det 's] [N mom]]
          [VP [V is] [NP3 [Det a] [N gardener]]]]

  U2: [S2 [NP4 Craige]
          [VP [V likes] [NP5 her]]]

Figure 6: Parse trees for U1 and U2 (rendered here as labeled bracketings)

1. NP5 labels the starting point of step 1.

2-3. S2 is called X. We mark the path p with a dotted line. We traverse S2 to the left of p. We encounter NP4, but it does not have an NP or S node between it and X. This means that NP4 is contra-indexed with NP5. Note that if the structure corresponded to Craige's mom likes her, then the NP for Craige would be an NP to the left of p that has an NP node between it and X, and Craige would be selected as the antecedent for her.

4. The node X is the highest S node in U2, so we go to the previous sentence U1. As we traverse the tree of U1, the first NP we encounter is NP1, so Lyn's mom is proposed as the antecedent for her and we are done.
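As an illustration of the left-to-right, breadth-first searches in steps 3, 4, and 7, here is a minimal self-contained sketch of our own (not Hobbs' code), covering only the NP enumeration and the intersentential proposal of step 4. Trees are assumed to be lists of the form (CATEGORY child ...), with words as bare symbols and plain NP labels:

  (defun node-category (node) (if (consp node) (first node) node))
  (defun node-children (node) (if (consp node) (rest node) nil))

  (defun breadth-first-nps (tree)
    ;; All NP subtrees of TREE in left-to-right, breadth-first order.
    (let ((queue (list tree)) (nps '()))
      (loop while queue
            do (let ((node (pop queue)))
                 (when (eq (node-category node) 'NP)
                   (push node nps))
                 (setq queue (append queue (node-children node)))))
      (nreverse nps)))

  (defun step-4-antecedent (previous-trees)
    ;; Step 4: propose the first NP of each prior sentence, most
    ;; recent sentence first.
    (loop for tree in previous-trees
          thereis (first (breadth-first-nps tree))))

On U1 above, the shallow NP (Lyn's mom) is enumerated before the NPs embedded within it, which is why Lyn's mom is proposed first.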
A COMPUTATIONAL MECHANISM FOR PRONOMINAL REFERENCE

Robert J. P. Ingria
David Stallard
BBN Systems and Technologies, Incorporated
10 Moulton Street, Mailstop 009
Cambridge, MA 02238

ABSTRACT

This paper describes an implemented mechanism for handling bound anaphora, disjoint reference, and pronominal reference. The algorithm maps over every node in a parse tree in a left-to-right, depth-first manner. Forward and backwards coreference, and disjoint reference, are assigned during this tree walk. A semantic interpretation procedure is used to deal with multiple antecedents.

1. INTRODUCTION

This paper describes an implemented mechanism for assigning antecedents to bound anaphors and personal pronouns, and for establishing disjoint reference between Noun Phrases. This mechanism is part of the BBN Spoken Language System (Boisen, et al. (1989)). The algorithm used is inspired by the indexing scheme of Chomsky (1980), augmented by tables analogous to the "Table of Coreference" of Jackendoff (1972). This mechanism handles only intra-sentential phenomena and only selects the syntactically and semantically possible antecedents. Ultimately, it is meant to be used in conjunction with an extra-sentential reference mechanism like that described in Ayuso (1989) to include antecedents from other utterances and to utilize discourse factors in its final selection of an antecedent.

In Section 2 the empirical and theoretical background to this treatment is sketched out. In Section 3, the actual algorithm used is described in detail. In Section 4, the associated semantic interpretation mechanism is presented. In Section 5, we compare the algorithm with related work. Finally, in Section 6, remaining theoretical and implementational issues are discussed.

2. THEORETICAL BACKGROUND

While most computational systems are interested in the potential antecedents of pronouns, work in generative grammar by Lasnik (1976) and Reinhart (1976) has led to the conclusion that sentential syntax is responsible for assigning possible antecedents to bound anaphors (reflexives, such as "himself", "herself", "themselves", etc., and the reciprocals "each other" and "one another") but not to personal pronouns ("he", "she", "they", etc.). In the case of personal pronouns, sentential syntax only determines the syntactically impossible antecedents. This latter procedure is called disjoint reference, since the impossible antecedents cannot even overlap in reference with the pronoun; compare the cases in sentences (1) and (2), where the underlined items are non-identical in reference, with those in (3) and (4), where they are non-overlapping in reference. In (1) and (2), "he" and "him" cannot refer to "John" (non-identical reference); while in (3) and (4) "John" cannot be a member of the set referred to by "they" and "them" (non-overlapping or disjoint reference).

(1) He likes John.
(2) John likes him.
(3) They like John.
(4) John likes them.

Disjoint reference is even more noticeable with first and second person pronouns, where it does not merely produce impossible interpretations, but actual ungrammaticality:

(5) *I like me.
(6) *I like us.
(7) *We like me.
(8) *You like you.

A crucial notion both for assigning antecedents to bound anaphors and for establishing disjoint reference between Noun Phrases is that of c-command, a structural relation. Briefly, a node c-commands its sisters and any nodes dominated by its sisters.1 Figure 2-1 illustrates this.
          o
         / \
        A   B
        |  / \
        E C   D
          |   |
          F   G

  A c-commands B, C, F, D, and G
  B c-commands A and E
  C c-commands D and G
  D c-commands C and F

Figure 2-1: C-Command

1This differs from Reinhart's (1976) definition, for reasons discussed in Section 6.

Essentially, the relation between c-command and reference phenomena is the following:

1. A non-pronominal NP cannot overlap in reference with any NP that c-commands it.
2. The antecedent of a bound anaphor must c-command it.2
3. A personal pronoun cannot overlap in reference with an NP that c-commands it.2

2Within a minimal syntactic domain; this will be explained shortly.

Condition 1 is motivated by sentences such as those in (9), where the underlined pronouns "he", "him", "they", and "them" must be disjoint in reference with "John". In each case, the pronouns c-command the NP "John". In (9a) "he"/"they" is in the subject position, and so c-commands "John", in the direct object slot. In (9b) the pronouns ("He", "They") are once again in the subject position, and "John" is the object of a preposition, itself contained in the direct object of the sentence. Finally, in (9c), the NP "John" appears as the object of a preposition, which is c-commanded by the subject ("He", "They") and the direct object ("him", "them").

(9) a. He likes John. / They like John.
    b. He likes pictures of John. / They like pictures of John.
    c. He told them about John. / They told him about John.

Condition 2 is motivated by examples such as those in (10), where the reflexive pronoun "himself" and its antecedent(s) are bracketed. As in the corresponding examples in (9), "himself" either appears as a direct object (10a), the object of a preposition within the direct object (10b), or as a prepositional object (10c). In all cases, the c-commanding subject ("John") is a possible antecedent; in (10c), where the c-commanding object NP "Bill" is added, it is also a possible antecedent.

(10) a. [John] likes [himself].
     b. [John] likes pictures of [himself].
     c. [John] told [Bill] about [himself].

Condition 3 is motivated by examples such as those in (11). The pronoun under consideration ("him" or "them") always appears as an object or prepositional object and is disjoint in reference to the c-commanding subject "John" (in (11a,b,c)) and to the c-commanding direct object "Bill" in (11c).

(11) a. John likes him. / John likes them.
     b. John likes pictures of him. / John likes pictures of them.
     c. John told Bill about him. / John told Bill about them.

While condition 1 is unconditionally true, conditions 2 and 3 are subject to a further constraint, which we might term minimality. Essentially, the structural theory of pronominal reference outlined here may be viewed as making the following claim. Bound anaphors are short-distance anaphors and require their antecedents to be c-commanding NPs within a minimal domain. Ordinary personal pronouns, on the other hand, are long-distance anaphors: they only permit antecedents from outside of their minimal domain, and exclude any c-commanding antecedents within their minimal domain. The most immediately dominating finite clause (S) node always constitutes a minimal domain for a bound anaphor or personal pronoun. NP nodes normally do not constitute a minimal domain, unless they contain a possessive. This is illustrated in (12)--(14) (underlining indicates disjoint reference; bracketing indicates co-reference).
(12) He likes Bill's pictures of John. / They like Bill's pictures of John.
(13) John likes [Bill's] pictures of [himself].
(14) [John] likes Bill's pictures of [him]. / [John] likes Bill's pictures of [them].

The subject NP in (13) is not a possible antecedent for the reflexive, while the subject NP in (14) need not be disjoint in reference with the underlined pronoun. Compare (13) with (10b) and (14) with (11b).

Given these paradigms of reference facts, we now turn to the theoretical linguistics literature for treatments that might be implemented in a natural language system. In the Government-Binding framework of Chomsky (1981), these generalizations are captured by the Binding Theory -- a set of well-formedness conditions on syntactic structural representations annotated with subscript and superscript "indices". The paradigm assumed there is Generate and Test: indices are freely assigned, and the Binding Conditions are applied to rule in or rule out a particular assignment. Clearly, from a computational standpoint this is grossly inefficient.

However, in earlier work, Chomsky (1980, pp. 38--44) proposed a two-pass indexing mechanism that captures these facts procedurally. His proposal assigns each non-bound anaphor (i.e. non-pronominal NP or personal pronoun) the pair (r, A), where r (for Referential index) is a non-negative integer and A (for Anaphoric index) is a set of such integers. In the first pass, r and A are assigned from left to right in a depth-first manner. Each non-bound anaphor NP is assigned a unique r; in addition, the r index of each NP c-commanding it is added to its A index. This set of indices indicates all the other NPs with which it is disjoint in reference. For non-pronominal NPs, only one pass is needed:

(15) John_2 told Bill_(3,{2}) about Fred_(4,{2,3})

The indices here indicate that "John", "Bill", and "Fred" are all disjoint in reference. In the case of personal pronouns, a second pass is necessary. Consider example (14), repeated here as (16), after the first pass:

(16) John_2 likes Bill's_(3,{2}) pictures of him_(4,{2,3})

The indexing at this stage indicates that "Bill" is disjoint in reference from "John" and that "him" is disjoint in reference from "Bill", which is correct, and also from "John", which is not. To correct this, Chomsky (1980, pp. 38--44) has a second pass, in which the r indices of NPs outside the current minimal domain are removed from the A index of personal pronouns, thereby allowing them to serve as potential antecedents. After this second pass, the indexing is:

(17) John_2 likes Bill's_(3,{2}) pictures of him_(4,{3})

At this stage "John" is no longer specified as being disjoint in reference with "him".

We have taken this procedure as the basis for a more efficient pronominal reference algorithm that improves on two problematic features. First, while Chomsky's procedure requires two passes, our algorithm is single-pass. While there may not be a great computational loss in the two-pass character of Chomsky's original proposal, clearly it is cleaner to do things in one pass. Moreover, the mechanism is extensionally richer than Chomsky's: it also handles cases of backwards pronominalization and split antecedence.

A second problem with Chomsky's procedure is that the potential antecedents of a personal pronoun are only implicitly represented: any NP whose r index is not a member of that pronoun's A index set is a syntactically permissible antecedent, but this set of permissible antecedents is not enumerated.
For example, in (17), "John" is indicated as a potential antecedent of "him" by virtue of the fact that its r index, 2, is not part of the A index of "him", and in no other way. Our algorithm explicitly indicates the potential antecedents of a personal pronoun. Again, this is more desirable than leaving this information implicit; besides the potential (and perhaps small) computational savings of not needing to recompute this information, there is the more general consideration that we are not interested in creating syntactic representations for their own sakes, but in making use of them. Explicitly representing antecedence information for personal pronouns contributes to this goal. In the next section, we show how our algorithm overcomes these limitations.

3. THE ALGORITHM

Before giving the details of the algorithm, we will sketch its general structure. The algorithm applies to a completed parse tree and traverses it in a left-to-right, depth-first manner. The algorithm uses the notion of minimal domain introduced in the preceding section -- the S node or NP node (when minimality has been induced by the presence of a possessive) that most immediately dominates the node being processed -- and the related notions of "internal" and "external" nodes. Internal nodes are dominated by the current minimal domain node; external nodes c-command the current minimal domain node. Essentially, the algorithm passes each node all the nodes that c-command it, subdivided into two sets: those that are internal to the current minimal domain and those that are external. As each node is processed, a subroutine is called that dispatches on the category of the node and performs any actions that are appropriate. It is this subroutine that implements the pronominal reference mechanism proper.

Given this overview, we can now turn to the data structures that are used by the algorithm, as well as to the details of the algorithm. Each node in a parse tree is a Common LISP structure; two of its slots are used for establishing pronominal reference:

:possible-antecedents -- a list of all the nodes that can be co-referent or overlapping in reference with it.

:impossible-antecedents -- a list of all the nodes that are disjoint in reference with it.

The algorithm also uses two global variables -- *table-of-proforms* and *table-of-antecedents* -- in a "blackboard" fashion.

The algorithm uses two major procedures. The first, pass-down-c-commanding-nodes, is responsible for actually traversing each node in the tree. The actual algorithm it uses is shown in Figure 6-1 in a LISP-type notation. Its functionality can be stated as follows. Whenever it encounters a new node, it first processes that node by calling the procedure update-node, which will be described shortly. It next determines whether the node being processed counts as a minimal domain for its children. When the node is a finite S node, it does count as a minimal domain, for all its children. Hence, only nodes that it dominates can be internal nodes for its children; all other nodes are now treated as external by its children. When the node is an NP, there are two possibilities. If there is no possessive NP, the NP does not count as a minimal domain; hence, the external nodes remain as before and the nodes it dominates are added to the set of internal nodes. However, when the NP does contain a possessive, it does count as a minimal domain, for all the nodes that it dominates, except the possessive itself.3
Finally, if the node is of any other category, it is not a minimal domain, so the external nodes remain as before and the internal nodes are augmented by the constituents it dominates.4 In all cases, pass-down-c-commanding-nodes calls itself recursively on the children of the node being processed, with the appropriate lists of internal and external nodes as arguments.

3The reason for this exception will be explained in Section 6.
4Non-finite clauses also need special treatment. However, consideration of this case requires discussion of whether non-finite clauses are Ss or VPs, which is beyond the scope of this paper.

update-node, in turn, processes the node passed to it, on the basis of the nodes internal and external to the current minimal domain. In particular, update-node performs the correct pronominal assignment. The algorithm used by update-node is shown in Figure 6-2 in a LISP-type notation. We also discuss each clause separately.

Clause [I] implements condition 1 (non-pronominal NPs). Since there are no minimality conditions on disjoint reference for non-pronominal NPs, all NP nodes c-commanding a non-pronominal NP are added to its :impossible-antecedents slot, whether they are internal ([I.A]) or external to the current minimal domain ([I.B]). This handles sentences such as those in (9) and (12). While it might seem odd to specify that a non-pronominal NP has no antecedents, this information is useful in handling cases of backwards pronominalization, as in (18).

(18) [His] mother loves [John].

Clause [I.C] handles backwards pronominalization by making use of information in *table-of-proforms*, a table of all the pronouns encountered so far in the course of the tree walk.5 After update-node has added all c-commanding NP nodes to the :impossible-antecedents slot of a non-pronominal NP, it then searches *table-of-proforms* for any pronouns that are not on its :impossible-antecedents list; whenever it finds one, it adds the current non-pronominal NP to the pronoun's :possible-antecedents list. The last thing update-node does in processing a non-pronominal NP is to add it to *table-of-antecedents* ([I.D]), whose use will be explained shortly.

Clause [II] implements condition 2 (bound anaphors). Since bound anaphors are short-distance anaphors, all and only the c-commanding NPs internal to the current minimal domain are added to the :possible-antecedents slot of a bound anaphor.

Clause [III] implements condition 3 (personal pronouns). Since personal pronouns are long-distance anaphors, clause [III] performs a number of operations. First, all the c-commanding NPs internal to the current minimal domain are added to the :impossible-antecedents slot of a personal pronoun ([III.A]), disallowing them as antecedents. Next, all the c-commanding NPs external to the current minimal domain are added to the :possible-antecedents slot of a personal pronoun ([III.B]), indicating that they are potential antecedents. Clause [III.C] handles sentences like (19),

(19) [John's] mother loves [him].

in which a non-pronominal NP that does not c-command a personal pronoun serves as its antecedent. As was noted above, each non-pronominal NP is added to the *table-of-antecedents* by clause [I.D]. When update-node has added all the
appropriate c-commanding nodes to the :impossible-antecedents slot of a personal pronoun, it then adds any NPs on *table-of-antecedents* that are not already on the pronoun's :impossible-antecedents slot to its :possible-antecedents slot. Finally, when update-node is finished processing a pronominal NP node, it adds it to *table-of-proforms* ([III.D]), for use in backwards pronominalization.

5This table is filled in by Clause [III.D].

Note that, because our algorithm both establishes minimal domains and assigns possible and impossible antecedents during the course of the tree traversal, it can be single-pass, in contrast to Chomsky's procedure, which assigned impossible antecedents in one traversal and checked for minimality during a second.

Since update-node is a general mechanism for adding or modifying information on a node on the basis of c-commanding constituents, it is fairly straightforward to extend it to handle other phenomena that involve c-command, by modifying its top-level CASE statement to dispatch on other categories. In fact, we have extended it in this manner to handle examples of "N-bar anaphora"; i.e. cases where the head noun of a Noun Phrase is either "one" (which has been argued in Baker (1978) to be an anaphor for N-bars, i.e. a noun and its complements, but not for full Noun Phrases) or phonologically null (0), which seems to have the same possibilities for antecedents.

(20) Give me a list of ships which are in the gulf of Alaska that have casualty reports dated earlier than Esteem's oldest one.

(21) Is the Willamette's last problem rated worse than Wichita's 0?

  (when (pro-n-bar-p cfg-node)
    (loop for other-node in external-node-list
          do (when (and (equal (category other-node) 'NP)
                        (pro-n-bar-antecedent other-node))
               (add (get-son-of-category other-node 'N-BAR)
                    (possible-antecedents cfg-node)))))

Figure 3-1: Algorithm for Pro N-BAR Anaphora

The addition to the algorithm that deals with this phenomenon is presented in Figure 3-1. This clause is considerably simpler than those that handle disjoint reference and co-reference phenomena for personal pronouns: only external nodes are involved and only forward antecedence is possible. This clause finds all the Noun Phrases that c-command an N-bar pro-form and that are external to the current minimal domain.
(defun pass-down-c-commanding-nodes (cfg-node external-node-list internal-node-list)
  (update-node cfg-node external-node-list internal-node-list)
  (cond ((finite-clause cfg-node)
         (let ((external-node-list (append internal-node-list external-node-list)))
           (loop for node in (children cfg-node)
                 do (let ((internal-node-list (sisters node)))
                      (pass-down-c-commanding-nodes node
                                                    external-node-list
                                                    internal-node-list)))))
        ((equal (category cfg-node) 'NP)
         (cond ((equal (category (first (children cfg-node))) 'NP)
                (pass-down-c-commanding-nodes (first (children cfg-node))
                                              external-node-list
                                              internal-node-list)
                (let ((external-node-list (append external-node-list internal-node-list)))
                  (loop for node in (rest (children cfg-node))
                        do (let ((internal-node-list (sisters node)))
                             (pass-down-c-commanding-nodes node
                                                           external-node-list
                                                           internal-node-list)))))
               (T (loop for node in (children cfg-node)
                        do (let ((internal-node-list (append (sisters node) internal-node-list)))
                             (pass-down-c-commanding-nodes node
                                                           external-node-list
                                                           internal-node-list))))))
        (T (loop for node in (children cfg-node)
                 do (let ((internal-node-list (append (sisters node) internal-node-list)))
                      (pass-down-c-commanding-nodes node
                                                    external-node-list
                                                    internal-node-list))))))

Figure 6-1: The Tree Walking Algorithm

(defun update-node (cfg-node external-node-list internal-node-list)
  (case (category cfg-node)
    (NP
     (cond ((non-pronominal cfg-node)                                      ; [I]
            (loop for other-node in external-node-list                     ; [I.A]
                  do (when (equal (category other-node) 'NP)
                       (add other-node (impossible-antecedents cfg-node))))
            (loop for other-node in internal-node-list                     ; [I.B]
                  do (when (equal (category other-node) 'NP)
                       (add other-node (impossible-antecedents cfg-node))))
            (loop for pro in *table-of-proforms*                           ; [I.C]
                  do (when (not (member pro (impossible-antecedents cfg-node)))
                       (add cfg-node (possible-antecedents pro))))
            (push cfg-node *table-of-antecedents*))                        ; [I.D]
           ((bound-anaphor cfg-node)                                       ; [II]
            (loop for other-node in internal-node-list
                  do (when (equal (category other-node) 'NP)
                       (add other-node (possible-antecedents cfg-node)))))
           ((personal-pronoun cfg-node)                                    ; [III]
            (loop for other-node in internal-node-list                     ; [III.A]
                  do (when (equal (category other-node) 'NP)
                       (add other-node (impossible-antecedents cfg-node))))
            (loop for other-node in external-node-list                     ; [III.B]
                  do (when (equal (category other-node) 'NP)
                       (add other-node (possible-antecedents cfg-node))))
            (loop for NP in *table-of-antecedents*                         ; [III.C]
                  do (when (not (member NP (impossible-antecedents cfg-node)))
                       (add NP (possible-antecedents cfg-node))))
            (push cfg-node *table-of-proforms*))))))                       ; [III.D]

Figure 6-2: The Reference Algorithm

This excludes the possessive in a Noun Phrase such as "Esteem's oldest one" or "Wichita's 0" from serving
Applying such constraints is the responsibility of the semantic interpretation component of our system. In the current implementation reported on here, semantic interpretation is applied after both parsing and the c-command tree-traversal have been per- formed. It is a two-stage process in which the first stage is concerned with "structural semantics"nthe semantic consequence of syntactic structurenand the second stage with "lexical semantics"~the specific meanings of individual words with respect to a given application domain. This architecture for semantic interpretation was adopted from the PHLIQA1 system (Bronnenberg, et al. (1980)) and has been used in ~'eating several difficult semantic phenomena (de Bruin and Scha (1988); Scha and Stallard (1988)). The structural semantics stage operates on the parse tree to produce an expression of a language called "EFL" (for English-oriented Formal Language). This language is a higher-order intensional logic which includes a single descriptive constant for each word in the lexicon, however many senses that word may have. (From this standpoint, therefore, EFL is actually an ambiguous logical language.) Expres- sions of EFL are produced from the parse tree by a system of semantic rules, paired one-for-one with the syntactic rules of the grammar, which compute the EFL translation of a tree node from the EFL trans- lations of its daughter nodes. The single EFL of a word is stored in its entry in the lexicon. The lexical semantics stage operates on an ex- pression of EFL to produce zero or more expressions of a language called "WML" (for World Model Language). WML is a higher-order intensional logic, with the same set of operations as EFL, but with un- ambiguous descriptive constants which correspond to the primitive concepts and relations of the particular application domain. WML expressions also have types, which are derived from the primitive disjoint categories of the application domain and which serve to delimit the set of meaningful WML expressions. A set of translation rules pair ambiguous con- stants of EFL with one or more unambiguous expres- sions of WML. Translation to WML is performed by producing all possible combinations formed from replacing the EFL constants with their translations, and filtering to remove combinations which are dis- allowed by WML's type system. In this way selec- tional restrictions are represented and enforced. The algorithms for producing EFL and WML are slightly modified in the case of anaphoric consituents: that is, reflexive pronouns, personal pronouns, and pro N-BARs~ When the structural semantics com- ponent encounters an anaphoric constituent in the course of translating a parse tree to EFL, it creates a new EFL constant "on the fly" to serve as the EFL translation of this constituent. It marks this constant specially and attaches to it the EFL translations of the syntactically possible antecedents of the constituent, along with semantic type information (such as for gender) constraining the antecedents which make sense for it. If the constituent is a personal pronoun or pro N-BAR (but not a reflexive pronoun), a special constant of WML is also attached, marked with the EFL translations of the impossible antecedents of the constituent. This special WML constant represents the possibility of extra-sentential resolution of the anaphor. 
The EFL to WML translation algorithm treats the anaphoric EFL constant specially, returning as its WML translations the translations of the "possible antecedents" that were attached in the EFL phase, together with the WML constant for extra-sentential reference (when this is appropriate). Expansion and filtering then proceed as described above. (22) is handled as follows. We will suppose the following "domain model" of WML constants and types: AWARD: (FUN (TUPLES AGENTS VALUABLES AGENTS) TV) SUB-TYPE(COMMITTEES,AGENTS) SUB-TYPE(PRIZES,VALUABLES) TYPE-INTERSECTION(VALUABLES,AGENTS) - NULL-SET The structural semantics stage constructs the fol- lowing clausal interpretation in EFL: (AWARD (THE COMMITTEES) (THE PRIZES) ITSELF001 ) where ITSELF001 --~ (THE COMMITTEES) (THE PRIZES) 267 The combinatorially possible WML translations are the following, where anomally with respect to the type system is marked with a ...... * (AWARD (THE COMMITTEES) (THE PRIZES) (THE PRIZES)) (AWARD (THE COMMITTEES) (THE PRIZES) (THE COMMITTEES)) The first interpretation is anomalous because the function "AWARD" is applied to an argument whose type is disjoint with the function's domain (in the third argument place). It is therefore discarded, leaving the second interpretation as the correct one. A different example; in which a pronoun could have an extra-sentential antecedent, is: (23) The committee awarded the prize to it. In this case, neither NP inside the sentence is syntac- tically allowable as an antecedent of "it", and so only the extra-sentential possibility remains. The WML translation for (23) is: (AWARD (THE COMMITTEE) (THE PRIZES) iT001) where IT001 is a WML constant marked for disjoint reference: IT001 ~ (THE COMMITTEES) (THE PRIZES) This information is necessary so that the module responsible for extra-sentential discourse can prevent external resolution of the pronoun to an internally (syntactically) forbidden antecadent--as could other- wise happen if "the committee" or "the prize" was mentioned in preceding discourse. Unless the anaphoric constituent is a reflexive pronoun, an extra-sentential alternative will always be present as a WML translation option, and survive type filtering (since it is given the most general possible type). When both intra- and extra-sentential alter- natives survive type filtering, our current heuristic is to prefer the intra-sentential one. 5. COMPARISON WITH RELATED WORK Hobbs (1978) has done the only previous work we know of to use traversal of a syntactic parse tree to determine pronominal reference and we compare our algorithm with his in this section. Hobbs proposes a syntactic tree-traversal algorithm for pronominal refer- ence that is "part of a larger left-to-right interpretation process" (Hobbs (1978, p. 318)). When a pronoun is encountered, the algorithm moves up to the nearest S or NP node (our "minimal domain nodes") that dominates the pronoun and searches to the left of the pronoun for any NP nodes that are dominated by an intervening $ or NP node to propose as antecedents. The algorithm then proceeds up to the next NP or S node and searches to the left of the pronoun for any NP nodes to propose as antecedents. At this level, search is also made to the right for NP nodes to propose as antecedents. This will handle cases of backwards pronominalization, as in (18). However, this portion of the search is bounded; it does not seek antecedents below any NP or S nodes encountered. 
5. COMPARISON WITH RELATED WORK

Hobbs (1978) has done the only previous work we know of to use traversal of a syntactic parse tree to determine pronominal reference, and we compare our algorithm with his in this section. Hobbs proposes a syntactic tree-traversal algorithm for pronominal reference that is "part of a larger left-to-right interpretation process" (Hobbs (1978, p. 318)). When a pronoun is encountered, the algorithm moves up to the nearest S or NP node (our "minimal domain nodes") that dominates the pronoun and searches to the left of the pronoun for any NP nodes that are dominated by an intervening S or NP node to propose as antecedents. The algorithm then proceeds up to the next NP or S node and searches to the left of the pronoun for any NP nodes to propose as antecedents. At this level, search is also made to the right for NP nodes to propose as antecedents. This will handle cases of backwards pronominalization, as in (18). However, this portion of the search is bounded; it does not seek antecedents below any NP or S nodes encountered. The search for c-commanding antecedents and antecedents for backwards pronominalization continues in this fashion until the top S is reached. At this point, preceding utterances in the discourse are searched, going from most recent to least recent. Each tree is searched in a left-to-right, breadth-first manner for NPs to propose as antecedents.

There are several differences between this algorithm and ours. The major one is that our algorithm is a single-pass, depth-first, exhaustive traversal, whereas Hobbs' algorithm first walks down the tree, then up, and then back down, and is not guaranteed to be exhaustive. Hobbs also imposes a "nearness" condition on the search for antecedents in the case of backwards pronominalization. However, as Hobbs points out, this restriction rules out the perfectly acceptable (24a) and (24b).

(24) a. Mary sacked out in [his] apartment before [Sam] could kick her out.
b. Girls who [he] has dated say that [Sam] is charming.

These examples show that the question of what the correct nearness constraint, if any, is remains open. Finally, Hobbs' algorithm handles both intra-sentential and extra-sentential pronominal reference relations, while ours is only intended to handle intra-sentential cases.

6. CURRENT STATUS AND FUTURE RESEARCH

In this section, we conclude by discussing some of the strengths and weaknesses of the current implementation and areas for future research. The shortcomings fall into two general categories: limitations of the implementation proper and limitations of the theory of pronominal reference that was implemented.

There are two general sorts of limitations to the mechanism described here: those that may be overcome by adding additional filtering devices to the basic tree-walking engine and those that may require a change in that basic engine. We begin with limitations of the first sort.

Currently, the algorithm does not do any checking on the potential antecedents of a pronoun or bound anaphor to see if they agree in person and number.[6] For bound anaphors, this is straightforward: a bound anaphor and its antecedent must agree in person and number. For personal pronouns, on the other hand, the situation is more complicated. In the singular, first ("I", "me"), second ("you"), and third ("he", "him", "she", "her", "it") person pronouns require agreement in both person and number. In the plural, however, the number requirement is dropped because of "split antecedent" cases, in which more than one NP forms part of the antecedent of a pronoun, as in:

(25) [John] told [Bill] that [they] should leave.

where "John" and "Bill", together, antecede "they". Third person plural pronouns still require that each antecedent of a split antecedent itself be third person. First person ("we", "us") and second person ("you") pronouns also allow split antecedents, but with looser person agreement requirements:

(26) a. [I] told [John] that [we] should go.
b. [I] told [you] that [we] should go.
c. [Bill] told [you] that [you] should go.
d. I told [you] that [you] should go.
e. *[John] told [Bill] that [we] should go.
f. *[John] told [Bill] that [you] should go.

Note that a first person plural pronoun allows split antecedents only if at least one of them is itself first person; contrast (26a) and (26b) with (26e). Similarly, a second person plural pronoun allows split antecedents only if at least one of them is also second person (contrast (26c) with (26f)), but not if one is first person (contrast (26c) with (26d)).

While the constraints on singular and third person plural pronouns could be implemented as a local agreement check (e.g. as a pre-condition for being added to a pronoun's :possible-antecedents slot), the person agreement constraint on first and second person plural pronouns would require a separate post-process, since it is not a local constraint on individual split antecedents, but a global constraint on the set of them. Currently, since our algorithm imposes no agreement checks, it allows both the good cases of split antecedents as well as the impossible ones. We need to add the check to our algorithm and extend the semantics to also deal with split antecedents.

[6] Currently, NPs are not specified for gender in our system, so this cannot be checked.
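A sketch (ours, not the implemented system) of the two kinds of filter just discussed: a local check usable as a pre-condition on the :possible-antecedents slot, and a global post-process over a candidate set of split antecedents. Feature dictionaries and their keys are hypothetical.

def locally_compatible(pronoun, antecedent):
    """Local filter: singular pronouns must match in person and
    number; third person plural pronouns drop the number check but
    still require third person antecedents."""
    if pronoun["number"] == "sg":
        return (pronoun["person"] == antecedent["person"]
                and antecedent["number"] == "sg")
    if pronoun["person"] == 3:
        return antecedent["person"] == 3
    return True            # 1st/2nd plural: defer to the global check

def split_antecedents_ok(pronoun, antecedents):
    """Global filter: plural 'we' needs at least one 1st-person
    antecedent; plural 'you' needs a 2nd-person one and no
    1st-person one; 'they' needs all 3rd person."""
    persons = {a["person"] for a in antecedents}
    if pronoun["person"] == 1:
        return 1 in persons
    if pronoun["person"] == 2:
        return 2 in persons and 1 not in persons
    return all(p == 3 for p in persons)

Under this sketch, (26a) passes (the set {1, 3} contains a first person antecedent for "we"), while (26e) fails (the set {3} does not), matching the contrasts above.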
The algorithm also does not check for "crossover" cases. Roughly speaking, these are examples similar to backwards pronominalization cases such as (18) (repeated here as (27a)), in which the potential antecedent is a quantifier or a trace of a moved WH element. In such cases, overlapping reference is impossible. Contrast (27a) with (27b) and (27c).

(27) a. [His] mother loves [John].
b. *His mother loves everyone.
c. *Who does his mother love t?

These particular cases can be handled by adding a check to clause [I.C] to prohibit quantified NPs and WH-traces from participating in backwards pronominalization. However, the more general problem of how elements dislocated by WH movement or by topicalization interact with the algorithm given here is a topic that requires further work beyond this simple measure.

More seriously, there is also a well-known case of pronominal reference within NPs that is not handled by the algorithm. A constraint from the syntactic theory of reference implemented by our algorithm is that if the antecedent-anaphor relation holds between two positions, disjoint reference also holds between them; see examples (10) and (11), and (13) and (14). However, there is one position in English where this generalization is known not to hold: the possessive position of an NP. A bound anaphor is possible here, but a pronoun in the same position is not subject to disjoint reference; see (28):

(28) a. [The men] read [each other's] books.
b. [The men] read [their] books.

(28a) is correctly handled by the algorithm as already outlined; pass-down-c-commanding-nodes treats the nodes internal to the current minimal domain as internal nodes for the possessive in a Noun Phrase, so the NP "the men" will be added to the :possible-antecedents slot of a bound anaphor in this position. However, the same characteristics of the algorithm will also result in the NP "the men" being assigned to the :impossible-antecedents slot of "their" in (28b). One possible remedy for this situation is to add a clause to update-node that checks for possessive pronouns separately from other pronouns and that allows NPs both internal and external to the current minimal domain to be possible antecedents. However, the more far-reaching modifications proposed in the discussion below of the theory of pronominal reference would obviate this change.

There are several areas where our implementation points out problems with the structural theory of pronominal reference. The first of these is the definition of c-command itself.[7]
Under Reinhart's (1976) original definition, a node A c-commands node B iff the branching node most immediately dominating A also dominates B and A does not dominate B. The difference between the two definitions can be seen in Figure 2-1; in addition to the c-command statements given there, Reinhart's definition adds the following:

E c-commands B, C, F, D, and G
F c-commands D and G
G c-commands C and F

These statements are true under Reinhart's definition of c-command, because no branching category intervenes between the c-commanding and c-commanded nodes, but not under that used in the implemented algorithm, since there is no sisterhood among the nodes. We have found this modified definition to be easier to implement; moreover, various researchers (e.g. Aoun and Sportiche (1983)) have pointed out problems with Reinhart's definition that the modified definition solves.

[7] Our algorithm uses a definition that is equivalent to the "in construction with" relation of Klima (1964, p. 297), which inspired c-command.
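To make the contrast between the two definitions concrete, here is a small sketch of ours; the tree encoding (labels with parent pointers) is a hypothetical minimal one, not the representation of the implemented system.

# Minimal tree node: a label, a parent pointer, and children.
class Node:
    def __init__(self, label, children=()):
        self.label, self.parent = label, None
        self.children = list(children)
        for c in self.children:
            c.parent = self

def dominates(a, b):
    while b is not None:
        if b is a:
            return True
        b = b.parent
    return False

def c_commands_sisterhood(a, b):
    """Modified (sisterhood-based) definition: A c-commands its
    sisters and everything they dominate."""
    if a.parent is None or dominates(a, b):
        return False
    return any(s is not a and dominates(s, b) for s in a.parent.children)

def c_commands_reinhart(a, b):
    """Reinhart (1976): the branching node most immediately
    dominating A also dominates B, and A does not dominate B."""
    if a is b or dominates(a, b):
        return False
    n = a.parent
    while n is not None and len(n.children) < 2:   # skip non-branching nodes
        n = n.parent
    return n is not None and dominates(n, b)

The Reinhart version skips non-branching nodes on the way up, which is exactly what licenses the extra c-command statements listed above for Figure 2-1; the sisterhood version stops at the immediate parent.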
The implementation has also brought to light asymmetries in the strictness of c-command used to determine the antecedents of a bound anaphor and that used to determine the non-antecedents of a pronoun. In particular, none of the conjuncts of a conjoined NP can be the antecedent of a reflexive:

(29) *John and Mary like himself.

However, all of the conjuncts of a conjoined NP are impossible antecedents for any pronoun for which the entire conjoined NP is an impossible antecedent. In

(30) John and Mary like him.

"John" cannot be an antecedent of "him", despite the fact that "John" does not c-command "him". Contrast this with (19), where a non-c-commanding possessive can be the antecedent of a pronoun. This is handled correctly in the implementation. Whenever our algorithm adds a conjoined NP to the :impossible-antecedents slot of a pronoun or a non-pronominal NP, it adds all the conjuncts of that NP as well. While this works, there is clearly something that is being missed here. Presumably, it should follow by definition that no individual conjunct of a conjoined NP can be a possible antecedent of a Noun Phrase with which the entire conjoined NP is disjoint in reference.[8]

A more serious problem with the theory of pronominal reference elaborated in Chomsky (1980) and (1981), and which our algorithm implements, is the crucial assumption that referentially dependent Noun Phrases can be exhaustively partitioned into bound anaphors vs. personal pronouns and that, therefore, they will be in complementary distribution. However, examples such as (28), as well as (31) (pointed out by Kuno (1987)) and (32), indicate that the notion of exhaustive partitioning of bound anaphors against personal pronouns is incorrect in the general case, even though it may be the typical state of affairs.

(31) a. [John] put the blanket under [himself].
b. [John] put the blanket under [him].

(32) a. I'll buy myself a beer.
b. I'll buy me a beer.

We can keep the insight of the structural theory of pronominal reference (i.e. that structural relations play a role in delimiting reference possibilities), while still incorporating these facts, if we give up the restriction that bound anaphors and personal pronouns are always in complementary distribution. One possible approach to this problem is to use feature decomposition to characterize bound anaphors and pronouns: the feature ±short-distance indicates whether a pronominal can be used as a short-distance anaphor, while the feature ±long-distance indicates whether it can be used as a long-distance anaphor.[9] While, in the normal case, personal pronouns in English are specified to be long-distance anaphors that cannot be used as short-distance anaphors (i.e. as [-short-distance, +long-distance]), this system would allow the feature governing a pronominal's use as a short-distance anaphor to be left free (i.e. as [?short-distance]) in certain syntactic contexts in English, such as the possessive position of a Noun Phrase, the object of certain prepositions, and the indirect object position of verbs.[10] Such a view of the syntax of personal pronouns could be implemented in a unification grammar fairly straightforwardly.

While such a treatment of personal pronouns as short-distance anaphors does not handle all the counter-examples to the syntactic theory of pronominal reference raised by researchers such as Kuno, it does begin to address them seriously. Clearly, it is more in accord with the facts than a theory that postulates an exhaustive partitioning of bound anaphors vs. personal pronouns, and so constitutes, in our opinion, a promising start towards handling the full range of pronoun reference facts in a reasonable manner.

[8] Thanks to Leland George for this insight, as well as for discussion of short and long distance anaphors.
[9] This is akin to the feature system ±anaphoric, ±pronominal of Chomsky (1981).
[10] This suggestion was originally made by Lust, et al. (1989), who support it on the basis of language acquisition data.
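As a rough rendering (ours) of how such feature decomposition might look as lexical entries, with the underspecified "?" value encoded as None; the entry names and contexts are illustrative assumptions, not the system's lexicon.

# Hypothetical entries under the proposed feature decomposition.
PRONOUNS = {
    "himself": {"short": True,  "long": False},   # bound anaphor
    "him":     {"short": False, "long": True},    # ordinary pronoun
    # In contexts such as NP-possessive position or the object of
    # certain prepositions, the short-distance feature is left free:
    "him (possessive / PP-object)": {"short": None, "long": True},
}

def may_be_short_distance(entry):
    # None (i.e. "?short-distance") is compatible with a short-distance
    # construal, as in "John put the blanket under him" (31b).
    return entry["short"] is not False

This lets "him" take a clause-mate antecedent in (31b) without also licensing it in ordinary object position, where the [-short-distance] entry applies.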
Finally, we consider alternate ways of combining our pronominal reference mechanism with parsing and semantic interpretation. One possibility is a fully incremental architecture in which c-command constraints, semantic interpretations, and external reference resolution are computed simultaneously with the parse. Such an architecture might seem particularly attractive for processing large sets of alternatives, such as are encountered when processing spoken input. The intra-sentential reference phenomena described in this paper pose a problem for such an incremental approach, however. The possibilities for internal resolution for an anaphor cannot all be known locally to the anaphor, but must be obtained from elsewhere in the sentence. In many cases antecedents will lie to the left of the anaphor in the sentence, and thus will have been seen by a left-to-right parser by the time the anaphor is reached. But consider a case of backward pronominalization, as in (18), repeated here as (33):

(33) His mother loves John.

A wholly incremental mechanism, parsing the NP "his mother" first, would have to conclude that the referent of "his" was extra-sentential, since no intra-sentential referent was seen to the left. And if no extra-sentential referent could be found, the NP would have to be rejected. To be successful, such an incremental mechanism would have to be modified to include a kind of "lazy evaluation" which could rule out certain referents for an anaphor but never rule an anaphor empty of referents until utterance processing had been completed.

Another alternative would be to separate intra-sentential anaphor resolution from semantic interpretation, performing it instead in conjunction with extra-sentential discourse processing. A possible problem for this approach can be seen in sentences where the anaphor is combined with another ambiguous element, so that a proliferation of semantic interpretations occurs, as in:

(34) John's car is better than Bill's.

where the pro N-BAR, left completely unspecified during semantic interpretation, is free to generate all sorts of combinations with the possessive, including those in which the possession is appropriate to various "relational" interpretations of the pro N-BAR (de Bruin and Scha (1988)).

In future work, we plan to combine parsing and semantic interpretation into a single unification grammar incorporating semantic information in additional features. Part of that work will be to look for the optimal method of combining it with the pronominal reference mechanism presented here.

ACKNOWLEDGEMENTS

The work reported here was supported by the Advanced Research Projects Agency under Contract No. N00014-C-87-0085 monitored by the Office of Naval Research. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States Government.

REFERENCES

Aoun, Yousef and Dominique Sportiche (1983) "On the Formal Theory of Government", The Linguistic Review 2, pp. 211-236.

Ayuso, Damaris M. (1989) "Discourse Entities in Janus", 27th Annual Meeting of the Association for Computational Linguistics: Proceedings of the Conference, Association for Computational Linguistics, Morristown, NJ.

Baker, C.L. (1978) Introduction to Generative-Transformational Syntax, Prentice-Hall, Inc., Englewood Cliffs, NJ.

Boisen, S., Y. Chow, A. Haas, R. Ingria, S. Roucos, R. Scha, D. Stallard and M. Vilain (1989) Integration of Speech and Natural Language: Final Report, Report No. 6991, BBN Systems and Technologies Corporation, Cambridge, Massachusetts.

Bronnenberg, W.J.H.J., Harry C. Bunt, S.P. Jan Landsbergen, Remko J.H. Scha, W.J. Schoenmakers, and E.P.C. van Utteren (1980) "The Question Answering System PHLIQA1", in Leonard Bolc, ed., Natural Language Question Answering Systems, Hanser, Munich, pp. 217-305.

Chomsky, Noam (1980) "On Binding", Linguistic Inquiry 11.1, pp. 1-46.

Chomsky, Noam (1981) Lectures on Government and Binding, Foris Publications, Dordrecht, Holland / Cinnaminson, U.S.A.

de Bruin, Jos and Remko Scha (1988) "The Interpretation of Relational Nouns", 26th Annual Meeting of the Association for Computational Linguistics: Proceedings of the Conference, Association for Computational Linguistics, Morristown, NJ, pp. 25-32.

Hobbs, Jerry R. (1978) "Resolving Pronoun References", Lingua 44, pp. 311-338.

Jackendoff, Ray (1972) Semantic Interpretation in Generative Grammar, MIT Press, Cambridge, MA.

Klima, Edward S. (1964) "Negation in English", in J. A. Fodor and J. J. Katz, eds., The Structure of Language: Readings in the Philosophy of Language, Prentice-Hall, Englewood Cliffs, NJ.

Kuno, Susumu (1987) Functional Syntax: Anaphora, Discourse, and Empathy, The University of Chicago Press, Chicago and London.

Lasnik, Howard (1976) "Remarks on Coreference", Linguistic Analysis 2.1, pp. 1-22.

Lust, Barbara, Reiko Mazuka, Gita Martohardjono, and Jeong Me Yoon (1989) "On Parameter Setting in First Language Acquisition: The Case of the Binding Theory", paper presented at the 12th GLOW Colloquium, Utrecht, April 5, 1989.

Reinhart, Tanya (1976) The Syntactic Domain of Anaphora, Ph.D. Dissertation, MIT, Cambridge, Massachusetts.
Scha, Remko and David Stallard (1988) "Multi-Level Plurals and Distributivity", 26th Annual Meeting of the Association for Computational Linguistics: Proceedings of the Conference, Association for Computational Linguistics, Morristown, NJ, pp. 17-24.
PARSING AS NATURAL DEDUCTION

Esther König
Universität Stuttgart, Institut für Maschinelle Sprachverarbeitung, Keplerstrasse 17, D-7000 Stuttgart 1, FRG

Abstract

The logic behind parsers for categorial grammars can be formalized in several different ways. Lambek Calculus (LC) constitutes an example for a natural deduction(1) style parsing method. In natural language processing, the task of a parser usually consists in finding derivations for all different readings of a sentence. The original Lambek Calculus, when it is used as a parser/theorem prover, has the undesirable property of allowing for the derivation of more than one proof for a reading of a sentence, in the general case. In order to overcome this inconvenience and to turn Lambek Calculus into a reasonable parsing method, we show the existence of "relative" normal form proof trees and make use of their properties to constrain the proof procedure in the desired way.

1 Introduction

Sophisticated techniques have been developed for the implementation of parsers for (augmented) context-free grammars. [Pereira/Warren 1983] gave a characterization of these parsers as being resolution-based theorem provers. Resolution might be taken as an instance of Hilbert-style theorem proving, where there is one inference rule (e.g. Modus Ponens or some other kind of Cut Rule) which allows for deriving theorems from a set of axioms. In the case of parsing, the grammar rules and the lexicon would be the axioms.

When categorial grammars were discovered for computational linguistics, the most obvious way to design parsers for categorial grammars seemed to apply the existing methods: The few combination rules and the lexicon constitute the set of axioms, from which theorems are derived by a resolution rule. However, this strategy leads to unsatisfactory results, in so far as extended categorial grammars, which make use of combination rules like functional composition and type raising, provide for a proliferation of derivations for the same reading of a sentence. This phenomenon has been dubbed the spurious ambiguity problem [Pareschi/Steedman 1987]. One solution to this problem is to describe normal forms for equivalent derivations and to use this knowledge to prune the search space of the parsing process [Hepple/Morrill 1989].

Other approaches to cope with the problem of spurious ambiguity take into account the peculiarities of categorial grammars compared to grammars with a "context-free skeleton". One characteristic of categorial grammars is the shift of information from the grammar rules into the lexicon: grammar rules are mere combination schemata, whereas syntactic categories do not have to be atomic items as in the "context-free" formalisms, but can also be structured objects as well.

The inference rule of a Hilbert-style deduction system does not refer to the internal structure of the propositions which it deals with. The alternative to Hilbert-style deduction is natural deduction (in the broad sense of the word), which is "natural" in so far as at least some of the inference rules of a natural deduction system describe explicitly how logical operators have to be treated. Therefore natural deduction style proof systems are in principle good candidates to function as a framework for categorial grammar parsers.

(1) "Natural deduction" is used here in its broad sense, i.e. natural deduction as opposed to Hilbert-style deduction.
If one considers categories as formulae, then a proof system would have to refer to the operators which are used in those formulae.

The natural deduction approach to parsing with categorial grammars splits up into two general mainstreams, both of which use the Gentzen sequent representation to state the corresponding calculi. The first alternative is to take a general purpose calculus and propose an adequate translation of categories into formulae of this logic. An example for this approach has been carried out by Pareschi [Pareschi 1988], [Pareschi 1989]. On the other hand, one might use a specialized calculus. Lambek proposed such a calculus for categorial grammar more than three decades ago [Lambek 1958].

The aim of this paper is to describe how Lambek Calculus can be implemented in such a way that it serves as an efficient parsing mechanism. To achieve this goal, the main drawback of the original Lambek Calculus, which consists of a version of the "spurious ambiguity problem", has to be overcome. In Lambek Calculus, this overgeneration of derivations is due to the fact that the calculus itself does not give enough constraints on the order in which the inference rules have to be applied.

In section 2 of the paper, we present Lambek Calculus in more detail. Section 3 consists of the proof for the existence of normal form proof trees relative to the readings of a sentence. Based on this result, the parsing mechanism is described in section 4.

2 Lambek Calculus

In the following, we restrain ourselves to cut-free and product-free Lambek Calculus, a calculus which still allows us to infer infinitely many derived rules such as the Geach rule, functional composition etc. [Zielonka 1981]. The cut-free and product-free Lambek Calculus is given in figures 1 and 2. Be aware of the fact that we did not adopt Lambek's representation of complex categories. Proofs in Lambek Calculus can be represented as trees whose nodes are annotated with sequents. An example is given in figure 3. A lexical lookup step which replaces lexemes by their corresponding categories has to precede the actual theorem proving process. For this reason, the categories in the antecedens of the input sequent will also be called lexical categories.

We introduce the notions of head, goal category, and current functor: The head of a category is its "innermost" value category: the head of a basic category is the category itself; the head of a complex category is the head of its value category. The category in the succedens of a sequent is called goal category. The category which is "decomposed" by an inference rule application is called current functor.

Basic Category: a constant
Rightward Looking Category: if value and argument are categories, then (value/argument) is a category
Leftward Looking Category: if value and argument are categories, then (value\argument) is a category

Figure 1: Definition of categories

axiom scheme:

(axiom)    x -> x

logical rules:

(/:left)   T -> y    U, x, V -> z
           ----------------------
           U, (x/y), T, V -> z

(/:right)  T, y -> x
           ----------
           T -> (x/y)

(\:left)   T -> y    U, x, V -> z
           ----------------------
           U, T, (x\y), V -> z

(\:right)  y, T -> x
           ----------
           T -> (x\y)

T non-empty sequence of categories; U, V sequences; x, y, z categories.

Figure 2: Cut-free and product-free LC

the president of Iceland

np/n, n, (n\n)/np, np -> np
    n, (n\n)/np, np -> n
        np -> np
        n, n\n -> n
            n -> n
            n -> n
    np -> np

Figure 3: Sample proof tree
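As an illustration of the rules in Figure 2, here is a minimal sketch of ours of naive backward proof search over the cut-free, product-free calculus; it is not the prover developed in this paper (which adds the constraints of section 4), only the raw calculus. Basic categories are strings; complex categories are tuples ('/', value, argument) or ('\\', value, argument).

def prove(ante, goal):
    # (axiom)
    if len(ante) == 1 and ante[0] == goal:
        return True
    # (/:right) and (\:right): the right rules are invertible in LC,
    # so they can be applied eagerly.
    if isinstance(goal, tuple):
        op, x, y = goal
        return prove(ante + [y], x) if op == '/' else prove([y] + ante, x)
    # (/:left): U, (x/y), T, V -> z  from  T -> y  and  U, x, V -> z
    # (\:left): U, T, (x\y), V -> z  from  T -> y  and  U, x, V -> z
    for i, cat in enumerate(ante):
        if not isinstance(cat, tuple):
            continue
        op, x, y = cat
        rest = ante[i + 1:] if op == '/' else ante[:i]
        for k in range(1, len(rest) + 1):   # T must be non-empty
            if op == '/':
                t, v = rest[:k], rest[k:]
                if prove(t, y) and prove(ante[:i] + [x] + v, goal):
                    return True
            else:                            # T immediately precedes (x\y)
                u, t = rest[:len(rest) - k], rest[len(rest) - k:]
                if prove(t, y) and prove(u + [x] + ante[i + 1:], goal):
                    return True
    return False

# The sequent of Figure 3:
NP, N = 'np', 'n'
assert prove([('/', NP, N), N, ('/', ('\\', N, N), NP), NP], NP)

Every rule application removes exactly one connective, so the search terminates; but nothing in it prevents the redundant derivations discussed in section 3.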
There is no room to express additional con- straints concerning the combination of categories. Clearly, some kind of feature handling mechanism is needed to enable the grammar writer to state e.g. conditions on the agreement of morpho-syntactic features or to describe control phenomena. For the reason of linguistic expressiveness and to facili- tate the description of the parsing algorithm below, 273 we extend Lambek Calculus to Unification Lambek Calculus (ULC). First, the definition of basic category must be adapted: a basic category consists of an atomic category name and feature description. (For the definition of feature descriptions or feature terms see [Smolka 1988].) For complex categories, the same recursive definition applies as before. The syntax for categories in ULC is given informally in figure 4 which shows the category of a control verb like "persuade". We assume that variable names for feature descriptions are local to each category in a sequent. The (/:left)- and (\:left)-inference rules have to take care of the substitutions which are involved in handling the variables in the exten- ded categories (figure 5). Heed that the substitu- tion function o" has scope over a whole sequent, and therefore, over a complete subproof, and not only over a single category. In this way, correct varia- ble bindings for hypothetic categories, which are introduced by "right"-rules, are guaranteed. ((s([<pred>:persuade]) <subj>:Subj <obj>:Obj <vcomp>:VComp]) \np(Subj) )/(s(VComp) \np(Obj)) )/np(Obj) Figure 4: Sample category I T --* Y2 a(U v z~ V ~ z) ~J, (=Iv,). T, V --,, a(yl ) = a(y2) Figure 5: (/:lefl)-rule in ffLC np/n, n, (n\n)/np, np -- np np ---* np np/n, n, n\n ~ np n ---* n np/n, n --, np n ----* n np ~ np np/n, n, (n\n)/np, np ---, np .np--,np np/n, n, n\n ---* np n, n\n ~ n np --~ np n---~ n n----~ n Figure 6: Extra proofs 3 Normal Proof Trees The sentence in figure 3 has two other proofs, which are listed in figure 6, although one would like to contribute only one syntactic or semantic reading to it. In this section, we show that such a set of a possibly abundant number of proofs for the same reading of a sequent possesses one distinguished member which can be regarded as the represen- tative or the normal form proof tree for this set. In order to be able to use the notion of a "rea- ding" more precisely, we undertake the following definition of structures which determine readings for our purposes. Because of their similarity to syn- tax trees as used with context-free grammars, we also call them "syntax trees" for the sake of sim- plicity. Since, on the semantic level, the use of a "left'-rule in Lambek Calculus corresponds to the functional application of a functor term to some argument and the "right"-rules are equivalent to functional abstraction [van Benthem 1986], it is es- sential that in a syntax tree, a trace for each of these steps in a derivation be represented. Then it is guaranteed that the semantic representation of a sentence can be constructed from a syntax tree which is annotated by the appropriate partial se- mantic expressions of whatever semantic represen- tation language one chooses. Structurally distinct syntax trees amount to different semantic expres- sions. A syntax tree t condenses the information of a proof for a sequent s in the following way: 1. Labels of single.node trees, are either lexical categories or arguments of lexical categories. 2. 
2. The root of a non-trivial tree has either
(a) one daughter tree whose root is labelled with the value category of the root's label. This case catches the application of a "right"-inference rule; or
(b) two daughter trees. The label of the root node is the value category, the label of the root of one daughter is the functor, and the label of the root of the other daughter is the argument category of an application of a "left"-inference rule.

Since the size of a proof for a sequent is correlated linearly to the number of operators which occur in the sequent, different proof trees for the same sequent do not differ in terms of size; they are merely structurally distinct. The task of defining those relative normal forms of proofs, which we are aiming at, amounts to describing proof trees of a certain structure which can be more easily correlated with syntax trees than would possibly be the case for other proofs of the same set of proofs.

The outline of the proof for the existence of normal form proof trees in Lambek Calculus is the following: Each proof tree of the set of proof trees for one reading of a sentence, i.e. a sequent, is mapped onto the syntax tree which represents this reading. By a proof reconstruction procedure (PR), this syntax tree can be mapped onto exactly one of the initial proof trees, which will be identified as being the normal form proof tree for that set of proof trees.

It is obvious that the mapping from proof trees onto syntax trees (Syntax Tree Construction, SC) partitions the set of proof trees for all readings of a sentence into a finite number of disjoint subsets, i.e. equivalence classes of proof trees. Proof trees of one of these subsets share the property of having the same syntax tree, i.e. reading. Hence, the single proof tree which is reconstructed from such a syntax tree can be safely taken as a representative for the subset which it belongs to. In figure 7, this argument is restated more formally.

proof trees           syntax trees        normal proofs
p11 ... p1m    -SC->  t1          -PR->   p1*
...
pn1 ... pnk    -SC->  tn          -PR->   pn*

Figure 7: Outline of the proof for normal forms

We want to prove the following theorem:

Theorem 1 The set of proofs for a sequent can be partitioned into equivalence classes according to their corresponding syntax trees. There is exactly one proof per equivalence class which can be identified as its normal proof.

This theorem splits up into two lemmata, the first of which is:

Lemma 1 For every proof tree, there exists exactly one syntax tree.

The proof for lemma 1 consists of constructing the required syntax tree for a given proof tree. The preparative step of the syntax tree construction procedure SC consists of augmenting lexical categories with (partial) syntax trees. Partial syntax trees are represented by λ-expressions to indicate which subtrees have to be found in order to make the tree complete. The notation for a category c paired with its (partial) syntax tree t is c : t. A basic category is associated with the tree consisting of one node labelled with the name of the category.

Complex categories are mapped onto partial binary syntax trees represented by λ-expressions. We omit the detailed construction procedure for partial syntax trees on the lexical level, and give an example (see fig. 8) and an intuitive characterization instead.
Such a partial tree has to be built up in such a way that it is a "nesting" of functional applications: one distinguished leaf is labelled with the functor category which this tree is associated with; all other leaves are labelled with variables bound by λ-operators. The list of node labels along the path from the distinguished node to the root node must show the "unfolding" of the functor category towards its head category. Such a path is dubbed projection line.

(s\np)/np : λx1 λx2 . 's'( x2, 's\np'( '(s\np)/np', x1 ) )

Figure 8: Category and its partial syntax tree

On the basis of these augmented categories, the overall syntax tree can be built up together with the proof for a sequent. As has already been discussed above, a "left"-rule performs a functional application of a function tf to an argument expression ta, which we will abbreviate by tf[ta]. "Right"-rules turn an expression tv into a function (i.e. a partial syntax tree) tf = λta.tv by means of λ-abstraction over ta. However, in order to retain the information on the category of the argument and on the direction, we use the functor category itself as the root node label instead of the aforementioned λ-expression.

The steps for the construction of a syntax tree along with a proof are encoded as annotations of the categories in Lambek Calculus (see figure 9).

(axiom)    x:t -> x:t

(/:left)   T -> y:ta    U, x:tf[ta], V -> z:t
           ----------------------------------
           U, (x/y):tf, T, V -> z:t

(/:right)  T, y -> x:t
           ---------------------
           T -> (x/y):'(x/y)'(t)

(\:left)   T -> y:ta    U, x:tf[ta], V -> z:t
           ----------------------------------
           U, T, (x\y):tf, V -> z:t

(\:right)  y, T -> x:t
           ---------------------
           T -> (x\y):'(x\y)'(t)

T non-empty sequence of categories; U, V sequences; x, y, z categories; t, ta, tf partial syntax trees.

Figure 9: Syntax Tree Construction in LC

An example for a result of Syntax Tree Construction is shown in figure 10, where "input" syntax trees are listed below the corresponding sequent, and "output" syntax trees are displayed above their sequents, if shown at all.

whom mary loves

'rel'('rel/(s/np)', 's/np'('s'('np', 's\np'('(s\np)/np', 'np'))))
rel/(s/np), np, (s\np)/np -> rel
λx 'rel'(x),  'np',  λx1 λx2 's'(x2, 's\np'('(s\np)/np', x1))

    's/np'('s'('np', 's\np'('(s\np)/np', 'np')))
    np, (s\np)/np -> s/np
    'np',  λx1 λx2 's'(x2, 's\np'('(s\np)/np', x1))

        np, (s\np)/np, np -> s
        'np',  λx1 λx2 's'(x2, 's\np'('(s\np)/np', x1)),  'np'

            np -> np        np, s\np -> s
            'np'            'np',  λx2 's'(x2, 's\np'('(s\np)/np', 'np'))

                np -> np    s -> s
                'np'        's'('np', 's\np'('(s\np)/np', 'np'))

    rel -> rel

Figure 10: Sample syntax tree construction

Since there is a one-to-one correspondence between proof steps and syntax tree construction steps, exactly one syntax tree is constructed per successful proof for a sequent. This leads us to the next step of the proof for the existence of normal forms, which is paraphrased by lemma 2.

Lemma 2 From every syntax tree, a unique proof tree can be reconstructed.

The proof for this lemma is again a constructive one: by a recursive traversal of a syntax tree, we obtain the normal form proof tree. (The formulation of the algorithm does not always properly distinguish between the nodes of a tree and the node labels.)

Proof Reconstruction (PR)
Input: A syntax tree t with root node label g.
Output: A proof tree p whose root sequent s has antecedens A and goal category g, and whose daughter proofs pi (i = 0, 1, 2) are determined by the following method:
Method:
- If t consists of the single node g, p consists of an s which is an instantiation of the axiom scheme with g -> g. s has no daughters.
- If g is a complex category x/y resp. x\y and has one daughter tree t1, the antecedens A is the list of all leaves of t without the rightmost resp. the leftmost leaf. s has one daughter proof which is determined by applying Proof Reconstruction to the daughter tree of g.
- If g is a basic category and has two daughter trees t1 and t2, then A is the list of all leaves of t. s has two daughter proof trees p1 and p2. C is the label of the leaf whose projection line ends at the root g. t1 is the sister tree of this leaf. p1 is obtained by applying PR to t1. p2 is the result of applying PR to t2, which remains after cutting off the two subtrees C and t1 from t.
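Before drawing the conclusion, a sketch of ours of the PR traversal may help; it is not the paper's implementation. Categories are encoded as in the prover sketch above, a syntax tree node is (label, children), and identification of the projection line is simplified to a label test (the child whose value category equals the node's label).

def frontier(tree):
    label, kids = tree
    return [label] if not kids else [x for k in kids for x in frontier(k)]

def split(node):
    """Return (functor child, argument child) of a binary node: the
    functor child is the one on the projection line."""
    d1, d2 = node[1]
    if isinstance(d1[0], tuple) and d1[0][1] == node[0]:
        return d1, d2
    return d2, d1

def cut(tree, node):
    """Replace `node` by a leaf carrying its value category."""
    if tree is node:
        return (tree[0], [])
    return (tree[0], [cut(k, node) for k in tree[1]])

def pr(tree):
    g, kids = tree
    if not kids:                        # axiom: g -> g
        return {"sequent": ([g], g), "daughters": []}
    if len(kids) == 1:                  # right rule: drop the hypothesis leaf
        leaves = frontier(tree)
        ante = leaves[:-1] if g[0] == '/' else leaves[1:]
        return {"sequent": (ante, g), "daughters": [pr(kids[0])]}
    node = tree                         # left rule: descend to the lexical
    f, t1 = split(node)                 # functor C and its sister t1
    while f[1]:
        node = f
        f, t1 = split(node)
    t2 = cut(tree, node)                # node becomes a leaf in t2
    return {"sequent": (frontier(tree), g),
            "daughters": [pr(t1), pr(t2)]}

# The normal proof for  np, (s\np)/np, np -> s :
S, NP = 's', 'np'
TV = ('/', ('\\', S, NP), NP)
proof = pr((S, [(NP, []), (('\\', S, NP), [(TV, []), (NP, [])])]))

On this input, pr nests the subject combination outside the object combination, which is exactly the nesting the normal form requires.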
Thus, all proofs of an equivalence class are mapped onto one single proof by a composition of the two functions Syntax Tree Construction and Proof Reconstruction. □

4 The Parser

We showed the existence of relative normal form proof trees by the detour on syntax trees, assuming that all possible proof trees have been generated beforehand. This is obviously not the way one wants to take when parsing a sentence. The goal is to construct the normal form proof directly. For this purpose, a description of the properties which distinguish normal form proofs from non-normal form proofs is required.

The essence of a proof tree is its nesting of current functors, which can be regarded as a partial order on the set of current functors occurring in this specific proof tree. Since the current functors of two different rule applications might, coincidentally, be the same form of category, obviously some kind of information is missing which would make all current functors of a proof tree (and hence of a syntax tree) pairwise distinct. This happens by stating which subsequence the head of the current functor spans over. As for information on a subsequence, it is sufficient to know where it starts and where it ends.

Here is the point where we make use of the expressiveness of ULC. We do not only add the start and end position information to the head of a complex category but also to its other basic subcategories, since this information will be used e.g. for making up subgoals. We make use of obvious constraints among the positional indices of subcategories of the same category. The category in figure 11 spans from position 2 to 3; its head spans from 1 to 3 if its argument category spans from 1 to 2. The augmentation of categories by their positional indices is done most efficiently during the lexical lookup step.

s([<start>:1, <end>:3]) \ np([<start>:1, <end>:2])

Figure 11: Category with position features

We can now formulate what we have learned from the Proof Reconstruction (PR) procedure. Since it works top-down on a syntax tree, the characteristics of the partial order on current functors given by their nesting in a proof tree are the following Nesting Constraints:

1. Right-Rule Preference: Complex categories on the righthand side of the arrow become current functors before complex categories on the lefthand side.
2. Current Functor Unfolding: Once a lefthand side category is chosen as current functor, it has to be "unfolded" completely, i.e. in the next inference step its value category has to become current functor unless it is a basic category.
3. Goal Criterium: A lefthand side functor category can only become current functor if its head category is unifiable with the goal category of the sequent where it occurs.
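A sketch of ours of how constraints 2 and 3 can prune the choice of the next current functor during backward search; constraint 1 is realized by applying right rules eagerly, as in the prover sketch of section 2. Heads and unifiability are simplified here to atomic equality, which suffices to show the shape of the filter.

def head(cat):
    while isinstance(cat, tuple):
        cat = cat[1]                 # follow the value category
    return cat

def candidate_functors(ante, goal, unfolding=None):
    # 2. Current Functor Unfolding: stay on the functor being unfolded.
    if unfolding is not None and isinstance(ante[unfolding], tuple):
        return [unfolding]
    # 3. Goal Criterium: the head must unify with the goal's head.
    return [i for i, c in enumerate(ante)
            if isinstance(c, tuple) and head(c) == head(goal)]

Note that for a sequent like s/s, s/s, s, s\s, s\s -> s every functor passes this head test, which is why the criterium alone is too weak, as discussed next.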
Condition 3 is too weak if it is stated against the background of propositional Lambek Calculus only. It would allow for proof trees whose nesting of current functors does not coincide with the nesting of current functors in the corresponding syntax tree (see figure 12).

s/s, s/s, s, s\s, s\s -> s
    s -> s
    s/s, s, s\s, s\s -> s
        s -> s
        s, s\s, s\s -> s
            s -> s
            s, s\s -> s
                s -> s
                s -> s

Figure 12: Non-normal form proof

The outline of the parsing/theorem proving algorithm P is:

- A sequent is proved if it is an instance of the axiom scheme.
- Otherwise, choose an inference rule by obeying the nesting constraints and try to prove the premises of the rule.

Algorithm P is sound with respect to LC because it has been derived from LC by adding restrictions, and not by relaxing original constraints. It is also complete with regard to LC, because the restrictions are just as many as needed to rule out proof trees of the "spurious ambiguity" kind according to theorem 1.

4.1 Further Improvements

The performance of the parser/theorem prover can be improved further by adding at least the two following ingredients: The positional indices can help to decide where sequences in the "left"-rules have to be split up to form the appropriate subsequences of the premises. In [van Benthem 1986], it was observed that theorems in LC possess a so-called count invariant, which can be used to filter out unpromising suggestions for (sub-)proofs during the inference process.

5 Conclusion

The cut-free and product-free part of Lambek Calculus has been augmented by certain constraints in order to yield only normal form proofs, i.e. only one proof per "reading" of a sentence. Thus, theorem provers for Lambek Calculus become realistic tools to be employed as parsers for categorial grammar.

General efficiency considerations would be of interest. Unconstrained Lambek Calculus seems to be absolutely inefficient, i.e. exponential. So far, no results are known as to how the use of the nesting constraints and the count invariant filter systematically affect the complexity. At least intuitively, it seems clear that their effects are drastic, because due to the former considerably fewer proofs are generated at all, and due to the latter substantially fewer irrelevant sub-proofs are pursued.

From a linguistic standpoint, for example, the following questions have to be discussed: How does Lambek Calculus interact with a sophisticated lexicon containing e.g. lexical rules? Which would be linguistically desirable extensions of the inference rule system that would not throw over the properties (e.g. normal form proofs) of the original Lambek Calculus? An implementation of the normal form theorem prover is currently being used for experimentation concerning these questions.

6 Acknowledgements

The research reported in this paper is supported by the LILOG project and a doctoral fellowship, both from IBM Deutschland GmbH, and by the Esprit Basic Research Action Project 3175 (DYANA). I thank Jochen Dörre, Glyn Morrill, Remo Pareschi, and Henk Zeevat for discussion and criticism, and Fiona McKinnon for proof-reading. All errors are my own.

References

[Calder/Klein/Zeevat 1988] Calder, J., E. Klein and H. Zeevat (1988): Unification Categorial Grammar: A Concise, Extendable Grammar for Natural Language Processing. In: Proceedings of the 12th International Conference on Computational Linguistics, Budapest.

[Gallier 1986] Gallier, J.H. (1986): Logic for Computer Science. Foundations of Automatic Theorem Proving. Harper and Row, New York.

[Hepple/Morrill 1989] Hepple, M. and G. Morrill (1989): Parsing and derivational equivalence. In: Proceedings of the Association for Computational Linguistics, European Chapter, Manchester, UK.
[Lambek 1958] Lambek, J. (1958): The mathematics of sentence structure. In: Amer. Math. Monthly 65, 154-170.

[Moortgat 1988] Moortgat, M. (1988): Categorial Investigations. Logical and Linguistic Aspects of the Lambek Calculus. Foris Publications.

[Pareschi 1988] Pareschi, R. (1988): A Definite Clause Version of Categorial Grammar. In: Proc. of the 26th Annual Meeting of the Association for Computational Linguistics. Buffalo, N.Y.

[Pareschi 1989] Pareschi, R. (1989): Type-Driven Natural Language Analysis. Dissertation, University of Edinburgh.

[Pareschi/Steedman 1987] Pareschi, R. and M. Steedman (1987): A Lazy Way to Chart-Parse with Categorial Grammars. In: Proc. 25th Annual Meeting of the Association for Computational Linguistics, Stanford; 81-88.

[Pereira/Warren 1983] Pereira, F.C.N. and D.H.D. Warren (1983): Parsing as Deduction. In: Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, Boston; 137-144.

[Smolka 1988] Smolka, G. (1988): A Feature Logic with Subsorts. Lilog-Report 33, IBM Deutschland GmbH, Stuttgart.

[Uszkoreit 1986] Uszkoreit, H. (1986): Categorial Unification Grammar. In: Proceedings of the 11th International Conference on Computational Linguistics, Bonn.

[van Benthem 1986] van Benthem, J. (1986): Essays in Logical Semantics. Reidel, Dordrecht.

[Zielonka 1981] Zielonka, W. (1981): Axiomatizability of Ajdukiewicz-Lambek Calculus by Means of Cancellation Schemes. In: Zeitschrift für mathematische Logik und Grundlagen der Mathematik, 27, 215-224.
EFFICIENT PARSING FOR FRENCH*

Claire Gardent
Université Blaise Pascal - Clermont II and University of Edinburgh, Centre for Cognitive Science, 2 Buccleuch Place, Edinburgh EH8 9LW, SCOTLAND, UK

Gabriel G. Bès, Pierre-François Jude and Karine Baschung
Université Blaise Pascal - Clermont II, Formation Doctorale Linguistique et Informatique, 34, Ave. Carnot, 63037 Clermont-Ferrand Cedex, FRANCE

ABSTRACT

Parsing with categorial grammars often leads to problems such as proliferating lexical ambiguity, spurious parses and overgeneration. This paper presents a parser for French developed on a unification based categorial grammar (FG) which avoids these problems. This parser is a bottom-up chart parser augmented with a heuristic eliminating spurious parses. The unicity and completeness of parsing are proved.

INTRODUCTION

Our aim is twofold. First, to provide a linguistically well motivated categorial grammar for French (henceforth, FG) which accounts for word order variations without overgenerating and without unnecessary lexical ambiguities. Second, to enhance parsing efficiency by eliminating spurious parses, i.e. parses with different derivation trees but equivalent semantics. The two goals are related in that the parsing strategy relies on properties of the grammar which are independently motivated by the linguistic data. Nevertheless, the knowledge embodied in the grammar is kept independent from the processing phase.

1. LINGUISTIC THEORIES AND WORD ORDER

Word order remains a pervasive issue for most linguistic analyses. Among the theories most closely related to FG, Unification Categorial Grammar (UCG: Zeevat et al. 1987), Combinatory Categorial Grammar (CCG: Steedman 1985, Steedman 1988), Categorial Unification Grammar (CUG: Karttunen 1986) and Head-driven Phrase Structure Grammar (HPSG: Pollard & Sag 1988) all present inconveniences in their way of dealing with word order as regards parsing efficiency and/or linguistic data.

In UCG and in CCG, the verb typically encodes the notion of a canonical ordering of the verb arguments. Word order variations are then handled by resorting to lexical ambiguity and jump rules[1] (UCG) or to new combinators (CCG). As a result, the number of lexical and/or phrasal edges increases rapidly, thus affecting parsing efficiency. Moreover, empirical evidence does not support the notion of a canonical order for French (cf. Bès & Gardent 1989).

In contrast, CUG, GPSG (Gazdar et al. 1985) and HPSG do not assume any canonical order, and subcategorisation information is dissociated from surface word order. Constraints on word order are enforced by features and graph unification (CUG) or by Linear Precedence (LP) statements (HPSG, GPSG). The problems with CUG are that, on the computational side, graph-unification is costly and less efficient in a Prolog environment than term unification, while from the linguistic point of view (a) NPs must be assumed unambiguous with respect to case, which is not true for - at least - French, and (b) clitic doubling cannot be accounted for as a result of using graph unification between the argument feature structure and the functor syntax value-set. In HPSG and GPSG (cf. also Uszkoreit 1987), the problem is that somehow LP statements must be made to interact with the corresponding rule schemas. That is, either rule schemas and LP statements are precompiled before parsing and the number of rules increases rapidly, or LP statements are checked on the fly during parsing, thus slowing down processing.

2. THE GRAMMAR

The formal characteristics of FG underlying the parsing heuristic are presented in §4. The characteristics of FG necessary to understand the grammar are resumed here (see (Bès & Gardent 1989) for a more detailed presentation).

* The work reported here was carried out in the ESPRIT Project 393 ACORD, "The Construction and Interrogation of Knowledge Bases using Natural Language Text and Graphics".
[1] A jump rule is of the form X/Y, Y/Z -> X/Z, where X/Y is a type-raised NP and Y/Z is a verb.
That is, either rule schemas and LP state- ments are precompiled before parsing and the number of rules increases rapidly or LP statements are checked on the fly during parsing thus slowing down proces- sing. 2. THE GRAMMAR The formal characteristics of FG underlying the parsing heuristic are presented in §4. The characteris- tics of FG necessary to understand the grammar are re- sumed here (see (B~s & Gardent 89) for a more detailed presentation). t Ajumpmle of the form X/Y, YfZ ---~ X/Z where X/Yis atype raised NP and Y/Z is a verb. FG accounts for French linearity phenomena, em- bedded sentences and unbounded dependencies. It is derived from UCG and conserves most of the basic characteristics of the model : monostratality, lexica- lism, unification-based formalism and binary combi- natory rules restricted to adjacent signs. Furthermore, FG, as UCG, analyses NP's as type-raised categories. FG departs from UCG in that (i) linguistic entities such as verbs and nouns, sub-categorize for a set - rather than a list-of valencies ; (ii) a feature system is introduced which embodies the interaction of the different elements conditioning word order ; (iii) FG semantics, though derived directly from InL ~, leave the scope of seeping operators undefined. The FG sign presents four types of information re- levant to the discussion of this paper : (a) Category, Co) Valency set ; (c) Features ; (d) Semantics. Only two combinatory rules-forward and backward concatena- tion - are used, together with a deletion rule. A Category can be basic or complex. A basic ca- tegory is of the form Head, where Head is an atomic symbol (n(oun), np or s(entence)). Complex categories are of the form C/Sign, where C is either atomic or complex, and Sign is a sign called the active sign. With regard to the Category information, the FG typology of signs is reduced to the following. (1)Type Category Linguistic entities f0 Head verb, noun fl Head/f0 NP, PP, adjective, adverb, auxiliary, negative panicles f2 (fl)/signi (a) sign i = f0 Co) sign i = fl Determiner, complementi- zer, relative pronoun Preposition Thus, the result of the concatenation of a NP (fl) with a verb (f0) is a verbal sign (f0). Wrt the concate- nation rules, f0 signs are arguments; fl signs are either functors of f0 signs, or arguments of f2 signs. Signs of type 1"2 are leaves and fanctors. Valencies in the Valency Set are signs which ex- press sub-categorisation. The semantics ofa fO sign is a predicate with an argumental list. Variables shared by the semantics of each valency and by the predicate list, relate the semantics of the valency with the semantics of the predicate. Nouns and verbs sub-categorize not only for "normal" valencies such as nom(inative), dat(ive), etc, but also for a mod(ifier) valency, which is consumed and recursively reintroduced by modifiers (adjectives, laP's and adverbs). Thus, in FG the com- : In/. (Indexed language) is the semantics incorporated to UCG ; it derives from Kamp's DRT. From hereafter werefer to FG semantics as InL'. 281 plete combinatorial potential of a predicate is incorpo- rated into its valency set and a unified treatment of nominal and verbal modifiers is proposed. The active sign of a fl functor indicates the valency - ff any - which the functor consumes. No order value (or directional slash) is associated with valencies. Instead, Features express adjacent and non-adjacent constraints on constituent ordering, which are enforced by the unification-based combina- tory rules. 
Constraints can be stated not only between the active sign of a functor and its argument, but also between a valency, of a sign., the sign. and the active J J . J sign of the fl functor consuming valency~ while con- catenating with sign~ As a result, the valency of a verb or era noun imposes constraints not only on the functor which consumes it, but also on subsequent concatena- tions. The feature percolation system underlies the partial associativity property of the grammar (cf. §4). As mentioned above, the Semanticspart of the sign contains an InL' formula. In FG different derivations of a string may yield sentence signs whose InL' formulae are formally different, in that the order of their sub-for- mulae are different, but the set of their sub-formulae are equal. Furthermore, sub-formulae are so built that formulae differing in the ordering of their sub-formu- lae can in principle be translated to a semantically equi- valent representation in a first order predicate logic. This is because : (i) in InL', the scope of seeping operators is left undefined ; (ii) shared variables ex- press the relation between determiner and restrictor, and between seeping operators and their semantic arguments ; (iii) the grammar places constants (i.e. proper names) in the specified place of the argumental list of the predicate. For instance, FG associates to (2) the InL' formulae in (3a) and (3b) : (2) Un garcon pr~sente Marie ~ une fille (3) (a) [15] [indCX) & garcon(X) & ind(Y) & fiRe(Y) & presenter (E,X,marie,Y)] Co) [E] [indCO & fille(Y) & ind(X) & gar~on(X) & presenter (E,X,marie,Y)] While a seeping operator of a sentence constituent is related to its argument by the index of a noun (as in the above (3)), the relation between the argument of a seeping operator and the verbal unit is expressed by the index of the verb. For instance, the negative version of (2) will incorporate the sub-formula neg (E). In InL' formulae, determiners (which are leaves and f2 signs, el. above), immediately precede their res- trictors. In formally different InL' formulae, only the ordering of seeping operators sub-formulae can differ, but this can be shown to be irrelevant with regard to the semantics. In French, scope ambiguity is the same for members of each of the following pairs, while the ordering of their corresponding semantic sub-formu- lae, thanks to concatenation of adjacent signs, is ines- capably different. (4) (a) Jacques avait donn6 un livre (a) ~ tousles dtu- diants ( b ). (a) Jacques avait donn6 d tousles dtudiants(b) un livre (a). (b) Un livre a 6t~ command6 par chaque ~tudiant (a) dune librairie (b). Co') Un livre a6t6 command6d une librairie (b)par chaque dtudiant (a). At the grammatical level (i.e. leaving aside prag- matic considerations),the translation of an InL' formu- la to a scoped logical formula can be determined by the specific scoping operator involved (indicated in the sub-formula) and by its relation to its semantic argu- ment (indicated by shared variables). This translation must introduce the adequate quantifiers, determine their scope and interpret the'&' separator as either ^ or -->, as well as introduce .1. in negative forms. For ins- tahoe, the InL' formulae in (Y) translate ~ to : (5) 3E, 3X, 3Y (garqon(X)^ fille(Y) ^ pr6senter (E,X~narie,Y)). We assume here the possibility of this translation without saying any more on it. 
Since this translation procedure cannot be defined on the basis of the order of the sub-formulae corresponding to the scoping opera- tors, InL' formulae which differ only wrt the order of their sub-formulae are said to be semantically equiva- lent. 3. THE PARSER Because the subcategorisation information is re- presented as a set rather than as a list, there is no constraint on the order in which each valency is consumed. This raises a problem with respect to par- sing which is that for any triplet X,Y,Z where Y is a verb and X and Z are arguments to this verb, there will often be two possible derivations i.e., (XY)Z and xo'z). The problem of spurious parses is a well-known one in extensions of pure categorial grammar. It deri- ves either from using other rules or combinators for de- rivation than just functional application (Pareschi and Steedman 1987, Wittenburg 1987, Moortgat 1987, Morrill 1988) or from having anordered set valencies (Karttunen 1986), the latter case being that of FG. Various solutions have been proposed in relation to this problem. Karttunen's solution is to check that for any potential edge, no equivalent analysis is already In (5) 3E can be paraphrased as "There exists an event". 282 stored in the chart for the same string of words. Howe- ver as explained above, two semantically equivalent formulae of InL' need not be syntactically identical. Reducing two formulae to a normal form to check their equivalence or alternatively reducing one to the other might require 2* permutations with n the number of predicates occaring in the formulae. Given that the test must occur each time that two edges stretch over the same region and given that itrequires exponential time, this solution was disguarded as computationaUy inef- ficient. Pareschi's lazy parsing algorithm (Pareschi, 1987) has been shown (I-Iepple, 1987) to be incomplete. Wittenburg's predictive combinators avoid the parsing problem by advocating grammar compilation which is not our concern here. Morilrs proposal of defining equivalence classes on derivations cannot be transpo- sed to FG since the equivalence class that would be of relevance to our problem i.e., ((X,Z)Y, X(ZY)) is not an equivalence class due to our analysis of modifiers. Finally, Moortgat's solution is not possible since it relies on the fact that the grammar is structurally com- plete ~ which FG is not. The solution we offer is to augment a shift-reduce parser with a heuristic whose essential content is that no same functor may consume twice the same valency. This ensures that for all semantically unambiguous sentences, only one parse is output. To ensure that a parse is always output whenever there is one, that is to ensure that the parser is complete, the heuristic only applies to a restricted set of edge pairs and the chart is organized as aqueue. Coupled with the parlial-associa- tivity of FG, this strategy guarantees that the parser is complete (of. §4). 3.1 THE HEURISTIC The heuristic constrains the combination of edges in the following way 2. Let el be an edge stretching from $1 to E1 labelled with the typefl~, a predicate identifier pl and a sign Sign1, let e2 be an edge stretching from E1 to $2 labelled with type fl and a sign Sign,?, then e2 will reduce with el by consuming the valency Val of pl if e2 has not already reduced with an edge el'by consu- ming the valency Valofpl where el 'stretches from $1" to E1 and $1' ~ $1. 
In the rest of this section, examples illustrate how this heuristic eliminates spurious parses, while allowing for real ambiguities.

Avoiding spurious parses

Consider the derivation in (6):

(6) Jean aime Marie
    0 - Ed1 - 1 - Ed2 - 2 - Ed3 - 3
    0 ------ Ed4 ------ 2                 Ed4 = Ed1(Ed2,p1,subj)
    0 ------ Ed5 ------ 2                *Ed5 = Ed1(Ed2,p1,obj)
    1 ------ Ed6 ------ 3                 Ed6 = Ed3(Ed2,p1,obj)
    1 ------ Ed7 ------ 3                 Ed7 = Ed3(Ed2,p1,subj)
    0 ------------ Ed8 ------------ 3     Ed8 = Ed1(Ed6,p1,subj)
    0 ------------ Ed9 ------------ 3    *Ed9 = Ed3(Ed4,p1,obj)
    0 ------------ Ed10 ----------- 3    *Ed10 = Ed1(Ed7,p1,obj)

where Ed4 = Ed1(Ed2,p1,subj) indicates that the edge Ed1 reduces with Ed2 by consuming the subject valency of the edge Ed2 with predicate p1.

Ed5 and Ed10 are ruled out by the grammar, since in French no lexical (as opposed to clitic and wh-NP) object NP may appear to the left of the verb. Ed9 is ruled out by the heuristic, since Ed3 has already consumed the object valency of the predicate p1, thus yielding Ed6. Note also that Ed1 may consume the subject valency of p1 twice, thus yielding Ed4 and Ed8, since the heuristic does not apply to pairs of edges labelled with signs of type f1 and f0 respectively.

Producing as many parses as there are readings

The proviso that a functor edge cannot combine with two different edges by consuming the same valency on the same predicate twice ensures that PP attachment ambiguities are preserved. Consider (7) for instance. [Footnote: For the sake of clarity, all irrelevant edges have been omitted. This practice will hold throughout the sequel.]

(7) Regarde le chien dans la rue
    0 --Ed1-- 1 --Ed2-- 2 --Ed3-- 3 --Ed4-- 4
    1 -------- Ed5 -------- 3
    0 ------------- Ed6 ------------ 3
    2 -------- Ed7 -------- 4
    1 ----------------- Ed8 ----------------- 4
    0 ------------------------ Ed9 ----------------------- 4
    0 ------------------------ Ed10 ---------------------- 4

with Ed7 = Ed4(Ed3,p2,mod)
     Ed8 = Ed2(Ed7)
     Ed9 = Ed8(Ed1,p1,obj)
     Ed10 = Ed4(Ed6,p1,mod)

where p1 and p2 are the predicate identifiers labelling the edges Ed1 and Ed3 respectively.

The above heuristic allows a functor to concatenate twice by consuming two different valencies. This case of real ambiguity is illustrated in (8).

(8) Quel homme présente Marie à Rose ?
    0 ----Ed1---- 1 ---Ed2--- 2 ---Ed3--- 3 ---Ed4--- 4
    1 ---------- Ed4 -------- 3
    1 ---------- Ed5 -------- 3
    0 ---------------- Ed6 ------------------- 3
    0 ---------------- Ed7 ------------------- 3

where Ed4 = Ed3(Ed2,p1,nom) and Ed5 = Ed3(Ed2,p1,obj).

Thus, only edges of the same length correspond to two different readings. This is the reason why the heuristic allows a functor to consume the same valency on the same predicate twice iff it combines with two edges E and E' that stretch over the same region. A case in point is illustrated in (9):

(9) Quel homme présente Marie à Rose ?
    0 ----Ed1---- 1 ---Ed2--- 2 ---Ed3--- 3 ---Ed4--- 4
    1 ---------- Ed5 -------- 3
    1 ---------- Ed6 -------- 3
    1 --------------- Ed7 ---------------------- 4
    1 --------------- Ed8 ---------------------- 4
    0 ---- Ed9 --------------------------------------------- 4
    0 ---- Ed10 -------------------------------------------- 4
where à Rose concatenates twice by consuming the same (dative) valency of the same predicate twice.

3.2 THE PARSING ALGORITHM

The parser is a shift-reduce parser integrating a chart and augmented with the heuristic. An edge in the chart contains the following information:

edge[Name, Type, Heur, S, E, Sign]

where Name is the name of the edge, S and E identify the starting and the ending vertex, and Sign is the sign labelling the edge. Type and Heur contain the information used by the heuristic. Type is either f0, f1 or f2, while the content of Heur depends on the type of the edge and on whether or not the edge has already combined with some other edge(s):

Type | Heur
-----+-------------------------------------------------------------
f0   | pX, where X is an integer; pX identifies the predicate
     | associated with the edge.
f1   | before combination: Var, where Var is the anonymous variable,
     | indicating that there is as yet no information available
     | that could violate the heuristic;
     | after combination: Heur-List, a list of triplets of the form
     | [Edge,pX,Val], where Edge indicates an argument edge with
     | which the functor edge has combined by consuming valency Val
     | of the predicate pX labelling Edge.
f2   | nil

The basic parsing algorithm is that of a normal shift-reduce parser integrating a chart rather than a stack, i.e.:

1. Starting from the beginning of the sentence, for each word W either shift or reduce.
2. Stop when there is no more word to shift and no more reduce to perform.
3. Accept or reject.

Shifting a word W consists in adding to the chart as many lexical edges as there are lexical entries associated with W in the lexicon. Reducing an edge E consists in trying to reduce E with any adjacent edge E' already stored in the chart. The operation applies recursively, in that whenever a new edge E'' is created it is immediately added to the chart and tried for reduction. The order in which edges tried for reduction are retrieved from the chart corresponds to organising the chart as a queue, i.e. first-in-first-out. Step 3 consists in checking the chart for an edge stretching from the beginning to the end of the chart and labelled with a sign of category s(entence). If there is such an edge, the string is accepted; otherwise it is rejected.

The heuristic is integrated in the reduce procedure, which can be defined as follows. Two edges Edge1 and Edge2 will reduce to a new edge Edge3 iff either

(a) 1. Edge1 = [e1,Type1,H1,S1,E1,Sign1] and
    2. Edge2 = [e2,Type2,H2,E1,E2,Sign2] and <Type1,Type2> ≠ <f0,f1> and
    3. apply(Sign1,Sign2,Sign3) and
    4. Edge3 = [e3,Type3,H3,S3,E3,Sign3] and <S3,E3> = <S1,E2>

or

(b) 1. Edge1 = [e1,f0,p1,S1,E1,Sign1] and
    2. Edge2 = [e2,f1,H2,S2,E2,Sign2] and E1 = S2 and
    3. bapply(Sign1,Sign2,Sign3) by consuming the valency Val and
    4. H2 does not contain a triplet of the form [e1',p1,Val] where Edge1' = [e1',f0,p1,S'1,S2] and S'1 ≠ S1 and
    5. Edge3 = [e3,f0,p1,S1,E2,Sign3] and
    6. the heuristic information H2 in Edge2 is updated to [e1,p1,Val]+H2, where '+' indicates list concatenation, under the proviso that the triplet does not already belong to H2,

where apply(Sign1,Sign2,Sign3) means that Sign1 can combine with Sign2 to yield Sign3 by one of the two combinatory rules of FG, and bapply indicates the backward combinatory rule.

This algorithm is best illustrated by a short example. Consider, for instance, the parsing of the sentence Pierre aime Marie. Step 1 shifts Pierre, thus adding Edge1 to the chart. Because the grammar is designed to avoid spurious lexical ambiguity, only one edge is created.
Edge1 = [e1,f1,_,0,1,Sign1]

Since there is no adjacent edge with which Edge1 could be reduced, the next word, aime, is shifted, yielding Edge2, which is also added to the chart.

Edge2 = [e2,f0,p1,1,2,Sign2]

Edge2 can reduce with Edge1, since Sign1 can combine with Sign2 to yield Sign3 by consuming the subject valency of the predicate p1. The resulting edge Edge3 is added to the chart, while the heuristic information of the functor edge Edge1 is updated:

Edge3 = [e3,f0,p1,0,2,Sign3]
Edge1 = [e1,f1,[[e2,p1,subj]],0,1,Sign1]

No more reduction can occur, so the last word, Marie, is shifted, adding Edge4 to the chart.

Edge4 = [e4,f1,_,2,3,Sign4]

Edge4 first reduces with Edge2 by consuming the subject valency of p1, thus creating Edge5. It also reduces with Edge2 by consuming the object valency of p1 to yield Edge6.

Edge5 = [e5,f0,p1,1,3,Sign5]
Edge6 = [e6,f0,p1,1,3,Sign6]

Edge4 is updated as follows:

Edge4 = [e4,f1,[[e2,p1,subj],[e2,p1,obj]],2,3,Sign4]

At this stage, the chart contains the following edges:

Pierre aime Marie
0 -- e1 -- 1 -- e2 -- 2 -- e4 -- 3
0 -------- e3 ------- 2
1 -------- e5 ------------------ 3
1 -------- e6 ------------------ 3

Now Edge1 can reduce with Edge6 by consuming the subject valency of p1, thus yielding Edge7. However, the heuristic forbids Edge4 to consume the object valency of p1 on Edge3, since Edge4 has already consumed the object valency of p1 when combining with Edge2. In this way, the spurious parse Edge8 is avoided. The final chart is as follows:

Pierre aime Marie
 0 -- e1 -- 1 -- e2 -- 2 -- e4 -- 3
 0 -------- e3 ------- 2
 1 -------- e5 ------------------ 3
 1 -------- e6 ------------------ 3
 0 ------------- e7 ------------- 3
*0 ------------- e8 ------------- 3

with Edge7 = [e7,f0,p1,0,3,Sign7]
     Edge4 = [e4,f1,[[e2,p1,subj],[e2,p1,obj]],2,3,Sign4]
     Edge1 = [e1,f1,[[e2,p1,subj]],0,1,Sign1]

4. UNICITY AND COMPLETENESS OF THE PARSING

DEFINITIONS

1. An indexed lexical f0 is a pair <X,i> where X is a lexical sign of f0 type (cf. §2) and i is an integer.

2. PARSE denotes the free algebra recursively defined by the following conditions:
2.1 Every lexical sign of type f1 or f2, and every indexed lexical f0, is a member of PARSE.
2.2 If P and Q are elements of PARSE, i is an integer, and k is the name of a valency, then (P+ikQ) is a member of PARSE.
2.3 If P and Q are elements of PARSE, (P+imQ) is a member of PARSE, where m is a new symbol. [Footnote: In 2.3 the index i is just introduced for notational convenience and will not be used; k, l, ... will denote a valency name or the symbol m.]

3. For each member P of PARSE, the string of the leaves of P is defined recursively as usual:
3.1 If P is a lexical functor or a lexical indexed argument, L(P) is the string reduced to P.
3.2 L(P+ikQ) is the string obtained by concatenation of L(P) and L(Q).

4. A member P of PARSE is called a well-indexed parse (WP) if two indexed leaves which have different ranges in L(P) have different indices.

5. The partial function S, from the set of WPs to the set of signs, is defined recursively by the following conditions:
5.1 If P is a leaf, S(P) = P.
5.2 S(F+ikA) = Z [resp. S(A+ikF) = Z] (k ≠ m) if S(F) is a functor of type f1, S(A) is an argument and Z is the result sign by the FC rule [resp. BC rule] when S(F) consumes the valency named k in the leaf of S(A) indexed by i.
5.3 S(F+imA) = Z [resp. S(A+imF) = Z] if S(F) is a functor of type f1 or f2, S(A) is an argument sign and Z is the result sign by the FC rule [resp. BC rule].

6. For each pair of signs X and Y, we write X ≡ Y if X and Y are such that their non-semantic parts are formally equal and their semantic parts are semantically equivalent.
7. If P and Q are WPs, P ≡ Q iff:
7.1 S(P) and S(Q) are defined,
7.2 S(P) ≡ S(Q), and
7.3 L(P) = L(Q).

8. A WP is called accepted if it is accepted by the parser augmented with the heuristic described in §3.

THEOREM
1. (Unicity) If P and Q are accepted WPs and if P ≡ Q, then P and Q are formally equal.
2. (Completeness) If P is a WP which is accepted by the grammar, and S(P) is a sign corresponding to a grammatical sentence, then there exists a WP Q such that:
a) Q is accepted, and
b) P ≡ Q.

NOTATIONAL CONVENTION
F, F', ... (resp. A, A', ...) will denote WPs such that S(F), S(F'), ... are functors of type f1 (resp. S(A), S(A'), ... are arguments of type f0).

The proof of the theorem is based on the following properties 1 to 3 of the grammar. Property 1 follows directly from the grammar itself (cf. §2); the other two are strong conjectures which we expect to prove in the near future.

PROPERTY 1
If S(K) is defined and L(K) is not a lexical leaf, then:
a) If K is of type f0, there exist i, k, F and A such that K = F+ikA or K = A+ikF.
b) If K is of type f1, there exist Fu of type f2 and Ar of type f0 or of type f1 such that K = Fu+imAr.
c) K is not of type f2.

PROPERTY 2 (Decomposition unicity)
For every i and k, if F+ikA ≡ F'+i'k'A' or A+ikF ≡ A'+i'k'F', then i = i', k = k', A ≡ A' and F ≡ F'.

PROPERTY 3 (Partial associativity)
For every F, A, F' such that L(F) L(A) L(F') is a substring of a string of lexical entries which is accepted by the grammar as a grammatical sentence:
a) If S[F+ik(A+jlF')] and S[(F+ikA)+jlF'] are defined, then F+ik(A+jlF') ≡ (F+ikA)+jlF'.
b) If S[A+jlF'] and S[(F+ikA)+jlF'] are defined, then S[F+ik(A+jlF')] is also defined.

LEMMA 1
If F+ikA ≡ A'+jlF', then A'+jlF' is not accepted.
Proof: L(F) is a proper substring of L(A'), so there exists A'' such that:
a) S(A''+jlF') is defined, and
b) L(A'') is a substring of L(A').
But A' begins with F and F is not contained in A'', so A'' is an edge shorter than A'. Thus A'+jlF' is not accepted.

LEMMA 2
If S[(A+ikF)+jlF'] is defined and A+ikF is accepted, then (A+ikF)+jlF' is also accepted.
Proof: Suppose, a contrario, that (A+ikF)+jlF' is not accepted. Then there must exist an edge A' = A''+ikF such that:
a) S(A'+jlF') is defined, and
b) A' is shorter than A+ikF.
This implies that A'' is shorter than A. Therefore A+ikF would not be accepted.

PROOF OF PART 1 OF THE THEOREM
The proof is by induction on the length, lg(P), of L(P). So we suppose a) and b):
a) (induction hypothesis) For every P' and Q' such that P' and Q' are accepted, if P' ≡ Q' and lg(P') < n, then P' = Q'.
b) P and Q are accepted, P ≡ Q and lg(P) = n,
and we have to prove that
c) P = Q.
First case: if lg(P) = 1, then we have P = L(P) = L(Q) = Q.
Second case: if lg(P) > 1, then we have lg(Q) > 1, since L(P) = L(Q). Thus there exist P'1, P'2, Q'1, Q'2, i, k, j, l such that P = P'1+ikP'2 and Q = Q'1+jlQ'2. By Lemma 1, P'1 and Q'1 must be both functors or both arguments. And if P'1 and Q'1 are functors (resp. arguments), then P'2 and Q'2 are arguments (resp. functors). So by Property 2 we have i = j, k = l, P'1 ≡ Q'1 and P'2 ≡ Q'2. Then the induction hypothesis implies that P'1 = Q'1 and that P'2 = Q'2. Thus we have proved that P = Q.

PROOF OF PART 2 OF THE THEOREM
Let P be a WP such that S(P) is defined and corresponds to a grammatical sentence. We will prove, by induction on the length of L(K), that for all subtrees K of P there exists K' such that:
a) K' is accepted, and
b) K ≡ K'.
We consider the following cases (Property 1):
1. If K is a leaf, then K' = K.
2. If K = F+ikA, then by the induction hypothesis there exist F' and A' such that:
(i) F' and A' are accepted, and
(ii) F ≡ F', A ≡ A'.
Then F'+ikA' is also accepted, so K' can be chosen as F'+ikA'.
3. If K = A+ikF, we define F', A' as in (2) and we consider the following subcases:
3.1 If A' is a leaf, or if A' = F1+jlA1 where S(A1+ikF') is not defined, then A'+ikF' is accepted, and we can take it as K'.
3.2 If A' = A1+jlF1, then by Lemma 2, A'+ikF' is accepted. Thus we can define K' as A'+ikF'.
3.3 If A' = F1+jlA1 and S(A1+ikF') is defined, let A2 = A1+ikF'. By Property 3, S(F1+jlA2) is defined and K ≡ A'+ikF' ≡ F1+jlA2. Thus this case reduces to case 2.
4. If K = Fu+imAr, where Fu is of type f2 and Ar is of type f0 or f1, then by the induction hypothesis there exists Ar' such that Ar ≡ Ar' and Ar' is accepted. Then K' can be defined as Fu+imAr'.

5. IMPLEMENTATION AND COVERAGE

FG is implemented in PIMPLE, a PROLOG term unification implementation of PATR-II (cf. Calder 1987) developed at Edinburgh University (Centre for Cognitive Studies). Modifications to the parsing algorithm have been introduced at the Université Blaise Pascal, Clermont-Ferrand. The system runs on a SUN 3/50 and is being extensively tested. It covers at present: declarative, interrogative and negative sentences in all moods, with simple and complex verb forms. This includes yes/no questions, constituent questions, negative sentences, linearity phenomena introduced by interrogative inversions, semi-free constituent order, clitics (including reflexives), agreement phenomena (including gender and number agreement between object NPs to the left of the verb and participles), passives, embedded sentences and unbounded dependencies.

REFERENCES

Bès, G.G. and C. Gardent (1989) French Order without Order. To appear in the Proceedings of the Fourth European ACL Conference (UMIST, Manchester, 10-12 April 1989), 249-255.
Calder, J. (1987) PIMPLE: A PROLOG Implementation of the PATR-II Linguistic Environment. Edinburgh, Centre for Cognitive Science.
Gazdar, G., Klein, E., Pullum, G., and Sag, I. (1985) Generalized Phrase Structure Grammar. London: Basil Blackwell.
Kamp, H. (1981) A Theory of Truth and Semantic Representation. In Groenendijk, J.A.G., Janssen, T.M.V. and Stokhof, M.B.J. (eds.) Formal Methods in the Study of Language, Volume 136, 277-322. Amsterdam: Mathematical Centre Tracts.
Karttunen, L. (1986) Radical Lexicalism. Report No. CSLI-86-68, Center for the Study of Language and Information. Paper presented at the Conference on Alternative Conceptions of Phrase Structure, July 1986, New York.
Morrill, G. (1988) Extraction and Coordination in Phrase Structure Grammar and Categorial Grammar. PhD Thesis, Centre for Cognitive Science, University of Edinburgh.
Pareschi, R. (1987) Combinatory Grammar, Logic Programming, and Natural Language. In Haddock, N.J., Klein, E. and Morrill, G. (eds.) Edinburgh Working Papers in Cognitive Science, Volume 1: Categorial Grammar, Unification Grammar and Parsing.
Pareschi, R. and Steedman, M.J. (1987) A Lazy Way to Chart-Parse with Extended Categorial Grammars. In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, Stanford University, Stanford, Ca., 6-9 July 1987.
Pollard, C.J. (1984) Generalized Phrase Structure Grammars, Head Grammars, and Natural Languages. PhD Thesis, Stanford University.
Pollard, C.J. and Sag, I. (1988) An Information-Based Approach to Syntax and Semantics: Volume 1, Fundamentals. Stanford, Ca.:
Center for the Study of Language and Information.
Steedman, M. (1985) Dependency and Coordination in the Grammar of Dutch and English. Language, 61, 523-568.
Steedman, M. (1988) Combinators and Grammars. In Oehrle, R., Bach, E. and Wheeler, D. (eds.) Categorial Grammars and Natural Language Structures, Dordrecht, 1988.
Uszkoreit, H. (1987) Word Order and Constituent Structure in German. Stanford: CSLI.
Wittenburg, K. (1987) Predictive Combinators: a Method for Efficient Processing of Combinatory Categorial Grammar. In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, Stanford University, Stanford, Ca., 6-9 July 1987.
Zeevat, H. (1986) A Specification of InL. Internal ACORD Report. Edinburgh, Centre for Cognitive Science.
Zeevat, H. (1988) Combining Categorial Grammar and Unification. In Reyle, U. and Rohrer, C. (eds.) Natural Language Parsing and Linguistic Theories, 202-229. Dordrecht: D. Reidel.
Zeevat, H., Klein, E. and Calder, J. (1987) An Introduction to Unification Categorial Grammar. In Haddock, N.J., Klein, E. and Morrill, G. (eds.) Edinburgh Working Papers in Cognitive Science, Volume 1: Categorial Grammar, Unification Grammar and Parsing.
LOGICAL FORMS IN THE CORE LANGUAGE ENGINE

Hiyan Alshawi & Jan van Eijck
SRI International
Cambridge Research Centre
23 Millers Yard, Mill Lane, Cambridge CB2 1RQ, U.K.

Keywords: logical form, natural language, semantics

ABSTRACT

This paper describes a 'Logical Form' target language for representing the literal meaning of English sentences, and an intermediate level of representation ('Quasi Logical Form') which engenders a natural separation between the compositional semantics and the processes of scoping and reference resolution. The approach has been implemented in the SRI Core Language Engine, which handles the English constructions discussed in the paper.

INTRODUCTION

The SRI Core Language Engine (CLE) is a domain independent system for translating English sentences into formal representations of their literal meanings which are capable of supporting reasoning (Alshawi et al. 1988). The CLE has two main levels of semantic representation: quasi logical forms (QLFs), which may in turn be scoped or unscoped, and fully resolved logical forms (LFs). The level of quasi logical form is the target language of the syntax-driven semantic interpretation rules. Transforming QLF expressions into LF expressions requires (i) fixing the scopes of all scope-bearing operators (quantifiers, tense operators, logical operators) and distinguishing distributive readings of noun phrases from collective ones, and (ii) resolving referential expressions such as definite descriptions, pronouns, indexical expressions, and underspecified relations.

The QLF level can be regarded as the natural level of sentence representation resulting from linguistic analysis that applies compositional semantic interpretation rules independently of the influence of context:

Sentence --(syntax rules)--> Parse trees --(semantic rules)--> QLF expressions --(context)--> LF expressions

The QLF expressions are derived on the basis of syntactic structure, by means of semantic rules that correspond to the syntax rules that were used for analysing the sentence. Having QLFs as a well-defined level of representation allows the problems of compositional semantics to be tackled separately from the problems of scoping and reference resolution. Our experience so far with the CLE has shown that this separation can effectively reduce the complexity of the system as a whole. Also, the distinction enables us to avoid multiplying out interpretation possibilities at an early stage. The representation languages we propose are powerful enough to give well-motivated translations of a wide range of English sentences. In the current version of the CLE this is used to provide a systematic and coherent coverage of all the major phrase types of English. To demonstrate that the semantic representations are also simple enough for practical natural language processing applications, the CLE has been used as an interface to a purchase order processing simulator and a database query system, to be described elsewhere.

In summary, the main contributions of the work reported in this paper are (i) the introduction of the QLF level to achieve a natural separation between compositional semantics and the processes of scoping and reference resolution, and (ii) the integration of a range of well-motivated semantic analyses for specific constructions in a single coherent framework.
We will first motivate our extensions to first order logic and our distinction between LF and QLF, then describe the LF language, illustrating the logical form translations produced by the CLE for a number of English constructions, and finally present the additional constructs of the QLF language and illustrate their use.

EXTENDING FIRST ORDER LOGIC

As the pioneering work by Montague (1973) suggests, first order logic is not the most natural representation for the meanings of English sentences. The development of Montague grammar indicates, however, that there is quite a bit of latitude as to the scope of the extensions that are needed. In developing the LF language for the CLE we have tried to be conservative in our choice of extensions to first order logic. Earlier proposals with similar motivation are presented by Moore (1981) and Schubert & Pelletier (1982).

The ways in which first order logic--predicate logic in which the quantifiers ∃ and ∀ range over the domain of individuals--is extended in our treatment can be grouped and motivated as follows:

• Extensions motivated by lack of expressive power of ordinary first order logic: for a general treatment of noun phrase constructions in English, generalized quantifiers are needed ('Most A are B' is not expressible in a first order language with just the two one-place predicates A and B).

• Extensions motivated by the desire for an elegant compositional semantic framework: use of lambda abstraction for the translation of graded predicates in our treatment of comparatives and superlatives; use of tense operators and intensional operators for dealing with the English tense and auxiliary system in a compositional way.

• Extensions motivated by the desire to separate out the problems of scoping from those of semantic representation.

• Extensions motivated by the need to deal with context dependent constructions, such as anaphora, and the implicit relations involved in the interpretation of possessives and compound nominals.

The first two extensions in the list are part of the LF language, to be described next; the other two have to do with QLF constructs. These QLF constructs are removed by the processes of quantifier scoping and reference resolution (see below).

The treatment of tense by means of temporal operators that is adopted in the CLE will not be discussed in this paper. Some advantages of an operator treatment of the English tense system are discussed in (Moore, 1981).

We are aware of the fact that some aspects of our LF representation give what are arguably overly neutral analyses of English constructions. For example, our uses of event variables and of sentential tense operators say little about the internal structure of events or about an underlying temporal logic. Nevertheless, our hope is that the proposed LF representations form a sound basis for the subsequent process of deriving fuller meaning representations.

RESOLVED LOGICAL FORMS

NOTATIONAL CONVENTIONS

Our notation is a straightforward extension of the standard notation for first order logic. The following logical form expression involving restricted quantification states that every dog is nice:

quant(forall, x, Dog(x), Nice(x)).

To get a straightforward treatment of the collective/distributive distinction (see below) we assume that variables always range over sets, with 'normal' individuals corresponding to singletons. Properties like being a dog can be true of singletons, e.g.
the referent of Fido, as well as larger sets, e.g. the referent of the three dogs we saw yesterday.

The LF language allows formation of complex predicates by means of lambda abstraction: λxλd.Heavy_degree(x, d) is the predicate that expresses degree of heaviness.

EVENT AND STATE VARIABLES

Rather than treating modification of verb phrases by means of higher order predicate modifiers, as in (Montague, 1973), we follow Davidson's (1967) quantification over events to keep closer to first order logic. The event corresponding to a verb phrase is introduced as an additional argument to the verb predicate. The full logical form for Every representative voted is as follows:

quant(forall, x, Repr(x),
  past(quant(exists, e, Ev(e), Vote(e,x)))).

Informally, this says that for every representative, at some past time, there existed an event of that representative voting.

The presence of an event variable allows us to treat optional verb phrase modifiers as predications of events, as in the translation of John left suddenly:

past(quant(exists, e, Ev(e),
  Leave(e, john) ∧ Sudden(e))).

The use of event variables in turn permits us to give a uniform interpretation of prepositional phrases, whether they modify verb phrases or nouns. For example, John designed a house in Cambridge has two readings, one in which in Cambridge is taken to modify the noun phrase a house, and one where the prepositional phrase modifies the verb phrase, with the following translations respectively:

quant(exists, h,
  House(h) ∧ In_location(h, Cambridge),
  past(quant(exists, e, Ev(e), Design(e, john, h)))).

quant(exists, h, House(h),
  past(quant(exists, e, Ev(e),
    Design(e, john, h) ∧ In_location(e, Cambridge)))).

In both cases the prepositional phrase is translated as a two-place relation stating that something is located in some place. Where the noun phrase is modified, the relation is between an ordinary object and a place; in the case where the prepositional phrase modifies the verb phrase, the relation is between an event and a place. Adjectives in predicative position give rise to state variables in their translations. For example, in the translation of John was happy in Paris, the prepositional phrase modifies the state. States are like events, but unlike events they cannot be instantaneous.

GENERALIZED QUANTIFIERS

A generalized quantifier is a relation Q between two sets A and B, where Q is insensitive to anything but the cardinalities of the 'restriction set' A and the 'intersection set' A ∩ B (Barwise & Cooper, 1981). A generalized quantifier with restriction set A and intersection set A ∩ B is fully characterized by a function λmλn.Q(m, n) of m and n, where m = |A| and n = |A ∩ B|. In the LF language of the CLE, these quantifier relations are expressed by means of predicates on two numbers, where the first variable abstracted over denotes the cardinality of the restriction set and the second one the cardinality of the intersection set. This allows us to build up quantifiers for complex specifier phrases like at least three but less than five. In simple cases, the quantifier predicates are abbreviated by means of mnemonic names, such as exists, notexists, forall or most. Here are some quantifier translations:

• most ~ λmλn.(m < 2n) [abbreviation: most].
Note that in one of the quantifier examples above the abstraction over the restriction set is vacuous. The quantifiers that do depend only on the cardinality of their intersection set turn out to be in a linguistically well- defined class: they are the quantifiers that can occur in the NP position in "There are NP'. This quantifier class can also be char- acterized logically, as the class of symmet- r/c quantifiers: "At least three but less than seven men were running" is true just in case "At least three but less than seven runners were men" is true; see (Barwise & Cooper, 1981) and (Van Eijck, 1988) for further dis- cussion. Below the logical forms for symmet- ric quantifiers will be simplified by omitting the vacuous lambda binder for the restric- tion set. The quantifiers for collective and measure terms, described in the next section, seem to be symmetric, although linguistic in- tuitions vary on this. COLLECTIVES AND TERMS MEASURE Collective readings are expressed by an ex- tension of the quantifier notation using set. 28 The reading of Two companies ordered five computers where the first noun phrase is in- terpreted collectively and the second one dis- tributively is expressed by the following log- ical form: quant(set(~n.(n = 2)), x, Company(x), quant(~n.(n = 5), y, Computer(y), past(quant (exists, e, Ev(e), Order(e, x, y))))). The first quantification expresses that there is a collection of two companies satisfying the body of the quantification, so this read- ing involves five computers and five buy- ing events. The operator set is introduced during scoping since collective/distributive distinctionsmlike scoping ambiguities--are not present in the initial QLF. We have extended the generalized quanti- fier notation to cover phrases with measure determiners, such as seven yards of fabric or a pound of flesh. Where ordinary generalized quantifiers involve counting, amount gener- alized quantifiers involve measuring (accord- ing to some measure along some appropriate dimension). Our approach, which is related to proposals that can be found in (Pelletier, ed.,1979) leads to the following translation for John bought at least five pounds of ap- ples: quant(amount($n.(n >_ 5), pounds), z, Apple(z), past(quant(exists, e, Ev(e), Buy( e, john , x))))). Measure expressions and numerical quanti- tiers also play a part in the semantics of com- paratives and superlatives respectively (see below). NATURAL KINDS Terms in logical forms may either refer to in- dividual entities or to natural kinds (Carlson, 1977). Kinds are individuals of a specific na- ture; the term kind(x, P(x)) can loosely be interpreted as the typical individual satisfy- ing P. All properties, including composite ones, have a corresponding natural kind in our formalism. Natural kinds are used in the translations of examples like John invented paperclips: past(quant(exists, e, Ev(e), Invent(e, john, kind(p, Paperclip(p) ) ) ). In reasoning about kinds, the simplest ap- proach possible would be to have a rule of inference stating that if a "kind individual" has a certain property, then all "real world" individuals of that kind have that property as well: if the "typical bear" is an animal, then all real world bears are animals. Of course, the converse rule does not hold: the "typical bear" cannot have all the properties that any real bear has, because then it would have to be both white all over and brown all over, and so on. 
COMPARATIVES AND SUPERLATIVES

In the present version of the CLE, comparatives and superlatives are formed on the basis of degree predicates. Intuitively, the meaning of the comparative in Mary is nicer than John is that one of the two items being compared possesses a property to a higher degree than the other one, and the meaning of a superlative is that an item possesses a property to the highest degree among all the items in a certain set. This intuition is formalised in (Cresswell, 1976), to which our treatment is related.

The comparison in Mary is two inches taller than John is translated as follows:

quant(amount(λn.(n = 2), inches), h, Degree(h),
  more(λxλd.tall_degree(x, d), mary, john, h)).

The operator more has a graded predicate as its first argument and three terms as its second, third and fourth arguments. The operator yields true if the degree to which the first term satisfies the graded predicate exceeds the degree to which the second term satisfies the predicate by the amount specified in the final term. In this example h is a degree of height which is measured, in inches, by the amount quantification. Examples like Mary is 3 inches less tall than John get similar translations. In Mary is taller than John the quantifier for the degree to which Mary is taller is simply an existential.

Superlatives are reduced to comparatives by paraphrasing them in terms of the number of individuals that have a property to at least as high a degree as some specific individual. This technique of comparing pairs allows us to treat combinations of ordinals and superlatives, as in the third tallest man smiled:

quant(ref(the,...), a,
  Man(a) ∧
  quant(λn.(n = 3), b, Man(b),
    quant(amount(λn.(n ≥ 0), units), h, Degree(h),
      more(λxλd.tall_degree(x, d), b, a, h))),
  past(quant(exists, e, Ev(e), Smile(e, a)))).

The logical form expresses that there are exactly three men whose difference in height from a (the referent of the definite noun phrase, see below) is greater than or equal to 0 in some arbitrary units of measurement.

QUASI LOGICAL FORMS

The QLF language is a superset of the LF language; it contains additional constructs for unscoped quantifiers, unresolved references, and underspecified relations. The 'meaning' of a QLF expression can be thought of as being given in terms of the meanings of the set of LF expressions it is mapped to. Ultimately the meaning of the QLF expressions can be seen to depend on the contextual information that is employed in the processes of scoping and reference resolution.

UNSCOPED QUANTIFIERS

In the QLF language, unscoped quantifiers are translated as terms with the format

qterm(<quantifier>, <number>, <variable>, <restriction>).

Coordinated NPs, like a man or a woman, are translated as terms with the format

term_coord(<operator>, <variable>, <terms>).

The unscoped QLF generated by the semantic interpretation rules for Most doctors and some engineers read every article involves both qterms and a term_coord (quantifier scoping generates a number of scoped LFs from this):

quant(exists, e, Ev(e),
  Read(e,
    term_coord(∧, x,
      qterm(most, plur, y, Doctor(y)),
      qterm(some, plur, z, Engineer(z))),
    qterm(every, sing, v, Art(v)))).

Quantifier scoping determines the scopes of quantifiers and operators, generating scoped logical forms in a preference order. The ordering is determined by a set of declarative rules expressing linguistic preferences such as the preference of particular quantifiers to outscope others. The details of two versions of the CLE quantifier scoping mechanism are discussed by Moran (1988) and Pereira (Alshawi et al. 1988).
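The core operation such a scoper performs can be sketched in a few lines of Prolog: pull one qterm out of the QLF, replace it by its variable, and wrap the matrix in a quant. The predicates scope_one/2, select_qterm/3 and select_arg/3 are our own; backtracking enumerates alternative scopings, but the preference ordering of the actual CLE scoper is not modelled here.

    % One scoping step; repeating it until no qterm remains yields the
    % fully scoped readings.
    scope_one(Form0, quant(Q, X, Restr, Form)) :-
        select_qterm(Form0, qterm(Q, _Num, X, Restr), Form).

    % select_qterm(+T0, ?QTerm, -T): T is T0 with one embedded qterm
    % replaced by that qterm's variable.
    select_qterm(T0, qterm(Q, N, X, R), X) :-
        nonvar(T0), T0 = qterm(Q, N, X, R).
    select_qterm(T0, QTerm, T) :-
        compound(T0), T0 \= qterm(_, _, _, _),
        T0 =.. [F|Args0],
        select_arg(Args0, QTerm, Args),
        T =.. [F|Args].

    select_arg([A0|As], QTerm, [A|As]) :- select_qterm(A0, QTerm, A).
    select_arg([A|As0], QTerm, [A|As]) :- select_arg(As0, QTerm, As).

    % ?- scope_one(quant(exists, E, ev(E),
    %                    read(E, qterm(every, sing, V, art(V)))), S).
    % S = quant(every, V, art(V), quant(exists, E, ev(E), read(E, V)))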
The details of two versions of the CLE quantifier scoping mechanism are discussed by Moran (1988) and Pereira (A1- shawl et al. 1988). UNRESOLVED REFERENCES Unresolved references arising from pronoun anaphora and definite descriptions are rep- resented in the QLF as 'quasi terms' which contain internal structure relevant to refer- ence resolution. These terms are eventually replaced by ordinary LF terms (constants or variables) in the final resolved form. A dis- cussion of the CLE reference resolution pro- cess and treatment of constraints on pronoun reference will be given in (Alshawi, in prep.). Pronouns. The QLF representation of a pronoun is an anaphoric term (or a_term). For example, the translations of him and himself in Mary expected him to introduce himself are as follows: 30 a_term(ref(pro, him, sing, [mary]), x, Male(x)) a_term(ref(refl, him, sing, [z, mary]), y, Male(y)). The first argument of an a_term is akin to a category containing the values of syn- tactic and semantic features relevant to ref- erence resolution, such as those for the reflexive/non-reflexive and singular/plural distinctions, and a list of the possible intra- sentential antecedents, including quantified antecedents. Definite Descriptions. Definite descrip- tions are represented in the QLF as unscoped quantified terms. The qterm is turned into a quant by the scoper, and, in the simplest case, definite descriptions are resolved by in- stantiating the quant variable in the body of the quantification. Since it is not possible to do this for descriptions containing bound variable anaphora, such descriptions remain as quantifiers. For example, the QLF gener- ated for the definite description in Every dog buried the bone that it found is: qterm(ref(def, the, sing, Ix]), sing, y, Bone(y) A past(quant(exlsts, e, Ev(e), Find(e, a_term(ref(pro, it, sing, [y,z]), w, Zmv rsonal(w)), y)))). After scoping and reference resolution, the LF translation of the example is as follows: quant(forall, x, Dog(x), q uant(exists_one, y, Bone(y) A past(quant(exists, e, Ev(e), Find(e, x, y))), quant(exists, e', Ev( e'), Bury( e', x, y)))). Unbound Anaphoric Terms. When an argument position in a QLF predication must co-refer with an anaphoric term, this is indi- cated as a_index(x), where x is the variable for the antecedent. For example, because want is a subject control verb, we have the following QLF for he wanted to swim: past(quant(exists, e, Ev(e), Want(e, a_term(ref(pro, he, sing, [ ]), z, Male(z)), quant(exists, e I, Ev(el), Swim( e', a_index(z))))). If the a_index variable is subsequently re- solved to a quantified variable or a constant, then the a_index operator becomes redun- dant and is deleted from the resulting LF. In special cases such as the so-called 'donkey- sentences', however, an anaphoric term may be resolved to a quantified variable v outside the scope of the quantifier that binds v. The LF for Every farmer who owns a dog loves it provides an example: quant(forall, x, Farmer( x )A quant(exists, y, Dog(y), quant(exists, e, Zv( e ), Own(e, x, y) ) ), quant(exists, e ~, Ev(e'), Love( e ~, x, a..index(y)))). The 'unbound dependency' is indicated by an a_index operator. Dynamic interpretation of this LF, in the manner proposed in (Groe- nendijk & Stokhof, 1987), allows us to arrive at the correct interpretation. UNRESOLVED PREDICATIONS The use of unresolved terms in QLFs is not sufficient for covering natural language con- structs involving implicit relations. 
We have therefore included a QLF construct (a_form for 'anaphoric formula') containing a formula with an unresolved predicate. This is eventu- ally replaced by a fully resolved LF formula, but again the process of resolution is beyond the scope of this paper. Implicit Relations. Constructions like possessives, genitives and compound nouns are translated into QLF expressions contain- ing uninstantiated relations introduced by the a_form relation binder. This binder is used in the translation of John's house which says that a relation, of type poss, holds be- tween John and the house: 31 qterm(exists, sing, x, a_form(poss, R, House(x) A R(john, x ) ) ). The implicit relation, R, can then be deter- mined by the reference resolver and instanti- ated, to Owns or Lives_in say, in the resolved LF. The translation of indefinite compound nominals, such as a telephone socket, involves an a_form, of type cn (for an unrestricted compound nominal relation), with a 'kind' term: qterm(a, sing, s, a_form(cn, R, Socket(s) ^ R( s, kind(t, Telephone(t)))). The 'kind' term in the translation reflects the fact that no individual telephone needs to be involved. One-Anaphora. The a_form construct is also used for the QLF representation of 'one-anaphora'. The variable bound by the a_form has the type of a one place predi- cate rather than a relation. Resolving these anaphora involves identifying relevant (parts of) preceding noun phrase restrictions (Web- ber, 1979). For example the scoped QLF for Mary sold him an expensive one is: quant(exists, x, a_form(one, P, P( x ) A Expensive(x)), past(quant(exists, e, Ev(e), Sell(e, mary, z, a_term(...)))). After resolution (if the sentence were pre- ceded, say, by John wanted to buy a futon) the resolved LF would be: q uant (exists, z, Futon( x ) ^ Expensive(z), past(quant(exists, e, Ev(e), Sell(e, mary, x, john ) ) ). CONCLUSION We have attempted to evolve the QLF and LF languages gradually by a process of adding minimal extensions to first order logic, in order to facilitate future work on natural language systems with reasoning ca- pabilities. The separation of the two seman- tic representation levels has been an impor- tant guiding principle in the implementation of a system covering a substantial fragment of English semantics in a well-motivated way. Further work is in progress on the treatment of collective readings and of tense and aspect. ACKNOWLEDGEMENTS The research reported in this paper is part of a group effort to which the following peo- ple have also contributed: David Carter, Bob Moore, Doug Moran, Barney Pell, Fernando Pereira, Steve Pulman and Arnold Smith. Development of the CLE has been carried out as part of a research programme in natural- language processing supported by an Alvey grant and by members of the NATTIE con- sortium (British Aerospace, British Telecom, Hewlett Packard, ICL, Olivetti, Philips, Shell Research, and SRI). We would like to thank the Alvey Directorate and the consortium members for this funding. The paper has benefitted from comments by Steve Pulman and three anonymous ACL referees. REFERENCES Alshawi, H., D.M. Carter, J. van Eijck, R.C. Moore, D.B. Moran, F.C.N. Pereira, S.G. Pulman and A.G. Smith. 1988. In- terim Report on the SRI Core Language Engine. Technical Report CCSRC-5, SRI International, Cambridge Research Centre, Cambridge, England. Alshawi, H., in preparation, "Reference Res- olution In the Core Language Engine". Barwise, J. & R. Cooper. 1981. 
"General- ized Quantifiers and Natural Language", Linguistics and Philosophy, 4, 159-219. Cresswell, M.J. 1976. "The Semantics of De- gree", in: B.H. Partee (ed.), Montague Grammar, Academic Press, New York, pp. 261-292. 32 Carlson, G.N. 1977. "Reference to Kinds in English", PhD thesis, available from In- diana University Linguistics Club. Davidson, D. 1967. "The Logical Form of Action Sentences", in N. Rescher, The Logic of Decision and Action, University of Pittsburgh Press, Pittsburgh, Penn- sylvania. van Eijck, J. 1988. "Quantification". Technical Report CCSRC-7, SRI Inter- national, Cambridge Research Centre. Cambridge, England. To appear in A. von Stechow & D. Wunderlich, Hand- book of Semantics, De Gruyter, Berlin. Groenendijk, J. & M. Stokhof 1987. "Dy- namic Predicate Logic". Preliminary re- port, ITLI, Amsterdam. Montague, R. 1973. "The Proper Treatment of Quantification in Ordinary English". In R. Thomason, ed., Formal Philoso- phy, Yale University Press, New Haven. Moore, R.C. 1981. "Problems in Logical Form". 19th Annual Meeting of the As- sociation for Computational Linguistics, Stanford, California, pp. 117-124. Moran, D.B. 1988. "Quantifier Scoping in the SRI Core Language Engine", 26th Annual Meeting of the Association for Computational Linguistics, State Uni- versity of New York at Buffalo, Buffalo, New York, pp. 33-40. Pelletier, F.J. (ed.) 1979. Mass Terms: Some Philosophical Problems, Reidel, Dordrecht. Schubert, L.K. & F.J. Pelletier 1982. "From English to Logic: Context-Free Compu- tation of 'Conventional' Logical Trans- lations". Americal Journal of Computa- tional Linguistics, 8, pp. 26-44. Webber, B. 1979. A Formal Approach to Dis- course Anaphora, Garland, New York.
Unification-Based Semantic Interpretation

Robert C. Moore
Artificial Intelligence Center
SRI International
Menlo Park, CA 94025

Abstract

We show how unification can be used to specify the semantic interpretation of natural-language expressions, including problematical constructions involving long-distance dependencies. We also sketch a theoretical foundation for unification-based semantic interpretation, and compare the unification-based approach with more conventional techniques based on the lambda calculus.

1 Introduction

Over the past several years, unification-based formalisms (Shieber, 1986) have come to be widely used for specifying the syntax of natural languages, particularly among computational linguists. It is less widely realized by computational linguists that unification can also be a powerful tool for specifying the semantic interpretation of natural languages. While many of the techniques described in this paper are fairly well known among natural-language researchers working with logic grammars, they have not been extensively discussed in the literature, perhaps the only systematic presentation being that of Pereira and Shieber (1987). This paper goes into many issues in greater detail than do Pereira and Shieber, however, and sketches what may be the first theoretical analysis of unification-based semantic interpretation.

We begin by reviewing the basic ideas behind unification-based grammar formalisms, which will also serve to introduce the style of notation to be used throughout the paper. The notation is that used in the Core Language Engine (CLE) developed by SRI's Cambridge Computer Science Research Center in Cambridge, England, a system whose semantic-interpretation component makes use of many of the ideas presented here.

Fundamentally, unification grammar is a generalization of context-free phrase structure grammar in which grammatical-category expressions are not simply atomic symbols, but have sets of features with constraints on their values. Such constraints are commonly specified using sets of equations. Our notation uses equations of a very simple format--just feature=value--and permits only one equation per feature per constituent, but we can indicate constraints that would be expressed in other formalisms using more complex equations by letting the value of a feature contain a variable that appears in more than one equation. The CLE is written in Prolog, to take advantage of the efficiency of Prolog unification in implementing category unification, so our grammar rules are written as Prolog assertions, and we follow Prolog conventions in that constants, such as category and feature names, start with lowercase letters, and variables start with uppercase letters. As an example, a simplified version of the rule for the basic subject-predicate sentence form might be written in our notation as

(1) syn(s_np_vp,
     [s:[type=tensed],
      np:[person=P,num=N],
      vp:[type=tensed,person=P,num=N]]).

The predicate syn indicates that this is a syntax rule, and the first argument s_np_vp is a rule identifier that lets us key the semantic-interpretation rules to the syntax rules. The second argument of syn is a list of category expressions that make up the content of the rule, the first specifying the category of the mother constituent and the rest specifying the categories of the daughter constituents. This rule, then, says that a tensed sentence (s:[type=tensed]) can consist of a noun phrase (np) followed by a verb phrase (vp), with the restrictions that the verb phrase must be tensed (type=tensed), and that the noun phrase and verb phrase must agree in person and number--that is, the person and num features of the noun phrase must have the same respective values as the person and num features of the verb phrase. These constraints are checked in the process of parsing a sentence by unifying the values of features specified in the rule with the values of features in the constituents found in the input. Suppose, for instance, that we are parsing the sentence
This rule, then, says that a tensed sentence (s: [type=~ensed]) can consist of a noun phrase (rip) followed by a verb phrase (vp), with the restrictions that the verb phrase must be tensed (type=tensed), and that the noun phrase and verb phrase must agree in person and number--that is, the person and num features of the noun phrase must have the same respective values as the person and mm features of the verb phrase. These constraints are checked in the process of parsing a sentence by unifying the values of fea- tures specified in the rule with the values of fea- tures in the constituents found in the input. Sup- pose, for instance, that we are parsing the sentence 33 Mary runs using a left-corner parser. If Mary is parsed as a constituent of category np:[person=3rd,num=sing], then unifying this category expression with np : [person=P ,num=N] in applying the sentence rule above will force the variables P and N to take on the values 3rd and s~_ug, respectively. Thus when we try to parse the verb phrase, we know that it must be of the category vp : [type=tensed, person=3rd,num=sing]. Our notation for semantic-interpretation rules is a slight generalization of the notation for syn- tax rules. The only change is that in each position where a syntax rule would have a category expres- sion, a semantic rule has a pair consisting of a "logical-form" expression and a category expres- sion, where the logical-form expression specifies the semantic interpretation of the corresponding constituent. A semantic-interpretation rule cor- responding to syntax rule (1) might look hke the following: (2) sem(s_np_vp, [(apply(Vp,Np), s : [] ), (~p,np: [] ), (Vp,vp: [3 )] ). The predicate sere means that this is a semantic- interpretation rule, and the rule identifier s..up_vp indicates that this rule applies to structures built by the syntax rule with the same identifier. The list of pairs of logical-form expressions and cate- gory expressions specifies the logical form of the mother constituent in terms of the logical forms and feature values of the daughter constituents. In this case the rule says that the logical form of a sentence generated by the s_np_vp rule is an ap- plicative expression with the logical form of the verb phrase as the functor and the logical form of the noun phrase as the argument. (The dummy functor apply is introduced because Prolog syntax does not allow variables in functor position.) Note that there are no feature restrictions on any of the category expressions occurring in the rule. They are unnecessary in this case because the semantic rule applies only to structures built by the s_np_vp syntax rule, and thus inherits all the restrictions applied by that rule. 34 2 Functional Application vs. Unification Example (2) is typical of the kind of semantic rules used in the standard approach to semantic inter- pretation in the tradition established by Pdchard Montague (1974) (Dowty, Wall, and Peters, 1981). In this approach, the interpretation of a complex constituent is the result of the functional applica- tion of the interpretation of one of the daughter constituents to the interpretation of the others. A problem with this approach is that if, in a rule like (2), the verb phrase itself is semanti- cally complex, as it usually is, a lambda expres- sion has to be used to express the verb-phrase in- terpretation, and then a lambda reduction must be applied to express the sentence interpretation in its simplest form (Dowry, Wall, and Peters, 1981, pp. 98-111). 
To use (2) to specify the in- terpretation of the sentence John likes Mary, the logical form for John could simply be john, but the logical form for likes Mary would have to be something like X\like(X,mary). [The notation Var\Bocly for lambda expressions is borrowed from Lambda Prolog (Miller and Nadathur, 1988).] The logical form for the whole sentence would then be apply(Xklike(X,mary),john), which must be reduced to yield the simplified logical form like(jobn,m~y). Moreover, lambda expressions and the ensuing reductions would have to be introduced at many intermediate stages if we wanted to produce sim- plified logical forms for the interpretations of com- plex constituents such as verb phrases. If we want to accommodate modal auxiliaries, as in John might like Mary, we have to make sure that the verb phrase might like Mary receives the same type of interpretation as like(s) Mary in order to combine properly with the interpretation of the subject. If we try to maintain functional applica- tion as the only method of semantic composition, then it seems that the simplest logical form we can come up with for might like Mary is produced by the following rule: (3) sem(vp_aux_vp. [(Xkapply (Aux, apply (Vp, X) ), vp: [] ), (Aux, aux : [] ), (Vp,vp : [] )] ). Applying this rule to the simplest plausible logical forms for migM and like Mary would produce the following logical form for might like Mary: X\apply(might, (apply(Y\like(Y,mary),X))) which must be reduced to obtain the simpler ex- pression X\might (like (X ,mary) ). When this ex- pression is used in the sentence-level rule, another reduction is required to eliminate the remaining lambda expression. The part of the reduction step that gets rid of the apply functors is to some ex- tent an artifact of the way we have chosen to en- code these expressions as Prolog terms, but the lambda reductions are not. They are inherent in the approach, and normally each rule will intro- duce at least one lambda expression that needs to be reduced away. It is, of course, possible to add a lambda- reduction step to the interpreter for the semantic rules, but it is both simpler and more efficient to use the feature system and unification to do ex- plicitly what lambda expressions and lambda re- duction do implicitly--assign a value to a variable embedded in a logical-form expression. According to this approach, instead of the logical form for a verb phrase being a logical predicate, it is the same as the logical form of an entire sentence, but with a variable as the subject argument of the verb and a feature on the verb phrase having that same variable as its value. The sentence interpretation rule can thus be expressed as (4) sem(s_np_vp, [(Vp,,: [] ), (Np,np: []), (Vp,vp:[subjval=Np])]), which says that the logical form of the sentence is just the logical form of the verb phrase with the subject argument of the verb phrase unified with the logical form of the subject noun phrase. If the verb phrase likes Mary is assigned the logical- form/category-expression pair (like(X,mary),vp:[subjval=X]), then the application of this rule will unify the log- ical form of the subject noun phrase, say john, directly with the variable X in like(X,mary) to immediately produce a sentence constituent with the logical form like(jotm,mary). Modal auxiliaries can be handled equally easily by a rule such as (5) sem(vp_aux_vp, [ (Aux, vp: [subj val=S] ), (Aux, aux : [argval=Vp] ), (Vp, vp : [subj val=S] ) ] ). 
If might is assigned the logical-form/category- expression pair (might (A), aux : [argval=A] ), then applying this rule to interpret the verb phrase might like Mary will unify A in mighl;(A) with like(X,mary) to produce a constituent with the logical-form/category-expression pair (migh~ (like, X, mary), vp : [subj val=X] ). which functions in the sentence-interpretation rule in exactly the same way as the logical- form/category-expression pair for like Mary. 3 Are Lambda Expressions Ever Necessary? The approach presented above for eliminating tile explicit use of lambda expressions and lambda re- ductions is quite general, but it does not replace all possible uses of lambda expressions in seman- tic interpretation. Consider the sentence John and Bill like Mary. The simplest logical form for the distributive reading of this sentence would be and(like(john,mary) ,like(bill ,mary) ). If the verb phrase is assigned the logical- form/category-expression pair (like (X, mary), vp : [subj val=X] ), as we have suggested, then we have a problem: Only one of john or bill can be directly unified with X, but to produce the desired logical form, we seem to need two instances of like(X,mary), with two different instantiations of X. Another problem arises when a constituent that normally functions as a predicate is used as an argument instead. Common nouns, for example, are normally used to make direct predications, so a noun like senator might be assigned the logical- form/category-expression pair (S enamor (X), nbar: [argval=X] ) according to the pattern we have been following. (Note that we do not have "noun" as a syntactic category; rather, a common noun is simply treated as a lexical "n-bar.") It is widely recognized, how- ever, that there are "intensional" adjectives and adjective phrases, such as former, that need to be treated as higher-level predicates or operators on predicates, so that in an expression like former 35 senator, the noun senator is not involved in di- rectly making a predication, but instead functions as an argument to former. We can see that this must be the case, from the observation that a for- mer senator is no longer a senator. The logical form we have assigned to senator, however, is not literally that of a predicate, however, but rather of a complete formula with a free variable. We there- fore need some means to transform this formula with its free variable into an explicit predicate to be an argument of former. The introduction of lambda expressions provides the solution to this problem, because the transformation we require is exactly what is accomplished by lambda abstrac- tion. The following rule shows how this can be carried out in practice: (6) sem(nba~_adj_nba~, [(Adjp,nbar: [argval=A] ), (Adjp, adjp: [type=in~ensional, argval l=X\Nbar, argva12=A] ), (Nbar, nbar: [argval=X] ) ] ). This rule requires the logical-form/category- expression pair assigned to an intensional adjec- tive phrase to be something like (formerCP,¥), adjp: [~ype=intensional, argvall--P, argvalg=Y] ), where former(P,Y) means that Y is a former P. The daughter nbar is required to be as previously supposed. The rule creates a lambda expression, by unifying the bound variable with the argument of the daughter nbar and making the logical form of the daughter nbar the body of the lambda ex- pression, and unifies the lambda expression with the first argument of the adjp. The second ar- gument of the adjp becomes-the argument of the mother nbar. 
Applying this rule to former senator will thus produce a constituent with the logical-form/category-expression pair

    (former(X\senator(X),Y), nbar:[argval=Y]).

This solution to the second problem also solves the first problem. Even in the standard lambda-calculus-based approach, the only way in which multiple instances of a predicate expression applied to different arguments can arise from a single source is for the predicate expression to appear as an argument to some other expression that contains multiple instances of that argument. Since our approach requires turning a predicate into an explicit lambda expression if it is used as an argument, by the time we need multiple instances of the predicate, it is already in the form of a lambda expression. We can show how this works by encoding a Montagovian (Dowty, Wall, and Peters, 1981) treatment of conjoined subject noun phrases within our approach. The major feature of this treatment is that noun phrases act as higher-order predicates of verb phrases, rather than the other way around as in the simpler rules presented in Sections 1 and 2. In the Montagovian treatment, a proper noun such as John is given an interpretation equivalent to P\P(john), so that when we apply it to a predicate like run in interpreting John runs we get something like apply(P\P(john),run) which reduces to run(john). With this in mind, consider the following two rules for the interpretation of sentences with conjoined subjects:

(7) sem(np_np_conj_np,
      [(Conj, np:[argval=P]),
       (Np1, np:[argval=P]),
       (Conj, conj:[argval1=Np1, argval2=Np2]),
       (Np2, np:[argval=P])]).

(8) sem(s_np_vp,
      [(Np, s:[]),
       (Np, np:[argval=X\Vp]),
       (Vp, vp:[subjval=X])]).

The first of these rules gives a Montagovian treatment of conjoined noun phrases, and the second gives a Montagovian treatment of simple declarative sentences. Both of these rules assume that a proper noun such as John would have a logical-form/category-expression pair like

    (apply(P,john), np:[argval=P]).

In (7) it is assumed that the conjunction and would have a logical-form/category-expression pair like

    (and(P1,P2), conj:[argval1=P1, argval2=P2]).

In (7) the logical forms of the two conjoined daughter nps are unified with the two arguments of the conjunction, and the arguments of the daughter nps are unified with each other and with the single argument of the mother np. Thus applying (7) to interpret John and Bill yields a constituent with the logical-form/category-expression pair

    (and(apply(P,john),apply(P,bill)), np:[argval=P]).

In (8) an explicit lambda expression is constructed out of the logical form of the vp daughter in the same way a lambda expression was constructed in (6), and this lambda expression is unified with the argument of the subject np. For the sentence John and Bill like Mary, this would produce the logical form

    and(apply(X\like(X,mary),john), apply(X\like(X,mary),bill)),

which can be reduced to and(like(john,mary),like(bill,mary)).

4 Theoretical Foundations of Unification-Based Semantics

The examples presented above ought to be convincing that a unification-based formalism can be a powerful tool for specifying the interpretation of natural-language expressions. What may not be clear is whether there is any reasonable theoretical foundation for this approach, or whether it is just so much unprincipled "feature hacking."
The informal explanations we have provided of how particular rules work, stated in terms of unifying the logical form for constituent X with the appropriate variable in the logical form for constituent Y, may suggest that the latter is the case. If no constraints are placed on how such a formalism is used, it is certainly possible to apply it in ways that have no basis in any well-founded semantic theory. Nevertheless, it is possible to place restrictions on the formalism to ensure that the rules we write have a sound theoretical basis, while still permitting the sorts of rules that seem to be needed to specify the semantic interpretation of natural languages.

The main question that arises in this regard is whether the semantic rules specify the interpretation of a natural-language expression in a compositional fashion. That is, does every rule assign to a mother constituent a well-defined interpretation that depends solely on the interpretations of the daughter constituents? If the interpretation of a constituent is taken to be just the interpretation of its logical-form expression, the answer is clearly "no." In our formalism the logical-form expression assigned to a mother constituent depends on both the logical-form expressions and the category expressions assigned to its daughters. As long as both category expressions and logical-form expressions have a theoretically sound basis, however, there is no reason that both should not be taken into account in a semantic theory; so, we will define the interpretation of a constituent based on both its category and its logical form.

Taking the notion of interpretation in this way, we will explain how our approach can be made to preserve compositionality. First, we will show how to give a well-defined interpretation to every constituent; then, we will sketch the sort of restrictions on the formalism one needs to guarantee that any interpretation-preserving substitution for a daughter constituent also preserves the interpretation of the mother constituent.

The main problem in giving a well-defined interpretation to every constituent is how to interpret a constituent whose logical-form expression contains free variables that also appear in feature values in the constituent's category expression. Recall the rule we gave for combining auxiliaries with verb phrases:

(5) sem(vp_aux_vp,
      [(Aux, vp:[subjval=S]),
       (Aux, aux:[argval=Vp]),
       (Vp, vp:[subjval=S])]).

This rule accepts daughter constituents having logical-form/category-expression pairs such as

    (might(A), aux:[argval=A])

and

    (like(X,mary), vp:[subjval=X])

to produce a mother constituent having the logical-form/category-expression pair

    (might(like(X,mary)), vp:[subjval=X]).

Each of these pairs has a logical-form expression containing a free variable that also occurs as a feature value in its category expression. The simplest way to deal with logical-form/category-expression pairs such as these is to regard them in the way that syntactic-category expressions in unification grammar can be regarded--as abbreviations for the set of all their well-formed fully instantiated substitution instances.

To establish some terminology, we will say that a logical-form/category-expression pair containing no free-variable occurrences has a "basic interpretation," which is simply the ordered pair consisting of the interpretation of the logical-form expression and the interpretation of the category expression.
Since there are no free variables involved, basic interpretations should be unproblematic. The logical-form expression will simply be a closed well-formed expression of some ordinary logical language, and its interpretation will be whatever the usual interpretation of that expression is in the relevant logic. The category expression can be taken to denote a fully instantiated grammatical category of the sort typically found in unification grammars. The only unusual property of this category is that some of its features may have logical-form interpretations as values, but, as these will always be interpretations of expressions containing no free-variable occurrences, they will always be well defined.

Next, we define the interpretation of an arbitrary logical-form/category-expression pair to be the set of basic interpretations of all its well-formed substitution instances that contain no free-variable occurrences. For example, the interpretation of a constituent with the logical-form/category-expression pair

    (might(like(X,mary)), vp:[subjval=X])

would consist of a set containing the basic interpretations of such pairs as

    (might(like(john,mary)), vp:[subjval=john]),
    (might(like(bill,mary)), vp:[subjval=bill]),

and so forth.

This provides a well-defined interpretation for every constituent, so we can now consider what restrictions we can place on the formalism to guarantee that any interpretation-preserving substitution for a daughter constituent also preserves the interpretation of its mother constituent. The first restriction we need rules out constituents that would have degenerate interpretations: No semantic rule or semantic lexical specification may contain both free and bound occurrences of the same variable in a logical-form/category-expression pair. To see why this restriction is needed, consider the logical-form/category-expression pair

    (every(X,man(X),die(X)), np:[boundvar=X, bodyval=die(X)]),

which might be the substitution instance of a daughter constituent that would be selected in a rule that combines noun phrases with verb phrases. The problem with such a pair is that it does not have any well-formed substitution instances that contain no free-variable occurrences. The variable X must be left uninstantiated in order for the logical-form expression every(X,man(X),die(X)) to be well formed, but this requires a free occurrence of X in np:[boundvar=X, bodyval=die(X)]. Thus this pair will be assigned the empty set as its interpretation. Since any logical-form/category-expression pair that contains both free and bound occurrences of the same variable will receive this degenerate interpretation, any other such pair could be substituted for this one without altering the interpretations of the daughter constituent substitution instances that determine the interpretation of the mother constituent. It is clear that this would normally lead to gross violations of compositionality, since the daughter substitution instances selected for the noun phrases every man, no woman, and some dog would all receive the same degenerate interpretation under this scheme. This restriction may appear to be so constraining as to rule out certain potentially useful ways of writing semantic rules, but in fact it is generally possible to rewrite such rules in ways that do not violate the restriction.
For example, in place of the sort of logical-form/category-expression pair we have just ruled out, we can fairly easily rewrite the relevant rules to select daughter substitution instances such as

    (every(X,man(X),die(X)), np:[bodypred=X\die(X)]),

which does not violate the constraint and has a completely straightforward interpretation.

Having ruled out constituents with degenerate interpretations, the principal remaining problem is how to exclude rules that depend on properties of logical-form expressions over and above their interpretations. For example, suppose that the order of conjuncts does not affect the interpretation of a logical conjunction, according to the interpretation of the logical-form language. That is, and(p,q) would have the same interpretation as and(q,p). The potential problem that this raises is that we might write a semantic rule that contains both a logical-form expression like and(P,Q) in the specification of a daughter constituent and the variable P in the logical form of the mother constituent. This would be a violation of compositionality, because the interpretation of the mother would depend on the interpretation of the left conjunct of a conjunction, even though, according to the semantics of the logical-form language, it makes no sense to distinguish the left and right conjuncts. If order of conjunction does not affect meaning, we ought to be able to substitute a daughter with the logical form and(q,p) for one with the logical form and(p,q) without affecting the interpretation assigned to the mother, but clearly, in this case, the interpretation of the mother would be affected.

It is not clear that there is any uniquely optimal set of restrictions that guarantees that such violations of compositionality cannot occur. Indeed, since unification formalisms in general have Turing machine power, it is quite likely that there is no computable characterization of all and only the sets of semantic rules that are compositional. Nevertheless, one can describe sets of restrictions that do guarantee compositionality, and which seem to provide enough power to express the sorts of semantic rules we need to use to specify the semantics of natural languages. One fairly natural way of restricting the formalism to guarantee compositionality is to set things up so that unifications involving logical-form expressions are generally made against variables, so that it is possible neither to extract subparts of logical-form expressions nor to filter on the syntactic form of logical-form expressions. The only exception to this restriction that seems to be required in practice is to allow for rules that assemble and disassemble lambda expressions with respect to their bodies and bound variables. So long as no extraction from inside the body of a lambda expression is allowed, however, compositionality is preserved. It is possible to define a set of restrictions on the form of semantic rules that guarantee that no rule extracts subparts (other than the body or bound variable of a lambda expression) of a logical-form expression or filters on the syntactic form of a logical-form expression. The statement of these restrictions is straightforward, but rather long and tedious, so we omit the details here. We will simply note that none of the sample rules presented in this paper involve any such extraction or filtering.
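As a concrete gloss on this "set of substitution instances" view of interpretations, here is a small, hedged Prolog sketch; the toy domain and the predicate names are our own assumptions, not part of the formalism being described:

    % Sketch only: model the interpretation of a logical-form/category
    % pair as the set of its ground substitution instances over a toy
    % domain of individuals.
    domain_individual(john).
    domain_individual(bill).
    domain_individual(mary).

    % Enumerate ground instances by binding every free variable of the
    % pair to a domain individual.
    ground_instance(Lf, Cat) :-
        term_variables((Lf, Cat), Vars),
        maplist(domain_individual, Vars).

    % ?- ground_instance(might(like(X, mary)), vp:[subjval=X]).
    % X = john ; X = bill ; X = mary.

Note that this naive enumeration would also instantiate any bound variables occurring in the logical form, which is precisely why the restriction against mixed free and bound occurrences of the same variable matters: a pair like the every(X,...) example above would have no well-formed ground instances at all under this scheme.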
5 The Semantics of Long-Distance Dependencies

The main difficulty that arises in formulating semantic-interpretation rules is that constituents frequently appear syntactically in places that do not directly reflect their semantic role. Semantically, the subject of a sentence is one of the arguments of the verb, so it would be much easier to produce logical forms for sentences if the subject were part of the verb phrase. The use of features such as subjval, in effect, provides a mechanism for taking the interpretation of the subject from the place where it occurs and inserting it into the verb phrase interpretation where it "logically" belongs.

The way features can be manipulated to accomplish this is particularly striking in the case of the long-distance dependencies, such as those in WH-questions. For the sentence Which girl might John like?, the simplest plausible logical form would be something like

    which(X, girl(X), might(like(john,X))),

where the question-forming operator which is treated as a generalized quantifier whose "arguments" consist of a bound variable, a restriction, and a body. The problem is how to get the variable X to link the part of the logical form that comes from the fronted interrogative noun phrase with the argument of like that corresponds to the noun phrase gap at the end of the verb phrase. To solve this problem, we can use a technique called "gap-threading." This technique was introduced in unification grammar to describe the syntax of constructions with long-distance dependencies (Karttunen, 1986) (Pereira and Shieber, 1987, pp. 125-129), but it works equally well for specifying their semantics. The basic idea is to use a pair of features, gapvalsin and gapvalsout, to encode a list of semantic "gap fillers" to be used as the semantic interpretations of syntactic gaps, and to thread that list along to the points where the gaps occur. These gap fillers are often just the bound variables introduced by the constructions that permit gaps to occur. The following semantic rules illustrate how this mechanism works:

(9) sem(whq_ynq_np_gap,
      [(Np, s:[gapvalsin=[], gapvalsout=[]]),
       (Np, np:[type=interrog, bodypred=A\Ynq]),
       (Ynq, s:[gapvalsin=[A], gapvalsout=[]])]).

This is the semantic-interpretation rule for a WH-question with a long-distance dependency. The syntactic form of such a sentence is an interrogative noun phrase followed by a yes/no question with a noun phrase gap. This rule expects the interrogative noun phrase which girl to have a logical-form/category-expression pair such as

    (which(X,girl(X),Bodyval), np:[type=interrog, bodypred=X\Bodyval]).

The feature bodypred holds a lambda expression whose body and bound variable are unified respectively with the body and the bound variable of the which expression. In (9) the body of this lambda expression is unified with the logical form of the embedded yes/no question, and the gapvalsin feature is set to be a list containing the bound variable of the lambda expression. This list is actually used as a stack, to accommodate multiply nested filler-gap dependencies. Since this form of question cannot be embedded in other constructions, however, we know that in this case there will be no other gap-fillers already on the list. This is the rule that provides the logical form for empty noun phrases:

(10) sem(empty_np,
       [(Val, np:[gapvalsin=[Val|ValRest], gapvalsout=ValRest])]).

Notice that it has a mother category, but no daughter categories.
The rule simply says that the logical form of an empty np is the first element on its list of semantic gap-fillers, and that this element is "popped" from the gap-filler list. That is, the gapvalsout feature takes as its value the tail of the value of the gapvalsin feature.

We now show two rules that illustrate how a list of gap-fillers is passed along to the points where the gaps they fill occur.

(11) sem(vp_aux_vp,
       [(Aux, vp:[subjval=S, gapvalsin=In, gapvalsout=Out]),
        (Aux, aux:[argval=Vp]),
        (Vp, vp:[subjval=S, gapvalsin=In, gapvalsout=Out])]).

This semantic rule for verb phrases formed by an auxiliary followed by a verb phrase illustrates the typical use of the gap features to "thread" the list of gap fillers through the syntactic structure of the sentence to the points where they are needed. An auxiliary verb cannot be or contain a WH-type gap, so there are no gap features on the category aux. Thus the gap features on the mother vp are simply unified with the corresponding features on the daughter vp. A more complex case is illustrated by the following rule:

(12) sem(vp_vp_pp,
       [(Pp, vp:[subjval=S, gapvalsin=In, gapvalsout=Out]),
        (Vp, vp:[subjval=S, gapvalsin=In, gapvalsout=Thru]),
        (Pp, pp:[argval=Vp, gapvalsin=Thru, gapvalsout=Out])]).

This is a semantic rule for verb phrases that consist of a verb phrase and a prepositional phrase. Since WH-gaps can occur in either verb phrases or prepositional phrases, the rule threads the list carried by the gapvalsin feature of the mother vp first through the daughter vp and then through the daughter pp. This is done by unifying the mother vp's gapvalsin feature with the daughter vp's gapvalsin feature, the daughter vp's gapvalsout feature with the daughter pp's gapvalsin feature, and finally the daughter pp's gapvalsout feature with the mother vp's gapvalsout feature. Since a gap-filler is removed from the list once it has been "consumed" by a gap, this way of threading ensures that fillers and gaps will be matched in a last-in-first-out fashion, which seems to be the general pattern for English sentences with multiple filler-gap dependencies. (This does not handle "parasitic gap" constructions, but these are very rare and at present there seems to be no really convincing linguistic account of when such constructions can be used.)

Taken altogether, these rules push the quantified variable of the interrogative noun phrase onto the list of gap values encoded in the feature gapvalsin on the embedded yes/no question. The list of gap values gets passed along by the gap-threading mechanism, until the empty-noun-phrase rule pops the variable off the gap values list and uses it as the logical form of the noun phrase gap. Then the entire logical form for the embedded yes/no question is unified with the body of the logical form for the interrogative noun phrase, producing the desired logical form for the whole sentence.

This treatment of the semantics of long-distance dependencies provides us with an answer to the question of the relative expressive power of our approach compared with the conventional lambda-calculus-based approach. We know that the unification-based approach is at least as powerful as the conventional approach, because the conventional approach can be embedded directly in it, as illustrated by the examples in Section 3. What about the other way around?
Many unification-based rules have direct lambda-calculus-based counterparts; for example (2) is a counterpart of (4), and (3) is the counterpart of (5). Once we introduce gap-threading, however, the correspondence breaks down. In the conventional approach, each rule applies only to constituents whose semantic interpretation is of some particular single semantic type, say, functions from individuals to propositions. If every free variable in our approach is treated as a lambda variable in the conventional approach, then no one rule can cover two expressions whose interpretation essentially involves different numbers of variables, since these would be of different semantic types. Hence, rules like (11) and (12), which cover constituents containing any number of gaps, would have to be replaced in the conventional approach by a separate rule for each possible number of gaps. Thus, our formalism enables us to write more general rules than is possible taking the conventional approach.

6 Conclusions

In this paper we have tried to show that a unification-based approach can provide powerful tools for specifying the semantic interpretation of natural-language expressions, while being just as well founded theoretically as the conventional lambda-calculus-based approach. Although the unification-based approach does not provide a substitute for all uses of lambda expressions in semantic interpretation, we have shown that lambda expressions can be introduced very easily where they are needed. Finally, the unification-based approach provides for a simpler statement of many semantic-interpretation rules, it eliminates many of the lambda reductions needed to express semantic interpretations in their simplest form, and in some cases it allows more general rules than can be stated taking the conventional approach.

Acknowledgments

The research reported in this paper was begun at SRI International's Cambridge Computer Science Research Centre in Cambridge, England, supported by a grant from the Alvey Directorate of the U.K. Department of Trade and Industry and by the members of the NATTIE consortium (British Aerospace, British Telecom, Hewlett Packard, ICL, Olivetti, Philips, Shell Research, and SRI). The work was continued at the SRI Artificial Intelligence Center and the Center for the Study of Language and Information, supported in part by a gift from the Systems Development Foundation and in part by a contract with the Nippon Telegraph and Telephone Corporation.

References

Dowty, David R., Robert Wall, and Stanley Peters (1981) Introduction to Montague Semantics (D. Reidel, Dordrecht, Holland).

Karttunen, Lauri (1986) "D-PATR: A Development Environment for Unification-Based Grammars," Proceedings of the 11th International Conference on Computational Linguistics, Bonn, West Germany, pp. 74-80.

Miller, Dale A., and Gopalan Nadathur (1986) "Higher-Order Logic Programming," in E. Shapiro (ed.), Third International Conference on Logic Programming, pp. 448-462 (Springer-Verlag, Berlin, West Germany).

Montague, Richard (1974) Formal Philosophy (Yale University Press, New Haven, Connecticut).

Pereira, Fernando C.N., and Stuart M. Shieber (1987) Prolog and Natural-Language Analysis, CSLI Lecture Notes Number 10, Center for the Study of Language and Information, Stanford University, Stanford, California.

Shieber, Stuart M. (1986) An Introduction to Unification-Based Approaches to Grammar, CSLI Lecture Notes Number 4, Center for the Study of Language and Information, Stanford University, Stanford, California.
1989
5
REFERENCE TO LOCATIONS

Lewis G. Creary, J. Mark Gawron, and John Nerbonne
Hewlett-Packard Laboratories, 3U
1501 Page Mill Road
Palo Alto, CA 94304-1126

Abstract

We propose a semantics for locative expressions such as near Jones or west of Denver, an important subsystem for NLP applications. Locative expressions denote regions of space, and serve as arguments to predicates, locating objects and events spatially. Since simple locatives occupy argument positions, they do NOT participate in scope ambiguities--pace one common view, which sees locatives as logical operators. Our proposal justifies common representational practice in computational linguistics, accounting for how locative expressions function anaphorically, and explaining a wide range of inference involving locatives. We further demonstrate how the argument analysis may accommodate multiple locative arguments in a single predicate. The analysis is implemented for use in a database query application.

1 Introduction

Locative expressions take diverse forms: in New York, here, there, nowhere, and on a boat he has in Ohio. They combine with common nouns (city on the Rhine), or with verbs or verb-phrases (work in Boston), always locating objects and situations in space. Some temporal expressions are similar, but we focus here on spatial locatives.

The analysis was developed for use in an NLP system producing database queries; it is fully implemented and has been in frequent (developmental) use for 18 months. It is important to provide facilities for reasoning about location in database query applications because users typically do not query locative information in the exact form it appears in the database. A database may e.g. contain the information that a painting is in the Guggenheim Museum, perhaps even that it's in the Guggenheim in New York, and yet be helpless when queried whether that same painting is in the US. In our implementation information about location is represented using the logical analysis provided here. 1

1 Of course, the information that New York is in the US must be provided by a compatible geographical knowledge base.

1.1 Sketch of Proposal

The paper provides general service: first, in collecting the data relevant to a semantic analysis of locatives; second, in presenting the proposal in a fashion which applies to other natural languages and other logical representations; and third, in noting the consequences of our proposal for the organization of NLP systems, specifically the cooperation of syntax and semantics.

The behavior of locatives in inference and anaphora reflects their semantics. This behavior justifies the hypothesis that (unquantified) locatives refer to regions, while related sequences of locatives refer to the intersection of the regions associated with their components. E.g. the phrase (sequence) in Canada on the Atlantic Coast refers to the (maximal) region which is both in Canada and on the Atlantic Coast. Locative adverbials within a verb phrase will then be seen to contribute to a location argument in predicates which identifies an area within which the predicate is asserted to hold. The view that locatives occupy an ARGUMENT position within a predication is contrasted with the view that they are EXTERNAL OPERATORS (cf. Cresswell [7]), or MODIFIERS on predications (cf. Davidson [8] or Sondheimer [18]). In fact, however, the analysis of locative phrases as arguments jibes well with the practice of most computational linguists; cf.
Allen [1], pp.198-207 and the references there, [1], p.218. The present effort contributes to the justification and explication of this practice.

Our approach is closest to Jackendoff [12]. We follow Jackendoff first, in suggesting that locative phrases are referential in the same way that noun phrases (NPs) are; and second, in taking locative adverbials to function as arguments. But there is a significant foundational problem implicit in the hypothesis that locatives are arguments: locatives, unlike standard arguments in the predicate calculus, appear optionally and multiply. Predicate logic does not accommodate the occurrence of multiple arguments in a single argument position. We solve this technical problem by allowing that multiple locatives CONSTRAIN a single argument within a predication. This effectively challenges a standard assumption about the syntax-semantics interface, viz. how syntactic elements map into arguments, but leads to an elegant semantics.

In addition to the adverbial use of locatives, we recognize a predicative use illustrated by (1). We return to these in Section 6 below.

(1) Tom is in Canada on the Atlantic Coast.

2 The Logic of Locatives

In this section we collect valid and invalid argument-patterns involving adverbial locatives. A semantics of locatives should explain the entailments we catalog here. We restrict our attention initially to locative phrases in which locations are specified with respect to logical individuals (denoted by proper names, e.g. 'Boston', 'Jones', or 'Mass Ave') because we assume that their analysis is relatively uncontroversial. 2

2 We don't think it matters whether the proper names are taken to be individual constants, as they normally are, or whether they are analyzed as restricted parameters, as situation semantics ([3], pp.165-68) has suggested.

We begin by noting that any number of locatives may adjoin to almost any verb (phrase):

(2) Tom works on Mass Ave. in Boston near MIT.

A natural question to ask, then, concerns the logical relation between complex clauses like (2) and simpler clauses eliminating one or more of its locatives. To begin, the SIMPLIFYING INFERENCE in (3) is valid:

(3) Al works in Boston.
    ∴ Al works.

Using multiple adjuncts doesn't disturb this pattern of inference, as (4) and (5) illustrate:

(4) Al works on Mass Ave. in Boston.
    ∴ Al works in Boston.

(5) Al works on Mass Ave. in Boston.
    ∴ Al works on Mass Ave.

PERMUTING locative adjuncts has no effect on truth conditions. Thus the sentences in (6) are truth-conditionally equivalent. Some are less felicitous than others, and they may manipulate discourse context differently, but they all describe the same facts:

(6) Al works on Mass Ave in Boston near MIT
    Al works near MIT on Mass Ave in Boston
    Al works near MIT in Boston on Mass Ave
    Al works in Boston near MIT on Mass Ave
    Al works in Boston on Mass Ave near MIT
    Al works on Mass Ave near MIT in Boston

Even though the simplifying inference in (3) is valid, we must take care, since the complementary (accumulative) inference (7) is INVALID (but cf. the valid (8)):

(7) Al works in NY.
    Al works in Boston.
    /∴ Al works in NY in Boston.

(8) Al works in NY.
    Al works in Boston.
    ∴ Al works in NY and in Boston.

Finally, there is what we call the UPWARD MONOTONICITY of locatives. If a sentence locating something at a region R is true, and if R is contained in the region R', then a sentence locating that thing at R' is true:

(9) Al works in New York.
    New York is in the US.
    ∴
Al works in the US.

(10) The dog sleeps under the table.
     Under the table is in the house (region "under the table" is contained in region "in the house").
     ∴ The dog sleeps in the house.

Notice in (10) that the locative phrases are specified with respect not to locations, but to other logical individuals. This is accomplished by the semantics of the prepositions under and in; our proposal will require that locative PHRASES refer to regions, but not that their subcomponents must.

3 Other Semantic Evidence

3.1 Scope

Locatives by themselves do NOT induce scope ambiguity with respect to negation, thus the semantic nonambiguity of (11); compare that with (12).

(11) Tina didn't work in New York.

(12) Tina didn't drink because of her husband.

The causal adjunct because of DOES induce a scope ambiguity with respect to negation. That is why (12) has two readings, one (narrow-scope negation) on which Tina's not drinking is in some way due to her husband, another (wide-scope negation) which denies that Tina's drinking is because of her husband. (11) shows no analogous scope ambiguity. Thus, locatives appear to behave differently from at least some other adjuncts in that they show no scope variation with respect to negation. The simplest explanation of this failure to induce scope ambiguity is to deny that simple locatives have scope, i.e. to deny that they are logical operators or external modifiers. We propose exactly this when we postulate that they are arguments rather than operators. We grant that locatives in sentence-initial position DO display properties which suggest scope, but this needn't vitiate the argument analysis. 3

3 It is worth emphasizing that we are making a semantic point here--there may be a syntactic (attachment) ambiguity in (11), but it's not one that has any semantic significance.

Note that the "commutativity of locatives" shown in (6) is another indication of failure to scope: locatives fail to scope with respect to each other.

3.1.1 Scope versus Focus

In evaluating the claim that no SCOPE AMBIGUITY is possible in (11), it is important not to be confused by the possibility of interpreting the FOCUS of negation in various ways. The association of negation with a focused element is a well-discussed, if not a well-understood, phenomenon in the literature (see Jackendoff ([11], pp.229-78), Chomsky ([4], pp.199-208), and Rooth [17] for discussions of focus). The crucial point about focus is that it affects arguments and adjuncts alike, and that ambiguities involving the association of negation with focus affect both. For example,

(13) Elizabeth Browning didn't adore Robert.

The focus can be either on adore or on Robert, giving different presuppositions, 4 even though the proper name Robert is never analyzed as scope-inducing.

4 Relevant here is Horn's [10] notion of metalinguistic negation, which accounts for purely contrastive or contradicting negation. The issues Horn discusses are also orthogonal to the ambiguity in (12), since the ambiguity persists outside of contrastive contexts.

3.2 Preposed Locatives

PREPOSED locatives do show properties that resemble scope. Cf. Thomason and Stalnaker ([21], p.205):

(14) In that restaurant, if John is asked to wear a tie, he wears a tie.

Here the preposed locative does not belong exclusively to either the antecedent or the consequent of the conditional; rather, the sentence says: if John is asked to wear a tie in that restaurant, he wears a tie in that restaurant.
Thomason and Stalnaker argue hence that the locative must be treated semantically as a sentence operator. Cresswell ([7], p.217) points out another example where the result of preposing a locative is not a simple paraphrase of its "source":

(15) At our house, everyone is eating.
     Everyone is eating at our house.

Here there is a reading of the first which can be paraphrased Everyone at our house is eating, where the quantifier is restricted to people at our house. The most important point to make here is that "preposing" generates new readings, readings unavailable for unpreposed adverbial locatives. So if these examples are evidence for a sentence-operator semantics for locatives, then it's a semantics limited to locatives found in this position. The "wide-scope" readings occur only for locatives in this "topic" (sentence-initial) position. 5 It would be semantically implausible to regard the preposed adverbials here as mere stylistic variants of nonpreposed elements. 6

5 Note that this is not normally the case for sentence-operator adverbials: The number of the planets is necessarily nine is semantically ambiguous between a wide- and narrow-scope reading of necessarily.

6 It is syntactically implausible as well to regard restricting topic elements as stylistic variants of unpreposed elements, since some preposed elements can only occur preposed: Of the dogs at the show, only Schnauzers were affected.

But we note further that locations can be restricted by discourse context alone:

(16) Joan lived in LA. She often went swimming.

We naturally interpret Joan as swimming in LA; and such effects can extend indefinitely through discourse. We propose to analyze both Thomason and Stalnaker's example and Cresswell's example as RESTRICTING TOPIC locatives that restrict some location roles in the sentence to follow. In the case of (14), the restriction applies to the locations of both the antecedent and consequent clauses of the conditional sentence; in the case of (15), the restriction applies to the quantifier Everyone, limiting its domain to those individuals at "our house." 7 This has the consequence that there is a class of restrictive topic-position modifiers that cannot be analyzed as preposed adverbials.

3.3 Analogy with NPs

Jackendoff ([12], Chap.3) is partially devoted to articulating the strong semantic analogy between locative phrases and noun phrases. The analogy includes quantification, a distinction between definite and indefinite reference, deictic reference, and anaphora. Jackendoff's programmatic point is that the semantic status of locatives is therefore the same as that of NPs: they both refer and both function as arguments. It is noteworthy that locatives have explicitly quantificational forms, as in:

(17) Bill sang everywhere Mary sang.

This suggests that quantified locatives have the same relationship to simple locatives as general NPs (such as every small country) have to singular NPs (such as the smallest country, a small country, and Honduras). Though SIMPLE locatives show no scope variation with respect to other scope operators, quantified locatives (such as everywhere and nowhere) clearly do. But this scope is due to the quantification, not to the locative function. Since locatives occupy argument positions in predications, quantified locatives are simply quantifications over those predications, exactly analogous to nonlocative quantifications.

Second, we find similarly noteworthy the indefinitely referring locative somewhere.
We note that its particular reference (like that of someone) is available for subsequent anaphoric use. That is, (18) may be understood to claim that Ed works where Al works. 8

(18) Al lives somewhere on the Ohio, and Ed works there.

Third, we note that deictic locative reference is possible (using here or there), just as deictic nonlocative reference is (using pronouns or demonstratives). We address the fourth and final reminder of the analogy between NP and locative reference, locative anaphora, in Section 3.4, immediately below.

7 We don't claim to offer a complete analysis of these topic-locatives (nothing we have said makes it clear how these restrictions are enforced, or what the constraints on them are); but we offer a plausibility argument that these are cases of a somewhat different color.

8 This contrasts with examples of locative anaphors with simple locative antecedents, examined below in Section 3.4, cf. (19).

3.4 Anaphora

Viewing simple locatives as analogous to singular NPs, we obtain a simple account of the anaphoric potential of locatives by taking them to denote spatial regions. The functioning of locatives as antecedents for the locative pro-form there then provides additional evidence that simple locatives are in a class with singular NPs. Consider:

(19) Al lives on the Ohio, and Ed works there.

(19) makes the claim, not that Al lives in the same place Ed works, but that he lives on the same river that Ed works on. Thus the reference of both on the Ohio and there appears to be the entire spatial region which is ON the Ohio (as opposed to any particular subregion of it). This region is uniquely (though vaguely) determined in a given context by the name of the river and the particular preposition on. We are, in effect, claiming that the PP on the Ohio acts as a sort of definite description of a particular spatial region. Anaphoric reference back to it is reference back to that same region.

A further note is worthwhile here. If the locative phrase on the Ohio in (19) refers to the entire region which may be so described (as we've just argued), then the LOCATION role of the predicates LIVE and WORK must be construed as specifying a region 'within which' a relation is somewhere instantiated. Indeed, we postulate this as a general property of location roles within all predicates.

3.5 Regional Intersection

Next consider a more complicated version of (19):

(20) Al lives on the Ohio in Kentucky, and Ed works there.

In (20) one may understand there as referring to the intersection of the regions 'on the Ohio' and 'in Kentucky' (and again, NOT to the particular subpart of that intersection where Al lives). In fact, this reading is preferred. (There may also be understood to refer to one of the component superregions, and our analysis is fully compatible with this possibility.) Let's consider how best to supply the intersective reference for the pronoun there. In (20) the two locative expressions in the first clause simultaneously constrain the same location role. In general, each successive locative in a clause further narrows the region filling the location role:

(21) (WORK agent:Ed
          loc:(⊓ᵣ reg:{ON(Ohio), IN(Kentucky)}))

'⊓ᵣ' is the intersection operation over regions. Cf. Section 5.2 for formal discussion. Now, since the filler of a location role is always a single region, the anaphoric potential illustrated in (20) is explained. It would remain unexplained if each locative introduced a distinct predication.
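To make the intended bookkeeping concrete, here is a small, hedged Prolog sketch of how successive locatives might be folded into a single location argument whose whole value--or any member of its region set--is then available as an antecedent for there. The predicate names (location_term/2, antecedent_region/2) and the term representation are our own illustrative assumptions, not the paper's NFLT:

    % Sketch only: the location role of a clause is filled by a single
    % intersective region term built from all of the clause's locatives.
    location_term([R], R) :- !.          % one locative: a simple region
    location_term(Rs, meet(Rs)).         % several: their regional meet

    % Candidate antecedents for `there': the whole intersective region
    % is preferred, but each component superregion is also available.
    antecedent_region(meet(Rs), meet(Rs)).
    antecedent_region(meet(Rs), R) :- member(R, Rs).
    antecedent_region(R, R) :- R \= meet(_).

    % "Al lives on the Ohio in Kentucky":
    % ?- location_term([on(ohio), in(kentucky)], L),
    %    antecedent_region(L, A).
    % L = meet([on(ohio), in(kentucky)]), A = meet([on(ohio), in(kentucky)]) ;
    % A = on(ohio) ;
    % A = in(kentucky).

Representing the meet's arguments as an unordered collection is what makes the permutability of locatives come out as a non-issue in such a scheme.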
4 Syntax/Semantics Mapping

We employ a syntax/semantics interface that's innovative in two respects: first, we allow that adjuncts (locatives) be interpreted as arguments, rather than operators. Cf. McConnell-Ginet ([14], p.167ff) for a similar proposal about manner adverbs. Second, we allow that multiple locatives (in the same verb phrase) jointly determine a single location argument via the intersection of regions. Thus we allow several syntactic dependents corresponding to a single semantic argument. This challenges a standard working assumption about the syntax-semantics mapping made in a number of frameworks, 9 but it leads to a neater semantic account: by allowing several locative specifiers to constrain a single role, we account more easily for the permutability of locatives, and we provide the right range of anaphoric antecedents.

9 This doesn't contradict Montague's semantic theories, but it abandons the favored "functional application" mode of interpretation. Cf. Montague [15], p.202. Neither verb (phrase) nor locative is interpreted as a function applying to the argument supplied by the other.

5 Formal Aspects

Here we describe the logical expressions into which locatives (and sentences containing them) are translated, and the semantic interpretations of the logical expressions.

5.1 Overview of NFLT

Our logical formalism is called NFLT. 10 It is a modification and augmentation of standard predicate calculus, with two modifications relevant here: predicates and functors of variable arity, and a semantic interpretation in terms of situation-types.

10 Cf. Creary and Pollard [6] for conceptual background, literature references, and more complete presentation of NFLT.

5.1.1 Predicate and Function Expressions

Predications and functional terms in NFLT have an explicit rolemark for each argument; in this respect NFLT resembles semantic network formalisms and differs from standard predicate calculus, where the roles are order-coded. For example, atomic formulas in NFLT are constructed of a base-predicate and a set of rolemark-argument pairs, as in the following translation of Tom works in Boston:

(22) (WORK agent:TOM location:(IN theme:BOSTON))

The explicit representation of roles permits each predicate- and function-symbol in NFLT to take a variable number of arguments, so that different occurrences of a verb are represented with the same predicate-symbol, despite differences in valence (i.e. number and identity of attached complements and adjuncts). 11

11 In order to save space, we shall write IN(Boston) for (IN theme:BOSTON), however.

5.2 Functional Location Terms

Functional location terms are functional terms denoting regions. These are of two general sorts, simple and intersective. The simple ones consist of a prepositional functor applied to an appropriate argument, while the intersective ones consist of a regional intersection functor applied to a set of regions. As an example, consider the following location term, which might serve as the translation (in a given context) of the iterated locatives on the Ohio in Kentucky near Illinois:

(23) (⊓ᵣ reg:{ON3(OHIO), IN(KENTUCKY), NEAR1(ILLINOIS)})

This is a complex location term whose components are simple functional location terms. NEAR1 should denote (e.g.) a function that maps Illinois onto a region beginning at its borders and extending out a short distance. The functor of an intersective location term denotes the regional intersection function, which maps R1, R2, ..., Rn onto their intersection R.
More formally, we postulate that spatial regions, partially ordered by the subregion relation (written ⊑ᵣ), form a LATTICE. The intersection of regions is then their lattice-theoretic MEET (written ⊓ᵣ), the greatest lower bound with respect to ⊑ᵣ. The commutativity and associativity of ⊓ᵣ justify specifying its arguments via sets. The order-indifference of set specification accounts for the permutability of locatives illustrated in (6). We will also make use of the following familiar lattice theorem:

(24) (⊑ᵣ sub:(⊓ᵣ reg:{R1,R2,...,Rn})
         sup:(⊓ᵣ reg:{R1,R2,...,Rm})), where 1 ≤ m ≤ n.

According to (24), an intersective location term T always denotes a subregion of the region denoted by the result of deleting some (but not all) of the argument-terms of T.

5.3 Located Predications

This is a fact about situations being located in space: if an event or state occurs or obtains within a region R, then it occurs or obtains within any region R' containing R:

(25) (((⊑ᵣ sub:R sup:R') ∧ (PRED ... loc:R))
      → (PRED ... loc:R'))

This is simply a statement of upward monotonicity for the location arguments of relations. The schemata (24) and (25) together justify the inference schema

(26) (WORK agt:TOM loc:(⊓ᵣ reg:{R1,...,Rn}))
     ∴ (WORK agt:TOM loc:(⊓ᵣ reg:{R1,...,Rm})), where 1 ≤ m ≤ n.

This accounts for the correctness of the locative-simplifying inferences in (4) and (5).

The other sort of simplifying inference given in Section 2 was that exemplified in (3), the inference from Tom's working in Boston to Tom's working. In NFLT this inference is formulated thus:

(27) (WORK agt:TOM loc:IN(BOSTON))
     ∴ (WORK agt:TOM)

Both the premise and the conclusion of (27) are interpreted as denoting situation-types; each is true if there exists a situation of the type it denotes. Since every situation of the type denoted by the premise is necessarily also of the type denoted by the conclusion, the truth of the premise necessarily entails the truth of the conclusion. This accounts for the validity of (3) in the situation-theoretic framework of NFLT. In a fixed-arity framework, one would represent the conclusion as existentially quantifying over a location argument-position; the inference would then be existential generalization.

We recall that (7), repeated here for convenience, is invalid, while the similar (8) is valid:

(7) Tom works in NY.
    Tom works in Boston.
    /∴ Tom works in NY in Boston.

(8) Tom works in NY.
    Tom works in Boston.
    ∴ Tom works in NY and in Boston.

The reason is that the premises of the former may locate two different 'working' events while its conclusion refers to one. The conclusion of the latter, on the other hand, may refer to distinct 'working' events. Its translation into NFLT is:

(28) ((WORK agt:TOM loc:IN(NY)) ∧
      (WORK agt:TOM loc:IN(BOSTON)))

This conclusion is nothing more than the conjunction of the premises.

6 Adnominal Locatives

We propose above that the ability to induce scope effects is a litmus test for distinguishing arguments and operators. This test, together with anaphoric evidence, suggests a heterodox treatment of adnominal locatives. In a nutshell, these locatives might be arguments as well.

(29) Few cars in Ohio rust.

(30) (FEW x (CAR instance:x loc:IN(OHIO))
          (RUST thm:x))

There is a reasonable competing (predicative) analysis of the use of adnominal locatives, however.
(31) (FEW x ((CAR instance:x) ∧
             (LOCATED thm:x loc:IN(OHIO)))
          (RUST thm:x))

Note that in both formulations there is reference to a region, and that the locative cannot participate in scope ambiguities. 12

12 We leave as an exercise for the reader to show that the well-known (semantically significant) attachment ambiguity between adverbial and adnominal locatives may be represented here: Tom evaluated a car in Ohio.

7 Other Proposals

7.1 External Operator Analysis

Cresswell ([7], p.13) poses the problem of analysis for adverbial modification thus:

    There are two basic approaches to the analysis of adverbial
    constructions [...] One is to follow Richard Montague and treat
    them as sentential operators of the same syntactical category as
    not. The other is to follow Donald Davidson and represent them in
    the predicate calculus with the aid of an extra argument place in
    the verb to be modified [...]

We suspect that Cresswell would classify the tack taken toward locative adverbials in this paper as an "extra argument" analysis, but we shall note below some important differences between our approach and Davidson's.

We find fault with the operator analysis of locative adverbials since it inherently attributes a scope to locatives which, as Section 3.1 shows, isn't reflected in natural language semantics. It is also clear that the simplifying and commutative inferences for locatives noted in Section 2 are not predicted by the external operator analysis. Locatives wouldn't necessarily have these properties any more than negation or the modal adverbs. Finally, we note as problematic the comportment of the operator analysis with the anaphoric evidence, particularly where multiple locatives are concerned.

7.2 Davidsonian Analyses

Davidson [8], and, following him, Bartsch [2] and Sondheimer [18] have proposed that adverbial modification is best represented using an unexpected argument place within a predicate. Bartsch ([2], pp.122-39) and Sondheimer [18] focus on locative constructions, so we concentrate on those works here. Sondheimer ([18], pp.237-39) provides the following analysis:

(32) John stumbled in the park under a tree.
     ∃e(Stmbl(J,e) ∧ In(e,p) ∧ Under(e,t))

The standard logic textbook representation of an intransitive verb such as stumble uses a ONE-PLACE predicate, where Sondheimer, following Davidson, uses the TWO-PLACE predicate signifying a relation between an individual and an event. This is the "extra argument place" that distinguishes Davidsonian treatments. It is worth noting that this approach accounts for the logical properties of locatives that we noted in Section 2 above. The simplification and commutativity of locatives follow from the propositional logic of conjunction.

The most important differences between Davidsonian analyses and our own are the ability to account for locative anaphors, and the treatment of scope. As presented in Section 3.4 above, our treatment provides correct regional antecedents for the locative anaphor there. On the other hand, Davidsonian treatments make no explicit reference to regions at all (to which anaphors might refer), and further provide no mechanism for referring to the intersective regions that were seen to be required in the analysis of (20). Our analysis places simple locatives within the scope of all sentence operators.
The Davidsonian analysis creates multiple propositions, and scope-inducing elements such as negation can then be analyzed as including some, but not all of these propositions within their scope. For this reason, Davidsonian treatments are much less specific in their predictions vis-a-vis scope (than the one proposed here). Bartsch ([2], p.133) indicates e.g. that she would allow sentential negation to have scope over some of the conjuncts in logical forms such as (32), but not others; and Sondheimer ([18], p.250) seems to have a similar move in mind in his discussion of almost as in I almost locked him in the closet. As indicated in Section 3.2 above, we regard such renderings as confusions of scope and focus.

7.3 Other Works

Jackendoff ([12], Chap.3,9) argues that reference to places be recognized in semantic theory, thus allowing that locative phrases refer in the same way that NPs do, and that they function as arguments. But Jackendoff never examined inferences involving locatives, nor did he attempt to deal with the prima facie difficulties of the argument analysis--the fact that locatives occur optionally and multiply. It is the latter facts which make the argument analysis technically difficult. Finally, where we have been precise about the semantics of the location role, emphasizing that it specifies a region WITHIN WHICH a relation must hold, Jackendoff was less exact. On the other hand, Jackendoff's analysis of PATH EXPRESSIONS is intriguingly analogous to that of locatives, and offers opportunity for extension of the work here.

Colban ([5]) analyzes locatives in situation semantics, and would like to have the operator/argument issue both ways: he allows that locatives might be external modifiers or arguments. But he offers no evidence to support this postulate of ambiguity. Ter Meulen ([20]), also working within situation semantics, provides a means of referring to the location of complex events, such as the event of two detectives solving a crime. She crucially requires a reference for locative expressions, and her proposals seem compatible with ours.

Talmy [19], Herskovits [9], and Kautz [13] theorize about the INTERPRETATION of locative expressions, and especially how this is affected by the sorts of objects referred to in locative expressions. Much of this latter work may be regarded as complementary to our own, since we have not attempted to characterize in any detail the manner in which context affects the choice of functional denotation for particular locative prepositions.

8 Conclusions

8.1 Claims

1. Locative expressions (e.g. north of Boston near Harry) denote regions of space. The denotations may be referred to anaphorically.

2. Locative expressions are used adverbially to constrain a location argument in a relation defined by a verb. Thus simple locatives fail to show scope (like proper names).

3. Relations are upwardly monotonic at location arguments: if a relation holds at R, then it holds at every containing R'.

4. When multiple locatives are used, the intersection of their denoted regions plays a location role. This describes the truth conditions and anaphoric potential of such uses, and predicts correctly the permutability and omissibility of locatives.

8.2 Qualifications

We don't claim that all reference to regions is through upwardly monotonic location arguments. On the contrary, regions can stand in relations in a variety of other ways.
To take an obvious case, the subregion relation is upwardly monotonic (transitive), but only in one (superregion) argument--it's not upwardly monotonic in the first (subregion) argument. Here are two more fairly transparent examples of reference to locations that don't involve the location arguments of predicates, and therefore aren't upwardly monotonic:

(33) Tom likes it in Mendocino.
     /∴ Tom likes it in California.

     George VI ruled in England.
     /∴ George VI ruled in Europe.

We claim that the regions referred to in (33) aren't location arguments, but rather theme (or patient) arguments. There are other examples of monotonicity failing that are less easily dismissed, however:

(34) It is the tallest in Palo Alto.
     /∴ It is the tallest in California.

     He is alone in the dining room.
     /∴ He is alone in the house.

The apparent location argument of these relations (and of all superlatives) is especially interesting because it not only fails to be upwardly monotonic, it even turns out to be downwardly monotonic. We wish to deny that these phrases denote regions which play location roles--more specifically, we allow that the phrases denote regions, but we distinguish the semantic role that the regions play. In the case of LOCATION arguments, the intended semantics requires that the relation hold somewhere within the region denoted. In the case of (34), however, the relation can only be said to hold if it holds throughout the region denoted. It is this implicit (universal) quantification that explains the failure of upward monotonicity, of course. We symbolize this sort of role as "throughout," and represent the downwardly monotonic (34) in the following way:

(35) (TALLEST thm:x
             throughout:IN(Palo-Alto))

(We emphasize that this is intended to illustrate the distinction between the various semantic roles that locations play--it is not proffered as a serious analysis of the superlative.)

8.3 Future Directions

We'd like to improve this account in several ways: first, we'd like to understand the interface between the syntax and semantics more rigorously. Section 4 explains what is unusual about our views here, but the model of syntax/semantics cooperation it suggests is something we'd like to explore. Second, we need an account of preposed locatives, as Section 3.2 admits. Third, we'd like to describe the relationship between predicates relating objects and regions on the one hand with regions occupied by the objects, as Section 6 shows. Fourth, we'd be interested in exploring the relation between our work on the semantics of locatives with work on the contextually dependent interpretation of locatives, such as the work by Herskovits [9] and Retz-Schmidt [16].

9 Acknowledgements

We're indebted to Carl Pollard for the suggestion to use the algebraic operator '⊓ᵣ'. We'd like to thank him, Barbara Partee, David Dowty, and our colleagues in the Natural Language Project at Hewlett-Packard Laboratories, especially Bill Ladusaw, for discussion and criticism of the ideas presented here.

References

[1] James Allen. Natural Language Understanding. Benjamin/Cummings, Menlo Park, 1987.

[2] Renate Bartsch. Adverbialsemantik. Athenäum, Frankfurt, 1972.

[3] Jon Barwise and John Perry. Situations and Attitudes. MIT Press, Cambridge, 1983.

[4] Noam A. Chomsky. Deep structure, surface structure, and semantic interpretation. In Danny D. Steinberg and Leon A. Jacobovits, editors, Semantics: An Interdisciplinary Reader in Philosophy, Linguistics, and Psychology, pages 183-216.
8.3 Future Directions

We'd like to improve this account in several ways: first, we'd like to understand the interface between the syntax and semantics more rigorously. Section 4 explains what is unusual about our views here, but the model of syntax/semantics cooperation it suggests is something we'd like to explore. Second, we need an account of preposed locatives, as Section 3.2 admits. Third, we'd like to describe the relationship between predicates relating objects and regions on the one hand with regions occupied by the objects, as Section 6 shows. Fourth, we'd be interested in exploring the relation between our work on the semantics of locatives with work on the contextually dependent interpretation of locatives, such as the work by Herskovits [9] and Retz-Schmidt [16].

9 Acknowledgements

We're indebted to Carl Pollard for the suggestion to use the algebraic operator '∩'. We'd like to thank him, Barbara Partee, David Dowty, and our colleagues in the Natural Language Project at Hewlett-Packard Laboratories, especially Bill Ladusaw, for discussion and criticism of the ideas presented here.

References

[1] James Allen. Natural Language Understanding. Benjamin/Cummings, Menlo Park, 1987.
[2] Renate Bartsch. Adverbialsemantik. Athenäum, Frankfurt, 1972.
[3] Jon Barwise and John Perry. Situations and Attitudes. MIT Press, Cambridge, 1983.
[4] Noam A. Chomsky. Deep structure, surface structure, and semantic interpretation. In Danny D. Steinberg and Leon A. Jakobovits, editors, Semantics: An Interdisciplinary Reader in Philosophy, Linguistics, and Psychology, pages 183-216. Cambridge University Press, Cambridge, 1970.
[5] Erik Colban. Prepositional phrases in situation schemata. In Jens Erik Fenstad, Per-Kristian Halvorsen, Tore Langholm, and Johan van Benthem, editors, Situations, Language, and Logic, pages 133-156. Reidel, Dordrecht, 1987.
[6] Lewis G. Creary and Carl J. Pollard. A computational semantics for natural language. In Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, pages 172-179, 1985.
[7] M. J. Cresswell. Adverbial Modification: Interval Semantics and its Rivals. D. Reidel, Dordrecht, 1985.
[8] Donald Davidson. The logical form of action sentences. In Nicholas Rescher, editor, The Logic of Decision and Action, pages 81-95. University of Pittsburgh Press, Pittsburgh, 1967.
[9] Annette Herskovits. Space and Prepositions in English: Regularities and Irregularities in a Complex Domain. Cambridge University Press, Cambridge, England, 1985.
[10] Laurence R. Horn. Metalinguistic negation and pragmatic ambiguity. Language, 61(1):121-174, 1985.
[11] Ray Jackendoff. Semantic Interpretation in Generative Grammar. MIT Press, Cambridge, 1972.
[12] Ray Jackendoff. Semantics and Cognition. MIT Press, Cambridge, Massachusetts, 1983.
[13] Henry A. Kautz. Formalizing spatial concepts and spatial language. In Hobbs et al., editors, Commonsense Summer: Final Report, pages 2.1-2.45. CSLI, 1985.
[14] Sally McConnell-Ginet. Adverbs and logical form. Language, 58(1):144-184, 1982.
[15] Richard Montague. English as a formal language. In Bruno Visentini, editor, Linguaggi nella società e nella tecnica. Edizioni di Comunità, Milan, 1970.
[16] Gudula Retz-Schmidt. Various views on spatial prepositions. AI Magazine, 9(2):95-105, 1988.
[17] Mats Rooth. Association with Focus. PhD thesis, University of Massachusetts at Amherst, 1986.
[18] Norman K. Sondheimer. Reference to spatial properties. Linguistics and Philosophy, 2(2), 1978.
[19] Leonard Talmy. How language structures space. In Herbert Pick and Linda Acredolo, editors, Spatial Orientation: Theory, Research, and Application. Plenum Press, 1983.
[20] Alice ter Meulen. Locating events. In Jeroen Groenendijk, Dick de Jongh, and Martin Stokhof, editors, Foundations of Pragmatics and Lexical Semantics, pages 27-40. Foris, Dordrecht, 1986.
[21] Richmond Thomason and Robert Stalnaker. A semantic theory of adverbs. Linguistic Inquiry, 4(2), 1973.
GETTING AT DISCOURSE REFERENTS

Rebecca J. Passonneau
UNISYS, Paoli Research Center
P.O. Box 517, Paoli, PA 19301, USA

ABSTRACT

I examine how discourse anaphoric uses of the definite pronoun it contrast with similar uses of the demonstrative pronoun that. Their distinct contexts of use are characterized in terms of two contextual features--persistence of grammatical subject and persistence of grammatical form--which together demonstrate very clearly the interrelation among lexical choice, grammatical choices and the dimension of time in signalling the dynamic attentional state of a discourse.

1 Introduction

Languages vary in the number and kinds of grammatical distinctions encoded in their nominal and pronominal systems. Language specific means for explicitly mentioning and re-mentioning discourse entities constrain what Grosz and Sidner refer to as the linguistic structure of discourse [2]. This in turn constrains the ways in which discourse participants can exploit linguistic structure for indicating or inferring attentional state. Attentional state, Grosz and Sidner's term for the dynamic representation of the participants' focus of attention [2], represents--among other things--which discourse entities are currently most salient. One function of attentional state is to help resolve pronominal references. English has a relatively impoverished set of definite pronouns in which gender is relevant only in the 3rd person singular, and where number--a fairly universal nominal category--is not relevant in the 2nd person. Yet even within the English pronominal system, there is a semantic contrast that provides language users with alternative means for accessing the same previously mentioned entities, therefore providing investigators of language with an opportunity to explore how distinct lexicogrammatical features correlate with distinct attentional processes. This is the contrast between demonstrative and non-demonstrative pronouns. In this paper I examine how certain uses of the singular definite pronoun it contrast with similar uses of the singular demonstrative pronoun that.

I present evidence that the two pronouns it and that have pragmatically distinct contexts of use that can be characterized in terms of a remarkably simple set of preconditions. First, in §2 I delineate the precise nature of the comparison made here. In §3.1, I describe the methods I used to collect and analyze a set of data drawn from ordinary conversational interactions. The result of my statistical analysis was a single, highly significant multi-dimensional distributional model, showing lexical choice to be predicted by two features of the local context. In §3.2, I summarize the statistical results. They were strikingly clearcut, and provide confirmation that grammatical choices made by participants in a dialogue prior to a particular point in time correlate with lexical choice of either participant at that time.

Of over a dozen different variables that were examined, two alone turned out to have enormous predictive power in distinguishing between the typical contexts for the two pronouns. Very briefly, the first variable, persistence of grammatical subject, indicates whether both the antecedent and pronoun were subjects of their respective clauses. The second, persistence of grammatical form, indicates whether the antecedent was a single word phrase or a multi-word phrase, and if the latter, whether the phrase was syntactically more clause-like or more noun-like.
Both variables point up the significance of the temporal dimension of discourse in two ways. The first has to do with the evanescence of surface syntactic form--the two features pertaining to the grammatical means used to refer to entities are relevant only for a short time, namely across two co-references [17]. The second has to do with the dual nature of referring expressions--as noted by Isard they are constrained by the prior context but immediately alter the context and become part of it [3] [18]. In §4 I discuss how the contrast between the definite and demonstrative pronouns is constrained by the local discourse context, and how the constraining features of the local context in combination with the lexical contrast provides evidence about modelling the attentional state of discourse.

2 Comparability of it and that

Previous work has related the discourse deictic uses of that to the global segmental structure of discourse, and tied the contrast between it and that to the distinction between units of information introduced at the level of discourse segments versus units of information introduced at the level of the constituent structure of sentences [8] [12] [18]. This paper deals only with the latter category. That is, I am concerned with entities that are evoked into the discourse model by explicit mentions, i.e., noun phrases [19] or other intra-sentential constituents, and with the difference between accessing these referents via the definite versus the demonstrative pronoun. Thus the data reported on here are restricted to cases where one of these pronouns has occurred with an explicit linguistic antecedent that is a syntactic argument.¹ A pronoun's antecedent was taken to be a prior linguistic expression evoking (or re-evoking) a discourse entity that provided a pronoun's referent. The two expressions were not constrained to be strictly coreferential since a wide variety of semantic relationships may hold between cospecifying expressions [1] [16] [19].

Syntactically it and that have very similar--though not identical--privileges of occurrence.² The following bullets briefly summarize their syntactic differences.

• that, but not it, is categorially ambiguous, occurring either as a determiner or as an independent pronoun
• it, but not that, has a reflexive and a possessive form (itself/*thatself, its/*thats)
• it, but not that, may occur in prepositional phrases where the pronoun in the PP corefers with a c-commanding NP (the table with a drawer in it/*that)
• it, but not that, can be used non-referentially (it/*that is raining; it/*that is hard to find an honest politician)

¹Pronouns whose antecedents were independent tensed clauses or clausal conjuncts were excluded from consideration here; I reported on a much larger class of contexts in [12].
²Contexts which discriminate between them syntactically occurred very rarely in my data.

These differences, though they may ultimately pertain to the phenomena presented here, won't be discussed further. In general, that can occur with the same syntactic types of antecedents with which it occurs. Thus, apart from prosodic differences--which were not considered here--the two pronouns are extremely comparable semantically as well as syntactically. Both pronouns are 3rd person, non-animate, and singular. They are thus primarily distinguished by the semantic feature of demonstrativity. An unforeseen but interesting fact is that the proximal demonstrative this occurred very rarely.
So the relevant semantic contrast was that between definiteness and demonstrativity, and did not include the proximal/non-proximal contrast associated with this versus that. While I had originally planned to investigate the contrast between the two demonstrative pronominals as well, there were only 8 tokens of this out of ~700 pronouns whose antecedents were sentence internal arguments. This strongly suggests that however the attentional space of discourse entities is structured, it is not as differentiated as in the spatio-temporal domain, where the contrast between this and that is apparently more relevant. With respect to the contexts examined here, the proximal/non-proximal contrast between this and that is irrelevant.

A stretch of discourse evokes a set of discourse entities, some of which can be accessed pronominally. Of these, some can be accessed by it, and some can be accessed by that. The data I present suggest that the availability of focussed entities for definite and demonstrative pronominal reference differs, and that the consequences on the subsequent attentional state also differs. The conditions on and consequences of speaker choice of it or that must be pragmatic, and further, it is likely that the choice pertains to attentional state, since both pronominalization and demonstrativity play such a large role in indicating the attentional status of their referents (cf. [8], [15], [18]).

The following excerpts from my conversational data illustrate the syntactic variety of the pronouns' antecedents, and give a sense as well that substituting one pronoun for another sometimes results in an equally natural sounding discourse, with the difference being a very subtle one, as in 2.³ Occasionally, the substitution creates discourse that is pragmatically odd, as in 6.

1. A: so [you plan to] work for a while, save some money, travel--B: save SOME MONEY and then blow IT (/THAT) off and then go to school
2. what does NOTORIETY mean to you, where does THAT (/IT) put you
3. I didn't really want TO (PAUSE) TEACH PEOPLE, THAT (/IT) wasn't the main focus
4. so in some ways, I'd like TO BE MY OWN BOSS, so THAT (/IT)'s something that in some way appeals to me very much
5. the drawback is THAT I'M ON CALL 24 HOURS A DAY but IT (/THAT) also means I get different periods of time off
6. I don't think EACH SITUATION IS INHERENTLY DIFFERENT FROM THE OTHER, at least, THAT (/IT)'s not the way I look at it

³The relevant pronoun tokens and their antecedents appear in CAPS, and the substituted pronoun appears in parentheses to the right of the original. A: and B: are used to distinguish two speakers, where relevant. Text enclosed in brackets was added by the author to clarify the context.

In this paper, I focus on the linguistic features of the local context, i.e., the context containing a pronoun token and its antecedent, in order to investigate the relationship between the pronominal features of demonstrativity and definiteness and the local attentional state of a discourse.

3 Statistical Analysis of the Conversational Data

3.1 Method

Psychologists and sociologists studying face-to-face interaction have argued that the baseline of interactive behavior is dyadic rather than monadic [4] [9]; similarly, in understanding how speakers cooperatively construct a discourse, the baseline behavior must be dialogic rather than monologic. The analytic methods employed here were adapted from those used in studying social interaction among individuals. I analyzed the local context of lexical choice between it and that in four career-counseling interviews. The interviews
took place in a college career-counseling office, and were not staged. The final corpus consisted of over 3 1/2 hours of videotaped conversation between counselors and students. This provided an excellent source of data, with the speakers contributing tokens of it/that at the rate of roughly 1 in every 2 sentences, or a total of 1,183 tokens in all. Nearly all of these were indexed and coded for 16 contextual variables characterizing the linguistic structure of the local context.⁴ These variables fell into two classes: those pertaining to the LINEAR ORGANIZATION OF DISCOURSE, or to the respective locations of the antecedent and pronoun,⁵ and those pertaining to the SYNTACTIC FORM of the antecedent expression.

Statistical analysis was used as a discovery procedure for finding the strongest determinants of lexical choice, rather than to test a particular hypothesis. The goal was to find the best fit between the contextual variables and lexical choice, i.e., to include in a final statistical model only those variables which were highly predictive. I used log-linear statistical methods to construct a single best multi-dimensional model; log-linear analysis permits the use of the x-square statistic for greater than 2-dimensional tables. This is advantageous, because multi-dimensionality imposes more constraints on the statistical model, and is thus even more reliable than 2-dimensional tables in revealing non-chance correlations. In addition to multi-dimensionality, three other criteria guided the selection of the best fit: a statistically significant probability for the table, meaning a probability of 5.0% or lower; statistical independence of the predictive variables from one another, i.e., that they represented truly distinct phenomena, rather than overlapping factors; and finally, that the distributional patterns were the same for each individual speaker and for each separate conversation, in order to justify pooling the data into a single set.⁶

The antecedents of some of the pronouns occurred in the interlocutor's speech, but change of speaker within the local context had no effect on lexical choice, either alone, or in concert with other factors.

⁴Certain repetitions, e.g., false starts, were excluded from consideration; cf. chapter 2 of [13].
⁵Location was construed very abstractly, and included, e.g., measures of whether the antecedent and pronoun were in the same, adjacent, or more distant sentences; how deeply embedded syntactically the antecedent and pronoun were; how many referential expressions with the same or conflicting semantic features of person, number and gender intervened between the pronoun and its antecedent; and their respective grammatical roles [13].
⁶The reliability of the data was tested by comparing within- and across-subjects statistical measures; i.e., I took into account the data for the conversations as a whole, each individual conversation, and each individual speaker [13].
Before pooling the data from all conversations and all individual speakers into a single population, the variability across conversations and speakers was tested and found to be insignificant. Thus the results presented below represent a speaker behavior--lexical choice of pronoun--that is extraordinarily consistent across speakers, that is independent of whether a pronoun and its antecedent occurred in the same speaker's turn, independent of individual speaker and even of individual conversation. Consequently, it is justifiable to assume that the factors found to predict lexical choice pertain to communicatively relevant purposes. In other words, whatever these factors are, they presumably pertain not only to models of speech production, but also to models of speech comprehension.

3.2 Results

Table 1 gives the distribution of pronouns across the relevant contexts and gives the probabilities and x-squares for the two contextual variables and their intercept, i.e., the interaction between them.⁷

Absolute Distributions

Form of      Gram'l        IT    THAT   Row
Ant't        Roles                      Totals
Pronoun      Subj-Subj    147     31     178
             Other        110     54     164
NP           Subj-Subj     18      6      24
             Other         90     88     178
Non-NP Arg   Subj-Subj      3      3       6
             Other         25     66      91
Other        Subj-Subj      5      2       7
             Other         18     12      30
Column Totals             416    262     678

Main and Interaction Effects

Source                Degrees of    X-Square   Probability
                      Freedom
Intercept                  1          12.71      0.0004
Form of Antecedent         3          39.37      0.0001
Grammatical Roles          1          16.87      0.0001
Likelihood Ratio           3           0.35      0.9509

Table 1: A Multi-Dimensional Statistical Model of Lexical Choice

Form of         Subsequent Pronoun
Antecedent      Subject            Non-Subject
                IT       THAT      IT       THAT
Pronominal      147      31        39       19
Subject         96.0     48.7      48.7     42.4
                27.1     6.4       1.9      12.9
Pronominal      37       21        34       14
Non-Subject     43.1     21.9      21.9     19.1
                .9       .0        6.7      1.3
NP              18       6         11       10
Subject         18.3     9.3       9.3      8.1
                .0       1.1       .3       .1
NP              43       33        36       45
Non-Subject     63.9     32.4      32.4     28.2
                6.8      .0        .4       10.0
Non-NP          8        5         1        1
Subject         6.1      3.1       3.1      2.7
                .6       1.2       1.4      1.1
Non-NP          23       44        19       33
Non-Subject     48.4     24.6      24.6     21.4
                13.3     15.3      1.3      6.3

Table x-Square: 116.3    Degrees of Freedom: 7    Probability: 0.001

Table 2: A Two-Way Distributional View of the Data, showing Absolute Frequency, Expected Frequency, and x-squares for each Cell

The very low probability of 0.04% for the intercept indicates that the two variables are clearly independent, or in other words, represent two distinct contexts. The exceedingly low probabilities of 0.01% for the contextual variables and the highly significant table x-square (i.e., close to 1) indicate that the model is extremely significant.⁸ The correlation between the dependent dimension of lexical choice and the two independent dimensions, persistence of grammatical subject and persistence of grammatical form, presents an intuitively very satisfying view--yet not an obvious one a priori--of how all three variables conspire together to convey the current attentional status of a discourse referent. First I will summarize the effects of the two contextual variables one at a time. Then I will review the distributionally significant facts as a whole.

⁷Note that the 4th category of Antecedent--Other--includes a mixture of atypical arguments, primarily adverbial in nature, like the adverbial argument of go in go far.
⁸The cutoff is generally 5%; 1% is deemed to be very significant.
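The cell x-squares in Table 2 are the familiar (observed - expected)²/expected contributions. As a quick check, here is a minimal Python sketch (mine, not part of the original analysis; the labels are abbreviations introduced for illustration) that reproduces the first row of cell values:

# Cell chi-square contributions for Table 2: (O - E)^2 / E.
# Observed and expected values are transcribed from the table.
observed = {
    ("Pro-Subj", "IT-Subj"): 147, ("Pro-Subj", "THAT-Subj"): 31,
    ("Pro-Subj", "IT-NonSubj"): 39, ("Pro-Subj", "THAT-NonSubj"): 19,
}
expected = {
    ("Pro-Subj", "IT-Subj"): 96.0, ("Pro-Subj", "THAT-Subj"): 48.7,
    ("Pro-Subj", "IT-NonSubj"): 48.7, ("Pro-Subj", "THAT-NonSubj"): 42.4,
}
for cell, o in observed.items():
    e = expected[cell]
    print(cell, round((o - e) ** 2 / e, 1))
# Prints 27.1, 6.4, 1.9, 12.9 -- the first row of cell x-squares in
# Table 2; large values mark the contexts that depart most from chance.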
First Dimension: Persistence of Grammatical Subject. The first dimension of the model is binary and the two contexts it defines are in diametric opposition to one another; it was likely to occur in exactly one of the two contexts, and that was likely to occur in the opposing context. If both referring expressions were subjects, then the lexical choice was far more likely to be it than that. All it took for the balance to swing in favor of the demonstrative was for either the pronoun itself or for its antecedent to be a non-subject. The two relevant contexts, then, are:

• those in which both the antecedent and the target pronoun are syntactic subjects; ⇒ IT
• all other contexts. ⇒ THAT

Parallelism has sometimes been suggested as an organizing factor across clauses. It is certainly a strong stylistic device, but did not make a strong enough independent contribution to the statistical model to be included as a distinct variable. To repeat, the crucial factor was found to be that both expressions were subjects, not that both had the same grammatical function in their respective clauses. In §4.1 I will review the relationship of these results to the centering literature [1] [5] [6].

Second Dimension: Persistence of Grammatical Form. While many grammatical distinctions among sentence constituents are possible, the syntactic form of a pronoun's antecedent correlated with the choice between it and that in the following very specific way. The 3 discriminating contexts were where the antecedent was:

• any pronoun--the lexical choice for an antecedent pronoun had no effect on the lexical choice of the subsequent pronoun; ⇒ IT
• a canonical NP headed by a noun (including nominalizations); ⇒ IT or THAT
• and all other types of constituents. ⇒ THAT

The latter category included gerundives, infinitival expressions, and embedded finite clauses.⁹ For contexts with a pronominal antecedent, the lexical choice was far more likely to be it. For canonical NP antecedents, it and that were equally likely, regardless of the type of head. For other types of constituents, that was far more likely. Thus there are two opposing contexts and one which doesn't discriminate between the two pronouns, i.e., a context in which the opposition is neutralized.

⁹Cf. [14] for a detailed discussion of how the precise dividing line between types of antecedents was determined.

The dynamic component of this dimension is that it indicates, for a consecutive pair of co-specifying expressions, whether there has been a shift towards a surface form that is syntactically more compact and semantically less explicit, and if so, how great a shift. In the first context, where the antecedent is already pronominal, there is no shift, and it has a much higher probability of occurrence than that. The context in which there is a shift from a lexical NP to a phrasal NP, i.e., a shift from a reduced form to an unreduced one, but no categorial shift, doesn't discriminate between the two pronouns. The context favoring that is the one in which there is not only a shift from a single word to a multi-word phrase, but also a change in the categorial status of the phrase from a non-NP constituent to a lexical NP.

Full 3-way model. Table 2 displays the data in a finer-grained two-dimensional x-square table in order to show separately all 4 of the possible outcomes, i.e., it or that as a subject or non-subject.
In this table, the row headings represent the antecedent's form and grammatical role; the column headings represent the lexical choice and grammatical role of the subsequent pronominal expression. Each cell of the table indicates the absolute frequency, the expected frequency given a non-chance distribution, and the cell x-square, with the latter in boldface type to indicate the significant cells. This is a somewhat more perspicuous view of the data because it can be displayed schematically in terms of initial states, final states, and enhanced, suppressed or neutral transitions, as in Fig. 1. However, it is also a somewhat misleading transformation of the 3-dimensional view given in table 1, because it suggests that the grammatical role of a pronoun and that of its antecedent are independent factors. Since the statistical model shown in table 1 is actually the best fit of the data, better than other models that were tested in which the grammatical role of each expression was treated separately [13], it is crucial to recognize that the statistically significant factor is the pair-wise comparison of subject status.

Large cell x-squares in Table 2 indicate the significant contexts, and a comparison of the absolute and expected frequencies in these cells indicate whether the context is significantly frequent or significantly infrequent. Thus there are 3 types of cells in the table representing the contexts of lexical choice as chance events, as enhanced events, or as suppressed events. In Fig. 1, I have translated the table into a set of 3 types of state transitions. Initial states are in the left column and final states in the right one. The 3 types of transition are one which is unaffected by the contrast between it and that (no symbol), one which is enhanced (⊢), and one which is suppressed (⊣). The initial states in boldface indicate for each antecedent type which of the two grammatical role states was more likely, subject or non-subject. Absence of final states for the nonNP-Subj initial state indicates that this set of contexts is extremely rare.

 1. Pro-Subj       ⊢ IT-Subj
 2.                ⊣ THAT-Subj
 3.                  IT-NonSubj
 4.                ⊣ THAT-NonSubj
 5. Pro-NonSubj      IT-Subj
 6.                  THAT-Subj
 7.                ⊢ IT-NonSubj
 8.                  THAT-NonSubj
 9. NP-Subj          IT-Subj
10.                  THAT-Subj
11.                  IT-NonSubj
12.                  THAT-NonSubj
13. NP-NonSubj     ⊣ IT-Subj
14.                  THAT-Subj
15.                  IT-NonSubj
16.                ⊢ THAT-NonSubj
17. NonNP-Subj
18.
19.
20.
21. NonNP-NonSubj  ⊣ IT-Subj
22.                ⊢ THAT-Subj
23.                  IT-NonSubj
24.                ⊢ THAT-NonSubj

Figure 1: Schematic Representation of Table 2 as a set of State Transitions

In the following section, I discuss the relation of these events to an abstract model of attentional state.
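Read as a lookup table, Fig. 1 supports a simple predictive use. The following Python sketch is an illustrative encoding of the figure (the encoding and the names are introduced here, not the author's):

# Transitions from Fig. 1 whose cell x-squares in Table 2 are
# significant. '+' marks enhanced, '-' marks suppressed; pairs not
# listed are neutral (chance) contexts.
SIGNIFICANT = {
    ("Pro-Subj", "IT-Subj"): "+",            # context 1
    ("Pro-Subj", "THAT-Subj"): "-",          # context 2
    ("Pro-Subj", "THAT-NonSubj"): "-",       # context 4
    ("Pro-NonSubj", "IT-NonSubj"): "+",      # context 7
    ("NP-NonSubj", "IT-Subj"): "-",          # context 13
    ("NP-NonSubj", "THAT-NonSubj"): "+",     # context 16
    ("NonNP-NonSubj", "IT-Subj"): "-",       # context 21
    ("NonNP-NonSubj", "THAT-Subj"): "+",     # context 22
    ("NonNP-NonSubj", "THAT-NonSubj"): "+",  # context 24
}

def transition_type(antecedent, pronoun):
    """Return 'enhanced', 'suppressed', or 'neutral' for an
    (antecedent-state, pronoun-state) pair, per Fig. 1."""
    mark = SIGNIFICANT.get((antecedent, pronoun))
    return {"+": "enhanced", "-": "suppressed", None: "neutral"}[mark]

print(transition_type("Pro-Subj", "IT-Subj"))   # enhanced
print(transition_type("NP-Subj", "THAT-Subj"))  # neutral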
4 Discussion

The outcome of this study is not a model of attentional processes per se, but rather, a set of factors pertaining to attentional structure that elucidates the shifting functions of the demonstrative pronoun in English discourse. The particular function served by that seems to depend on what functional contrasts are available given the current attentional state.

It is most useful to think of the data in terms of two major categories of phenomena. The first category is where a discourse entity has already been mentioned pronominally. In this case, maintenance of reference in subject grammatical role is a particularly significant determinant of the choice between it and that. This effect is discussed in §4.1 in relation to the notion of centering. The second category is where a discourse entity most recently evoked by a multi-word phrase is subsequently referenced by a pronoun. While grammatical role is relevant here, its relevance seems to depend on a more salient distinction pertaining to the syntactico-semantic type of the discourse entity, as discussed in §4.2.

4.1 Definite/Demonstrative Pronouns and Centering

The literature on attentional state has shown that both pronominalization and grammatical role affect the attentional status of a discourse entity. In this section I will show how the use of the definite pronoun it conforms in particular to the predictions made by Kameyama [6] [5] regarding canonical and non-canonical center-retention, and that the demonstrative pronoun is incompatible with center-retention.

The centering model predicts that an utterance will contain a referent that is distinguished as the backward looking center (Cb) [1], and that if the Cb of an utterance is coreferential with the Cb of the prior utterance, it will be pronominalized [1]. Kameyama [6] proposes that there are two means for retaining a discourse entity as the Cb, canonical center-retention--both references in subject role--and non-canonical center retention--neither reference in subject role. As shown in Fig. 1, the most enhanced context for lexical choice of it (context 1) was where both the pronoun and its pronominal antecedent were subjects, i.e., canonical center-retention. The next most enhanced context for it (context 7) was where neither the pronoun nor its pronominal antecedent were subjects, i.e., non-canonical center-retention. Thus, the definite pronoun correlates with both canonical and non-canonical center retention.

Lexical choice of it is actively suppressed in contexts which are incompatible with center-retention. Note in Fig. 1 that if the antecedent is neither a pronoun nor a subject, a subsequent reference via it in subject role is suppressed (contexts 13 and 21). The only (non-rare) context where an it subject is neither enhanced nor suppressed is where the antecedent is a canonical NP in subject role (context 9) (cf. §4.2).

The demonstrative pronoun is actively suppressed in the case of canonical center-retention (context 2); i.e., given two successive pronominal references to the same entity where reference is maintained in subject role, the referent's attentional state is such that it precludes demonstrative reference. Use of that is also suppressed if the antecedent is a potential candidate for canonical center retention, even if reference is not maintained in subject role (context 4).

Attentional state is only one component of a discourse structure. The discourse model as a whole will contain representations of many of the things to which the discourse participants can subsequently refer, including discourse entities evoked by NPs, and additionally, as argued by Webber [18], discourse segments. Webber notes that discourse segment referents may have a different status from the discourse entities evoked by NPs, at least until they have been pronominally referenced. However, she suggests that when a demonstrative pronoun refers to a discourse entity, it accesses that entity by a process which first involves accessing the discourse segment in which the discourse entity is introduced.
In other words, she posits two distinct referential processes, deictic and anaphoric reference, and suggests that even when a demonstrative pronoun refers to a discourse entity, the process of finding the referent is distinct in kind from anaphoric reference to the same entity. While I have no evidence that bears directly on such a claim, my data do indicate that some entities in a discourse segment are ordinarily unavailable via the demonstrative pronoun, namely entities that would be expected canonical centers, as described in the preceding paragraph. Thus my data support the view that there are distinct processes for accessing entities in the model.

It is relevant to note here that the notion of Cb is generally discussed in terms of links between successive utterances. Since there is an extraordinary frequency of conjoined sentences in conversational language, I distinguished between utterances and independent clauses within an utterance. The successive references in my data were in successive sentences a majority of the time (roughly 2/3; cf. [13]), but were sometimes separated by one or more sentences (roughly 1/6) and sometimes occurred in the same sentence (roughly 1/6). This distance factor had no correlation with lexical choice of pronoun, which suggests that discourse segment structure interacts with centering. The relevant local context for center-retention may not be successive sentences/utterances, but rather, successive sentences/utterances within the same discourse segment. In any case, for the data presented here, the relevant local context consisted of two successive co-specifying phrases, not two successive utterances.

Since the primary objective of this study was to examine various features of the context immediately preceding a given type of pronoun, rather than to track the discourse history of particular entities, little can be said here about the general case of multiple successive references to the same entity. However, I did investigate a subset of this general case, namely, successive pronominal references to the same entity where the initial mention was a canonical NP, and where each next co-specifying pronoun served as the antecedent for a subsequent pronoun. I refer to these as pronoun chains.¹⁰ The relative likelihood of it and that was the same for the first slot in the chain, which conforms to the general distribution for pronouns with NP antecedents. The ratio of it to that in the last position of a chain conforms to chance, i.e., it equals the ratio of it to that in the pronoun chain sample. But within a chain, that is strongly predicted by persistence of grammatical form. The demonstrative occurred rarely within chains, but where it did occur, either the demonstrative token or its antecedent was a non-subject. This was found to be the only factor pertaining to linguistic structure that affected the occurrence of that within a pronoun chain.

A final set of conclusions derived from the pronominal initial states in Fig. 1 pertains to the non-predictive contexts, i.e., those which neither enhance nor suppress center-retention, and those which neither enhance nor suppress demonstrative reference. These are cases where there is either a shift in grammatical role, or where the lexical choice is that (contexts 3, 5, 6 and 8). When a center is not retained across two successive utterances (in the same discourse segment), then it is likely that the global context is affected [1], perhaps by a center-shift (cf.
[5]), or by a segment boundary (cf. [7], [11]). Centers seem generally to be unavailable for demonstrative reference, but contexts 6 and 8 in Fig. 1 perhaps represent a mechanism whereby an entity maintained as center can become available for demonstrative reference; e.g., context 6 may coincide with the chaining context discussed in the preceding paragraph, whereby a locally focussed entity can be accessed by that just in case the prior reference was a non-subject. Context 8 suggests that demonstrative reference is more available in contexts of non-canonical center retention than canonical center retention.

¹⁰The term seems to have appeared in the philosophical and linguistic literature at about the same time, e.g., in works by K. Donnellan, C. Chastain, M. Halliday and D. Zubin. There were a total of 101 such chains comprising 305 total pronoun tokens; they ranged in length from 2 to 13 pronouns.

4.2 Non-Centered Discourse Entities

I have argued elsewhere that the crucial distinction for the category of non-pronominal antecedents is the contrast between true NPs with NP syntax, versus all other types of syntactic arguments ([12] [14]). This raises two important issues pertaining to the status of the discourse entities evoked by NPs versus other kinds of arguments. The first is that if non-NP arguments evoke discourse entities, which they certainly must, such entities apparently have a different status in the model than discourse entities evoked by NPs, given that the combination of lexical choice between it and that and grammatical function so clearly distinguish them. The second issue is that although the difference in status seems--at first blush--to correlate with a syntactic property, the distinction may ultimately be semantic in nature. I will discuss each issue in turn.

Two of the non-pronominal initial states in Fig. 1 are distinguished by neither enhancing nor suppressing any of the possible transitions to it or that: NP subjects (9-12), and non-NP subjects (17-20). The extreme rarity of the latter suggests that non-NPs don't occur as grammatical subjects, or that when they do, they are not likely to be re-evoked by a pronoun. On the other hand, NP subjects are fairly frequent in the contexts where it or that occurs with a non-pronominal antecedent, thus the absence here of enhanced or suppressed transitions suggests that an entity mentioned as an NP subject is free to be accessed in a variety of ways, or more precisely, that it has a relatively unspecified attentional state. It is neither a particularly likely Cb nor is it particularly available or unavailable for demonstrative reference.

The two remaining non-subject initial states, i.e., NP non-subjects and non-NP non-subjects, both suppress subsequent reference via it subjects, as mentioned in the previous section. While NP subjects apparently have a somewhat unspecified attentional status, NP non-subjects enhance the lexical choice of non-subject that. It appears that discourse entities evoked by NPs which are not subjects are in an attentional state that is quite different from that of canonical center retention.

It is especially interesting that when the antecedent is a non-NP non-subject, a subsequent pronominal reference is most likely to be demonstrative, and most likely to be a subject.¹¹ The enhancement of a that-subject context is completely contrary to the pattern established for subjects and for the demonstrative pronoun.

¹¹Cf. examples 3-6 in §2 for illustrations.
These facts contribute to the view that entities evoked by non-NP constituents have a special status, but what this status is remains to be determined. In previous work, I emphasized the syntactic distinction with respect to lexical choice between it and that [14]. Although the most obvious difference is the purely syntactic one, the syntactic distinction between NP and non-NP constituents has a number of semantico-pragmatic consequences. In discussing the nominal and temporal anaphora within Kamp's framework of discourse representation structures (DRS), Partee raised the question of the difference in status between event-describing clauses and nominalizations [10]. Independent clauses differ from the class of non-NP constituents under consideration here in that the latter occur as arguments of superordinate verbs, and are thus entities participating in a described situation, as well as descriptions of situations. However, true noun phrases--whether they describe events or not--can have definite or indefinite determiners, and cannot have tense or any aspectual categories associated with the verb. The study presented here brings us no closer to a solution to the questions posed by Partee regarding the ontology and representation of different kinds of event descriptions, but it does offer further confirmation that entities evoked by NP and non-NP constituents have a different conceptual status, given the different possibilities for lexical choice and grammatical role of a subsequent pronominal mention.

5 Conclusion

The following bullets encapsulate the observations made in §4:

• Lexical choice of it indicates canonical or non-canonical center retention
• Lexical choice of it in subject role conflicts with non-subject antecedents, but is compatible with an NP-subject antecedent
• Lexical choice of that blocks canonical center retention
• Lexical choice of that may be more compatible with non-canonical center retention
• Lexical choice of that in subject role is most likely when the antecedent is a non-NP constituent
• Lexical choice of that is enhanced when the antecedent is a non-NP constituent
• Lexical choice of that is enhanced when the antecedent NP is a non-subject
• NP subjects have a relatively unspecified attentional status

ACKNOWLEDGEMENTS

The data collection and statistical analysis were supported by Sloan Foundation Grant 1-5680-22-4898. The computational analysis and preparation of the paper were supported by DARPA Contract N00014-85-C-0012. Many thanks to Elena Levy, Deborah Dahl, Megumi Kameyama, Carl Weir, Bonnie Webber and David Searls for helpful discussion, commentary and criticism.

Bibliography

[1] B. J. Grosz, A. K. Joshi, and S. Weinstein. Providing a unified account of definite noun phrases in discourse. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, pages 44-50, 1983.
[2] B. J. Grosz and C. L. Sidner. Attention, intentions and the structure of discourse. Computational Linguistics, 175-204, 1986.
[3] S. Isard. Changing the context. In E. L. Keenan, editor, Formal Semantics of Natural Language, pages 287-296, Cambridge U. Press, Cambridge, 1975.
[4] S. Duncan Jr., B. Kanki, H. Mokros, and D. Fiske. Pseudounilaterality, simple-rate variables, and other ills to which interaction research is heir. Journal of Personality and Social Psychology, 1335-1348, 1984.
[5] M. Kameyama.
Computing Japanese discourse: grammatical disambiguation with centering constraints. In Proceedings of University of Manchester Institute of Science and Technology: Workshop on Computing Japanese, 1987.
[6] M. Kameyama. A property-sharing constraint in centering. In Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics, pages 200-206, 1986.
[7] E. Levy. Communicating Thematic Structure in Narrative Discourse: The Use of Referring Terms and Gestures. PhD thesis, University of Chicago, 1984.
[8] C. Linde. Focus of attention and the choice of pronouns in discourse. In Talmy Givon, editor, Syntax and Semantics: Discourse and Syntax, pages 337-354, Academic Press, New York, 1979.
[9] H. B. Mokros. Patterns of Persistence and Change in the Sequencing of Nonverbal Actions. PhD thesis, University of Chicago, 1984.
[10] B. H. Partee. Nominal and temporal anaphora. Linguistics and Philosophy, 243-286, 1984.
[11] R. Reichman. Getting Computers to Talk Like You and Me. MIT Press, Cambridge, Massachusetts, 1985.
[12] R. J. (Passonneau) Schiffman. Categories of discourse deixis. 1984. Presented at the 29th Annual Conference of the International Linguistics Association.
[13] R. J. (Passonneau) Schiffman. Discourse Constraints on it and that: A Study of Language Use in Career-Counseling Interviews. PhD thesis, University of Chicago, 1985.
[14] R. J. (Passonneau) Schiffman. The two nominal anaphors it and that. In Proceedings of the 20th Regional Meeting of the Chicago Linguistic Society, pages 322-357, 1984.
[15] C. L. Sidner. Focusing in the comprehension of definite anaphora. In Michael Brady and Robert C. Berwick, editors, Computational Models of Discourse, pages 267-330, The MIT Press, Cambridge, Massachusetts, 1983.
[16] C. L. Sidner. Towards a Computational Theory of Definite Anaphora Comprehension in English Discourse. Technical Report, MIT AI Lab, 1979.
[17] M. Silverstein. Cognitive implications of a referential hierarchy. 1980. Unpublished ms.
[18] B. L. Webber. Discourse deixis: reference to discourse segments. In Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics, pages 113-122, 1988.
[19] B. L. Webber. So what can we talk about now. In Michael Brady and Robert C. Berwick, editors, Computational Models of Discourse, pages 331-372, The MIT Press, Cambridge, Massachusetts, 1983.
CONVERSATIONALLY RELEVANT DESCRIPTIONS

Amichai Kronfeld
Natural Language Incorporated
2910 Seventh St.
Berkeley, CA 94710

1 Abstract

Conversationally relevant descriptions are definite descriptions that are not merely tools for the identification of a referent, but are also crucial to the discourse in other respects. I analyze the uses of such descriptions in assertions as conveying a particular type of conversational implicatures. Such implicatures can be represented within the framework of possible world semantics. The analysis is extended to non-assertive illocutionary acts on the one hand, and to indefinite descriptions on the other.

2 Introduction

In an earlier paper [Kronfeld 1986] I have introduced the distinction between functionally and conversationally relevant descriptions. All uses of definite descriptions for the purpose of referring are functional in the sense that they are supposed to identify the referent. But some uses of definite descriptions exhibit a type of relevance (or irrelevance) that goes beyond identification purposes. Consider the following example. As part of his effort to recruit more young people into the police force, the mayor of New York proclaims in a public speech:

1 New York needs more policemen.

Instead of "New York" he might have used "The Big Apple," or "The city by the Hudson," or some such description. It would not do, however, to say that

2 The city with the world's largest Jewish community needs more policemen

even though this description might be useful enough in identifying New York for the audience. It is simply irrelevant in this context. On the other hand, this same description might be quite relevant in a different context. For example, suppose the mayor were giving a speech at a reception in honor of Israel's Prime Minister. Under those circumstances, the statement

3 The city with the world's largest Jewish community welcomes Israel's Prime Minister.

would make perfect sense. The difference, of course, is in the relevance of the description to the statement in (3), and its irrelevance to the one in (2). Uses of definite descriptions such as illustrated in example (3) are what I call conversationally relevant.

The distinction between functionally and conversationally relevant descriptions is part of a general model of referring that is based on what I have termed the descriptive approach to reference [Kronfeld 1990]. An elucidation of the speech act of referring cannot be complete without understanding the role of conversationally relevant descriptions in the larger discourse. My hypothesis in [Kronfeld 1986] was that conversationally relevant descriptions function as part of implicatures of a particular type. The problem is to specify what this type is. An outline of a solution is the topic of the present paper.

3 Implicature

Why should we think that whenever a conversationally relevant description is used an implicature always exists? The reason for this has to do with the fact that discourse is something more than a simple sum of the isolated sentences that constitute its parts. Discourse consists of a sequence of utterances that are tied together in ways that make sense. Typically, there are reasons why a speaker says what he says in the order and manner that he says it, and in general a hearer must have a clue as to what these reasons are (this is why plan recognition is so important for plan-based theories of speech acts).
Of course, the hearer cannot hope to know or even guess all of the reasons that led the speaker to participate in the discourse, but he can, indeed must, recognize some of them. As Allen, Cohen, Grosz, Perrault, and Sidner have pointed out [Allen 1978; Perrault and Cohen 1978; Allen and Perrault 1978; Allen and Perrault 1980; Grosz and Sidner 1986; Sidner 1983; Sidner 1985], the recognition of what the speaker is "up to" contributes to coherence and comprehensibility of the discourse and is essential for the hearer's generation of an appropriate response.

Now, the unstated reasons whose recognition is required for discourse coherence are by definition implicated, since they must be inferred in order to preserve the assumption that the speaker is being cooperative. This is precisely what an implicature is. Moreover, turning to conversationally relevant descriptions, we should observe that by their very nature, they cannot be merely functionally relevant. That is, the assumption that they are intended merely as tools for identification is not enough to make the discourse coherent. This, after all, is precisely what distinguishes functionally relevant descriptions from the conversationally relevant ones. Hence, additional assumptions are required in order to make sense of the way the speaker uses the latter descriptions. These assumptions themselves must be implicated.

Thus, in using a conversationally relevant description, the speaker implicates something. The content of implicatures that accompany such descriptions depends on circumstances, but they all share a rather specific form. My method in uncovering this form is this: taking the hearer's perspective, I begin by postulating that if the referring expression used by the speaker is merely functionally relevant, the speaker must be viewed as uncooperative. I then outline a sequence of deductions that eliminate the apparent conflict between what the speaker says and the assumption of his cooperation.

3.1 Recognition

The general mechanism for the recognition of a conversationally relevant description follows the familiar Gricean path. The hearer begins by assuming that the referring expression is only functionally relevant, and then gets into difficulties. An obvious strategy is illustrated by Example (3) above. At first glance it appears that the mayor violated the third maxim of manner ("Be brief"): he used a long and cumbersome description ("the city with the largest Jewish community"), although a much shorter and functionally superior one is available ("New York"). However, a hearer can easily make sense of the mayor's behavior by assuming that the referring expression is not merely a tool for identification. That is, it must be conversationally relevant.

Another strategy for letting the hearer recognize a conversationally relevant description is illustrated by "Smith's murderer" (interpreted "attributively"). There, the assumption that the description is only functionally relevant would lead to an inexplicable violation of the second maxim of quality. It is obvious that no one knows yet who murdered Smith. Thus, if the description is only functionally relevant, the hearer would be puzzled as to how the speaker could form an opinion about the sanity of a person whose identity is unknown to him.

3.2 Asserted universality

When a conversationally relevant description is used (or implied), the proposition which the speaker is trying to express lends itself to the Russellian analysis. Thus, if a speaker asserts a statement of the form

4 The D is F,

and "The D" is a conversationally relevant description, then the proposition expressed by the speaker is this:¹

5 (∃x)(D(x) & (∀y)(D(y) → x = y) & F(x)).

¹Contextual information may be needed to augment the descriptive content of "The D."

Note that (5) is equivalent to the conjunction of two propositions. The first one is the uniqueness condition:

6 (Uniqueness) (∃x)(D(x) & (∀y)(D(y) → x = y)).

The second is a universal generalization:

7 (Universality) (∀x)(D(x) → F(x)).
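The equivalence just noted is easy to check; the following short derivation is an editorial gloss, not part of the original argument:

\[
(5) \Rightarrow (6):\ \text{drop the conjunct } F(x). \qquad
(5) \Rightarrow (7):\ \text{any } z \text{ with } D(z) \text{ equals the witness } x \text{ by the uniqueness clause, whence } F(z).
\]
\[
(6) \wedge (7) \Rightarrow (5):\ (6) \text{ supplies a witness } x \text{ with } D(x) \wedge (\forall y)(D(y) \rightarrow x = y),\ \text{and } (7) \text{ yields } F(x).
\]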
Thus, if a speaker asserts a statement of the form 4 The D is F, and "The D ~ is a conversationally relevant descrip- tion, then the proposition expessed by the speaker is this: 5 (3x)(D(x) & (VY)CD(Y) --* x = y) & F(x)). ~ Note that (5) is equivalent to the conjunction of two propositions. The first one is the uniqueness condition: 6 (U'niquenels) (3z)CDCz) ~ CVy)(DCy) ~ z = y)~). The second is a universal generalization: .v, (u~-..er,~ztx) ('v'z)(D(.,.) --, F(.,)). Both the uniqueness and the universality condi- tions have to be satisfied if what the speaker means is to be true. But from this it does not follow that the speaker ~sert8 these conditions. Both Straw- son 11971] and following him Searle [1969, 157E.] have argued in their criticism of Russell's theory of descriptions that the uniqueness condition, though presupposed, is not ~erted. For example, when a speaker says that the Queen of England is ill, he does not ~ert that there is one and only one Queen SContextu~l information may be needed to augment the descriptive content of "The D." of England. This, no doubt, is true of the unique- ness condition, but I think that when a conversa- tionally relevant description is used, the un~eerm6/- /ty condition is indeed asserted, or at least strongly implied. In a sense, what the speaker attempts to convey is that any object that is denoted by the description "the D t has the property F, and this is why it is so natural to insert "whoever he is" in the classical examples of attributive uses of definite descriptious. By saying "Smith's murderer, who- ever he is, is insane" the speaker obviously means that for any person, if that person is Smith's mur- derer, that person is insane, which has the exact same form as (7). Note that the convention for us- ing definite descriptions to express universal state- ments already exists in the language ('The whale is a mammar' i.e., for any z, if z is a whale, then z is a mammal). Moreover, very frequently, when a conversationally relevant description is used, the speaker would maintain that the universality con- dition is true even if uniqueness fails. Suppose it turns out that not one but two culprits are respon- sible for Smith's sorry state. If our speaker asserted that Smith's murderer, whoever he is, is insane, he is very likely to say now that both are insane, rather than withdraw his judgment altogether. All in all, it seems to me very plausible to assume that when conversationally relevant descriptions are used, the universal claim is not only part of the truth condi- tious (together with uniqueness), but part of what is asserted as well. 3.3 Justification A rational speaker who follows the Gricean max- ims is expected, among other things, to obey the second maxim of quality. That is, he is expected to have "adequate evidence ~ for what he asserts. What counts as adequate evidence obviously depends on context: we have di~erent standards for assertions in a scientific article and in a gossipy chat. Nev- ertheless in all verbal exchanges, a speaker is ex- pected to be able to provide reasonable justification for what he says. He must be able to answer ques- tions such as "how do you know?" "why do you think so?" and so on. If he cannot, the assumption about his cooperation cannot be maintained. If a universal statement such as (7) is part of what the speaker asserts, he must be able, then, to jus- tify it. 
The hearer may not know exactly what the speaker's evidence is for believing this generalisa- tlon but the hearer can reason about the type of ev- idence or justification that the speaker is expected to have. In particular, I want to draw a distinction between ezte~ona/ and inten.5~oaal justifications of universal statements. This distinction will help us see what sort of justification a speaker can offer 62 for a statement such as (7) when a conversationally relevant description is used. The distinction between extensional and inten- sional justification of universal statements is based on a familiar distinction in the philosophy of sci- ence between accidental and law]ike general;ffiations (See Waiters 1967). Not all universal generalls~- tions are scientific laws. For example, the following statement, although true, is not a law of nature: 8 All mount~;n, on Earth are le~ than 30,000 feet high. On the other hand, this next statement is: 9 All basketballs are attracted to the center of Earth. What is the difference? We]], there are several, but two related ones are specifically relevant to us. First, the latter generalization, but not the former, supports counter/actual statement. If a mountain on Earth were to be examined a billion years from now, would it still be less than 30,000 feet high? We don't know. Changes in the surface of the earth oc- cur all the time, and Mount Everest needs a mere 972 additional feet to make (8) false. On the other hand, if a player were to make a jump shot a bi]Hon years from now the basketball would still find its way down to the ground. A law of nature does not lose its v'aHdity over time. Second, there is a crucial difference in the man- ner in which statements (8) and (9) are ju,~t~fied. The gener2l;tation about mountains on earth is sup- ported by observation: all mountain, on earth were measured and found to be less than 30,000 feet high. I do not know why this is so. As far as I am con- cerned this is just one more accidental fact about the world I llve in. The generalization about bas- ketbalk, on the other hand, is derived from a more general principle that explains why things such as basketballs behave the way they do. Such a deriva- tion is an essential part of an explanation of why (9) is true. It also contributes to the coherence of our experience: what science provides us with, among other things, is the reauuring knowledge that nat- ural phenomena do not just happen to occur, but follow a general scheme that provides the basis for both explanation and prediction. Thus, our confi- dence in the truth of (9) is not merely the result of examining a large sample of basketballs. We also have a theory that explains why they do not just happen to come down whenever dropped, but, in a sense, mu.~t do so. Given these two ditrerences between accidental and lawl~e statements, let ezten~onal and ~nten- a~oas/justifications of universal generalizations be defined as follows. An extensional justification of a generalization such as "All A's are F" would rely on the fact that aLl, or most, or a good sample of the things with the property A have been examined and were found to have property F. In such a case there would not be any attempt to explain why this is so, only a claim that as things stand, all A's do, in fact, have the property F. 
An intensional justification of a universal generaKsation, on the other hand, would attempt to show that anything with the property of being A m~t have the property of being P, because of a more general principle or theory from wldch the gsneral~ation can be derived. 2 The distinction I have just described is obviously not restricted to science, nor am I interested in elu- cidating different scientific methods of corrobora- tion. Rather, I want to apply this distinction to the kind of justification that a speaker is expected to have for what he says, in view of Grice's second maxim of quail W. In a sense, what I am after is a "folk theory" of justification, not the foundation of knowledge. Thus, the extensional/intensional dis- tinction between types of justification is indifferent to the question whether the evidence for a statement is good or bad, as an intensional justification can be either silly or brilliant. Moreover, the distinction applies to all sorts of judgments, not merely theo- retical ones. The biggotted justification for holding stereotypic beliefs would presumably be extensional ('Look, I don't know why they are all such dirty cowards, but I have met enough of them to know that they are:'). On the other hand, when the notorious fundamentalist preacher Jimmy Swaggart states that all adulterers are sinners, he does not intend us to believe that he has examined all (most, enough) adulterers and found that they happen to be sinners. If someone who is not an adulterer now would become one, he would have to be a sinner as well, and the reason for that is simple. Within $waggart's world view, adulterers rn~t be sinners simply because the bible says so, and whatever the bible says is true. The same distinction applies to the most mundane generalisations that can appear in discourse. "A11 the nursery schools in our area are simply unacceptable ~ my friend tells me. I assume the justification for what he says is extensional (he has checked out each and every one), but then he adds: "... they are all Montessori schools," and an intensional justification is revealed. Clearly, extensional and intensional justification are not mutually exclusive. Nor do they exhaust the types of justification one can use. Thus, the justi- fication of the most fundamental principles of any theory (scientific or otherwise), although clearly not extensional, would not be intensional either, since ~Statisticai correlations belong to the extensional realm. Causal explanatiorm to the intensional one. 63 by definition they are not derivable from any other principles (they would still support counterfactuak, though). However, apart from such fundamental ax/oms, the justification of any universal general- isation, if it is not extensional, must be intenaionaL Now back to the univereali W condition that the speaker asserts when he uses a conversationally re]- evant description. As mentioned already, the hearer may not know why the speaker believes the general- isation, but from the heater's point of view it stands to reason that the speaker's justification is inten- donul, because of the uniqueness condition. If the uniquene~ condition is presupposed, an extensional justification of the universal generalization amounts to no more than this: there is evidence that the ref- erent happens to have the property F. But if this were all that the speaker had in mind, it would be very misleading to give the impression that a univer- sal generalization was meant. 
To see why, consider a case in which an author tells you that all his books are published by Cambridge University Press. If later you are to find out that he has published only one book, you would surely be puzzled, although as things are, his statement was technically true. In other words, if the uniqueness condition is presupposed, it makes little sense to assert a universal generalization, unless the speaker believes that the generalization must be true whether the uniqueness condition is true or not. Thus, if the speaker has intensional justification for what he says, the uniqueness condition no longer interferes with universality. If the author tells you that he has just signed a lifetime contract with Cambridge University Press, and therefore all his books must be published there, the fact that he has written only one book (so far) does not matter any more. In view of the contract, if he were to write others, they would be published by Cambridge.

For the speech act to be coherent, therefore, the speaker must have an intensional justification for (7). This is why frequently, when a conversationally relevant description is used (for example, in the paradigmatically "attributive" uses of definite descriptions), it is natural to replace the auxiliary verb with an appropriately tensed occurrence of "must." For example,

10 The inventor of the sewing machine, whoever he or she was, must have been smart. (for: was very smart)

11 If my political analysis is correct, the Democratic candidate in 1992 will have to be a conservative. (for: will be)

12 The thief who stole your diamond ring must have known how valuable it was. (for: knew)

My hypothesis, therefore, is that conversationally relevant descriptions are used to assert universal generalizations for which the speaker has intensional justification. Therefore, when a speaker says "The D is F" and "The D" is conversationally relevant, a first approximation of what is usually being implicated is this:

13 Any D must be F.

When the modal verb is actually added, the speaker simply makes (part of) the implicature explicit.

3.4 The meaning of "must"

As it stands, the implicature expressed by (13) is hopelessly vague. The problem is with the modal verb "must." How is it to be interpreted? Compare, for example, the following uses:

14 (a) The bird must have entered through the attic.
(b) Whether I like it or not, I must pay my taxes.
(c) The Butcher of Lyon must pay for his crimes.

If I do not pay my taxes, I will be punished. This is why I feel that I must do it. But if the bird did not enter through the attic, or if the Butcher of Lyon does not pay for his crimes, neither bird nor beast will be punished as a result of that. Moreover, if the bird did not enter through the attic, the speaker uttering 14(a) would simply be wrong. But whether or not the Butcher of Lyon ever pays for his crimes, the speaker uttering 14(c) would nevertheless be right. Thus, in each case, the intended interpretation of the modal verb is radically different.

Is the word "must" multiply ambiguous then? Not necessarily. As Angelika Kratzer has argued [1977; 1979; 1981], the force of modal verbs such as "must" is relative to an implied contextual element. The examples in 14 are elliptical pronouncements whose full meaning can be given by the following:

15 (a) In view of what we know, the bird must have entered through the attic.
(b) In view of what the law is, I must pay my taxes, whether I like it or not.
(c) In view of our moral convictions, the Butcher of Lyon must pay for his crimes.
The interpretation of "must" in each example is indeed different, but there is, Kratzer argues, a core of meaning which is common to all. This core is specified as a function that can be precisely formulated within the framework of possible-world semantics. Schematically, Kratzer's suggestion is that the meaning of "must" is given by the function must-in-view-of, which accepts two arguments. One argument is the proposition within the scope of the modal verb (e.g., "The bird came through the attic" in 14(a)). Values for the other argument are phrases such as "what is known," "what the law is," "our moral convictions," etc. Thus, for example, sentence 15(a) is interpreted as

16 Must-In-View-Of(what is known, the bird entered through the attic)

The sentence is true in possible world w just in case the proposition expressed by "The bird entered through the attic" logically follows from what we know in w [Kratzer 1977, 346].³

³ Phrases such as "what is known," "our moral convictions," "what the facts are," and so on are represented by Kratzer as functions from possible worlds to sets of propositions. For example, "what is known" is represented as a function f which assigns sets of propositions to possible worlds, such that for each possible world w, f(w) contains all the propositions that are known in that world. According to Kratzer's first suggestion, for any function f from worlds to sets of propositions, and for every proposition P, "It must be the case that P in view of f" is true in w just in case f(w) entails P. As Kratzer notes, this is only the first step in the elucidation of the meaning of modal verbs, and it works only when f(w) is guaranteed to be consistent (as is indeed the case when f is "what is known"). When f(w) can be inconsistent (e.g., when f is "what the law is"), problems arise, which Kratzer solves using the concept of the set of all consistent subsets.
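For readers who want the footnote's proposal in one line, here is a possible LaTeX rendering; the notation (W for the set of possible worlds, Prop for propositions, the entailment symbol) is my gloss on Kratzer's first suggestion as described above, not her own formulation.

    % Kratzer's first suggestion, as glossed in the footnote:
    % f assigns to each world the set of propositions "in view of which"
    % the modal is read; must-in-view-of(f, P) holds at w iff f(w) entails P.
    \[
      f : W \to \mathcal{P}(\mathrm{Prop}), \qquad
      [\![\,\textit{must-in-view-of}(f, P)\,]\!]^{w} = \text{true}
      \iff f(w) \models P
    \]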
If (18) is part of what the speaker means in these cases, the descrip- tion is within the scope of a modal operator, hence, within an intensional context in which substitution is not guaranteed to be a valid form of ;nference. Suppose that Ralph asserts that in view of what we know about the normal human propensity for violence, Smith's murderer, whoever he is, must be insane; and suppose Ralph thinks that Smith's mur- derer is Jane's uncle. Substituting "Jane's uncle s for ~Smith's murderer" yields the wrong result: it is not the case that in view of what we know about the normal human propensity for violence, Jane's uncle, whoever he is, must be insane. Since in general sub- stitution is not allowed in intensional contexts, when the description fads (i.e., no one murdered Smith), Ralph's speech act must fail too. The fact that he may know quite well who he thought the culprit was does not matter. By way of summary, here are the steps that a hearer might go through in calculating the implica- ture that is typically intended when a conversation- ally relevant description is used: 1. Recognising a conversationally relevant de- scription 2. Identifying the universal generalization 3. Postulating an intensional justification 4. Locating an appropriate set of propositions rel- ative to which the modal operator is interpreted Of course, this mechanism is much more flexible than I make it out to be, and a speaker can use it to satisfy various other goals. For example, a defi- nite description can be used to provide information [Appelt 1985], or to highlight shared knowledge, or simply to avoid mechanical repetition of a proper name. The following quotation illustrates how a conversationally relevant description can achieve all these goals simultaneously: 65 19 In the Democratic primaries, Mr. Jackson, who is considered a long shot for the Vice-Presidential nomination, received more than seven million votes. The J6-year-old Okieago ele~yman has not said whether he wants the second spot on the Democratic ticket... (New York Times, June 28, 1988) Since the name "Jackson" is already available as the best functionally relevant referring expression, it should be obvious to the reader that the descrip- tion "the 46-year-old Chicago clergyman ~ is con- versationally relevant. But if the reader were to go through the steps outlined above, he would reach a dead end. There is nothing in view of which it m~t be the case that any 46 year old Chicago clergyman has not said whether he wants the second spot on the democratic ticket. Thus, the implicature that usually accompanies a conversationally relevant de- scription is ruled out. Nor is there an obvious im- plicit description that is used to convey a similar implicature. The reader is then forced to search for other explanations, and one obvious possibility is that the author wants to inform the reader (or re- mind him) that Jackson is a 46 year old clergyman fTom Chicago. 4 iNon-assertives So far I have assumed that the conversationally rele- vant description is used within the context of an as- sertion, and I have relied, in my derivation of the im- pUcature, on the fact that in assertions the speaker is expected to have adequate evidence for what he says. In other speech acts, however, evidence and justification play a completely different role, if any. For example, a speaker who asks a question is not expected to have Xevidence~ for it. Still, the use of conversationally relevant descriptions is clearly not restricted to assertions. 
Of course, this mechanism is much more flexible than I make it out to be, and a speaker can use it to satisfy various other goals. For example, a definite description can be used to provide information [Appelt 1985], or to highlight shared knowledge, or simply to avoid mechanical repetition of a proper name. The following quotation illustrates how a conversationally relevant description can achieve all these goals simultaneously:

19 In the Democratic primaries, Mr. Jackson, who is considered a long shot for the Vice-Presidential nomination, received more than seven million votes. The 46-year-old Chicago clergyman has not said whether he wants the second spot on the Democratic ticket... (New York Times, June 28, 1988)

Since the name "Jackson" is already available as the best functionally relevant referring expression, it should be obvious to the reader that the description "the 46-year-old Chicago clergyman" is conversationally relevant. But if the reader were to go through the steps outlined above, he would reach a dead end. There is nothing in view of which it must be the case that any 46-year-old Chicago clergyman has not said whether he wants the second spot on the Democratic ticket. Thus, the implicature that usually accompanies a conversationally relevant description is ruled out. Nor is there an obvious implicit description that is used to convey a similar implicature. The reader is then forced to search for other explanations, and one obvious possibility is that the author wants to inform the reader (or remind him) that Jackson is a 46-year-old clergyman from Chicago.

4 Non-assertives

So far I have assumed that the conversationally relevant description is used within the context of an assertion, and I have relied, in my derivation of the implicature, on the fact that in assertions the speaker is expected to have adequate evidence for what he says. In other speech acts, however, evidence and justification play a completely different role, if any. For example, a speaker who asks a question is not expected to have "evidence" for it. Still, the use of conversationally relevant descriptions is clearly not restricted to assertions. Consider the following:

20 [After the verdict is pronounced, the Mayor to the District Attorney] Congratulations on nailing the most fearsome criminal in recent history.

21 [While the serving plate is passed around, a guest to the host] I am not very hungry. Could I have the smallest steak please?

22 [A young cop to his superior, as the chase begins] One thing I can promise you: I will not let Smith's murderer get away!

A detailed description of how my account can be extended to cover these cases would take us too far afield. In general, however, the same analysis can apply to non-assertions such as the above as well. Coherence is no less important in discourse containing requests, warnings, promises, etc. than in one containing assertives. The hearer must understand the reasons why a congratulation, a request, or a promise is being performed, and the role of conversationally relevant descriptions in such speech acts would be similar to their roles in assertives, with similar implicatures. As rough approximations, the implicatures involved in the three examples above are expressed by the following statements, respectively:

• In view of the danger that criminals pose to society, nailing the most fearsome criminal in recent history is an act that must be congratulated.

• In view of my wish to stay both slim and polite, I must have the smallest steak.

• In view of my moral convictions, I should try my best to bring Smith's murderer to justice.

5 Indefinite descriptions

In this paper I take referring expressions to be uses of noun phrases that are intended to indicate that a particular object is being talked about. Hence, indefinite descriptions can obviously serve as referring expressions, and the distinction between functional and conversational relevance should apply to them as well. Usually, a use of an indefinite description as a referring expression signals to the hearer that the identity of the referent is not important (e.g., "A policeman gave me a speeding ticket"). Some uses of indefinite descriptions, however, are clearly made with the intention that the hearer identify whom the speaker has in mind. For example,

23 A person I know did not take out the garbage as he had promised...

Here, identification is obviously required, but it does not matter at all how the referent is identified. The indefinite description is, therefore, only functionally relevant. In contrast, consider the following:

24 A cardiovascular specialist told me that I exercise too much.

Although the identity of the physician is not important, the fact that he is a cardiovascular specialist surely is. The indefinite description is, therefore, conversationally relevant.

Deborah Dahl discusses interesting cases in which an indefinite description is both specific (i.e., used with the intention that the hearer know the identity of the referent) and attributive (that is, conversationally relevant). Here is one of her examples:

25 Dr. Smith told me that exercise helps. Since I heard it from a doctor, I'm inclined to believe it [Dahl 1984].

Clearly, an accurate interpretation of "a doctor" would connect the referent with Dr. Smith. At the same time, the use of the indefinite description highlights a property of Smith which is conversationally relevant. Note that the indefinite description is used to implicate a universal generalization, namely, that in view of what doctors know, any doctor who gives you advice should (other things being equal) be listened to.
This is very similar in structure to the implicature that is typically associated with conversationally relevant definite descriptions.

As is the case with definite descriptions, such uses of indefinite descriptions can accomplish other purposes besides (or instead of) implicating a universal generalization. For example,

26 In fact, the Dewey-Truman matchup illustrates the point. Mr. Truman was thought to be a weak leader who could not carry out his strong predecessor's program. His election prospects were bleak. The pundits were against him and a highly successful Northeastern Governor was poised to sweep into the White House. (New York Times, May 26, 1988)⁴

The calculation of the implicature conveyed by the indefinite description is left as an exercise for the reader.

⁴ In this op-ed piece the author argues that polls showing Michael Dukakis leading George Bush in the race for the presidency do not mean much. Note that in May 1988, Dukakis is the Governor of Massachusetts, a Northeastern state.

REFERENCES

Allen, J. F. 1978. Recognizing Intention in Dialogue. Ph.D. diss., University of Toronto.

Allen, J. F. and C. R. Perrault. 1978. Participating in dialogues: understanding via plan deduction. In Proceedings, Canadian Society for Computational Studies of Intelligence.

Allen, J. F. and C. R. Perrault. 1980. Analyzing intention in dialogues. Artificial Intelligence, 15(3):143-178.

Appelt, D. E. 1985. Planning English Sentences. Cambridge Univ. Press, Cambridge.

Dahl, Deborah A. 1984. Recognizing specific attributes. Presented at the 59th Annual Meeting of the Linguistic Society of America, Baltimore.

Grosz, B. J. and C. L. Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204.

Kratzer, A. 1977. What 'must' and 'can' must and can mean. Linguistics and Philosophy, 1(1):337-355.

Kratzer, A. 1979. Conditional necessity and possibility. In R. Bäuerle, U. Egli, and A. von Stechow, editors, Semantics from Different Points of View, pp. 117-147, Springer-Verlag, Berlin.

Kratzer, A. 1981. The notional category of modality. In H. J. Eikmeyer and H. Rieser, editors, Words, Worlds, and Contexts: New Approaches in Word Semantics, pp. 38-74, Walter de Gruyter, Berlin.

Kronfeld, A. 1986. Donnellan's distinction and a computational model of reference. In Proceedings of the 24th Annual Meeting, pp. 186-191, Association for Computational Linguistics.

Kronfeld, A. 1990. Reference and Computation: An Essay in Applied Philosophy of Language. Cambridge Univ. Press, Cambridge.

Perrault, C. R., J. F. Allen, and P. R. Cohen. 1978. Speech acts as a basis for understanding dialogue coherence. In TINLAP-2, pp. 125-132, University of Illinois, Urbana-Champaign.

Searle, J. R. 1969. Speech Acts: An Essay in the Philosophy of Language. Cambridge Univ. Press, Cambridge.

Sidner, C. L. 1983. What the speaker means: the recognition of speakers' plans in discourse. International Journal of Computers and Mathematics, 9(1):71-82.

Sidner, C. L. 1985. Plan parsing for intended response recognition in discourse. Computational Intelligence, 1(1):1-10.

Strawson, P. F. 1971. On referring. In J. F. Rosenberg and C. Travis, editors, Readings in the Philosophy of Language, Prentice Hall, Englewood, N.J.

Walters, R. S. 1967. Laws of science and lawlike statements. In The Encyclopedia of Philosophy, Volume 4, pp. 410-414, Macmillan, New York.
COOKING UP REFERRING EXPRESSIONS

Robert Dale
Centre for Cognitive Science, University of Edinburgh
2 Buccleuch Place, Edinburgh EH8 9LW, Scotland
email: rda%uk.ac.ed.epistemi@nss.cs.ucl.ac.uk

ABSTRACT

This paper describes the referring expression generation mechanisms used in EPICURE, a computer program which produces natural language descriptions of cookery recipes. Major features of the system include: an underlying ontology which permits the representation of non-singular entities; a notion of discriminatory power, to determine what properties should be used in a description; and a PATR-like unification grammar to produce surface linguistic strings.

INTRODUCTION

EPICURE (Dale 1989a, 1989b) is a natural language generation system whose principal concern is the generation of referring expressions which pick out complex entities in connected discourse. In particular, the system generates natural language descriptions of cookery recipes. Given a top-level goal, the program first decomposes that goal recursively to produce a plan consisting of operations at a level of detail commensurate with the assumed knowledge of the hearer. In order to describe the resulting plan, EPICURE then models its execution, so that the processes which produce referring expressions always have access to a representation of the ingredients in the state they are in at the time of description.

This paper describes that part of the system responsible for the generation of subsequent referring expressions, i.e., references to entities which have already been mentioned in the discourse. The most notable features of the approach taken here are as follows: (a) the use of a sophisticated underlying ontology, to permit the representation of non-singular entities; (b) the use of two levels of semantic representation, in conjunction with a model of the discourse, to produce appropriate anaphoric referring expressions; (c) the use of a notion of discriminatory power, to determine what properties should be used in describing an entity; and (d) the use of a PATR-like unification grammar (see, for example, Karttunen (1986); Shieber (1986)) to produce surface linguistic strings from input semantic structures.

THE REPRESENTATION OF INGREDIENTS

In most natural language systems, it is assumed that all the entities in the domain of discourse are singular individuals. In more complex domains, such as recipes, this simplification is of limited value, since a large proportion of the objects we find are masses or sets, such as those described by the noun phrases two ounces of salt and three pounds of carrots respectively.

In order to permit the representation of entities such as these, EPICURE makes use of the notion of a generalized physical object or physobj. This permits a consistent representation of entities irrespective of whether they are viewed as individuals, masses or sets, by representing each as a knowledge base entity (KBE) with an appropriate structure attribute. The knowledge base entity corresponding to three pounds of carrots, for example, is that shown in figure 1.

A knowledge base entity models a physobj in a particular state. An entity may change during the course of a recipe, as processes are applied to it: in particular, apart from gaining new properties such as being peeled, chopped, etc., an ingredient's structure may change, for example, from set to mass. Each such change of state results in the creation of a new knowledge base entity. Suppose, for example, a grating event is applied to our three pounds of carrots between states s0 and s1: the entity shown in figure 1 will then become a mass of grated carrot, represented in state s1 by the KBE shown in figure 2.

Figure 1: The knowledge base entity corresponding to three pounds of carrots: index x0, state s0, structure set, quantity 3 pounds, whose elements are individual regular-sized, carrot-shaped objects of substance carrot.

Figure 2: The knowledge base entity corresponding to three pounds of grated carrot: index x0, state s1, structure mass, quantity 3 pounds, substance carrot, with the property grated.
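As a reading aid, the two figures can be transcribed into a small data structure. The sketch below is my own transcription of the attribute-value matrices in figures 1 and 2, under the assumption that nested features can be rendered as plain dictionaries; EPICURE itself is implemented in C-PROLOG, so none of these Python names come from the system.

    from dataclasses import dataclass, field

    @dataclass
    class KBE:
        """A knowledge base entity: one physobj in one state (cf. figures 1 and 2)."""
        index: str                      # e.g. "x0"
        state: str                      # e.g. "s0"
        structure: str                  # "individual", "set" or "mass"
        spec: dict = field(default_factory=dict)

    # Figure 1: three pounds of carrots, before any processing.
    carrots_s0 = KBE(index="x0", state="s0", structure="set",
                     spec={"quantity": {"unit": "pound", "number": 3},
                           "elements": {"structure": "individual",
                                        "substance": "carrot",
                                        "packaging": {"shape": "carrot",
                                                      "size": "regular"}}})

    # Figure 2: the same stuff after grating; a new KBE, now a mass.
    carrots_s1 = KBE(index="x0", state="s1", structure="mass",
                     spec={"quantity": {"unit": "pound", "number": 3},
                           "substance": "carrot", "grated": True})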
BUILDING A REFERRING EXPRESSION

To construct a referring expression corresponding to a knowledge base entity, we first build a deep semantic structure which specifies the semantic content of the noun phrase to be generated. We call this the recoverable semantic content, since it consists of just that information the hearer should be able to derive from the corresponding utterance, even if that information is not stated explicitly: in particular, elided elements and instances of one-anaphora are represented in the deep semantic structure by their more semantically complete counterparts, as we will see below.

From the deep semantic structure, a surface semantic structure is then constructed. Unlike the deep semantic structure, this closely matches the syntactic structure of the resulting noun phrase, and is suitable for passing directly to a PATR-like unification grammar. It is at the level of surface semantic structure that processes such as elision and one-anaphora take place.

PRONOMINALIZATION

When an entity is to be referred to, we first check to see if pronominalization is possible. Some previous approaches to the pronominalization decision have taken into account a large number of contextual factors (see, for example, McDonald (1980:218-220)). The approach taken here is relatively simple. EPICURE makes use of a discourse model which distinguishes two principal components, corresponding to Grosz's (1977) distinction between local focus and global focus. We call that part of the discourse model corresponding to the local focus cache memory: this contains the lexical, syntactic and semantic detail of the current utterance being generated, and the same detail for the previous utterance. Corresponding to global focus, the discourse model consists of a number of hierarchically-arranged focus spaces, mirroring the structure of the recipe being described. These focus spaces record the semantic content, but not the syntactic or lexical detail, of the remainder of the preceding discourse. In addition, we make use of a notion of discourse centre: this is intuitively similar to the notion of centering suggested by Grosz, Joshi and Weinstein (1983), and corresponds to the focus of attention in the discourse. In recipes, we take the centre to be the result of the previous operation described. Thus, after an utterance like Soak the butterbeans, the centre is the entity described by the noun phrase the butterbeans. Subsequent references to the centre can be pronominalized, so that the next instruction in the recipe might then be Drain and rinse them.

Following Grosz, Joshi and Weinstein (1983), references to other entities present in cache memory may also be pronominalized, provided the centre is pronominalized.¹
¹ We do not permit pronominal reference to entities last mentioned before the previous utterance: support for this restriction comes from a study by Hobbs, who, in a sample of one hundred consecutive examples of pronouns from each of three very different texts, found that 98% of antecedents were either in the same or the previous sentence (Hobbs 1978:322-323). However, see Dale (1988) for a suggestion as to how the few instances of long-distance pronominalization that do exist might be explained by means of a theory of discourse structure like that suggested by Grosz and Sidner (1986).

If the intended referent is the current centre, then this is marked as part of the status information in the deep semantic structure being constructed, and a null value is specified for the structure's descriptive content. In addition, the verb case frame used to construct the utterance specifies whether or not the linguistic realization of the entity filling each case role is obligatory: as we will see below, this allows us to model a common linguistic phenomenon in recipes (recipe context empty objects, after Massam and Roberge (1989)). For a case role whose surface realization is obligatory, the resulting deep semantic structure has status features [given, +], [centre, +] and [oblig, +], and a null descriptive content.

This will be realized as either a pronoun or an elided NP, generated from a surface semantic structure which is constructed in accordance with the following rules:

• If the status includes the features [centre, +] and [oblig, +], then there should be a corresponding element in the surface semantic structure, with a null value specified for the descriptive content of the noun phrase to be generated;

• If the status includes the features [centre, +] and [oblig, -], then this participant should be omitted from the surface semantic structure altogether.

In the former case, this will result in a pronominal reference as in Remove them, where the surface semantic structure corresponding to the pronominal form carries the status features [given, +], [centre, +] and [oblig, +], agreement features [number, plural] and [countable, +], and a null descriptive content.

However, if the participant is marked as non-obligatory, then reference to the entity is omitted, as in the following:

Fry the onions. Add the garlic ∅.

Here, the case frame for add specifies that the indirect object is non-obligatory; since the entity which fills this case role is also the centre, the complete prepositional phrase to the onions can be elided. Note, however, that the entity corresponding to the onions still figures in the deep semantic structure; thus, it is integrated into the discourse model, and is deemed to be part of the semantic content recoverable by the hearer.
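The two realization rules can be summarized as a small decision function. The sketch below is an illustrative rendering only: the feature names follow the structures above, but the function itself is not EPICURE code (which is C-PROLOG).

    def realize_centre(status):
        """Decide how a case-role filler surfaces, given its status features.

        status is a dict of the feature values shown above, e.g.
        {"given": True, "centre": True, "oblig": True}.
        """
        if not status.get("centre"):
            return "full-np"          # handled by the description rules below
        if status.get("oblig"):
            # [centre,+] [oblig,+]: keep an element with null descriptive
            # content, realized as a pronoun ("Remove them").
            return "pronoun"
        # [centre,+] [oblig,-]: omit the participant from the surface
        # semantic structure, a recipe-context empty object ("Add the garlic").
        return "omit"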
FULL DEFINITE NOUN PHRASE REFERENCE

If pronominalization is ruled out, we have to build an appropriate description of the intended referent. In EPICURE, the process of constructing a description is driven by two principles, very like Gricean conversational maxims (Grice 1975). The principle of adequacy requires that a referring expression should identify the intended referent unambiguously, and provide sufficient information to serve the purpose of the reference; and the principle of efficiency, pulling in the opposite direction, requires that the referring expression used must not contain more information than is necessary for the task at hand.² These principles are implemented in EPICURE by means of a notion of discriminatory power.

² Similar considerations are discussed by Appelt (1985).

Suppose that we have a set of entities U such that

U = {x1, x2, ..., xn}

and that we wish to distinguish one of these entities, x1, from all the others. Suppose, also, that the domain includes a number of attributes (a1, a2, and so on), and that each attribute has a number of permissible values {v1,1, v1,2, and so on}; and that each entity is described by a set of attribute-value pairs. In order to distinguish x1 from the other entities in U, we need to find some set of attribute-value pairs which are together true of x1, but of no other entity in U. This set of attribute-value pairs constitutes a distinguishing description of x1 with respect to the context U. A minimal distinguishing description is then a set of such attribute-value pairs, where the cardinality of that set is such that there are no other sets of attribute-value pairs of lesser cardinality which are sufficient to distinguish the intended referent.

We find a minimal distinguishing description by observing that different attribute-value pairs differ in the effectiveness with which they distinguish an entity from a set of entities. Suppose U has N elements, where N > 1. Then, any attribute-value pair true of the intended referent x1 will be true of n entities in this set, where n ≥ 1. For any attribute-value pair ⟨a, v⟩ that is true of the intended referent, we can compute the discriminatory power (notated here as F) of that attribute-value pair with respect to U as follows:

F(⟨a, v⟩, U) = (N − n) / (N − 1), where 1 ≤ n ≤ N

F thus has as its range the interval [0,1], where a value of 1 for a given attribute-value pair indicates that the attribute-value pair singles out the intended referent from the context, and a value of 0 indicates that the attribute-value pair is of no assistance in singling out the intended referent.

Given an intended referent and a set of entities from which the intended referent must be distinguished, this notion is used to determine which set of properties should be used in building a description which is both adequate and efficient.³

³ Strictly speaking, this mechanism is only applicable in the form described here to those properties of an entity which are realizable by what are known as absolute (or intersective, or predicative) adjectives (see, for example, Kamp (1975), Keenan and Faltz (1978)). This is acceptable in the current domain, where many of the adjectives used are derived from the verbs used to describe processes applied to entities.
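The definition translates directly into code. This is a schematic re-implementation of the idea, not EPICURE's own (C-PROLOG) routine: entities are plain attribute-value dictionaries, and the greedy strategy of repeatedly taking a highest-F pair is one obvious way to aim at a minimal distinguishing description, though it is my gloss rather than a procedure the paper spells out.

    def discriminatory_power(pair, context):
        """F(<a,v>, U) = (N - n) / (N - 1) for a pair true of the referent."""
        a, v = pair
        N = len(context)
        n = sum(1 for x in context if x.get(a) == v)
        return (N - n) / (N - 1)

    def distinguishing_description(referent, context):
        """Greedily accumulate attribute-value pairs until only the referent fits."""
        candidates = set(referent.items())
        remaining = [x for x in context if x is not referent]
        description = []
        while remaining and candidates:
            best = max(candidates, key=lambda p: discriminatory_power(p, context))
            candidates.remove(best)
            ruled_out = [x for x in remaining if x.get(best[0]) != best[1]]
            if ruled_out:
                description.append(best)
                remaining = [x for x in remaining if x not in ruled_out]
        return description if not remaining else None  # None: no distinguishing description

    # The pitted-olives context below: two sets of olives, only one of them pitted.
    olives1 = {"category": "olive", "pitted": True}
    olives2 = {"category": "olive", "pitted": False}
    print(distinguishing_description(olives1, [olives1, olives2]))  # [('pitted', True)]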
This is acceptable in the current domain, where many of the adjectives used are derived from the verbs used to describe processes applied to entities. ties in the domain at any given point in time: the constituency of this set changes as a recipe pro- ceeds, since entities may be created or destroyed. 4 Suppose, for example, we determine that we must identify a given object as being a set of olives which have been pitted (in a context, for example, where there are also olives which have not been pitted}; the corresponding deep semantic struc- ture is then as in figure 3. Note that this deep semantic structure can be realized in at least two ways: as either the olives which have been pitted or the pitted olives. 4A slightly more sophisticated approach would be to restrict U to exclude those entities which are, in G rosz and Sidner's (1986) terms, only present in closed focus spaces. However, the benefit gained from doing this (if indeed it is a valid thing to do) is minimal in the current context because of the small number of entities we are dealing with. 72 indez = z ~tatt~ = .[ ] number = pl agr = "ountable = + DS = ~.nuant 8pec = 8ubst = ] t number--- 3 ] agr = countable = + tltpe -- categorlt = pound ] number = pl l agr = countable = + J type = category = carrot ] J Figure 7: The deep semantic structure corresponding to three pounds of carrots Both forms are possible, although they correspond to different surface semantic structures. Thus, the generation algorithm is non-deterministic in this respect (although one might imagine there are other factors which determine which of the two re- alizations is preferrable in a given context}. The surface semantic structure for the simpler of the two noun phrase structures is as shown in figure 4. ONE ANAPHORA The algorithms employed in EPICURE also permit the generation of onc-anaphora, as in Slice the large green capsicum. Now remove the top of the small red one. The deep semantic structure corresponding to the noun phrase the small red one is as shown in fig- ure 5. The mechanisms which construct the surface semantic structure determine whether one-anaphora is possible by comparing the deep semantic struc- ture corresponding to the previous utterance with that corresponding to the current utterance, to identify any elements they have in common. The two distinct levels of semantic representation play an important role here: in the deep semantic struc- ture, only the basic semantic category of the de• scription has special status (this is similar to Wel>- her's (1979) use of restricted quantification), whereas the embedding of the surface semantic structure's dcsc feature closely matches that of the noun phrase to be generated. For one-anaphora to be possi- ble, the two deep semantic structures being com- pared must have the same value for the feature addressed by the path <sere spec type category>. Rules which specify the relative ordering of ad- jectives in the surface form are then used to build an appropriately nested surface semantic structure which, when unified with the grammar, will result in the required one-anaphoric noun phrase. In the present example, this results in the surface seman- tic structure in figure 6. PSEUDO-PARTITI'VE NPS Partitive and pseudo-partitive noun phrases, ex- emplified by half of the carrots and three pounds of carrots respectively, are very common in recipes; EPICURE is capable of generating both. 
So, for example, the pseudo-partitive noun phrase three pounds of carrots (as represented by the knowledge base entity shown in figure 1) is generated from the deep semantic structure shown in figure 7 via the surface semantic structure shown in figure 8. The generation of partitive noun phrases re- quires slightly different semantic structures, de- scribed in greater detail in Dale (1989b). THE UNIFICATION GRAMMAR Once the required surface semantic structure has been constructed, this is passed to a unification 73 $S = ind.= = z atatua= 8era epee = . [ giuen = -- ] countable = + agr = number = 3 epec I = &so = $p¢c2 = ] t countable = + age = number = 3 desc = head = pound agr= [[eountab|e=+ d¢8c = head = carrot Figure 8: The surface semantic structure corresponding to three pounds of carrots grammar. In EPICURE, the grammar consists of phrase structure rules annotated with path equa- tions which determine the relationships between semantic units and syntactic units: the path equa- tions specify arbitrary constituents (either com- plex or atomic) of feature structures. There is insufficient space here to show the en- tire NP grammar, but we provide some representa- tive rules in figure 9 (although these rules are ex- pressed here in a PATR-Iike formalism, within EPI- CURE they are encoded as PROLOG definite clause grammar (DCG) rules (Clocksin and Mellish 1981)). Applying these rules to the surface semantic struc- tures described above results in the generation of the appropriate surface linguistic strings. CONCLUSION In this paper, we have described the processes used in EPICURE to produce noun phrase referring ex- pressions. EPICURE is implemented in C-PROLOG running under UNIX. The algorithms used in the system permit the generation of a wide range of pronominal forms, one-anaphoric forms and full noun phrase structures, including partitives and pseudo-partitives. ACKNOWLEDGEMENTS The work described here has benefited greatly from discussions with Ewan Klein, Graeme Ritchie, :Ion Oberlander, and Marc Moens, and from Bonnie Webber's encouragement. REFERENCES Appelt, Douglas E. (1985) Planning English Refer- ring Expressions. Artificial Intelligence, 26, 1-33. Clocksin, William F. and Melllsh, Christopher S. (1981) Programming in Prolog. Berlin: Springer- Verlag. Dale, Robert (1988) The Generation of Subsequent Referring Expressions in Structured Discourses. Chapter 5 in Zock, M. and Sabah, G. (eds.) Ad- uances in Natural Language Generation: An Inter- disciplinary Perspective, Volume 2, pp58-75. Lon- don: Pinter Publishers Ltd. Dale, Robert (1989a) Generating Recipes: An Over- view of EPICURE. Extended Abstracts of the Sec- ond European Natural Language Generation Work- shop, Edinburgh, April 1989. Dale, Robert (1989b) Generating Referring Ex- pressions in a Domain of Objects and Processes. PhD Thesis, Centre for Cognitive Science, Univer- sity of Edinburgh. Grice, H. Paul (1975) Logic and Conversation. In Cole, P. and Morgan, J. L. (eds.) Syntax and Se- mantics, Volume 3: Speech Acts, pp41-58. New York: Academic Press. Grosz, Barbara J. (1977} The Representation and Use of Focus in Dialogue. Technical Note No. 151, 74 NP N2 Nll NPx NPI ---4. 
Dee N1 <Dee sere> <NP 8yn agr> <N1 syn agr> <Dee syn agr> <N1 sere> N <N sent> AP NI2 <AP sere> <NI~ sere head> <NP2 sere> <N1 sere> <NI 8yn ayr> <NPa 8era statuJ> <NP2 sere status> <NPa 8era> <PP 8era> = <NP sere status> = <NP sere spec agr> = <NP syn agr> = <N1 syn agr> = <NP sere spec desc> = <N1 sent head> = <Nll sere rood> -- <Nlx sere head> = <NPx sere spec desc specx > = <NPx sere spec desc spe¢2> = <NPx sere spec agr> = <NPz sere status> = <NPx sere status> = <NPx sere spec desc spec> = <NPx sere spec desc set> Figure 9: A fragment of the noun phrase grammar SRI International, Menlo Park, Ca., July, 1977. Grosz, Barbara J., Joshi, Aravind K. and Wein- stein, Scott (1983) Providing a Unified Account of Definite Noun Phrases in Discourse. In Proceed- ings of the ~lst Annual Meeting o/the Associa- tion for Computational Linguistics, Massachusetts Institute of Technology, Cambridge, Mass., 15-17 June, 1983, pp44-49. Grosz, Barbara J. and Sidner, Candace L. (1986) Attention, Intentions, and the Structure of Dis- course. Computational Linguistics, 12, 175-204. Hobbs, Jerry R. (1978) Resolving Pronoun Refer- ences. Lingua, 44, 311-338. Kamp, Hans (1975) Two Theories about Adjec- tives. In Keenan, E. L. (ed.) Formal Semantics of Natural Language: Papers from a colloquium spon- sored by King's College Research Centre, Cam- bridge, pp123-155. Cambridge: Cambridge Uni- versity Press. Karttunen, Lauri (1986) D-PATR: A Development Environment for Unification-Based Grammars. In Proceedings of the 11th International Conference on Computational Linguistics, Bonn, 25-29 Au- gust, 1986, pp74-80. Keenan, Edward L. and Faltz, Leonard M. (1978) Logical Types for Natural Language. UCLA Occa- sional Papers in Linguistics, No. 3. McDonald, David D. (1980) Natural Language Gen- eration as a Process of Decision-Making under Con- straints. PhD Thesis, Department of Computer Science and Electrical Engineering, MIT. Massam, Diane and Roberge, Yves (1989) Recipe Context Null Objects in English. Linguistic In- quiry, 20, 134--139. Shieber, Stuart M. (1980) An Introduction to Unification- based Approaches to Grantmar. Chicago, Illinois: The University of Chicago Press. Webber, Bonnie Lynn (1979) A Formal Approach to Discourse Anaphora. London: Garland Pub- lishing. 75
POLYNOMIAL TIME PARSING OF COMBINATORY CATEGORIAL GRAMMARS*

K. Vijay-Shanker
Department of CIS
University of Delaware
Delaware, DE 19716

David J. Weir
Department of EECS
Northwestern University
Evanston, IL 60208

Abstract

In this paper we present a polynomial time parsing algorithm for Combinatory Categorial Grammar. The recognition phase extends the CKY algorithm for CFG. The process of generating a representation of the parse trees has two phases. Initially, a shared forest is built that encodes the set of all derivation trees for the input string. This shared forest is then pruned to remove all spurious ambiguity.

* This work was partially supported by NSF grant IRI-8909810. We are very grateful to Aravind Joshi, Michael Niv, Mark Steedman and Kent Wittenburg for helpful discussions.

1 Introduction

Combinatory Categorial Grammar (CCG) [7, 5] is an extension of Classical Categorial Grammar in which both function composition and function application are allowed. In addition, forward and backward slashes are used to place conditions on the relative ordering of adjacent categories that are to be combined. There has been considerable interest in parsing strategies for CCG [4, 11, 8, 2]. One of the major problems that must be addressed is that of spurious ambiguity. This refers to the possibility that a CCG can generate a large number of (exponentially many) derivation trees that assign the same function argument structure to a string. In [9] we noted that a CCG can also generate exponentially many genuinely ambiguous (non-spurious) derivations. This constitutes a problem for the approaches cited above, since it results in their respective algorithms taking exponential time in the worst case. The algorithm we present is the first known polynomial time parser for CCG.

The parsing process has three phases. Once the recognizer decides (in the first phase) that an input can be generated by the given CCG, the set of parse trees can be extracted in the second phase. Rather than enumerating all parses, in Section 3 we describe how they can be encoded by means of a shared forest (represented as a grammar) with which an exponential number of parses are encoded using a polynomially bounded structure. This shared forest encodes all derivations, including those that are spuriously ambiguous. In Section 4.1, we show that it is possible to modify the shared forest so that it contains no spurious ambiguity. This is done (in the third phase) by traversing the forest, examining two levels of nodes at each stage, detecting spurious ambiguity locally. The three stage process of recognition, building the shared forest, and eliminating spurious ambiguity takes polynomial time.

1.1 Definition of CCG

A CCG, G, is denoted by (VT, VN, S, f, R) where VT is a finite set of terminals (lexical items), VN is a finite set of nonterminals (atomic categories), S is a distinguished member of VN, f is a function that maps elements of VT to finite sets of categories, and R is a finite set of combinatory rules. Combinatory rules have the following form, where x, y, z1, ..., zn are variables and each |i ∈ {\, /}:

1. Forward application: x/y  y → x

2. Backward application: y  x\y → x

3. Forward composition (for n ≥ 1): x/y  y|1z1|2z2...|nzn → x|1z1|2z2...|nzn

4. Backward composition (for n ≥ 1): y|1z1|2z2...|nzn  x\y → x|1z1|2z2...|nzn

In the above rules, x|y is the primary category and the other left-hand-side category is the secondary category.
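The rule schemata can be restated executably. In the sketch below a category x|1z1...|nzn is a pair (target, args) with the arguments listed outside-in, so the last element is the first argument to be consumed; this encoding and the function names are mine, chosen only to mirror the definitions just given.

    # A category is (target, args); args is a tuple of (slash, category) pairs,
    # outermost argument last. 'S\NP/NP' is ('S', (('\\', NP), ('/', NP))).
    def application(primary, secondary, forward=True):
        """Rules 1 and 2: x/y y -> x and y x\\y -> x."""
        target, args = primary
        slash = '/' if forward else '\\'
        if args and args[-1] == (slash, secondary):
            return (target, args[:-1])
        return None

    def composition(primary, secondary, n, forward=True):
        """Rules 3 and 4: x/y y|1z1...|nzn -> x|1z1...|nzn (n >= 1)."""
        target, args = primary
        s_target, s_args = secondary
        slash = '/' if forward else '\\'
        y = (s_target, s_args[:len(s_args) - n])   # secondary minus its last n args
        if 1 <= n <= len(s_args) and args and args[-1] == (slash, y):
            return (target, args[:-1] + s_args[len(s_args) - n:])
        return None

    NP = ('NP', ())
    tv = ('S', (('\\', NP), ('/', NP)))            # the transitive-verb category S\NP/NP
    print(application(tv, NP))                     # ('S', (('\\', ('NP', ())),))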
Also, we refer to the leftmost nonterminal of a category as the target of the category. We assume that categories are parenthesis-free. The results presented here, however, generalize to the case of fully parenthesized categories. The version of CCG used in [7, 5] allows for the possibility that the use of these combinatory rules can be restricted. Such restrictions limit the possible categories that can instantiate the variables. We do not consider this possibility here, though the results we present can be extended to handle these restrictions.

Derivations in a CCG involve the use of the combinatory rules in R. Let ⇒ be defined as follows, where T1 and T2 are strings of categories and terminals and c, c1, c2 are categories.

• If c1c2 → c is an instance of a rule in R then T1cT2 ⇒ T1c1c2T2.

• If c ∈ f(a) for some a ∈ VT and category c then T1cT2 ⇒ T1aT2.

The string language generated is defined as L(G) = {w | S ⇒* w, w ∈ VT*}.

1.2 Context-Free Paths

In Section 2 we describe a recognition algorithm that involves extending the CKY algorithm for CFG. The differences between the CKY algorithm and the one presented here result from the fact that the derivation tree sets of CCG have more complicated path sets than the (regular) path sets of CFG tree sets. Consider the set of CCG derivation trees of the form shown in Figure 1 for the language {ww | w ∈ {a, b}*}.

Figure 1: Trees with context-free paths: a derivation tree in which categories such as S|A|B|B, S|A|B/S and S|B/S grow and shrink along a single path, with the a's and b's read off at the leaves.

Due to the nature of the combinatory rules, categories behave rather like stacks, since their arguments are manipulated in a last-in-first-out fashion. This has the effect that the paths can exhibit nested dependencies as shown in Figure 1. Informally, we say that CCG tree sets have context-free paths. Note that the tree sets of CFG have regular paths and cannot produce such tree sets.

2 Recognition of CCG

The recognition algorithm uses a 4-dimensional array L for the input a1...an. In entries of the array L we cannot store complete categories, since exponentially many categories can derive the substring ai...aj;¹ it is necessary to store categories carefully.² It is possible, however, to share parts of categories between different entries in L. This follows from the fact that the use of a combinatory rule depends only on (1) the target category of the primary category of the rule; (2) the first argument (suffix of length 1) of the primary category of the rule; and (3) the entire (bounded) secondary category. Therefore, we need only find this (bounded) information in each array entry in order to determine whether a rule can be used. Entries of the form ((A, α), T) are stored in L[i, j][p, q]. This encodes all categories whose target is A and suffix α, and that derive ai...aj. The tail T and the indices p and q are used to locate the remaining part of these categories.

¹ This is possible since the length of the category can be linear with respect to j − i.

² Since previous approaches to CCG parsing store entire categories, they can take exponential time.

Before describing precisely the information that is stored in L we give some definitions. If α ∈ ({\,/}VN)ⁿ then |α| = n. Given a CCG, G = (VT, VN, S, f, R), let k1 be the largest n such that R contains a rule whose secondary category is y|1z1|2z2...|nzn, and let k2 be the maximum of k1 and all n where there is some c ∈ f(a) such that c = Aα and |α| = n.

In considering how categories that are derived in the course of a derivation should be stored we have two cases.
1. Categories that are either introduced by lexical items appearing in the input string or whose length is less than k1 and could therefore be secondary categories of a rule. Thus all categories whose length is bounded by k2 are encoded in their entirety within a single array entry.

2. All other categories are encoded with a sharing mechanism in which we store up to k1 arguments locally, together with an indication of where the remaining arguments can be found.

Next, we give a proposition that characterizes when an entry is included in the array by the algorithm. An entry ((A, α), T) ∈ L[i, j][p, q], where A ∈ VN and α ∈ ({\,/}VN)*, when one of the following holds.

If T = γ then γ ∈ {\,/}VN, 1 ≤ |α| ≤ k1, and for some α' ∈ ({\,/}VN)* the following hold: (1) Aα'α ⇒* a_i...a_{p−1} Aα'γ a_{q+1}...a_j; (2) Aα'γ ⇒* a_p...a_q; and (3), informally, the category Aα'γ in (1) above is "derived" from Aα'α such that there is no intervening point in the derivation, before reaching Aα'γ, at which all of the suffix α of Aα'α has been "popped".

Alternatively, if T = −, then 0 ≤ |α| < k1 + k2, (p, q) = (0, 0) and Aα ⇒* a_i...a_j. Note that we have |α| < k1 + k2 rather than |α| ≤ k2 (as might have been expected from the discussion above). This is the case because a category whose length is strictly less than k2 can, as a result of function composition, result in a category of length < k1 + k2. Given the way that we have designed the algorithm below, the latter category is stored in this (non-sharing) form.

2.1 Algorithm

If c ∈ f(ai) for some category c such that c = Aα, then include the tuple ((A, α), −) in L[i, i][0, 0].

For some i and j, 1 ≤ i < j ≤ n, consider each rule x/y y|1z1...|mzm → x|1z1...|mzm.² For some k, i ≤ k < j, we look for some ((B, β), −) ∈ L[k+1, j][0, 0] where |β| = m (corresponding to the secondary category of the rule), and we look for ((A, α/B), T) ∈ L[i, k][p, q] for some α, T, p and q (corresponding to the primary category of the rule). From these entries in L we know that for some α', Aα'α/B ⇒* ai...ak and Bβ ⇒* ak+1...aj. Thus, by the combinatory rule given above we have Aα'αβ ⇒* ai...aj, and we should store an encoding of the category Aα'αβ in L[i, j]. This encoding depends on α', α, β, and T.

² Backward composition and application are treated in the same way as this rule, except that all occurrences below of i and k are swapped with occurrences of k+1 and j, respectively.

• T = −. If |αβ| < k1 + k2 then (case 1a) add ((A, αβ), −) to L[i, j][0, 0]. Otherwise (case 1b) add ((A, β), /B) to L[i, j][i, k].

• T ≠ − and m > 1. The new category is longer than the one found in L[i, k][p, q]. If α ≠ ε then (case 2a) add ((A, β), /B) to L[i, j][i, k]; otherwise (case 2b) add ((A, β), T) to L[i, j][p, q].

• T ≠ − and m = 1. (Case 3.) The new category has the same length as the one found in L[i, k][p, q]. Add ((A, αβ), T) to L[i, j][p, q].

• T ≠ − and m = 0. The new category has a length one less than the one found in L[i, k][p, q]. If α ≠ ε then (case 4a) add ((A, α), T) to L[i, j][p, q]. Otherwise (case 4b), since α = ε we have to look for the part of the category that is not stored locally in L[i, k][p, q]. This may be found by looking in each entry L[p, q][r, s] for each ((A, β'γ), T') (where γ is the tail T found in L[i, k][p, q]). We know that either T' = − or β' ≠ ε, and we add ((A, β'), T') to L[i, j][r, s]. Note that for some α″, Aα″β'γ ⇒* ap...aq, Aα″β'/B ⇒* ai...ak, and thus by the combinatory rule above Aα″β' ⇒* ai...aj.
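For orientation, the chart organization (without the suffix-sharing that makes the algorithm polynomial) can be sketched as a naive CKY loop over whole categories. As footnote 2 notes, storing whole categories is exactly what can go exponential, so this is a baseline for reading the cases above, not the paper's algorithm; it uses the same (target, args) encoding as the earlier sketch, and backward rules are omitted for brevity.

    def recognize(words, lexicon, max_n, start):
        """Naive CKY over whole CCG categories (exponential in the worst case)."""
        def combine_forward(left, right):
            target, args = left
            if not args or args[-1][0] != '/':
                return
            if args[-1][1] == right:                          # x/y  y -> x
                yield (target, args[:-1])
            r_target, r_args = right
            for n in range(1, min(max_n, len(r_args)) + 1):   # x/y y|z1..zn -> x|z1..zn
                if args[-1][1] == (r_target, r_args[:len(r_args) - n]):
                    yield (target, args[:-1] + r_args[len(r_args) - n:])

        n = len(words)
        chart = {(i, i): set(lexicon[words[i]]) for i in range(n)}
        for span in range(1, n):
            for i in range(n - span):
                j = i + span
                cell = chart.setdefault((i, j), set())
                for k in range(i, j):
                    for left in chart.get((i, k), set()):
                        for right in chart.get((k + 1, j), set()):
                            cell.update(combine_forward(left, right))
        return (start, ()) in chart[(0, n - 1)]

    NP = ('NP', ())
    S_NP = ('S', (('/', NP),))
    print(recognize(["v", "np"], {"v": [S_NP], "np": [NP]}, 1, 'S'))  # True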
As in the case of the CKY algorithm we should have loop statements that allow i, j to range from 1 through n, such that the length of the spanned substring starts from 1 (i = j) and increases to n (i = 1 and j = n). When we consider placing entries in L[i, j] (i.e., to detect whether a category derives ai...aj) we have to consider whether there are two subconstituents (to simplify the discussion, let us consider only forward combinations) which span the substrings ai...ak and ak+1...aj. Therefore we need to consider all values for k between i and j − 1, and consider the entries in L[i, k][p, q] and L[k+1, j][0, 0] where i ≤ p ≤ q ≤ k or p = q = 0.

The above algorithm can be shown to run in time O(n⁷), where n is the length of the input. In case 4b we have to consider all possible values for r, s between p and q. The complexity of this case dominates the complexity of the algorithm, since the other cases involve fewer variables (i.e., r and s are not involved). Case 4b takes time O((q − p)²), and with the loops for i, j, k, p, q ranging from 1 through n, the time complexity of the algorithm is O(n⁷). However, this algorithm can be improved to obtain a time complexity of O(n⁶) by using the same method employed in [9]. This improvement is achieved by moving part of case 4b outside of the k loop, since looking for ((A, β'γ), T') in L[p, q][r, s] need not be done within the k loop. The details of the improved method may be found in [9], where parsing of Linear Indexed Grammar (LIG) was considered. Note that O(n⁶) (which we achieve with the improved method) is the best known result for parsing Tree Adjoining Grammars, which generate the same class of languages generated by CCG and LIG.

3 Recovering All Parses

At this stage, rather than enumerating all the parses, we will encode these parses by means of a shared forest structure. The encoding of the set of all parses must be concise enough so that even an exponential number of parses can be represented by a polynomial sized shared forest. Note that this is not achieved by any previously presented shared forest representation for CCG [8].

3.1 Representing the Shared Forest

Recently, there has been considerable interest in the use of shared forests to represent ambiguous parses in natural language processing [1, 8]. Following Billot and Lang [1], we use grammars as a representation scheme for shared forests. In our case, the grammars we produce may also be viewed as acyclic and-or graphs, which is the more standard representation used for shared forests.

The grammatical formalism we use for the representation of shared forests is Linear Indexed Grammar (LIG).³ Like Indexed Grammars (IG), in a LIG stacks containing indices are associated with nonterminals, with the top of the stack being used to determine the set of productions that can be applied. Briefly, we define LIG as follows. If α is a sequence of indices and γ is an index, we use the notation A[αγ] to represent the case where a stack is associated with a nonterminal A having γ on top, with the remaining stack being α. We use the following two forms of productions:

A[..α] → A1[α1] ... Ai−1[αi−1] Ai[..β] Ai+1[αi+1] ... An[αn]

A[α] → a

The first form of production is interpreted as: if a nonterminal A is associated with some stack with the sequence α on top (denoted [..α]), it can be rewritten such that the i-th child inherits this stack with β replacing α. The remaining children inherit the bounded stacks given in the production. The second form of production indicates that if a nonterminal A has a stack containing the sequence α then it can be rewritten to a terminal symbol a. The language generated by a LIG is the set of strings derived from the start symbol with an empty stack.

³ It has been shown in [10, 3] that LIG and CCG generate the same class of languages.
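A minimal executable reading of these two production forms, assuming the obvious encoding of stacks as tuples with the top at the end; the representation and the tiny rewrite function are mine, meant only to make the [..α] notation concrete.

    # A LIG item is (nonterminal, stack); stacks are tuples with the top at the end.
    def applies(item, prod_top):
        """Does a production A[..alpha] -> ... match this item's stack top?"""
        nt, stack = item
        A, alpha = prod_top
        return nt == A and stack[len(stack) - len(alpha):] == alpha

    def rewrite(item, prod_top, children):
        """Rewrite A[..alpha]; children are ('inherit', beta) or ('fixed', stack)."""
        _, stack = item
        rest = stack[:len(stack) - len(prod_top[1])]      # pop alpha, keep the ".."
        return [(B, rest + beta) if kind == 'inherit' else (B, beta)
                for (B, kind, beta) in children]

    # S[..g] -> S[..] T[g]: the first child inherits the popped stack.
    item = ('S', ('g', 'g'))
    assert applies(item, ('S', ('g',)))
    print(rewrite(item, ('S', ('g',)), [('S', 'inherit', ()), ('T', 'fixed', ('g',))]))
    # -> [('S', ('g',)), ('T', ('g',))]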
Like Indexed Grammars (IG), in a LIG stacks containing indices are associated with nonterminals, with the top of the stack being used to determine the set of productions that can be applied. Briefly, we define LIG as follows. If a is a sequence of indices and 7 is an index, we use the notation A[c~7] to represent the case where a stack is associated with a nonterminal A having -y on top with the remaining stack being the c~. We use the following forms of productions. aIt has been shown in [I0, 3] that LIG and CCG generate the same class of languages. 3.2 Building the Shared Forest We start building the shared forest after the recognizer has completed the array L and decided that a given input al ... an is well-formed. In recovering the parses, having established that some ~ is in an element of L, we search other elements of L to find two categories that combine to give a. Since categories behave like stacks the use of CFG for the representation of the set of parse trees is not suitable. For our purposes the LIG formalism is appropriate since it involves stacks and production describing how a stack can be decomposed based on only its top and bottom elements. We refer to the LIG representing the shared forest as Gsl. The set of indices used in Ga! have the form (A, a, i, j). The terminals used in Gs/ are names for the combinatory rule or the lexical assignment used (thus derived terminal strings encode derivations in G). For example, the terminal Fm indicates the use of the forward composition rule z/y yllzII2... ImZm and (c, a) indicates the lexical assignment, c to the symbol a. We use one nonterminal, P. An input al...an is accepted if it is the case that ((S, e), -) 6 L[1, n][0, 0]. We start by marking this entry. By marking an entry ((A, c~), T) e L[i, j]~, q] we are predicting that there is some derivation tree, rooted with the category S and spanning the input al ...a,, in which a category represented by this en- try will participate. Therefore at some point we will have to consider this entry and build a shared forest to represent all derivations from this category. Since we start from ((S, e),-) E L[1, hi[0, 0] and proceed to build a (representation of) derivation trees in a top down fashion we will have loop statements that vary the substring spanned (a~...aj) from the largest possible (i.e., i = 1 and j = n) to the smallest (i.e., i = j). Within these loop statements the algo- rithm (with some particular values for i and j) will consider marked entries, say ( (A, ct), T) E L[i, j]~, q] (where i < p < q < j or p = q = 0), and will build representations of all derivations from the category (specified by the marked entry) such that the input spanned is ai...aj. Since ((A, ~), T) is a representa- tion of possibly more than one category, several cases arise depending on ot and T. All these cases try to un- cover the reasons why the recognizer placed thin entry in L[i, j]~, q]. Hence the cases considered here are in- verses of the cases considered in the recognition phase (and noted in the algorithm given below). Mark ((S, e), -) in L[1, n][0, 0]. By varying i from 1 to n, j from n to i and for all ap- propriate values of p and q if there is a marked entry, say ((d, a), T) ~ L[i,j]~p, q] then do the following. 
• Type 1 Production (inverse of 1a, 3, and 4a)
If for some k such that i ≤ k < j, some α, β such that α′ = αβ, and B ∈ V_N we have ((A, α/B), T) ∈ L[i,k][p,q] and ((B, β), −) ∈ L[k+1,j][0,0], then let p be the production

P[..(A, α′, i, j)] → F_m P[..(A, α/B, i, k)] P[(B, β, k+1, j)]

where m = |β|. If p is not already present in G_sf then add p and mark ((A, α/B), T) ∈ L[i,k][p,q] as well as ((B, β), −) ∈ L[k+1,j][0,0].

• Type 2 Production (inverse of 1b and 2a)
If for some k such that i ≤ k < j, and some α, B, T′, r, s we have ((A, α/B), T′) ∈ L[i,k][r,s] where (p,q) = (i,k), ((B, α′), −) ∈ L[k+1,j][0,0], T = /B, and the lengths of α and α′ meet the requirements on the corresponding strings in cases 1b and 2a of the recognition algorithm, then let p be the production

P[..(A, α/B, i, k)(A, α′, i, j)] → F_m P[..(A, α/B, i, k)] P[(B, α′, k+1, j)]

where m = |α′|. If p is not already present in G_sf then add p and mark ((A, α/B), T′) ∈ L[i,k][r,s] and ((B, α′), −) ∈ L[k+1,j][0,0].

• Type 3 Production (inverse of 2b)
If for some k such that i ≤ k < j, and some B, it is the case that ((A, /B), T) ∈ L[i,k][p,q] and ((B, α′), −) ∈ L[k+1,j][0,0] where |α′| > 1, then let p be the production

P[..(A, α′, i, j)] → F_m P[..(A, /B, i, k)] P[(B, α′, k+1, j)]

where m = |α′|. If p is not already present in G_sf then add p and mark ((A, /B), T) ∈ L[i,k][p,q] and ((B, α′), −) ∈ L[k+1,j][0,0].

• Type 4 Production (inverse of 4b)
If for some k such that i ≤ k < j, and some B, γ′, r, s, we have ((A, /B), T′) ∈ L[i,k][r,s], ((A, α′γ′), T) ∈ L[r,s][p,q], and ((B, ε), −) ∈ L[k+1,j][0,0], then let p be the production

P[..(A, α′, i, j)] → F₀ P[..(A, α′γ′, r, s)(A, /B, i, k)] P[(B, ε, k+1, j)]

If p is not already present in G_sf then add p and mark ((A, /B), T′) ∈ L[i,k][r,s] and ((B, ε), −) ∈ L[k+1,j][0,0].

• Type 5 Production
If j = i, then it must be the case that T = − and there is a lexical assignment assigning the category Aα′ to the input symbol a_i. Therefore, if it has not already been included, output the production

P[(A, α′, i, i)] → (Aα′, a_i)
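Each case above follows the same pattern: reconstruct the subentries the recognizer must have combined, emit the corresponding G_sf production if it is new, and mark those subentries for later processing. Purely as an illustration of that pattern, here is a hypothetical Python rendering of the Type 1 case; the tuple encoding of stacks (with /B represented as the pair ("/", B)) is our own assumption, not part of the paper's formalism.

    # Hypothetical Type 1 builder (inverse of cases 1a, 3, and 4a).
    def type1(entry, i, j, p, q, L, productions, marked):
        (A, alpha_prime), T = entry
        for k in range(i, j):                             # i <= k < j
            for cut in range(len(alpha_prime) + 1):       # alpha' = alpha beta
                alpha, beta = alpha_prime[:cut], alpha_prime[cut:]
                for ((B, beta2), T2) in L[k + 1][j].get((0, 0), ()):
                    if T2 is not None or beta2 != beta:
                        continue
                    left = ((A, alpha + (("/", B),)), T)  # primary entry (A, alpha/B)
                    if left not in L[i][k].get((p, q), ()):
                        continue
                    prod = (("P", A, alpha_prime, i, j),  # lhs P[..(A, alpha', i, j)]
                            "F%d" % len(beta),            # terminal F_m, m = |beta|
                            (("P", A, alpha + (("/", B),), i, k),
                             ("P", B, beta, k + 1, j)))
                    if prod not in productions:
                        productions.add(prod)
                        marked.add((left, (i, k, p, q)))
                        marked.add((((B, beta), None), (k + 1, j, 0, 0)))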
The number of terminals and nonterminals in the grammar is bounded by a constant. The number of indices and the number of productions in G_sf are O(n⁵). Hence the shared forest representation we build is polynomial with respect to the length of the input, n, despite the fact that the number of derivation trees could be exponential.

We will now informally argue that G_sf can be built in time O(n⁷). Suppose an entry ((A, α′), T) is in L[i,j][p,q] indicating that for some β the category Aβα′ dominates the substring a_i ... a_j. The method outlined above will build a shared forest structure to represent all such derivations. In particular, we will start by considering a production whose left-hand side is given by P[..(A, α′, i, j)]. It is clear that the introduction of productions of type 4 dominates the time complexity since this case involves three other variables (over input positions), i.e., r, s, k, whereas the introduction of the other types of productions involves only one new variable k. Since we have to consider all possible values for r, s, k within the range i through j, this step will take O((j − i)³) time. With the outer loops for i, j, p, and q allowing these indices to range from 1 through n, the time taken by the algorithm is O(n⁷).

Since the algorithm given here for building the shared forest simply finds the inverses of moves made in the recognition phase, we could have modified the recognition algorithm so as to output appropriate G_sf productions during the process of recognition without altering the asymptotic complexity of the recognizer. However, this would cause the introduction of useless productions, i.e., those that describe subderivations which do not partake in any derivation from the category S spanning the entire input string a₁ ... a_n.

4 Spurious Ambiguity

We say that a given CCG, G, exhibits spurious ambiguity if there are two distinct derivation trees for a string w that assign the same function argument structure. Two well-known sources of such ambiguity in CCG result from type raising and the associativity of composition. Much attention has been given to the latter form of spurious ambiguity and this is the one that we will focus on in this paper. To illustrate the problem, consider the following string of categories.

A₁/A₂ A₂/A₃ ... A_{n−1}/A_n

Any pair of adjacent categories can be combined using a composition rule. The number of such derivations is given by the Catalan series and is therefore exponential in n. We return a single representative of the class of equivalent derivation trees (arbitrarily chosen to be the right-branching tree in the later discussion).
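The exponential growth is easy to verify directly: the number of distinct binary bracketings of a chain of m composable categories is the (m−1)st Catalan number. The following plain Python check (our own illustration) confirms the figures.

    from math import comb

    def catalan(k):
        return comb(2 * k, k) // (k + 1)

    def derivations(m):            # m categories in a composition chain
        return catalan(m - 1)

    for m in (2, 5, 10, 20):
        print(m, derivations(m))   # 1, 14, 4862, 1767263190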
4.1 Dealing with Spurious Ambiguity

We have discussed how the shared forest representation, G_sf, is built from the contents of array L. The recognition algorithm does not consider whether some of the derivations built are spuriously equivalent and this is reflected in G_sf. We show how productions of G_sf can be marked to eliminate spuriously ambiguous derivations. Let us call this new grammar G_ns. As stated earlier, we are only interested in detecting spuriously equivalent derivations arising from the associativity of composition.

Consider the example involving spurious ambiguity shown in Figure 2. This example illustrates the general form of spurious ambiguity (due to associativity of composition) in the derivation of a string made up of contiguous substrings a_{i₁} ... a_{j₁}, a_{i₂} ... a_{j₂}, and a_{i₃} ... a_{j₃} resulting in a category A₁α′α₂α₃. For the sake of simplicity we assume that each combination indicated is a forward combination and hence i₂ = j₁ + 1 and i₃ = j₂ + 1.

[Figure 2 (diagram): the left-branching derivation combines A₁α₁/A₂ with A₂α₂/A₃ first (combination 1) and then combines the result with A₃α₃ (combination 2); the right-branching derivation combines A₂α₂/A₃ with A₃α₃ first (combination 3) and then combines A₁α₁/A₂ with the result (combination 4). Both derive A₁α′α₂α₃ over a_{i₁} ... a_{j₃}.]

Figure 2: Example of spurious ambiguity

Each of the 4 combinations that occur in the above figure arises due to the use of a combinatory rule, and hence will be specified in G_sf by a production. For example, it is possible for combination 1 to be represented by the following Type 1 production.

P[..(A₁, α′α₂/A₃, i₁, j₂)] → F_m P[..(A₁, α′/A₂, i₁, j₁)] P[(A₂, α₂, i₂, j₂)]

where i₂ = j₁ + 1, α′ is a suffix of α₁ of length less than k₁, and m = |α₂|. Since A₂α₂/A₃ and A₃α₃ are used as secondary categories, their lengths are bounded by k₁ + 1. Hence these categories will appear in their entirety in their representations in the G_sf productions. The four combinations⁴ will hence be represented in G_sf by the productions:

Combination 1: P[..(A₁, α′α₂/A₃, i₁, j₂)] → F_{m₁} P[..(A₁, α′/A₂, i₁, j₁)] P[(A₂, α₂, i₂, j₂)]
Combination 2: P[..(A₁, α′α₂α₃, i₁, j₃)] → F_{m₂} P[..(A₁, α′α₂/A₃, i₁, j₂)] P[(A₃, α₃, j₂+1, j₃)]
Combination 3: P[..(A₂, α₂α₃, j₁+1, j₃)] → F_{m₃} P[..(A₂, α₂/A₃, j₁+1, j₂)] P[(A₃, α₃, j₂+1, j₃)]
Combination 4: P[..(A₁, α′α₂α₃, i₁, j₃)] → F_{m₄} P[..(A₁, α′/A₂, i₁, j₁)] P[(A₂, α₂α₃, j₁+1, j₃)]

where m₁ = |α₂|, m₂ = m₃ = |α₃|, and m₄ = |α₂α₃|.

⁴We consider the case where each combination is represented by a Type 1 production.

These productions give us sufficient information to detect spurious ambiguity locally, i.e., the local left and right branching derivations. Suppose we choose to retain the right branching derivations only. We are no longer interested in combination 2. Therefore we mark the production corresponding to this combination. This production is not discarded at this stage because although it is marked it might still be useful in detecting more spurious ambiguity.

[Figure 3 (diagram): the configuration of Figure 2 extended on the left with a category A₀α₀/A₁ spanning a_{i₀} ... a_{j₀}; combination 5 combines A₀α₀/A₁ with the result of combination 1, combination 6 combines that result with A₃α₃, and combination 7 combines A₀α₀/A₁ with A₁α₁α₂α₃, the result of combination 2.]

Figure 3: Reconsidering a marked production

Notice in Figure 3 that the subtree obtained from considering combination 5 and combination 1 is right branching whereas the entire derivation is not. Since we are looking for the presence of spurious ambiguity locally (i.e., by considering two step derivations), in order to mark this derivation we can only compare it with the derivation where combination 7 combines A₀α₀/A₁ with A₁α₁α₂α₃ (the result of combination 2).⁵ Notice we would have already marked the production corresponding to combination 2. If this production had been discarded then the required comparison could not have been made and the production due to combination 6 could not have been marked. At the end of the marking process all marked productions can be discarded.⁶

⁵Although this category is also the result of combination 4, the tree with combinations 5 and 6 can not be compared with the tree having the combinations 7 and 4.
⁶Steedman [6] has noted that although all multiple derivations arising due to the so-called spurious ambiguity yield the same "semantics" they need not be considered useless.

In the procedure to build the grammar G_ns we start with the productions for lexical assignments (type 5). By varying i₁ from n to 1, j₃ from i₁ + 2 to n, j₁ from j₃ to i₁ + 1, and i₃ from i₂ + 1 to j₃, we look for a group of four productions (as discussed above) that locally indicates the presence of spurious ambiguity. Productions involved in derivations that are not right branching are marked.

It can be shown that this local marking of spurious derivations will eliminate all and only the spuriously ambiguous derivations. That is, enumerating all derivations using unmarked productions will give all and only genuine derivations. If there are two derivations that are spuriously ambiguous (due to the associativity of composition) then in these derivations there must be at least one occurrence of subderivations of the nature depicted in Figure 3. This will result in the marking of appropriate productions and hence the spurious ambiguity will be detected. By induction it is also possible to show that only the spuriously ambiguous derivations will be detected by the marking process outlined above.

5 Conclusions

Several parsing strategies for CCG have been given recently (e.g., [4, 11, 2, 8]). These approaches have concentrated on coping with ambiguity in CCG derivations. Unfortunately these parsers can take exponential time. They do not take into account the fact that categories spanning a substring of the input could be of a length that is linearly proportional to the length of the input spanned and hence exponential in number. We adopt a new strategy that runs in polynomial time.
We take advantage of the fact that regardless of the length of the category only a bounded amount of information (at the beginning and end of the category) is used in determining when a combinatory rule can apply. We have also given an algorithm that builds a shared forest encoding the set of all derivations for a given input. Previous work on the use of shared forest structures [1] has focussed on those appropriate for context-free grammars (whose derivation trees have regular path sets). Due to the nature of the CCG derivation process and the degree of ambiguity possible, this form of shared forest structure is not appropriate for CCG. We have proposed a shared forest representation that is useful for CCG and other formalisms (such as Tree Adjoining Grammars) used in computational linguistics that share the property of producing trees with context-free paths.

Finally, we show the shared forest can be marked so that during the process of enumerating all parses we do not list two derivations that are spuriously ambiguous. In order to be able to eliminate the spurious ambiguity problem in polynomial time, we examine two-step derivations to locally identify when they are equivalent rather than looking at the entire derivation trees. This method was first considered by [2], where this strategy was applied in the recognition phase. The present algorithm removes spurious ambiguity in a separate phase after recognition has been completed. This is a reasonable approach when a CKY-style recognition algorithm is being used (since the degree of ambiguity has no effect on recognition time). However, if a predictive (e.g., Earley-style) parser were employed then it would be advantageous to detect spurious ambiguity during the recognition phase. In a predictive parser the performance on an ambiguous input may be inferior to that on an unambiguous one. Due to the spurious ambiguity problem in CCG, even without genuine ambiguity, the parser's performance may be poor if spurious ambiguity is not detected during recognition. CKY-style parsers are closely related to predictive parsers such as Earley's. Therefore, we believe that the techniques presented here, i.e., (1) the sharing of stacks used in recognition and in the shared forest representation and (2) the local identification of spurious ambiguity (first proposed by [2]), can be adapted for use in more practical predictive algorithms.

References

[1] S. Billot and B. Lang. The structure of shared forests in ambiguous parsing. In 27th meeting Assoc. Comput. Ling., 1989.
[2] M. Hepple and G. Morrill. Parsing and derivational equivalence. In European Assoc. Comput. Ling., 1989.
[3] A. K. Joshi, K. Vijay-Shanker, and D. J. Weir. The convergence of mildly context-sensitive grammar formalisms. In T. Wasow and P. Sells, editors, The Processing of Linguistic Structure. MIT Press, 1989.
[4] R. Pareschi and M. J. Steedman. A lazy way to chart-parse with categorial grammars. In 25th meeting Assoc. Comput. Ling., 1987.
[5] M. Steedman. Combinators and grammars. In R. Oehrle, E. Bach, and D. Wheeler, editors, Categorial Grammars and Natural Language Structures. Foris, Dordrecht, 1986.
[6] M. Steedman. Parsing spoken language using combinatory grammars. In International Workshop of Parsing Technologies, Pittsburgh, PA, 1989.
[7] M. J. Steedman. Dependency and coordination in the grammar of Dutch and English. Language, 61:523-568, 1985.
[8] M. Tomita. Graph-structured stack and natural language parsing. In 26th meeting Assoc. Comput. Ling., 1988.
[9] K. Vijay-Shanker and D. J. Weir. The recognition of Combinatory Categorial Grammars, Linear Indexed Grammars, and Tree Adjoining Grammars. In International Workshop of Parsing Technologies, Pittsburgh, PA, 1989.
[10] D. J. Weir and A. K. Joshi. Combinatory categorial grammars: Generative power and relationship to linear context-free rewriting systems. In 26th meeting Assoc. Comput. Ling., 1988.
[11] K. B. Wittenburg. Predictive combinators: a method for efficient processing of combinatory categorial grammar. In 25th meeting Assoc. Comput. Ling., 1987.
Mixed Initiative in Dialogue: An Investigation into Discourse Segmentation

Marilyn Walker
University of Pennsylvania*
Computer Science Dept.
Philadelphia, PA 19104
[email protected]

Steve Whittaker
Hewlett Packard Laboratories
Bristol, England BS12 6QZ
HP Stanford Science Center
[email protected]

Abstract

Conversation between two people is usually of MIXED-INITIATIVE, with CONTROL over the conversation being transferred from one person to another. We apply a set of rules for the transfer of control to 4 sets of dialogues consisting of a total of 1862 turns. The application of the control rules lets us derive domain-independent discourse structures. The derived structures indicate that initiative plays a role in the structuring of discourse. In order to explore the relationship of control and initiative to discourse processes like centering, we analyze the distribution of four different classes of anaphora for two data sets. This distribution indicates that some control segments are hierarchically related to others. The analysis suggests that discourse participants often mutually agree to a change of topic. We also compared initiative in Task Oriented and Advice Giving dialogues and found that both allocation of control and the manner in which control is transferred is radically different for the two dialogue types. These differences can be explained in terms of collaborative planning principles.

1 Introduction

Conversation between two people has a number of characteristics that have yet to be modeled adequately in human-computer dialogue. Conversation is BIDIRECTIONAL; there is a two way flow of information between participants. Information is exchanged by MIXED-INITIATIVE. Each participant will, on occasion, take the conversational lead. Conversational partners not only respond to what others say, but feel free to volunteer information that is not requested and sometimes ask questions of their own [Nic76]. As INITIATIVE passes back and forth between the discourse participants, we say that CONTROL over the conversation gets transferred from one discourse participant to another.

*This research was partially funded by ARO grants DAAG29-84-K-0061 and DAAL03-89-C0031PRI, DARPA grant N00014-85-K0018, and NSF grant MCS-82-19196 at the University of Pennsylvania, and by Hewlett Packard, U.K.

Why should we, as computational linguists, be interested in factors that contribute to the interactivity of a discourse? There are both theoretical and practical motivations. First, we wish to extend formal accounts of single utterances produced by single speakers to explain multi-participant, multi-utterance discourses [Pol86, CP86]. Previous studies of the discourse structure of multi-participant dialogues have often factored out the role of MIXED-INITIATIVE, by allocating control to one participant [Gro77, Coh84], or by assuming a passive listener [McK85, Coh87]. Since conversation is a collaborative process [CWG86, SSJ74], models of conversation can provide the basis for extending planning theories [GS90, CLNO90]. When the situation requires the negotiation of a collaborative plan, these theories must account for the interacting beliefs and intentions of multiple participants.

From a practical perspective, there is ample evidence that limited mixed-initiative has contributed to lack of system usability. Many researchers have noted that the absence of mixed-initiative gives rise to two problems with expert systems: They don't allow users to participate in the reasoning process, or to ask the questions they want answered [PHW82, Kid85, FL89]. In addition, question answering systems often fail to take account of the system's role as a conversational partner.
Many researchers have noted that the absence of mixed-initiative gives rise to two problems with expert systems: They don't allow users to participate in the rea- soning process, or to ask the questions they want answered[PHW82, Kid85, FL89]. In addition, ques- tion answering systems often fail to take account of the system's role as a conversational partner. 70 For example, fragmentary utterances may be inter- preted with respect to the previous user input, but what users say is often in reaction to the system's previous response[CP82, Sid83]. In this paper we focus on interactive discourse. We model mixed-initiative using an utterance type classification and a set of rules for transfer of control between discourse participants that were proposed by Whittaker and Stenton[WS88]. We evaluate the generality of this analysis by applying the control rules to 4 sets of dialogues, including both advi- sory dialogues (ADs) and task-oriented dialogues (TODs). We analysed both financial and support ADs. The financial ADs are from the radio talk show "Harry Gross: Speaking of Your Money "1 The support ADs resulted from a client phoning an expert to help them diagnose and repair various software faults ~. The TODs are about the construc- tion of a plastic water pump in both telephone and keyboard modality S. The application of the control rules to these dia- logues lets us derive domain-independent discourse segments with each segment being controlled by one or other discourse participant. We propose that control segments correspond to different subgoals in the evolving discourse plan. In addition, we ar- gue that various linguistic devices are necessary for conversational participants to coordinate their con- tributions to the dialogue and agree on their mu- tual beliefs with respect to a evolving plan, for ex- ample, to agree that a particular subgoal has been achieved. A final phenomenon concerns shifts of control and the devices used to achieve this. Con- trol shifts occur because it is unusual for a single participant to be responsible for coordinating the achievement of the whole discourse plan. When a different participant assumes control of a discourse subgoal then a control shift occurs and the par- ticipants must have mechanisms for achieving this. The control framework distinguishes instances in which a control shift is negotiated by the partic- ipants and instances where one participant seizes control. This paper has two objectives: 110 randomly selected dialogues (474 turns) from a corpus that was collected and transcribed by Martha Pollack and Julia Hirschberg[HL87, PHW82]. 24 dialogues (450 turns) from tapes made at one of Hewlett-Packard's customer response centers. See [WS88]. 35 keyboard (224 turns) and 5 telephone dialogues (714 turns), which were collected in an experiment by Phil Cohen to explore the relationship between modality, interactivity and use of referring expressions[Coh84]. To explore the phenomenon of control in rela- tion to ATTENTIONAL STATE [GS86, GJW86, Sid79] 4. We predict shifts of attentional state when shifts in control are negotiated and agreed by all participants, but not when con- trol is seized by one participant without the acceptance of the others. This should be re- flected in different distribution of anaphora in the two cases. To test predictions about the distribution of control in different types of dialogues. 
Because the TODs embody the master-slave assumption [GS90], and control is allocated to the expert, our expectation is that control should be located exclusively with one participant in the TODs in contrast with the ADs.

2 Rules for the Allocation and Transfer of Control

We use the framework for the allocation and transfer of control of Whittaker and Stenton [WS88]. The analysis is based on a classification of utterances into 4 types.⁵ These are:

• UTTERANCE TYPES
- ASSERTIONS: Declarative utterances used to state facts. Yes and No in response to a question were classified as assertions on the basis that they are supplying information.
- COMMANDS: Utterances intended to instigate action. Generally imperative form, but could be indirect such as My suggestion would be that you do ...
- QUESTIONS: Utterances which are intended to elicit information, including indirect forms such as I was wondering whether I should ...
- PROMPTS: Utterances which did not express propositional content, such as Yeah, Okay, Uh-huh ...

⁴The theory of centering, which is part of attentional state, depends on discourse participants' recognizing the beginning and end of a discourse segment [BFP87, Wal89].
⁵The relationship between utterance level meaning and discourse intentions rests on a theory of joint commitment or shared plans [GS90, CLNO90, LCN90].

Note that prompts are in direct contrast to the other options that a participant has available at any point in the discourse. By indicating that the speaker does not want the floor, prompts function on a number of levels, including the expression of understanding or agreement [Sch82].

The rules for the allocation of control are based on the utterance type classification and allow a dialogue to be divided into segments that correspond to which speaker is the controller of the segment.

• CONTROL RULES

UTTERANCE    CONTROLLER (ICP)
ASSERTION    SPEAKER, unless response to a Question
COMMAND      SPEAKER
QUESTION     SPEAKER, unless response to Question or Command
PROMPT       HEARER

The definition of controller can be seen to correspond to the intuitions behind the term INITIATING CONVERSATIONAL PARTICIPANT (ICP), who is defined as the initiator of a given discourse segment [GS86]. The OTHER CONVERSATIONAL PARTICIPANT(S), OCP, may speak some utterances in a segment, but the DISCOURSE SEGMENT PURPOSE must be the purpose of the ICP. The control rules place a segment boundary whenever the roles of the participants (ICP or OCP) change. For example:

Abdication Example
E: "And they are, in your gen you'll find that they've relocated into the labelled common area" (ASSERT - E control)
C: "That's right." (PROMPT - E control)
E: "Yeah" (PROMPT - E abdicates control)
---- CONTROL SHIFT TO C ----
C: "I've got two in there. There are two of them." (ASSERT - C control)
E: "Right" (PROMPT - C control)
C: "And there's another one which is %RESA" (ASSERT - C control)
E: "OK um" (PROMPT - C control)
C: "VS" (ASSERT - C control)
E: "Right" (PROMPT - C control)
C: "Mm" (PROMPT - C abdicates control)
---- CONTROL SHIFT TO E ----
E: "Right and you haven't got - I assume you haven't got local labelled common with those labels" (QUESTION - E control)
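The control rules lend themselves to a direct procedural reading. The sketch below, in Python, is our own illustrative rendering of segmentation by control shift; the tuple representation of turns and the treatment of response utterances (control staying with the other party) are assumptions made for the example, not part of the original framework.

    def controller(utt_type, speaker, hearer, is_response):
        # Control rules: PROMPT -> hearer; ASSERTION/QUESTION -> speaker,
        # unless the utterance is a response, in which case control stays
        # with the other party (here taken to be the hearer).
        if utt_type == "PROMPT":
            return hearer
        if utt_type in ("ASSERTION", "QUESTION") and is_response:
            return hearer
        return speaker            # ASSERTION, COMMAND, QUESTION

    def segment(turns):
        # turns: list of (speaker, hearer, utt_type, is_response) tuples.
        # A segment boundary is placed whenever the controller changes.
        segments, current, prev = [], [], None
        for spk, hr, typ, resp in turns:
            ctl = controller(typ, spk, hr, resp)
            if prev is not None and ctl != prev:
                segments.append((prev, current))
                current = []
            current.append((spk, typ))
            prev = ctl
        if current:
            segments.append((prev, current))
        return segments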
Whittaker and Stenton also performed a post-hoc analysis of the segment boundaries that are defined by the control rules. The boundaries fell into one of three types:

• CONTROL SHIFT TYPES
- ABDICATION: Okay, go on.
- REPETITION/SUMMARY: That would be my recommendation and that will ensure that you get a logically integral set of files.
- INTERRUPTION: It is something new though um.

ABDICATIONS⁶ correspond to those cases where the controller produces a prompt as the last utterance of the segment. The class REPETITION/SUMMARY corresponds to the controller producing a redundant utterance. The utterance is either an exact repetition of previous propositional content, or a summary that realizes a proposition, P, which could have been inferred from what came before. Thus orderly control shifts occur when the controller explicitly indicates that s/he wishes to relinquish control. What unifies ABDICATIONS and REPETITION/SUMMARIES is that the controller supplies no new propositional content. The remaining class, INTERRUPTIONS, characterize shifts occurring when the noncontroller displays initiative by seizing control. This class is more general than other definitions of Interruptions. It properly contains cross-speaker interruptions that involve topic shift, similar to the true-interruptions of Grosz and Sidner [GS86], as well as clarification subdialogues [Sid83, LA90].

This classification suggests that the transfer of control is often a collaborative phenomenon. Since a noncontroller (OCP) has the option of seizing control at any juncture in discourse, it would seem that controllers (ICPs) are in control because the noncontroller allows it. These observations address problems raised by Grosz and Sidner, namely how ICPs signal and OCPs recognize segment boundaries. The claim is that shifts of control often do not occur until the controller indicates the end of a discourse segment by abdicating or producing a repetition/summary.

⁶Our abdication category was called prompt by [WS88].

3 Control Segmentation and Anaphora

To determine the relationship between the derived control segments and ATTENTIONAL STATE we looked at the distribution of anaphora with respect to the control segments in the ADs. All data were analysed statistically by χ² and all differences cited are significant at the 0.05 level. We looked at all anaphors (excluding first and second person), and grouped them into 4 classes.

• Classes of Anaphors
- 3RD PERSON: it, they, them, their, she, he, her, him, his
- ONE/SOME: one of them, one of those, a new one, that one, the other one, some
- DEICTIC: Noun phrases, e.g. this, that, this NP, that NP, those NP, these NP
- EVENT: Verb Phrases, Sentences, Segments, e.g. this, that, it

The class DEICTIC refers to deictic references to material introduced by noun phrases, whereas the class EVENT refers to material introduced clausally.

3.1 Hierarchical Relationships

The first phenomenon we noted was that the anaphora distribution indicated that some segments are hierarchically related to others.⁷ This was especially apparent in cases where one discourse participant interrupted briefly, then immediately passed control back to the other.

Interrupt/Abdicate 1
1. A: ... the only way I could do that was to take a to take a one third down and to take back a mortgage (ASSERTION)
---- INTERRUPT SHIFT TO B ----
2. B: When you talk about one third put a number on it (QUESTION)
3. A: uh 15 thou (ASSERTION, but response)
4. B: go ahead (PROMPT)
---- ABDICATE SHIFT BACK TO A ----
5. A: and then I'm a mortgage bank for 36

The following example illustrates the same point.

⁷Similar phenomena has been noted by many researchers in discourse, including [Gro77, Hob79, Sid79, PH90].

Interrupt/Abdicate 2
1. A: The maximum amount ... will be $400 on THEIR tax return. (ASSERTION)
---- INTERRUPT SHIFT TO B ----
2. B: 400 for the whole year? (QUESTION)
3. A: yeah it'll be 20% (ASSERTION, but response)
4. B: um hm (PROMPT)
---- ABDICATE SHIFT BACK TO A ----
5. A: now if indeed THEY pay the $2000 to your wife ...

The control segments as defined would treat both of these cases as composed of 3 different segments. But this ignores the fact that utterances (1) and (5) have closely related propositional content in the first example, and that the plural pronoun straddles the central subsegment with the same referents being picked out by they and their in the second example. Thus we allowed for hierarchical segments by treating the interruptions of 2-4 as subsegments, and utterances 1 and 5 as related parts of the parent segments. All interruptions were treated as embeddings in this way. However the relationship of the segment after the interruption to the segment before must be determined on independent grounds such as topic or intentional structure.

3.2 Distribution

Once we extended the control framework to allow for the embedding of interrupts, we coded every anaphor with respect to whether its antecedent lay outside or within the current segment. These are labelled X (cross segment boundary antecedent) and NX (no cross segment boundary) in Figure 1. In addition we break these down as to which type of control shift occurred at the previous segment boundary.

[Figure 1 is a table giving, for the Finance ADs, the number of anaphors whose antecedents cross a segment boundary (X) versus those that do not (NX), for each anaphor class (3rd Person, One, Deictic, Event), broken down by the type of control shift (Abdication, Summary, Interrupt) at the previous boundary, with column totals.]

Figure 1: Distribution of Anaphora in Finance ADs

We also looked at the distribution of anaphora in the Support ADs and found similar results.

[Figure 2 is the corresponding X/NX table for the Support ADs, with the same anaphor classes and control shift types.]

Figure 2: Distribution of Anaphora in Support ADs

For both dialogues, the distribution of anaphors varies according to which type of control shift occurred at the previous segment boundary. When we look at the different types of anaphora, we find that third person and one anaphors cross boundaries extremely rarely, but the event anaphors and the deictic pronouns demonstrate a different pattern. What does this mean?

The fact that anaphora is more likely to cross segment boundaries following interruptions than for summaries or abdications is consistent with the control principles. With both summaries and abdications the speaker gives an explicit signal that s/he wishes to relinquish control. In contrast, interruptions are the unprompted attempts of the listener to seize control, often having to do with some 'problem' with the controller's utterance. Therefore, interruptions are much more likely to be within topic.

But why should deixis and event anaphors behave differently from the other anaphors? Deixis serves to pick out objects that cannot be selected by the use of standard anaphora, i.e. we should expect the referents for deixis to be outside immediate focus and hence more likely to be outside the current segment [Web86]. The picture is more complex for event anaphora, which seems to serve a number of different functions in the dialogue. It is used to talk about the past events that lead up to the current situation, I did THAT in order to move the place. It is also used to refer to sets of propositions of the preceding discourse, Now THAT'S a little background (cf [Web88]).
The most prevalent use, however, was to refer to future events or actions, THAT would be the move that I would make - but you have to do IT the same day.

SUMMARY EXAMPLE
A: As far as you are concerned THAT could cost you more .... what's your tax bracket? (QUESTION)
B: Well I'm on pension Harry and my wife hasn't worked at all and .. (ASSERT/RESP)
A: No reason at all why you can't do THAT. (ASSERTION)
---- SUMMARY SHIFT to B ----
B: See my comment was if we should throw even the $2000 into an IRA or something for her. (ASSERTION)
---- REPETITION SHIFT to A ----
A: You could do THAT too. (ASSERTION)

Since the task in the ADs is to develop a plan, speakers use event anaphora as concise references to the plans they have just negotiated and to discuss the status and quality of plans that have been suggested. Thus the frequent cross-speaker references to future events and actions correspond to phases of plan negotiation [PHW82]. More importantly these references are closely related to the control structure. The example above illustrates the clustering of event anaphora at segment boundaries. One discourse participant uses an anaphor to summarize a plan, but when the other participant evaluates this plan there may be a control shift and any reference to the plan will necessarily cross a control boundary. The distribution of event anaphora bears this out, since 23/25 references to future actions are within 2 utterances of a segment boundary (see the example above). More significantly, every instance of event anaphora crossing a segment boundary occurs when the speaker is talking about future events or actions.

We also looked at the TODs for instances of anaphora being used to describe a future act in the way that we observed in the ADs. However, over the 938 turns in the TODs, there were only 18 instances of event anaphora, because in the main there were few occasions when it was necessary to talk about the plan. The financial ADs had 45 event anaphors in 474 utterances.

4 Control and Collaborative Plans

To explore the relationship of control to planning, we compare the TODs with both types of ADs (financial and support). We would expect these dialogues to differ in terms of initiative. In the ADs, the objective is to develop a collaborative plan through a series of conversational exchanges. Both discourse participants believe that the expert has knowledge about the domain, but only has partial information about the situation. They also believe that the advisee must contribute both the problem description and also constraints as to how the problem can be solved. This information must be exchanged, so that the mutual beliefs necessary to develop the collaborative plan are established in the conversation [Jos82]. The situation is different in the TODs. Both participants here believe at the outset that the expert has sufficient information about the situation and complete and correct knowledge about how to execute the task. Since the apprentice has no need to assert information to change the expert's beliefs or to ask questions to verify the expert's beliefs or to issue commands, we should not expect the apprentice to have control. S/he is merely present to execute the actions indicated by the knowledgeable participant.

The differences in the beliefs and knowledge states of the participants can be interpreted in the terms of the collaborative planning principles of Whittaker and Stenton [WS88].
We generalize the principles of INFORMATION QUALITY and PLAN QUALITY, which predict when an interrupt should occur.

• INFORMATION QUALITY: The listener must believe that the information that the speaker has provided is true, unambiguous and relevant to the mutual goal. This corresponds to the two rules: (A1) TRUTH: If the listener believes a fact P and believes that fact to be relevant and either believes that the speaker believes not P or that the speaker does not know P, then interrupt; (A2) AMBIGUITY: If the listener believes that the speaker's assertion is relevant but ambiguous, then interrupt.

• PLAN QUALITY: The listener must believe that the action proposed by the speaker is a part of an adequate plan to achieve the mutual goal and the action must also be comprehensible to the listener. The two rules to express this are: (B1) EFFECTIVENESS: If the listener believes P and either believes that P presents an obstacle to the proposed plan or believes that P is part of the proposed plan that has already been satisfied, then interrupt; (B2) AMBIGUITY: If the listener believes that an assertion about the proposed plan is ambiguous, then interrupt.

These principles indirectly provide a means to ensure mutual belief. Since a participant must interrupt if any condition for an interrupt holds, then lack of interruption signals that there is no discrepancy in mutual beliefs. If there is such a discrepancy, the interruption is a necessary contribution to a collaborative plan, not a distraction from the joint activity.
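Read procedurally, the four rules form a simple decision procedure over the listener's beliefs. The Python sketch below is our own schematic rendering; the listener object and its predicates are illustrative assumptions, not part of Whittaker and Stenton's formulation.

    def should_interrupt(listener, assertion, plan):
        # (A1) TRUTH: a relevant believed fact the speaker denies or lacks
        for p in listener.beliefs:
            if listener.relevant(p) and (listener.thinks_speaker_believes_not(p)
                                         or listener.thinks_speaker_not_know(p)):
                return True
        # (A2) AMBIGUITY: the assertion is relevant but ambiguous
        if listener.relevant(assertion) and listener.ambiguous(assertion):
            return True
        # (B1) EFFECTIVENESS: an obstacle to the plan, or a step already satisfied
        for p in listener.beliefs:
            if listener.obstacle_to(p, plan) or listener.already_satisfied(p, plan):
                return True
        # (B2) AMBIGUITY: an ambiguous assertion about the proposed plan
        if assertion.about_plan and listener.ambiguous(assertion):
            return True
        return False   # no discrepancy: lack of interruption signals mutual belief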
We compare ADs to TODs with respect to how often control is exchanged by calculating the average number of turns between control shifts.⁸ We also investigate whether control is shared equally between participants and what percentage of control shifts are represented by abdications, interrupts, and summaries for each dialogue type. See Figure 3.

              Finance   Support   Task-Phone   Task-Key
Turns/Seg       7.49      8.03      15.68        11.27
Exp-Contr       60%       51%       91%          91%
Abdication      38%       38%       45%          28%
Summary         23%       27%        7%           6%
Interrupt       38%       36%       48%          67%

Turns/Seg: Average number of turns between control shifts
Exp-Contr: % total turns controlled by expert
Abdication: % control shifts that are Abdications
Summary: % control shifts that are Repetitions/Summaries
Interrupt: % control shifts that are Interrupts

Figure 3: Differences in Control for Dialogue Types

⁸We excluded turns in dialogue openings and closings.

Three things are striking about this data. As we predicted, the distribution of control between expert and client is completely different in the ADs and the TODs. The expert has control for around 90% of utterances in the TODs whereas control is shared almost equally in the ADs. Secondly, contrary to our expectations, we did find some instances of shifts in the TODs. Thirdly, the distribution of interruptions and summaries differs across dialogue types.

How can the collaborative planning principles highlight the differences we observe? There seem to be two reasons why shifts occur in the TODs. First, many interruptions in the TODs result from the apprentice seizing control just to indicate that there is a temporary problem and that plan execution should be delayed.

TASK INTERRUPT 1, A is the Instructor
A: It's hard to get on (ASSERTION)
---- INTERRUPT SHIFT TO B ----
B: Not there yet - ouch yep it's there. (ASSERTION)
A: Okay (PROMPT)
B: Yeah (PROMPT)
---- ABDICATE SHIFT TO A ----
A: All right. Now there's a little blue cap ..

Second, control was exchanged when the execution of the task started to go awry.

TASK INTERRUPT 2, A is the Instructor
A: And then the elbow goes over that ... the big end of the elbow. (COMMAND)
---- INTERRUPT SHIFT TO B ----
B: You said that it didn't fit tight, but it doesn't fit tight at all, okay ... (ASSERTION)
A: Okay (PROMPT)
B: Let me try THIS - oops - again (ASSERTION)

The problem with the physical situation indicates to the apprentice that the relevant beliefs are no longer shared. The Instructor is not in possession of critical information such as the current state of the apprentice's pump. This necessitates an information exchange to resynchronize mutual beliefs, so that the rest of the plan can be successfully executed. However, since control is explicitly allocated to the instructor in TODs, there is no reason for that participant to believe that the other has any contribution to make. Thus there are fewer attempts by the instructor to coordinate activity, such as by using summaries to synchronize mutual beliefs. Therefore, if the apprentice needs to make a contribution, s/he must do so via interruption, explaining why there are many more interruptions in these dialogues.⁹ In addition, the majority of Interruptions (73%) are initiated by apprentices, in contrast to the ADs in which only 29% are produced by the Clients.

⁹The higher percentage of Interruptions in the keyboard TODs in comparison with the telephone TODs parallels Oviatt and Cohen's analysis, showing that participants exploit the wider bandwidth of the interactive spoken channel to break tasks down into subtasks [Coh84, OC89].

Summaries are more frequent in ADs. In the ADs both participants believe that a plan cannot be constructed without contributions from both of them. Abdications and summaries are devices which allow these contributions to be coordinated and participants use these devices to explicitly set up opportunities for one another to make a contribution, and to ensure mutual beliefs. The increased frequency of summaries in the ADs may result from the fact that the participants start with discrepant mutual beliefs about the situation and that establishing and maintaining mutual beliefs is a key part of the ADs.

5 Discussion

It has often been stated that discourse is an inherently collaborative process and that this is manifested in certain phenomena, e.g. the use of anaphora and cue words [GS86, HL87, Coh87] by which the speaker makes aspects of the discourse structure explicit. We found shifts of attentional state when shifts in control are negotiated and agreed by all participants, but not when control is seized by one participant without the acceptance of the others. This was reflected in different distribution of anaphora in the two cases. Furthermore we found that not all types of anaphora behaved in the same way. Event anaphora clustered at segment boundaries when it was used to refer to preceding segments and was more likely to cross segment boundaries because of its function in talking about the proposed plan. We also found that control was distributed and exchanged differently in the ADs and TODs. These results provide support for the control rules.

In our analysis we argued for hierarchical organization of the control segments on the basis of specific examples of interruptions. We also believe that there are other levels of structure in discourse that are not captured by the control rules, e.g. control shifts do not always correspond with task boundaries.
There can be topic shifts without change of initiation, change of control without a topic shift [WS88]. The relationship of cue words, intonational contour [PH90] and the use of modal subordination [Rob86] to the segments derived from the control rules is a topic for future research.

A more controversial question concerns rhetorical relations and the extent to which these are detected and used by listeners [GS86]. Hobbs has applied COHERENCE RELATIONS to face-to-face conversation in which mixed-initiative is displayed by participants [HA85, Hob79]. One category of rhetorical relation he describes is that of ELABORATION, in which a speaker repeats the propositional content of a previous utterance. Hobbs has some difficulties determining the function of this repetition, but we maintain that the function follows from the more general principles of the control rules: speakers signal that they wish to shift control by supplying no new propositional content. Abdications, repetitions and summaries all add no new information and function to signal to the listener that the speaker has nothing further to say right now. The listener certainly must recognize this fact. Summaries appear to have an additional function of synchronization, by allowing both participants to agree on what propositions are mutually believed at that point in the discussion. Thus this work highlights aspects of collaboration in discourse, but should be formally integrated with research on collaborative planning [GS90, LCN90], particularly with respect to the relation between control shifts and the coordination of plans.

6 Acknowledgements

We would like to thank Aravind Joshi for his support, comments and criticisms. Discussions of joint action with Phil Cohen and the members of CSLI's DIA working group have influenced the first author. We are also indebted to Susan Brennan, Herb Clark, Julia Hirschberg, Jerry Hobbs, Libby Levison, Kathy McKeown, Ellen Prince, Penni Sibun, Candy Sidner, Martha Pollack, Phil Stenton, and Bonnie Webber for their insightful comments and criticisms on drafts of this paper.

References

[BFP87] Susan E. Brennan, Marilyn Walker Friedman, and Carl J. Pollard. A centering approach to pronouns. In Proc. 25th Annual Meeting of the ACL, pages 155-162, 1987.
[CLNO90] Phillip R. Cohen, Hector J. Levesque, Jose H. T. Nunes, and Sharon L. Oviatt. Task oriented dialogue as a consequence of joint activity, 1990. Unpublished manuscript.
[Coh84] Phillip R. Cohen. The pragmatics of referring and the modality of communication. Computational Linguistics, 10:97-146, 1984.
[Coh87] Robin Cohen. Analyzing the structure of argumentative discourse. Computational Linguistics, 13:11-24, 1987.
[CP82] Phillip R. Cohen, C. Raymond Perrault, and James F. Allen. Beyond question answering. In Wendy Lehnert and Martin Ringle, editors, Strategies for Natural Language Processing, pages 245-274. Lawrence Erlbaum Ass. Inc, Hillsdale, N.J., 1982.
[CP86] Philip R. Cohen and C. Raymond Perrault. Elements of a plan-based theory of speech acts. In Barbara J. Grosz, Karen Sparck Jones, and Bonnie Lynn Webber, editors, Readings in Natural Language Processing, pages 423-440. Morgan Kaufmann, Los Altos, Ca., 1986.
[CWG86] Herbert H. Clark and Deanna Wilkes-Gibbs. Referring as a collaborative process. Cognition, 22:1-39, 1986.
[FL89] David M. Frohlich and Paul Luff. Conversational resources for situated action. In Proc. Annual Meeting of the Computer Human Interaction of the ACM, 1989.
[GJW86] Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. Towards a computational theory of discourse interpretation. Unpublished manuscript, 1986.
[Gro77] Barbara J. Grosz. The representation and use of focus in dialogue understanding. Technical Report 151, SRI International, 333 Ravenswood Ave, Menlo Park, Ca. 94025, 1977.
[GS86] Barbara J. Grosz and Candace L. Sidner. Attention, intentions and the structure of discourse. Computational Linguistics, 12:175-204, 1986.
[GS90] Barbara J. Grosz and Candace L. Sidner. Plans for discourse. In Cohen, Morgan, and Pollack, editors, Intentions in Communication. MIT Press, Cambridge, MA., 1990.
[HA85] Jerry R. Hobbs and Michael H. Agar. The coherence of incoherent discourse. Technical Report CSLI-85-38, Center for the Study of Language and Information, Ventura Hall, Stanford University, Stanford, CA 94305, 1985.
[HL87] Julia Hirschberg and Diane Litman. Now let's talk about now: Identifying cue phrases intonationally. In Proc. 25th Annual Meeting of the ACL, pages 163-171, Stanford University, Stanford, Ca., 1987.
[Hob79] Jerry R. Hobbs. Coherence and coreference. Cognitive Science, 3:67-90, 1979.
[Jos82] Aravind K. Joshi. Mutual beliefs in question-answer systems. In Neil V. Smith, editor, Mutual Knowledge, pages 181-199. Academic Press, New York, New York, 1982.
[Kid85] Alison Kidd. The consultative role of an expert system. In P. Johnson and S. Cook, editors, People and Computers: Designing the Interface. Cambridge University Press, Cambridge, U.K., 1985.
[LA90] Diane Litman and James Allen. Recognizing and relating discourse intentions and task-oriented plans. In Cohen, Morgan, and Pollack, editors, Intentions in Communication. MIT Press, Cambridge, MA., 1990.
[LCN90] Hector J. Levesque, Phillip R. Cohen, and Jose H. T. Nunes. On acting together. In AAAI-90, 1990.
[McK85] Kathleen R. McKeown. Discourse strategies for generating natural language text. Artificial Intelligence, 27(1):1-42, September 1985.
[Nic76] R. S. Nickerson. On conversational interaction with computers. In Siegfried Treu, editor, User-Oriented Design of Interactive Graphics Systems, pages 101-65. Elsevier Science, 1976.
[OC89] Sharon L. Oviatt and Philip R. Cohen. The effects of interaction on spoken discourse. In Proc. 27th Annual Meeting of the ACL, pages 126-134, 1989.
[PH90] Janet Pierrehumbert and Julia Hirschberg. The meaning of intonational contours in the interpretation of discourse. In Cohen, Morgan, and Pollack, editors, Intentions in Communication. MIT Press, Cambridge, MA., 1990.
[PHW82] Martha Pollack, Julia Hirschberg, and Bonnie Webber. User participation in the reasoning process of expert systems. In Proc. National Conference on Artificial Intelligence, 1982.
[Pol86] Martha Pollack. Inferring domain plans in question answering. Technical Report 403, SRI International - Artificial Intelligence Center, 1986.
[Rob86] Craige Roberts. Modal Subordination and Anaphora. PhD thesis, Linguistics Dept., University of Massachusetts, Amherst, 1986.
[Sch82] Emanuel A. Schegloff. Discourse as an interactional achievement: Some uses of 'uh huh' and other things that come between sentences. In D. Tannen, editor, Analyzing Discourse: Text and Talk, pages 71-93. Georgetown University Press, 1982.
[Sid79] Candace L. Sidner. Toward a computational theory of definite anaphora comprehension in English. Technical Report AI-TR-537, MIT, 1979.
[Sid83] Candace Sidner. What the speaker means: the recognition of speakers' plans in discourse. International Journal of Computers and Mathematics, 9:71-82, 1983.
[SSJ74] Harvey Sacks, Emmanuel Schegloff, and Gail Jefferson. A simplest systematics for the organization of turn-taking in conversation. Language, 50:325-345, 1974.
[Wal89] Marilyn A. Walker. Evaluating discourse processing algorithms. In Proc. 27th Annual Meeting of the ACL, pages 251-261, 1989.
[Web86] Bonnie Lynn Webber. Two steps closer to event reference. Technical Report MS-CIS-86-74, Linc Lab 42, Department of Computer and Information Science, University of Pennsylvania, 1986.
[Web88] Bonnie Lynn Webber. Discourse deixis: Reference to discourse segments. In Proc. 26th Annual Meeting of the ACL, pages 113-123, 1988.
[WS88] Steve Whittaker and Phil Stenton. Cues and control in expert client dialogues. In Proc. 26th Annual Meeting of the ACL, pages 123-130, 1988.
Performatives in a Rationally Based Speech Act Theory*

Philip R. Cohen
Artificial Intelligence Center and
Center for the Study of Language and Information
SRI International
333 Ravenswood Ave.
Menlo Park, CA 94025

and

Hector J. Levesque†
Department of Computer Science
University of Toronto

Abstract

A crucially important adequacy test of any theory of speech acts is its ability to handle performatives. This paper provides a theory of performatives as a test case for our rationally based theory of illocutionary acts. We show why "I request you..." is a request, and "I lie to you that p" is self-defeating. The analysis supports and extends earlier work of theorists such as Bach and Harnish [1] and takes issue with recent claims by Searle [10] that such performative-as-declarative analyses are doomed to failure.

*This paper was made possible by a contract from ATR International to SRI International, by a gift from the Systems Development Foundation, and by a grant from the Natural Sciences and Engineering Research Council of Canada. The views and conclusions contained in this document are those of the authors and should not be interpreted as representative of the official policies, either expressed or implied, of ATR International, the Systems Development Foundation, or the Canadian government.
†Fellow of the Canadian Institute for Advanced Research.

1 Introduction

There is something special about performative sentences, sentences such as "I promise to return": uttering them makes them true. How and when is this possible? Not all verbs can be uttered in the first-person present tense and thereby make the sentence true. In general, the successful verbs seem to correspond to those naming illocutionary acts, but not to perlocutionary ones such as "frighten." But, even some illocutionary verbs cannot be used performatively: e.g., "I lie to you that I didn't steal your watch" is self-defeating [12]. So, which verbs can be used performatively, and in Searle's words [10], "how do performatives work?"

Any theory of illocutionary acts needs to provide a solution to questions such as these. But, such questions are not merely of theoretical interest. Natural language database question-answering systems have been known to receive performative utterances [14], dialogue systems that recognize illocutionary acts (e.g., [6]) will need to infer the correct illocutionary force to function properly, dialogue translation systems [5] will have to cope with markers of illocutionary force that function performatively (e.g., sentence-final particles in Japanese), and proposals for "agent-oriented programming languages" [7, 13], as well as Winograd and Flores' [15] COORDINATOR system, are based on performative communication. For all these systems, it is important to understand the semantics and pragmatics of such communicative acts, especially their intended effects. To do so, one needs a full theory of illocutionary acts, and a formal theory that predicts how utterances can be made true by uttering them.

The currently accepted theory of performatives is that they are in fact assertions, hence true or false, and additionally constitute the performance of the named illocutionary act, in the same way as an indirect reading of an illocutionary act is obtained from the direct illocutionary act. That is, the named illocutionary act is derived from the assertion as an indirect speech act.
The most compelling defense of this performative-as-assertion analysis that we are aware of is that of Bach and Harnish [1], who address many of the linguistic phenomena discussed by Sadock [9], but who, we believe, have misanalyzed indirect speech acts. However, in a recent paper, Searle [10] forcefully criticizes the performative-as-assertion approach on the following grounds:

• Assertions commit the speaker to the truth of what is asserted
• Performative statements are self-referential
• "An essential feature of any illocutionary act is the intention to perform that act"

Searle claims that accounts based on self-referential assertions are "doomed to failure" because one cannot show that being committed to having the intention to be performing the named illocutionary act entails that one in fact has that intention. Moreover, he questions that one should derive the named illocutionary act from an assertion, rather than vice-versa. However, Searle has imparted into Bach and Harnish's theory his notion of assertions as commitments to the truth without providing a precise analysis of commitment. What may be doomed to failure is any attempt to base an analysis of performatives on such a theory of assertions.

This paper provides a formal analysis of performatives that treats them as declarative utterances, not initially as assertions, does not succumb to Searle's criticisms, and does not require an entirely new class of illocutionary acts (the "declarations") as Searle and Vanderveken [12] have proposed. The analysis is offered as another adequacy criterion for our theory of illocutionary acts. That theory, more fully explicated in [3], is based on an analysis of the individual rational balance agents maintain among their beliefs, goals, intentions, commitments, and actions [2].

As desiderata for the theory of performatives, we demonstrate that the analysis meets two properties:

• A sincere utterance of "I request you to open the door" is both a request and an assertion, yet neither illocutionary act characterization is derived from the other.
• "I lie that the door is open" is self-defeating.

Briefly, the ability to capture performatives is met almost entirely because such utterances are treated as indicative mood utterances, and because illocutionary acts are defined as attempts. Since attempts depend on the speaker's beliefs and goals, and these mental states are introspectable in our theory, if a speaker sincerely says, for example, "I request you to open the door," he must believe he did the act with the requisite beliefs and goals. Hence, the utterance is a request.

To meet the desiderata we need first to present, albeit briefly, the theory of rational interaction, the treatment of declarative mood utterances, and then the illocutionary act definitions for requesting and asserting. Finally, we combine the various analyses.

2 Abbreviated theory of rational action

Below, we give an abbreviated description of the theory of rational action upon which we erect a theory of intention. The theory is cast in a modal logic of belief, goal, action, and time. Further details of this logic can be found in [2].
2.1 Syntax

The language we use has the usual connectives of a first-order language with equality, as well as operators for the propositional attitudes and for talking about sequences of events: (BEL x p) and (GOAL x p) say that p follows from x's beliefs or goals (a.k.a. choices), respectively; (AGT x e) says that x is the only agent for the sequence of events e; e1 ≤ e2 says that e1 is an initial subsequence of e2; and finally, (HAPPENS a) and (DONE a) say that a sequence of events describable by an action expression a will happen next or has just happened, respectively. Versions of HAPPENS and DONE specifying the agent (x) are also defined.

An action expression here is built from variables ranging over sequences of events using the constructs of dynamic logic [4]: a;b is action composition; a|b is nondeterministic choice; a||b is concurrent occurrence of a and b; p? is a test action; and finally, a* is repetition. The usual programming constructs, such as IF/THEN actions and WHILE loops, can easily be formed from these. Because test actions occur frequently in our analysis, yet create considerable confusion, read p?;a as "action a occurring when p holds," and read a;p? as "action a occurs, after which p holds." We use e as a variable ranging over sequences of events, and a and b for action expressions.

We adopt the following abbreviations and domain predicates.

(BEFORE a p) =def (DONE p?;a). [Footnote 1: This differs from the BEFORE relation described in [3], which is here labelled PRIOR.]

(AFTER a p) =def (HAPPENS a;p?).

◇p =def ∃e (HAPPENS e;p?).

(LATER p) =def ¬p ∧ ◇p.

□p =def ¬◇¬p.

(PRIOR p q) =def ∀c (HAPPENS c;q?) ⊃ ∃a (a ≤ c) ∧ (HAPPENS a;p?). The proposition p will become true no later than q.

(KNOW x p) =def p ∧ (BEL x p).

(IMPERATIVE s) means that sentence s is an imperative. (DECLARATIVE s) means that sentence s, a string of words, is a declarative. (MAIN-VERB s v), (TENSE s tense), (COMPLEMENT s s'), (D-OBJECT s np), and (SUBJECT s np) are all syntactic predicates intended to have the obvious meanings. [Footnote 2: Feel free to substitute your favorite syntactic predicates.]

(TRUE s e) means that sentence s is true with respect to some event sequence e (which we will say has just been done). (REFERS np x e) means that noun phrase np refers to thing x with respect to event e. (FULFILL-CONDS s e e') means that event e fulfills the satisfaction conditions, relative to event e', that are imposed by sentence s. [Footnote 3: TRUE, REFERS, and FULFILL-CONDS are just placeholders for semantic theories of truth, reference, and the meanings of imperatives, respectively. Their last event arguments would be used only in the interpretation of indexicals.] For example, if s is "wash the floor," e would be a floor-washing event.

2.2 Assumptions

The model we are developing embodies various assumptions constraining beliefs and choices (goals). First, BEL has a "weak S5" semantics, and GOAL has a "system K" semantics. [Footnote 4: See other work of ours [2] for a full model theory.] Among the remaining assumptions, the following will be used in this paper. [Footnote 5: In other words, we only deal with semantic structures where these propositions come out true.]

Beliefs imply choice: ⊨ (BEL x p) ⊃ (GOAL x p). This means that agents choose amongst worlds that are compatible with their beliefs.

Goals are known: ⊨ (GOAL x p) ≡ (BEL x (GOAL x p)).

Memory: ⊨ (DONE x (BEL x p)?;e) ≡ (BEL x (DONE x (BEL x p)?;e)). That is, agents remember what their beliefs were.
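The abbreviation scheme above is purely definitional, so it can be spelled out mechanically. The following sketch is ours, not part of the paper's formal apparatus: it encodes a fragment of the action and attitude language as Python data types (all class and function names are illustrative) and expands BEFORE, AFTER, and KNOW by macro substitution, exactly as the definitions prescribe.

from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Event:          # a primitive event-sequence variable, e.g. e
    name: str

@dataclass(frozen=True)
class Seq:            # a;b  (action composition)
    first: "Action"
    second: "Action"

@dataclass(frozen=True)
class Test:           # p?   (test action)
    cond: "Formula"

Action = Union[Event, Seq, Test]

@dataclass(frozen=True)
class Atom:           # an atomic proposition such as (TRUE s e)
    pred: str
    args: Tuple[str, ...]

@dataclass(frozen=True)
class And:
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class Happens:        # (HAPPENS a)
    a: Action

@dataclass(frozen=True)
class Done:           # (DONE a)
    a: Action

@dataclass(frozen=True)
class Bel:            # (BEL x p)
    agent: str
    p: "Formula"

Formula = Union[Atom, And, Happens, Done, Bel]

def BEFORE(a: Action, p: Formula) -> Formula:
    # (BEFORE a p) =def (DONE p?;a): p held just before a occurred
    return Done(Seq(Test(p), a))

def AFTER(a: Action, p: Formula) -> Formula:
    # (AFTER a p) =def (HAPPENS a;p?): a happens next, after which p holds
    return Happens(Seq(a, Test(p)))

def KNOW(x: str, p: Formula) -> Formula:
    # (KNOW x p) =def p ∧ (BEL x p): true belief
    return And(p, Bel(x, p))

For instance, BEFORE(Event("e"), Atom("TRUE", ("s", "e"))) evaluates to Done(Seq(Test(Atom("TRUE", ("s", "e"))), Event("e"))), the term-level counterpart of (BEFORE e (TRUE s e)).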
3 Individual Commitments and Intentions

To capture one grade of commitment that an agent might have toward his goals, we define a persistent goal, P-GOAL, to be one that the agent will not give up until he thinks certain conditions are satisfied. Specifically, we have

Definition 1. (P-GOAL x p q) =def
(1) (BEL x ¬p) ∧
(2) (GOAL x (LATER p)) ∧
(3) [KNOW x (PRIOR [(BEL x p) ∨ (BEL x □¬p) ∨ (BEL x ¬q)] ¬[GOAL x (LATER p)])].

That is, the agent x believes p is currently false, chooses that it be true later, and knows that before abandoning that choice, he must either believe it is true, believe it never will be true, or believe that q, an escape clause (used to model subgoals, reasons, etc.), is false.

Intention is a species of persistent goal. We analyze two kinds of intentions, those to do actions and those to achieve propositions. Accordingly, we define INTEND1 and INTEND2 to take action expressions and propositions as arguments, respectively.

Definition 2. Intention:
(INTEND1 x a q) =def (P-GOAL x [DONE x (BEL x (HAPPENS a))?;a] q).
(INTEND2 x p q) =def (P-GOAL x ∃e [HAPPENS x (BEL x ∃e' (HAPPENS x e';p?))?;e;p?] q).

Intending to do an action a or achieve a proposition p is a special kind of commitment (i.e., persistent goal) to having done the action a or having achieved p. [Footnote 6: For simplicity, we omit here one condition from the definition of INTEND2 in [2].] However, it is not a simple commitment to having done a or e;p?, for that would allow the agent to be committed to doing something accidentally or unknowingly. Instead, we require that the agent be committed to arriving at a state in which he believes he is about to do the intended action next.

This completes a brief discussion of the foundational theory of intention and commitment. Next, we proceed to define the more specific concepts needed for analyzing communicative action.

4 Utterance Events

We begin the analysis of utterance events by adopting a Gricean correlation of an utterance's features (e.g., syntactic mood or sentence-final particles in Japanese) with the speaker's mental state, termed a "core attitude" in [3, 8]. Very roughly, a declarative utterance s will be correlated with the speaker's believing the uttered sentence is true, and an imperative utterance will be correlated with the speaker's wanting the addressee to do some action that fulfills the conditions imposed by the sentence. Let us notate these correlations as:

DECLARATIVE ⇒ (BEL x (TRUE s e))
IMPERATIVE ⇒ (GOAL x ◇∃e' (DONE y e') ∧ (FULFILL-CONDS s e' e))

We formalize this notation below.

Someone who thinks he is observing an utterance event will come to believe the speaker is in the correlated mental state, unless he has other beliefs to the contrary. For example, if the observer thinks the speaker is lying, he believes that the speaker does not believe the uttered sentence is true. But, because he may think the speaker takes himself to be especially convincing, the observer may still believe that the speaker thinks the observer is deceived. Hence, he would believe the speaker thinks that he thinks the speaker believes p. This type of reasoning can continue to further levels. In general, if an utterance is produced when there are no countervailing observer beliefs at a certain level of nesting, then the result will be, at the given level of nesting, that the speaker is taken to be in the correlated mental state [8].
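The escalating levels of nesting just described can be generated mechanically, which is worth seeing once before the formal definitions. The sketch below is our illustration only (the tuple encoding of formulas is arbitrary); it constructs the n-level alternating-belief formula that Definition 3 will call ABEL, together with finite approximations of the mutual-belief operator of Definition 4.

def abel(n, x, y, p):
    # (ABEL n x y p): n nested beliefs alternating between x and y,
    # built from the outside in, starting with x's belief.
    agents = (x, y)
    formula = p
    for i in reversed(range(n)):
        formula = ("BEL", agents[i % 2], formula)
    return formula

def bmb_up_to(k, x, y, p):
    # A finite stand-in for (BMB x y p), which is the infinite
    # conjunction of (ABEL n x y p) over all positive n.
    return [abel(n, x, y, p) for n in range(1, k + 1)]

For example, abel(3, "obs", "spkr", "P") yields ("BEL", "obs", ("BEL", "spkr", ("BEL", "obs", "P"))): the observer believes the speaker believes the observer believes P, the third level at which a countervailing belief could block the default.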
To be able to state such conditions, we need to be able to refer easily to what a person x believes about what y believes about what x believes, etc., to arbitrary depths. To do so, we use the notion of ABEL.

Definition 3. (ABEL n x y p) =def (BEL x (BEL y (BEL x ... (BEL x p) ...)))

That is, ABEL characterizes the nth alternating belief between x and y that p, built up "from the outside in," i.e., starting with x's belief that p. On this basis, one can define unilateral mutual belief -- what one agent believes is mutually believed -- as follows.

Definition 4. (BMB x y p) =def ∀n (ABEL n x y p)

In other words, (BMB x y p) is the infinite conjunction (BEL x p) ∧ (BEL x (BEL y p)) ∧ ...

Finally, we define mutual belief and mutual knowledge as follows.

Definition 5. (MB x y p) =def (BMB x y p) ∧ (BMB y x p). (MK x y p) =def p ∧ (MB x y p).

Utterance events can produce effects at any (or no) level of alternating belief. For example, the speaker may not be trying to communicate anything to an intended observer. Illocutionary acts will be defined to require that the speaker intend to produce BMBs. In what follows, it is important to keep in mind the distinction between utterance events and full-blooded communicative acts.

4.1 Notation for Describing Utterance Events

We now provide a formal notation for this correlation of utterance form and the speaker's mental state as a kind of default axiom (cf. [8]). First, we specify who is speaking (spkr), who is observing (obs, which includes the speaker and addressee, but also others), who is being addressed (addr), and what kind of sentence (s) has been spoken (indicated by Φ). We shall assume that everyone knows that a given utterance is of a given syntactic type (e.g., declarative), that speakers and addressees are observers, and that observers are known by all to be observing. [Footnote 7: The case of unseen observers is straightforward, but omitted here.]

Definition 6. Φ ⇒ α =def
∀ spkr, obs, addr, e, s, n
(KNOW obs (DONE spkr e) ∧ (UTTER spkr addr s e) ∧ (Φ s)) ∧
¬(ABEL n obs spkr (BEFORE e ¬(GOAL spkr [AFTER e (KNOW addr (BEFORE e α))])))
⊃ (ABEL n obs spkr (BEFORE e α ∧ (GOAL spkr [AFTER e (KNOW addr (BEFORE e α))])))

That is, Φ ⇒ α is an abbreviation for a quantified implication roughly to the effect that if an observer obs knows that e was just done, where e was an uttering to addressee addr of a sentence s in syntactic mood Φ, and obs does not believe that e was done when the speaker did not want the addressee to come to know that the core speaker attitude α associated with utterances of that type held, then obs believes that the speaker in fact wanted the addressee to know that α, and so he, the observer, believes that α held just prior to the utterance. The notation states that at each level of alternating belief for which the antecedent holds, so does the consequent. The symbol '⇒' can now be understood as a textual-replacement "macro" operator. Since these correlations are of the form ∀n (P(n) ⊃ Q(n)), they imply ∀n P(n) ⊃ ∀n Q(n). As we quantify over the positive integers indicating levels of alternating belief, we can derive the conclusion that under certain circumstances, addr thinks it is mutually believed (in our notation, BMB'ed) that the speaker spkr wants addr to know that α was true. Notice that right after the utterance, we are concerned with what mental state the observer thinks the speaker chose to bring about in the observer with that utterance.
That is, the condition on utterance events involves the speaker's wanting to get the observer to know something. Without this temporal dimension, our performative analysis would fail. The analysis of performatives will say that after having uttered such a sentence, or while uttering it, the speaker believes he has just done or is doing the named illocutionary act. Typically, prior to uttering a performative, the speaker has not just performed that speech act, and so he would believe his having just done so is false. So, if the condition on utterance events in Domain Axiom 1A involved only what the speaker believed or wanted to be true prior to the utterance, rather than after, all performatives would fail to achieve the observer's coming to believe anything.

We can now state the correlation between utterance form and a speaker's mental state as a domain axiom.

Domain Axiom 1. Declaratives and Imperatives:
A. ⊨ DECLARATIVE ⇒ (BEL spkr (TRUE s e))
B. ⊨ IMPERATIVE ⇒ (GOAL x ◇∃e' (DONE y e') ∧ (FULFILL-CONDS s e' e))

Below, we present our definitions of illocutionary acts. Further justification can be found in [3].

5 Illocutionary Acts as Attempts

Searle [11] points out that an essential condition for requesting is that the speaker be attempting to get the addressee to perform the requested action. We take this observation one step further and define all illocutionary acts as attempts, hence defined in terms of the speaker's mental states. Attempts involve both types of goal states, GOAL (merely chosen) and INTEND (chosen with commitment), as noted below.

Definition 7. {ATTEMPT x e p q t1} =def
t1?;[(BEL x ¬p ∧ ¬q) ∧ (INTEND1 x t1?;e;p? (GOAL x ◇q)) ∧ (GOAL x ◇q)]?;e

That is, an attempt to achieve q via p is a complex action expression in which x is the agent of event e at time t1, and prior to e the agent believes p and q are both false, chooses that q should eventually be true, and intends, relative to that choice, that e should produce p. So, q represents some ultimate goal that may or may not be achieved by the attempt, while p represents what it takes to make an honest effort.

5.1 Definitions of Request and Assert

To characterize a request or, for that matter, any illocutionary action, we must decide on the appropriate formulas to substitute for p and q in the definition of an attempt. We constrain illocutionary acts to be those in which the speaker is committed to understanding, that is, to achieving a state of BMB that he is in a certain mental state. Below is a definition of a speaker's requesting an addressee to achieve p.

Definition 8. {REQUEST spkr addr e p t1} =def
{ATTEMPT spkr e
[BMB addr spkr (BEFORE e (GOAL spkr ◇p ∧ [AFTER e (INTEND2 addr p [(GOAL spkr ◇p) ∧ (HELPFUL addr spkr)])]))]
∃e' (DONE addr e';p?)
t1}

That is, event e is a request at time t1 if it is an attempt at that time to get the addressee to achieve some condition p while being committed to making public that the speaker wanted: first, that p eventually be achieved; and second, that the addressed party should intend to achieve it relative to the speaker's wanting it achieved and relative to the addressee's being helpfully disposed towards the speaker.

The illocutionary act of asserting will be defined as an attempt to make the speaker's believing the propositional content mutually believed.
Definition 9. {ASSERT spkr addr e p t1} =def
{ATTEMPT spkr e
[BMB addr spkr (BEFORE e [GOAL spkr (AFTER e [KNOW addr (BEFORE e (BEL spkr p))])])]
(BMB addr spkr (BEFORE e (BEL spkr p)))
t1}

More precisely, assertions at time t1 are defined as attempts in which, to make an "honest effort," the speaker is committed to getting the addressee to believe that it is mutually believed that the speaker wanted, prior to the utterance, that the addressee would come to know that the speaker believed p held then. That is, just like a request, an assertion makes public that the speaker wants the addressee to know what mental state he was in. Although he is committed to that, what the speaker has chosen to achieve is not merely to make public his goal that the addressee know what mental state he was in, but to make public that he was in fact in that state of believing p. For an INFORM, the speaker would choose to achieve (KNOW addr p).

6 Performatives

To illustrate how performatives work, we show when both assertions and requests can be derived from the utterance of the performative "I request you to <do act>." The important point to notice here is that we have not had to add to our machinery; performative utterances will be treated exactly as declarative utterances, with the exception that the content of the utterance will make reference to an utterance event.

6.1 Request Reports

Let us characterize the truth conditions of the family of declarative sentences "x requests y to <imperative sentence s'>." Let s be such a sentence. Let α be ∃e1 (DONE y e1) ∧ (FULFILL-CONDS s' e1 e). We ignore most syntactic considerations and indexicality for reasons of space.

Domain Axiom 2. Present tense requests:
⊨ ∀x, y, e, t1, (DONE t1?;e) ∧ (SUBJECT s x) ∧ (D-OBJECT s y) ∧ (REFERS x x e) ∧ (REFERS y y e) ⊃
(TRUE s e) ≡ (DONE x {REQUEST x y e α t1})

That is, if event e is happening and the sentence s is a present tense declarative sentence whose main verb is "request," whose subject x refers to person x, and whose direct object y refers to person y, then the sentence is true iff x is requesting the addressee y to fulfill the conditions of imperative sentence s'. A bare present (or present progressive) tense sentence is true when the event being described is contemporaneous with the event of uttering it. [Footnote 8: Searle [10] correctly points out that performatives can be uttered in the passive, and in the first-person plural.] This definition applies equally well to "John requests Mary to ..." as it does when I utter "I request you to ..." For the former, such sentences are likely to be narrations of ongoing events. [Footnote 9: We are ignoring the habitual reading of bare present tense sentences because we do not have a semantics for them.] For the latter, the event that is happening that makes the utterance true is the speaker's uttering of the sentence.

By our definition of request, for x to request y to achieve p, x has to attempt to get y to do some action intentionally to fulfill the sentence s', by making that goal mutually believed between them. Thus, to say x requested y to do something is only to say that x had the right beliefs, goals, and intentions.

6.2 Performatives Used as Requests

Next, we treat performative sentences as declaratives. This means that the effects of uttering them are described by Domain Axiom 1A.
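Before turning to the proofs, it may help to see Definitions 7 through 9 as one schema with two slots: the "honest effort" condition p and the ultimate goal q. The sketch below is our own restatement, not the paper's notation; the tuple encoding is arbitrary, and the BMB bodies of Definitions 8 and 9 are abbreviated to a placeholder string rather than spelled out.

def seq(*actions):
    return ("SEQ",) + actions

def test(p):
    return ("TEST", p)

def attempt(x, e, p, q, t1):
    # Definition 7: t1?;[(BEL x ¬p ∧ ¬q) ∧ (INTEND1 x t1?;e;p? (GOAL x ◇q)) ∧ (GOAL x ◇q)]?;e
    mental_state = ("AND",
                    ("BEL", x, ("AND", ("NOT", p), ("NOT", q))),
                    ("INTEND1", x, seq(test(t1), e, test(p)),
                     ("GOAL", x, ("EVENTUALLY", q))),
                    ("GOAL", x, ("EVENTUALLY", q)))
    return seq(test(t1), test(mental_state), e)

def request(spkr, addr, e, p, t1):
    # Definition 8: the q slot is that the addressee achieves p by some event e2.
    honest_effort = ("BMB", addr, spkr, "<speaker's goals made public>")  # abbreviated
    ultimate_goal = ("EXISTS", "e2", ("DONE", addr, seq("e2", test(p))))
    return attempt(spkr, e, honest_effort, ultimate_goal, t1)

def assert_(spkr, addr, e, p, t1):
    # Definition 9: same p-slot shape; the q slot is that it is BMB'ed
    # that the speaker believed p before e.
    honest_effort = ("BMB", addr, spkr, "<speaker's goals made public>")  # abbreviated
    ultimate_goal = ("BMB", addr, spkr, ("BEFORE", e, ("BEL", spkr, p)))
    return attempt(spkr, e, honest_effort, ultimate_goal, t1)

Nothing distinguishes the two acts except the slot contents; this is what allows Section 6 to derive both a request and an assertion from one and the same utterance event.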
We sketch below a proof of a general theorem regarding performative requests, with s being the declarative sentence "I request you to <imperative sentence s'>", and α being ∃e1 (DONE addr e1) ∧ (FULFILL-CONDS s' e1 e). We take the uttering of a sentence to be a unitary utterance event.

Theorem 1. A Performative Request:
⊨ ∀ spkr, addr, e, n, t1,
(MK spkr addr (DONE spkr t1?;e) ∧ (UTTER spkr addr e s)) ∧
(BEFORE t1?;e (GOAL spkr [AFTER t1?;e (KNOW addr [BEFORE t1?;e (BEL spkr (TRUE s e))])]))
⊃ (DONE {REQUEST spkr addr e α t1})

That is, we need to show that if the sentence "I request you to <imperative sentence>" has just been uttered at time t1 sincerely, i.e., when the speaker wanted the addressee to know that he believed the sentence was true, then a direct request has taken place at t1.

Proof sketch: Essentially, one invokes the domain axiom for declaratives at the first level of ABEL, entailing that the speaker believes that he believes that he has just done a REQUEST. Then, one expands the definition of REQUEST into an ATTEMPT, and then into its parts. The definition of ATTEMPT is based on BEL, GOAL, and INTEND, the first two of which are obviously introspectable. That is, if one believes one has them, one does, and vice versa. Hence, by the memory assumption, the speaker actually had them prior to the utterance. More critically, intending to act at time t1 is also introspectable at time t1, because agents know what they are doing at the next instant and because there is no time to drop their commitment [2]. Thus, one can repackage these mental states into an ATTEMPT and then a REQUEST.

6.3 Performatives Used as Assertions

We have shown that the speaker of a sincere performative utterance containing an illocutionary verb has performed the illocutionary act named by that verb. Under somewhat stronger conditions, we can also prove that the speaker has made an assertion. As before, let s be "I request you to <imperative sentence>."

Theorem 2. Performatives Used as Assertions:
⊨ ∀ spkr, addr, e, n, t1,
(MK spkr addr (DONE spkr t1?;e) ∧ (UTTER spkr addr e s)) ∧
[BEFORE e (BEL spkr [AFTER e ∀n ¬(ABEL n addr spkr (BEFORE e ¬(GOAL spkr [AFTER e (KNOW addr [BEFORE e (BEL spkr (TRUE s e))])])))])]
⊃ (DONE {ASSERT spkr addr e (TRUE s e) t1})

This default condition says that before the utterance, the speaker believed there would be no addressee belief after the utterance event (at any level n) to the effect that prior to that event the speaker did not want the addressee to come to know that the speaker believed (TRUE s e). Given Domain Axiom 1A, and the fact that BEL entails GOAL, this suffices to entail the definition of assertion. Notice that whereas requesting was derived in virtue of the content of the utterance, an assertion was derived by default assumptions regarding lack of belief in the speaker's insincerity.

7 'Lie' is not a performative

Some illocutionary verbs, such as "lie," "hint," and "insinuate," cannot be achieved performatively. The following analysis shows a general model for why such verbs naming covert acts cannot be performatively achieved. A reasonable definition of lying is the following complex action:

Definition 10. {LIE spkr addr e p} =def (BEL spkr ¬p)?;{ASSERT spkr addr e p t1}

That is, a lie is an assertion performed when the speaker believes the propositional content is false. For "I lie to you that the door is open" to be a successful performative utterance, it would have to be true that the utterance is a lie.
We would have to show that the uttering of that declarative sentence results in a lie's having been done. More generally, we provide a putative statement of the truth conditions of "x lies to y that <declarative sentence s'>." Call the main sentence s.

Domain Axiom 3. Supposed Truth Conditions for Performative Lying:
⊨ ∀e, x, y, t1, (DONE t1?;e) ∧ (REFERS x x e) ∧ (REFERS y y e) ⊃
(TRUE s e) ≡ (DONE {LIE x y e (TRUE s' e) t1})

That is, if s and s' are declarative sentences of the appropriate syntactic form, x refers to x and y refers to y, then s is true iff in performing it at time t1, x was lying that sentence s' is true. So we can prove the following. Let the sentence s be "I lie to you that <declarative sentence s'>."

Theorem 3. Lies are not performative:
⊨ ∀ spkr, addr, e, n
(MK spkr addr [(DONE spkr t1?;e) ∧ (UTTER spkr addr e s)])
⊃ ¬(DONE {LIE spkr addr e (TRUE s' e) t1})

In other words, you cannot perform a lie by saying "I lie that ..."

Proof sketch: Assume that it is mutually believed that the speaker has uttered declarative sentence s. Now, apply Domain Axiom 1A. By assumption, the first conjunct of the antecedent holds. There are then two cases to consider. First, assume (**) that the second conjunct holds (say, at level n = 1), i.e., the addressee does not believe the speaker did not want him to know that he believed s' was true. In virtue of the supposed truth conditions on lying, spkr would have to have been lying. By expanding its definition, and using the memory and introspectability properties of BEL, GOAL, and INTEND, the addressee can conclude that, before the utterance, the speaker wanted him not to know that the speaker believes that, in uttering s, he was lying. But this contradicts the assumption (**). Since the speaker in fact uttered the sentence, that assumption is false, and the addressee believes the speaker did not in fact want him to know that he believed the sentence was true. This renders impossible the intentions to be achieved in asserting, which are constitutive of lying as well.

Now, assume (**) is false, so the addressee in fact believes the speaker did not want him to know that s' was true. Again, this immediately makes the speaker's intentions in asserting, and hence lying, impossible to achieve. So, in neither case is the utterance a lie. If the addressee believes the speaker is a competent speaker of the language, the speaker must have intended some other interpretation.

8 Conclusion

Requesting works well as a performative verb because requesting requires only that the agent has made an attempt, and need not have succeeded in getting the hearer to do the requested action, or even to form the right beliefs. Some verbs cannot be used performatively, such as "frighten," because they require something beyond a mere attempt. Hence, such verbs would name action expressions that required a particular proposition p to be true after the utterance event. When the utterance event does not guarantee such a p, the use of the performative verb will not be possible.

On the other hand, certain utterances (performative or not), when performed by the right people in the right circumstances, make certain institutional facts hold. So, when a clergyman, judge, or ship captain says "I now pronounce you husband and wife," the man and woman in question are married.
In our framework, there would be a domain axiom whose antecedent characterizes the circumstances, participants, and nature of the ut- terance event, and whose consequent asserts that an institutional fact is true. The axiom is justified not by the nature of rational action, but by the ex- istence of an institution. Such utterances could be 87 made with a performative prefix provided such at- tempts are made into successes by the institution. This paper has shown that treating performa- tive utterances as declarative sentences is a vi- able analysis, in spite of Searle's criticisms. The performative use of an illocutionary verb is self- guaranteeing when the named illocutionary act consists in the speaker's making an attempt to make public his mental state. In such cases, if the speaker thinks he has done so, then he has. However, we do not derive the named illocution- ary act from the assertion, nor vice-versa. Instead, both derivations may be made from the utterance event, but the assertive one is in fact harder to obtain as it has extra conditions that need to be satisfied. References [1] K. Bach and R. Harnish. Linguistic Com- munication and Speech Acts. M. I. T. Press, Cambridge, Massachusetts, 1979. [2] P. R. Cohen and H. J. Levesque. Intention is choice with commitment. Artificial Intelli- gence, 42(3), 1990. [3] P. R. Cohen and H. J. Levesque. Rational interaction as the basis for communication. In P. R. Cohen, J. Morgan, and M. E. Pollack, editors, Intentions in Communication. M.I.T. Press, Cambridge, Massachusetts, in press. [4] D. Harel. First-Order Dynamic Logic. Springer-Verlag, New York City, New York, 1979. [5] K. Kogure, H. Iida, K. Yoshimoto, H. Maeda, M. Kume, and S. Kato. A method of ana- lyzing Japanese speech act types. In Second International Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages, 1986. [6] D.J. Litman and J. F. Allen. A plan recogni- tion model for subdialogues in conversation. Technical report, Department of Computer Science, Univ. of Rochester, Rochester, New York, November 1984. [7] J. McCarthy. ELEPHANT: a programming language based on speech acts. Unpublished ms., Dept. of Computer Science, Stanford University, 1989. [8] C. R. Perrault. An application of default logic to speech act theory. In P. R. Cohen, J. Mor- gan, and M. E. Pollack, editors, Intentions in Communication. M.I.T. Press, Cambridge, Massachusetts, in press. [9] J. Sadock. Toward a Linguistic Theory of Speech Acts. Academic Press, New York, 1984. [10] J. Searle. How performatives work. Linguis- tics and Philosophy, 12:535-558, 1989. [11] J. R. Searle. Speech acts: An essay in the philosophy of language. Cambridge Univer- sity Press, Cambridge, 1969. [12] J. R. Searle and D. Vanderveken. Founda- tions of lllocutionary Logic. Cambridge Univ. Press, New York City, New York, 1985. [13] Y. Shoham. Agent oriented programming. Unpublished ms., Dept. of Computer Science, Stanford University, October 1989. [14] H. Tennant. Evaluation of natural language processors. Technical Report T-103, Coordi- nated Science Laboratory, University of Illi- nois, Urbana, Illinois, November 1980. Ph. D. Thesis. [15] T. Winograd and F. Flores. Understanding Computers and Cognition: A New Founda- tion for Design. Ablex Publishing Co., Nor- wood, New Jersey, 1986. 88
NORMAL STATE IMPLICATURE Nancy L. Green Department of Computer and Information Sciences University of Delaware Newark, Delaware 19716, USA Abstract In the right situation, a speaker can use an unqual- ified indefinite description without being misun- derstood. This use of language, normal slate im- plicature, is a kind of conversational implicature, i.e. a non-truth-functional context-dependent in- ference based upon language users' awareness of principles of cooperative conversation. I present a convention for identifying normal state implica- tures which is based upon mutual beliefs of the speaker and hearer about certain properties of the speaker's plan. A key property is the precondition that an entity playing a role in the plan must be in a normal state with respect to the plan. 1 Introduction In the right situation, a speaker can use an unqualified indefinite description without being misunderstood. For example, a typical customer in a typical pet shop who said (la) in response to the clerk's question in (1) would expect to be un- derstood as meaning (lb). The goal of this paper is to formally describe such uses of language. 1 1A similar use of language is noted in [McC87]. Mc- Carthy (pp. 29-30) discusses the problem of brid~ng the gap between a "rather direct [translation] into first order logic" of a statement of the Missionaries and Cannibals puz- zle, and a representation suitable for devising a solution to the puzzle. For example, if the puzzle statement mentions that '% rowboat that seats two is available" and doesn't say that anything is wrong with the boat, the problem-solver may assume that the boat doesn't leak, has oars, etc. Mc- Carthy proposes a general-purpose method for formalizing common sense reasoning, "circumscription", to solve the problem. Also, a similar use of language is described in [GriT5] (p. 51): "A is standing by an obviously immobilized car and is approached by B; the following exchange takes place: A: I am out of petrol. B: There is a garage round the corner. ... [B] implicates that the garage is, or at least may be open, [has petrol to sell], etc." That tiffs use of language 89 1. (Clerk A:) May I help you? a. (Customer B:) I'd like to see a parrot. b. I [the speaker] would like to see a live parrot. c. 3 p:PARROT REQUEST(B,A,SIIOW(A,B,p)) d. 3 q:[A p:PARROT LIVE(p)] REQUEST(B,A, SHOW(A,B,q) One problem is that (la) (i.e. its putative representation in (lc)) does not entail (lb) (i.e. its putative representation in (ld)). 2 Another problem is the context-dependence, both spatio-temporal and linguistic, of the relationship of (lb) to (la). In a different spatic~temporal context, such as in a china shop, a speaker might use (la) to convey (2) rather than (lb). 2. I [the speaker] would like to see a porcelain parrot. In a different linguistic context, such as if the cus- tomer had said (3a) following (la), she would not involves the use of language I have illustrated in (1) can be seen by considering a situation identical to the above except that the dialogue consists of just A's saying "I need a garage." In other words, Grice's example is of a situation where B has anticipated a request from A which is the same kind of request as (la). 2The customer's use of (la) is an indirect speech act, namely, a request to be shown a parrot; other possible re- alizations of this request include "Show me a parrot" and "Can you show me a parrot?". 
(The derivation of represen- tations of indirect speech acts has been treated elsewhere [PAS0] and is not a concern of this paper.) (Ic) is intended to represent that request by means of a first order language extended with hlgher-order operators such as REQUEST. Also, indefinite descriptions are represented as in [Web83]. The status of the existence of the parrot in the real world or discourse context (and the related question as to the proper scope of the existential quantifier), is not relevazlt to the concerns of this paper. My point is that the usual treatments employing a one-to-one translation from surface structure to logical form without consideration of other in- formation will not he able to explain the relationship of (lb) to (1@ normally expect the clerk to think she had meant (lb). A related question is why it would be ap- propriate (non-redundant) for the customer to say (3b) following (la) if the customer believed that the clerk might mistakenly believe that the cus- tomer wanted to see a dead parrot. 2 Scalar Implicature tIirschberg proposes the following set of six necessary and sufficient conditions for identifying conversational implicatures (p. 38). 3 A speaker S conversationally implicates Q to a hearer tI by saying U (where U is a realization of a proposition P) in a context C iff: 3.a .... a dead one b .... a live one A third problem is that in order to derive (lb) from (la) it is necessary to consider the beliefs of speaker (S) and hearer (H): e.g. S's and H's beliefs about why each said what they did, and about the appropriate state of the parrot. Grice [Gri75] described conversational im- plicature, a kind of non-truth-functional context- dependent inference based upon a speaker's and hearer's awareness of principles of cooperative con- versation. In this paper, I claim that a speaker's use of (la) may conversationally implicate (lb). In order to formally describe this kind of conver- sational implicature, which I have termed 'nor- mal state implicature', I adopt the methodology used by Hirschberg [Hir85] for the identification of another kind of conversational implicature, scalar implicature. In section 2, I present a brief description of scalar implicatures and Hirschberg's methodol- ogy for identifying them. In section 3, I present a convention for identifying normal state implica- tures. Informally speaking, the convention is that if speaker S makes a request that hearer H per- form an action A on an entity E, and if S and tt mutually believe that S has a plan whose success depends on the E being in a certain state N (which is the normal state for an E with respect to that plan) and that S's request is a step of that plan, then S is implicating a request for S to do A on an E in state N. In section 4, I clarify the notion of nor- mal state with respect to a plan by distinguish- ing it from the notions of stereotype and plan- independent normal state. Next, in section 5, I show how states can be represented in the lexicon. In section 6, I compare scalar and normal state im- plicature; in section 7, survey related work; and, in section 8, present my conclusions. 1. S intends to convey Q to H via U; and 2. S believes that S and H mutually believe that S is being cooperative; and . . . . 
S and H mutually believe that S's saying U in C, given S's cooperativity, licenses Q; and Q is cancelable; i.e., it is possible to deny Q without denying P; and Q is nondetachable; i.e., the choice of a real- ization U of P does not affect S's implicating Q (except in certain situations where Q is li- censed via Grice's Maxim of Manner); and Q is reinforceable; i.e., it is possible to affirm Q without seeming redundant. Instead of using these conditions to identify particular scalar implicatures, Hirschberg argues that it is sufficient to provide a means of iden- tifying instances of a class of conversational im- plicature, such as scalar implicatures. Then, she provides a convention for identifying instances of scalar implicat ure. Informally speaking, scalar implicature is based on the convention that (pp. 1 - 2)"cooper- ative speakers will say as much as they truthfully can that is relevant to a conversational exchange"; and distinguished from other conversational impli- catures by "being dependent upon the identifica- tion of some salient relation that orders a concept referred to in an utterance with other concepts"; e.g. by saying (4a), B has scalar implicated (4b). 4 (4) A: How was the party last night? a. B: Some people left early. b. Not all people left early. 90 The convention for identifying scalar impli- cature proposed by Hirschberg is of the form: if 3Her conditions are ~ revision of Grice's. Also, I have changed the names of her variables to be consistent with usage in the rest of my paper. 4 (4) is example (1) in [Hir85]. there exists a partial order O such that S and H mutually believe that O is salient in context C, and utterance U realizes the proposition that S af- firms/denies/is ignorant of some value in O, then by saying U to H in C, S licenses the scalar im- plicature that S has a particular belief regarding some other value of O. In the next section, I will ap- ply Hirschberg's methodology to the problem of identifying normal state implicatures. 3 Normal State Implicature In this section, I will argue that (lb) is a conversational implicature and propose a conven- tion for identifying instances of that class of impli- cature, which I will call 'normal state implicature'. First, I claim that a speaker S conversa- tionally implicates (lb) to a hearer H by saying (la) in the context described above; i.e. that (lb) is a conversational implicature according to the six conditions described in section 2. Condition 1 is met since S intends to cause H to believe (lb) by saying (la); condition 2 since S believes that it is a mutual belief of S and H that S is being cooperative; condition 3 will be satisfied by pro- viding a convention for normal state implicature below. The previous discussion about (3a) and (3b) provides evidence for cancelability (condition 4) and reinforceability (condition 6), respectively; and, (lb) is nondetachable (condition 5) since al- ternate ways of saying (la), in the same context, would convey (lb). Next, in order "to motivate the general convention ((6) below) for identifying normal state implicatures, I'll present the instance of the convention that accounts for the implicature in (1). Let S, H, U, and C be constants de- noting speaker, hearer, utterance, and context, respectively. Let b0, bl, and g be first or- der variables over parrots (PARROT), live par- rots (the lambda expression), and plans (PLAN), respectively. 5 HAS-PLAN(Agent,Plan,Entity) is 5The model of plans used here is that of STRIPS [FN71] with minor extensions. 
true if Agent has a plan in which Entity plays a role; PRECOND(Plan,Proposition) is true if Plan has Proposition as a precondition; STEP(Plan,Action) is true if Action is a step of Plan. Also, BMB(A,B,Proposition) is true if A believes that A and B mutually believe that Proposition; REALIZE(Utterance,Proposition) is true if Utterance expresses Proposition; REQUEST(S,H,Action) is true if S requests H to perform Action; and SAY(S,H,U,C) is true if S says U to H in C. [Footnote 6: BMB, REALIZE, REQUEST, and SAY are from [Hir85].] SHOW(A,B,C) is true if A shows C to B. IN-STATE(Entity,State) is true if Entity is in the given State; and NORMAL-STATE(State,Plan,Entity) is true if State is the normal state of Entity with respect to Plan. [Footnote 7: I will discuss what is meant by state and normal state in section 4.] Finally, NORMAL-STATE-IMP(Speaker,Hearer,Utterance,Proposition,Context) is true if by use of Utterance in Context, Speaker conveys Proposition to Hearer.

[Footnote 5: A plan includes preconditions which must hold in order for the plan to succeed, and a sequence of actions to be carried out to achieve some goal. One extension to this model is to add a list of entities playing a role in the plan either as instruments (e.g., a boat which is to be used to cross a river) or as the goal itself (e.g., a parrot to be acquired for a pet). The second extension, suggested in [Car88], is to distinguish preconditions which can be achieved as subgoals from those which are unreasonable for the agent to try to bring about ("applicability conditions"). In (5) and (6), preconditions are meant in the sense of applicability conditions.]

Now, to paraphrase (5) below, if S and H mutually believe that S has a plan in which a parrot plays a role and that a precondition of S's plan is that the parrot should be alive, which is its normal state with respect to the plan, and that S's saying U is a step of that plan; and, if U is a request to be shown a parrot, then S normal state implicates a request to be shown a live parrot.

5. ∀b0:PARROT ∀b1:[λb2:PARROT LIVE(b2)] ∀g:PLAN
BMB(S, H, HAS-PLAN(S, g, b0) ∧ PRECOND(g, IN-STATE(b0, LIVE)) ∧ NORMAL-STATE(LIVE, g, b0) ∧ STEP(g, SAY(S, H, U, C))) ∧ REALIZE(U, REQUEST(S, H, SHOW(H, S, b0)))
⇔ NORMAL-STATE-IMP(S, H, U, REQUEST(S, H, SHOW(H, S, b1)), C)

It is possible to generalize (5) as follows. Let K, N, and A be higher-order variables over classifications (CLASSIF), states (STATE), and actions that may be performed as a step in a plan (ACT), respectively. Then, (6) is the general convention for identifying normal state implicature.

6. ∀K:CLASSIF ∀N:STATE ∀A:ACT ∀b0:K ∀b1:[λb2:K N(b2)] ∀g:PLAN
BMB(S, H, HAS-PLAN(S, g, b0) ∧ PRECOND(g, IN-STATE(b0, N)) ∧ NORMAL-STATE(N, g, b0) ∧ STEP(g, SAY(S, H, U, C))) ∧ REALIZE(U, REQUEST(S, H, A(b0)))
⇔ NORMAL-STATE-IMP(S, H, U, REQUEST(S, H, A(b1)), C)

Unfortunately, if (6) is to be of maximum use, there are two problems to be solved. First, there is the problem of representing all preconditions of a plan, [8] and, second, the problem of plan inference, i.e., how does H come to know what S's plan is (including the problem of recognizing that the saying of U is a step in S's plan)? [9] Both problems are outside the scope of this paper.

4 States and Normal States

First, what I mean by a state of an entity E is, adopted from [Lan87], a history of related events involving E. In Lansky's ontology, events may be causally or temporally related. Temporal precedence is transitive.
Causality is not transitive and does not necessitate occurrence but does imply temporal precedence. A strong pre- requisite constraint (--,) can be defined such that "each event of type E~ can be caused by ex- actly one event of type El, and each event of type E1 can cause at most one event of type E2" ([Lan87],p. 142). Many classifications expressed as nouns de- note a class of entity whose state varies over the period of existence during which it is aptly char- acterized by the classification. For example, Fig- ure 1 and Figure 2 depict causal event chains l° of parrots and vases, respectively. (Nodes represent events and directed arcs represent causality.) The state of being dead or SE.g., see [McC87]. 9E.g., see [Car88]. 1°I don't mean 'causal chain' in the sense that philoso- phers have recently used it [Sch77], nor in the sense of [SA77], nor do I mean 'chain' in the mathematical sense of a total order. broken can be defined in terms of the occurrence of an event type of dying or breaking, respectively. Live is the state of an entity who has been born but has not yet died; ready-to-use is the state of an artifact between its creation or repair and its destruction. 11 Note that, paradoxically, language users would agree that a dead parrot or a vase with a crack in it is still aptly characterized as a parrot or vase, respectively. 12 Next, what I mean by a normal state of E is a state that E is expected to be in. For example, in the absence of information to the contrary, live or ready-to-use is expected by language users to be a state of parrots or vases, respectively. Note, however, that NORMAL-STATE in (6) represents a normal state of an entity with respect to some plan. That is, I am not claiming that, in the ab- sence of information about S's plan, S's use of (la) conversationally implicates (lb). The reason for stipulating that NORMAL- STATE be relative to S's plan is that use of (la) in the context of a different plan could change what S and H consider to be normal. For example, in a taxidermist's plan, dead could be the normal state of a parrot. Also, consider 'coffee': a speaker's use of (7) in the context of a coffee farm could be used to request coffee beans; in a grocery store, ajar of instant; and in a restaurant, a hot beverage. 7. I'd like some coffee. 92 Note that more than one precondition of S's plan may be relevant to interpreting S's use of an expression. For example, a typical restaurant customer uttering (7) expects to be understood as not only requesting coffee in its hot-beverage state, but also in its safe-to-drink state. Also, more than one of S's plans may be relevant, Returning to the pet shop example, suppose that S and H mutually believe that S has plans to acquire a parrot as a pet and also to study its vocalizations; then it would be inappropriate for H to show S a parrot that H believed to be incapable of making vocalizations. Normal states differ from stereotypes. A stereotype is a generalization about prototypes of a category, 13 e.g. (8). 14 11Examples of how state predicates can be defined in Lansky's formal language will be given later. 12The cracked vase example is from [Her87]. laThe prototype-stereotype distinction is described in[HH83]. 14Note that stereotypes may be relative to a state of the 8. Unripe bananas are green. Qualifying an expression in a way which contradicts a stereotype may have a different ef- fect on H than doing so in a way which specifies a non-normal state. 
For instance, if S says (9) after saying (la) in the above pet shop scenario, H may doubt S's sincerity or S's knowledge about parrots; while S's use of (3a) after saying (la) may cause tI to have doubts about S's sincerity or It's knowl- edge of S's plan, but not S's knowledge about par- rots. 9 .... a 100 pound one Another difference between stereotypes and normal states is that stereotypes are not affected by S's and H's mutual beliefs about S's plan, whereas I have just demonstrated that what is considered normal may change in the context of S's plan. Finally, another reason for making the distinction is that I am not claiming that, in the above pet shop scenario, S's use of (la) licenses (10); i.e., S does not intend to convey (10). 15 10. I [the speaker] would like to see a large, green, talking bird. 5 The Role of Events in cer- tain Lexical Representa- tions Now I will show how the notion of state presented in the previous section can be repre- sented in the lexicon via state predicates based on causal event chains. The purpose of this is to clarify what counts as a state and hence, what is prototype; e.g. contrast (8) with "Ripe bananas are yel- low". A statement of a stereotype in which the state of the prototypes is unspecified may describe prototypes in the plan-independent normal state for the category; e.g. con- sider "Bananas are yellow". Also, note that stereotypical properties may be used to convey the state; e.g. consider "I want a green banana" used to convey "I want an unripe banana". 15I recognize that it is possible for a speaker to exploit mutual beliefs about stereotypes or plan-independent nor- real states to convey conversational implicatures. E.g., con- sider the conversation: A says, "Is your neighbor rich?" B replies, "He's a doctor." However, this kind of implicature does not occur under the same conditions as those given for normal state implicature, and is outside of the scope of tiffs paper. 93 to be identified by the convention for normal state implicature. This way of representing states has benefits in other areas. First, entaihnent relation- ships between states of an entity are thereby rep- resented. Second, certain scalar implicatures may be based on the event ordering of a causal event chain. For example, Figure 3 contains pictorial and formal representations of a causal event chain for the ripening of fruit. Definitions of states are given as state predicates; e.g. the expression 'un- ripe' is used to denote a state such that no event of ripening (R) has occurred (yet). Note that, as (11) shows, 'ripe' may be used to scalar implicate but not to entail 'not overripe'; the event order- ing of the causal event chain serves as the salient order for the scalar implicature. The expected en- tailments follow from the constraints represented in Figure 3. ll.a. It's ripe. In fact, it's just right for eating. b. It's ripe. In fact, it's overripe/too ripe. 6 Comparison of Scalar and Normal State Implicature These two classes of conversational impli- cature have some interesting similarities and dif- ferences. First, licensing a scalar implicature requires the mention of some specific value in an ordering, while licensing a normal state implicature requires the absence of the mention of any state. 
For example, consider a situation where S is a restaurant customer; H is a waiter; S and H have mutual belief of the salience of an ordering such that warm precedes boiling hot; and S and H have mutual belief of S's plan to make tea by steeping a tea bag in boiling hot water.

14.a. I'd like a pot of water.
b. I'd like a pot of warm water.
c. I'd like a pot of boiling hot water.
d. I'd like a pot of warm but not boiling hot water.

In this situation, use of (14a) would license the normal state implicature (14c) but no scalar implicature. However, use of (14b) would license the scalar implicature (14d) but not the normal state implicature (14c). (In fact, use of 'warm' in (14b) would cancel (14c), as well as be confusing to H due to its inconsistency with H's belief about S's intention to make tea.) Thus, at least in this example, scalar and normal state implicature are mutually exclusive.

Second, saliency and order relations play a role in both. Scalar implicature is based on the salience of a partially ordered set (from any domain). Normal state implicature is based on the salience of a plan; one of a plan's preconditions may involve a normal state, which can be defined in terms of a causal event chain.

7 Related Work

This work is related to work in several different areas. First, one of the goals of research on nonmonotonic reasoning [Footnote 16: For a survey, see [Gin87].] has been the use of default information. The classic example, that if something is a bird then it can fly, appears to involve all three notions that I have distinguished here; namely, stereotype, plan-independent normal state, and normal state with respect to a plan. (It is a stereotype that birds are genetically suited for flight; a plan-independent normal state that a bird is alive or uninjured; and a normal state with respect to a plan to send a message via carrier pigeon that the bird be able to fly.) Also, I have shown that the calculation of normal state implicature is based only on the third notion, i.e., that certain "defaults" are context-dependent.

In another area, work has been done on using knowledge of a speaker's plans to fill in missing information to interpret incomplete utterances, e.g. sentence fragments [AP80] and ellipsis [Car89].

As for related work on conversational implicature, both [Hor84] and [AL81] describe pragmatic inferences where what is conveyed by an utterance is more precise than its literal meaning. They claim that such inferences are based on a principle of speaker economy and exploit the speaker's and hearer's shared beliefs about stereotypes. Also, Horn points out that an unmarked expression tends to be associated with the stereotype of an extension and its marked counterpart with the non-stereotype. Roughly, this corresponds to my observation regarding (14), that the absence of a qualification (the unmarked case) licenses a normal state implicature, while the presence of a qualification (the marked case) blocks it (thereby allowing the scalar implicature to be conveyed).

Finally, Herskovits [Her87] addresses the problem that the meaning of a locative expression varies with the context of its use. Her approach is to specify "a set of characteristic constraints - constraints that must hold for the expression to be used truly and appropriately under normal conditions" (p. 20). Her constraints appear to include stereotypes and plan-independent normal states; normal is distinguished from prototypical; and the constraints may include speaker purpose.
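Before concluding, the convention can be restated operationally. The following sketch is ours, not part of the paper; the tuple encoding of the mutually believed propositions is illustrative, and plan inference is assumed to have already produced them.

def normal_state_implicature(mutual_beliefs, utterance, speaker, action, entity):
    # Sketch of convention (6). mutual_beliefs is a set of tuples such as:
    #   ("HAS-PLAN", "S", "g", "b0")
    #   ("PRECOND", "g", ("IN-STATE", "b0", "LIVE"))
    #   ("NORMAL-STATE", "LIVE", "g", "b0")
    #   ("STEP", "g", utterance)
    # If the conditions hold, the request is read as ranging over entities
    # of the given class that are in their normal state N.
    for belief in mutual_beliefs:
        if belief[0] != "NORMAL-STATE":
            continue
        _, state, plan, ent = belief
        if (ent == entity
                and ("HAS-PLAN", speaker, plan, ent) in mutual_beliefs
                and ("PRECOND", plan, ("IN-STATE", ent, state)) in mutual_beliefs
                and ("STEP", plan, utterance) in mutual_beliefs):
            return ("REQUEST", action, entity, state)
    return ("REQUEST", action, entity, None)

With the pet shop beliefs listed in the comments, normal_state_implicature(beliefs, "U", "S", "SHOW", "b0") returns ("REQUEST", "SHOW", "b0", "LIVE"); in the taxidermist's context, the NORMAL-STATE proposition would differ, and so would the enrichment, which is the context-dependence argued for in section 4.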
94 8 Conclusions This paper has provided a convention for identifying normal state implicatures. Normal state implicature permits a speaker to omit certain information from an indefinite description in cer- tain situations without being misunderstood. The convention is that if S makes a request that tt per- form an action A on an E, and if S and H mutually believe that S has a plan whose success depends upon the E being in the normal state N with re- spect to that plan, and that S's request is a step of that plan, then S is implicating a request for S to do A on an E in state N. In order to specify the convention for nor- mal state implicature, I distinguished the notions of stereotype, plan-independent normal state, and normal state with respect to a plan. This distinc- tion may prove useful in solving other problems in the description of how language is used. Also, a representation for states, in terms of causal event chains, was proposed. The convention I have provided is impor- tant both in natural language generation and in- terpretation. In generation, a system needs to consider what normal state implicatures would be licensed by its use of an indefinite description. These implicatures determine what qualifications may be omitted (namely, those which would be im- plicated) and what ones are required (those which are needed to block implicatures that the system does not wish to convey), lr In interpretation, a system may need to understand what a user has 17This latter behavior is an example of Joshi's revised Maxim of Quality: "If you, the speaker, plan to say any- thing which may imply for the hearer something you believe to be false, then provide further information to block it." [JosS2] implicated in order to provide a cooperative re- sponse. For instance, if during a dialogue a sys- tem has inferred that a user has a plan to make an immediate delivery, and then the user says (15a), then if the system knows that the only truck in terminal A is out of service, it would be uncoop- erative for the system to reply with (15b) alone; (15c) should be added for a more cooperative re- sponse. 15.a. User: Is there a truck in terminal A? b. System: Yes, there is one c. but it's out of service. This work may be extended in at least two ways. First, it would be interesting to investigate what plan inference algorithms are necessary in or- der to recognize normal state implicatures in ac- tual dialogue. Another question is whether the notion of normal state implicature can be gener- alized to account for other uses of language. 9 Acknowledgments An earlier version of this work was done at the University of Pennsylvania, partially sup- ported by DARPA grant N00014-85-K0018. My thanks to the people there, particularly Bonnie Webber and Ellen Prince. Thanks to my col- leagues at SAS Institute Inc., Cary, N. C., for their moral support while much of this paper was being written. The final draft was prepared at the Uni- versity of Delaware; thanks to the people there, especially Sandra Carberry and K. Vijayashanker. References [AL81] Jay David Atlas and Stephen C. Levin- son. It-clefts, informativeness, and log- ical form: radical pragmatics (revised standard version). In Peter Cole, editor, Radical Pragmatics, pages 1-62, Aca- demic Press, N. Y., 1981. lAP80] James F. Allen and C. Raymond Per- rault. Analyzing intention in utterances. Artificial Intelligence, 15:143-178, 1980. [c~881 Sandra Carberry. Modeling the user's plans and goals. Computational Linguis- tics, 14(3):23-37, 1988. 
[Car89] Sandra Carberry. A pragmatics-based approach to ellipsis resolution. Computational Linguistics, 15(2):75-96, 1989.

[FN71] R. E. Fikes and N. J. Nilsson. STRIPS: a new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2:189-208, 1971.

[Gin87] Matthew L. Ginsberg. Readings in Nonmonotonic Reasoning. Morgan Kaufmann, Los Altos, California, 1987.

[Gri75] H. Paul Grice. Logic and conversation. In P. Cole and J. L. Morgan, editors, Syntax and Semantics III: Speech Acts, pages 41-58, Academic Press, N.Y., 1975.

[Her87] Annette Herskovits. Language and Spatial Cognition. Cambridge University Press, Cambridge, England, 1987.

[HH83] J. Hurford and B. Heasley. Semantics: A Coursebook. Cambridge University Press, Cambridge, England, 1983.

[Hir85] Julia Bell Hirschberg. A Theory of Scalar Implicature. Technical Report MS-CIS-85-56, Department of Computer and Information Science, University of Pennsylvania, 1985.

[Hor84] Larry Horn. Toward a new taxonomy for pragmatic inference: Q-based and R-based implicature. In D. Schiffrin, editor, GURT '84. Meaning, Form and Use in Context: Linguistic Applications, pages 11-42, Georgetown University Press, Washington, D.C., 1984.

[Jos82] Aravind K. Joshi. Mutual beliefs in question-answer systems. In N. Smith, editor, Mutual Beliefs, pages 181-197, Academic Press, New York, 1982.

[Lan87] Amy Lansky. A representation of parallel activity based on events, structure, and causality. In M. P. Georgeff and A. Lansky, editors, Reasoning about Actions and Plans: Proceedings of the 1986 Workshop, pages 123-160, Morgan Kaufmann, 1987.

[McC87] John McCarthy. Circumscription - a form of non-monotonic reasoning. In Matthew L. Ginsberg, editor, Readings in Nonmonotonic Reasoning, pages 145-152, Morgan Kaufmann, 1987.

[PA80] R. Perrault and J. Allen. A plan-based analysis of indirect speech acts. American Journal of Computational Linguistics, 6(3-4):167-182, 1980.

[SA77] Roger C. Schank and Robert P. Abelson. Scripts, Plans, Goals and Understanding. Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1977.

[Sch77] Stephen P. Schwartz. Introduction. In Stephen P. Schwartz, editor, Naming, Necessity, and Natural Kinds, pages 13-41, Cornell University Press, 1977.

[Web83] Bonnie L. Webber. So what can we talk about now? In B. Grosz, K. S. Jones, and B. L. Webber, editors, Readings in Natural Language Processing, Morgan Kaufmann, Los Altos, California, 1983.

Figure 1: Causal event chain for a parrot (events: unborn → live → dead).

Figure 2: Causal event chain for a vase (events: unfinished → ready-to-use → ...).

Figure 3: Causal event chain for fruit ripening. The element type Fruit-for-eating has event types R [Ripen] and O [Become Overripe], with the constraint R → O. State predicates are defined over an entity x's event history:
unripe(x) ≡ ¬(∃r:x.R) occurred(r)
just-ripe(x) ≡ (∃r:x.R) occurred(r) ∧ ¬((∃o:x.O) occurred(o) ∧ r → o)
overripe(x) ≡ (∃o:x.O) occurred(o)
ripe(x) ≡ (∃r:x.R) occurred(r)
THE COMPUTATIONAL COMPLEXITY OF AVOIDING CONVERSATIONAL IMPLICATURES

Ehud Reiter†
Aiken Computation Lab
Harvard University
Cambridge, Mass 02138

† Currently at the Department of Artificial Intelligence, University of Edinburgh, 80 South Bridge, Edinburgh EH1 1HN, Scotland.

ABSTRACT

Referring expressions and other object descriptions should be maximal under the Local Brevity, No Unnecessary Components, and Lexical Preference preference rules; otherwise, they may lead hearers to infer unwanted conversational implicatures. These preference rules can be incorporated into a polynomial time generation algorithm, while some alternative formalizations of conversational implicature make the generation task NP-Hard.

1. Introduction

Natural language generation (NLG) systems should produce referring expressions and other object descriptions that are free of false implicatures, i.e., that do not cause the user of the system to infer incorrect and unwanted conversational implicatures (Grice 1975). The following utterances illustrate referring expressions that are and are not free of false implicatures:

1a) "Sit by the table"
1b) "Sit by the brown wooden table"

In a context where only one table was visible, and this table was brown and made of wood, utterances (1a) and (1b) would both fulfill the referring goal: a hearer who heard either utterance would have no trouble picking out the object being referred to. However, a hearer who heard utterance (1b) would probably assume that it was somehow important that the table was brown and made of wood, i.e., that the speaker was trying to do more than just identify the table. If the speaker did not have this intention, and only wished to tell the hearer where to sit, then this would be an incorrect conversational implicature, and could lead to problems later in the discourse. Accordingly, a speaker who only wished to identify the table should use utterance (1a) in this situation, and avoid utterance (1b).

Incorrect conversational implicatures may also arise from inappropriate attributive (informational) descriptions.1 This is illustrated by the following utterances, which might be used by a salesman who wished to inform a customer of the color, material, and sleeve-length of a shirt:

2a) "I have a red T-shirt"
2b) "I have a lightweight red cotton shirt with short sleeves"

Utterances (2a) and (2b) both successfully inform the hearer of the relevant properties of the shirt, assuming the hearer has some domain knowledge about T-shirts. However, if the hearer has this domain knowledge, the use of utterance (2b) might incorrectly implicate that the object being described was not a T-shirt -- because if it was, the hearer would reason, then the speaker would have used utterance (2a). Therefore, in the above situations the speaker, whether a human or a computer NLG system, should use utterances (1a) and (2a), and should avoid utterances (1b) and (2b); utterances (1a) and (2a) are free of false implicatures, while utterances (1b) and (2b) are not.

This paper proposes a computational model for determining when an object description is free of false implicatures. Briefly, a description is considered free of false implicatures if it is maximal under the Local Brevity, No Unnecessary Components, and Lexical Preference preference rules.
These preference rules were chosen on complexity-theoretic as well as linguistic criteria; descriptions that are maximal under these preference rules can be found in polynomial time, while some alternative formalizations of the free-of-false-implicatures constraint make the generation task NP-Hard.

This paper only addresses the problem of generating free-of-false-implicatures referring expressions, such as utterance (1a). Reiter (1990a,b) uses the same preference rules to formalize the task of generating free-of-false-implicatures attributive descriptions, such as utterance (2a).

1 The referring/attributive distinction follows Donnellan (1966): a referring expression is intended to identify an object in the current context, while an attributive description is intended to communicate information about an object.

2. Referring Expression Model

The referring-expression model used in this paper is a variant of Dale's (1989) model for full definite noun phrase referring expressions. Dale's model is applicable in situations in which the speaker intends to refer to an object that the speaker and hearer are mutually aware of, and the speaker has no other communicative goal besides identifying the referred-to object.2

The model assumes that objects belong to a taxonomy class (e.g., Chair) and possess values for various attributes (e.g., Color:Brown).3 Referring expressions are represented as a classification and a set of attribute-value pairs: the classification is syntactically realized as the head noun, while the attribute-value pairs are syntactically realized as NP modifiers. Successful referring expressions are required to be distinguishing descriptions, i.e., descriptions that contain a classification and a set of attributes that are true of the object being referred to, but not of any other object in the current discourse context.4

2 Appelt (1985) presented a more complex referring-expression model that covered situations where the hearer was not already aware of the referred-to object, and that allowed the speaker to have more complex communicative goals. A similar analysis to the one presented in this paper could in principle be done for Appelt's model, but it would be substantially more difficult, in part because the model is more complex, and in part because Appelt did not separate his 'content determination' subsystem from his planner and his surface-form generator.

3 All attributes are assumed to be predicative (Kamp 1975).

4 Dale also suggested that NLG systems should choose distinguishing descriptions of minimal cardinality; this is discussed in footnote 7.

More formally, and using a somewhat different terminology from Dale, let a component be either a classification or an attribute-value pair. A classification component will be written class:Class; an attribute-value pair component will be written Attribute:Value. Then, given a target object, denoted Target, and a set of contrasting objects in the current discourse context, denoted Excluded, a set of components will represent a successful referring expression (a distinguishing description, in Dale's terminology) if the set, denoted RE, satisfies the following constraints:

1) Every component in RE applies to Target: that is, every component in RE is either a classification that subsumes Target, or an attribute-value pair that Target possesses.

2) For every member E of Excluded, there is at least one component in RE that does not apply to E.
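These two constraints can be checked mechanically. The following minimal sketch (a Python encoding of my own, not Reiter's) uses the example context that the next paragraph introduces (objects A, B, and C), and simplifies a component 'applying' to an object to the object simply possessing it:

```python
# Objects encoded as sets of components; the dictionary layout is an
# illustrative assumption, not part of the paper's formalism.
context = {
    "A": {("class", "Table"), ("Material", "Wood"), ("Color", "Brown")},
    "B": {("class", "Chair"), ("Material", "Wood"), ("Color", "Brown")},
    "C": {("class", "Chair"), ("Material", "Wood"), ("Color", "Black")},
}

def successful(re_components, target, excluded):
    """Constraint (1): every component applies to Target.
    Constraint (2): for each excluded object, some component fails."""
    applies_to_target = re_components <= context[target]
    excludes_all = all(not re_components <= context[e] for e in excluded)
    return applies_to_target and excludes_all

print(successful({("class", "Table")}, "A", {"B", "C"}))                      # True
print(successful({("class", "Chair")}, "B", {"A", "C"}))                      # False: matches C too
print(successful({("class", "Chair"), ("Color", "Brown")}, "B", {"A", "C"}))  # True
```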
Example: the current discourse context contains objects A, B, and C (and no other objects), and these objects have the following classifications and attributes (of which both the speaker and the hearer are aware):

A) Table with Material:Wood and Color:Brown
B) Chair with Material:Wood and Color:Brown
C) Chair with Material:Wood and Color:Black

In this context, the referring expressions {class:Table} ("the table") and {class:Table, Material:Wood, Color:Brown} ("the brown wooden table") both successfully refer to object A, because they match object A but no other object. Similarly, the referring expressions {class:Chair, Color:Brown} ("the brown chair") and {class:Chair, Material:Wood, Color:Brown} ("the brown wooden chair") both successfully refer to object B, because they match object B, but no other object. The referring expression {class:Chair} ("the chair"), however, does not successfully refer to object B, because it also matches object C.

3. Conversational Implicature

3.1. Grice's Maxims and Their Interpretation

Grice (1975) proposed four maxims of conversation that speakers needed to obey: Quality, Quantity, Relevance, and Manner. For the task of generating referring expressions as formalized in Section 2, these maxims can be interpreted as follows:

Quality: The Quality maxim requires utterances to be truthful. In this context, it requires referring expressions to be factual descriptions of the referred-to object. This condition is already part of the definition of a successful referring expression, and does not need to be restated as a conversational implicature constraint.

Quantity: The Quantity maxim requires utterances to contain enough information to fulfill the speaker's communicative goal, but not more information. In this context, it requires referring expressions to contain enough information to enable the hearer to identify the referred-to object, but not more information. Therefore, referring expressions should be successful (as defined in Section 2), but should not contain additional elements that are unnecessary for fulfilling the referring goal.

Relevance: The Relevance maxim requires utterances to be relevant to the discourse. In this context, where the speaker is assumed just to have the communicative goal of identifying an object to the hearer, the maxim prohibits referring expressions from containing elements that do not help distinguish the target object from other objects in the discourse context. Irrelevant elements are also unnecessary elements, so the Relevance maxim may be considered to be a special case of the Quantity maxim, at least for the referring-expression generation task as formalized in Section 2.

Manner: The Brevity submaxim of the Manner maxim requires a speaker to use short utterances if possible. In this context it requires the speaker to use a short referring expression if such a referring expression exists. The analysis of the other Manner submaxims is left for future work.

An additional source of conversational implicature was proposed by Cruse (1977) and Hirschberg (1985), who hypothesized that implicatures might arise from the failure to use basic-level classes (Rosch 1978) in an utterance.
In this paper, such implicatures are generalized by assuming that there is a lexical-preference hierarchy among the lexical classes (classes that can be realized with single lexical units) known to the hearer, and that the use of a lexical class in an utterance implicates that no preferred lexical class could have been used in its place.

In summary, conversational implicature considerations require referring expressions to be brief, to not contain unnecessary elements, and to use lexically-preferred classes whenever possible. The following requests illustrate how violations of these principles in referring expressions may lead to unwanted conversational implicatures:

3a) "Wait for me by the pine." ({class:Pine})
3b) "Wait for me by the tree that has pinecones." ({class:Tree, Seed-type:Pinecone})
3c) "Wait for me by the 50-foot-high pine." ({class:Pine, Height:50-feet})
3d) "Wait for me by the sugar pine." ({class:Sugar-pine})

If there were only two trees in the hearer's immediate surroundings, a pine and an oak, then all of the above utterances would be successful referring expressions that enabled the hearer to pick out the object being referred to (assuming the hearer could recognize pines and oaks). In such a situation, however, utterance (3b) would violate the brevity principle, and thus would implicate that the tree could not be described as a "pine" (which might lead the hearer to infer that the tree was not a real pine, but some other tree that happened to have pinecones). Utterance (3c) would violate the no-unnecessary-elements principle, and thus would implicate that it was important that the tree was 50 feet tall (which might lead the hearer to infer that there was another pine tree in the area that had a different height). Utterance (3d) would violate the lexical-preference principle, and thus would implicate that the speaker wished to emphasize that the tree was a sugar pine and not some other kind of pine (which might lead the hearer to infer that the speaker was trying to impress her with his botanical knowledge). A speaker who only wished to tell the hearer where to wait, and did not want the hearer to make any of these implicatures, would need to use utterance (3a), and to avoid utterances (3b), (3c), and (3d).

3.2. Formalizing Conversational Implicature Through Preference Rules

The brevity, no-unnecessary-elements, and lexical-preference principles may be formalized by requiring a description to be a maximal element under a preference function of the set of successful referring expressions. More formally, let D be the set of successful referring expressions, and let >> be a preference function that prefers descriptions that are short, that do not contain unnecessary elements, and that use lexically preferred classes. Then, a referring expression is considered free of false implicatures if it is a maximal element of D with respect to >>. In other words, a description B in D is free of false implicatures if there is no description A in D such that A >> B. This formalization is similar to the partially ordered sets that Hirschberg (1985) used to formalize scalar implicatures: D and >> together form a partially ordered set, and the assumption is that the use of an element in D carries the conversational implicature that no higher-ranked element in D could have been used.

The overall preference function >> will be decomposed into separate preference rules that cover each type of implicature: >>B for brevity, >>U for unnecessary elements, and
>>L for lexical preference. >> is then defined as the disjunction of these preference rules, i.e., A >> B if A >>B B, A >>U B, or A >>L B. The assumption will be made in this paper that there are no conflicts between preference rules, i.e., that it is never the case that A is preferred over B by one preference rule, but B is preferred over A by another preference rule.5 Therefore, >> will be a partial order if >>B, >>U, and >>L are partial orders.

5 Section 7.2 discusses this assumption.

3.3. Computational Tractability

Computational complexity considerations are used in this paper to determine exactly how the no-unnecessary-elements, brevity, and lexical-preference principles should be formalized as preference rules. Sections 4, 5, and 6 examine various preference rules that might plausibly be used to formalize these implicatures, and reject preference rules that make the generation task NP-Hard. This is justified on the grounds that computer NLG systems should not be asked to solve NP-Hard problems.6 Human speakers and hearers are also probably not very proficient at solving NP-Hard problems, which suggests that it is unlikely that NP-Hard preference rules have been incorporated into language.

6 Section 7.1 discusses the computational impact of NP-Hard preference rules.

4. Brevity

Grice's submaxim of brevity states that utterances should be kept brief. Many NLG researchers (e.g., Dale 1989; Appelt 1985: pages 117-118) have suggested that this means generation systems need to produce the shortest possible utterance. This will be called the Full Brevity preference rule. Unfortunately, it is NP-Hard to find the shortest successful referring expression (Section 4.1). Local Brevity (Section 4.2) is a weaker version of the brevity submaxim that can be incorporated into a polynomial-time algorithm for generating successful referring expressions.

4.1. Full Brevity

The Full Brevity preference rule requires the generation system to generate the shortest successful referring expression. Formally, A >>FB B if length(A) < length(B). The task of finding a maximal element of >>FB, i.e., of finding the shortest successful referring expression, is NP-Hard. This result holds for all definitions of length the author has examined (number of open-class words, number of words, number of characters, number of components).

To prove this, let Target-Components denote those components (classifications and attribute-value pairs) of Target that are mutually known by the speaker and the hearer. For each Tj in Target-Components, let Rules-Out(Tj) be the members of Excluded that do not possess Tj (so, the presence of Tj in a referring expression 'rules out' these members). Then, consider a potential referring expression, RE = {C1, ..., Cn}. RE will be a successful referring expression if and only if

a) Every Ci is in Target-Components
b) The union of Rules-Out(Ci), for all Ci in RE, is equal to Excluded.

For example, if the task was referring to object B in the example context of Section 2, then Target-Components would be {class:Chair, Material:Wood, Color:Brown}, Excluded would be {A, C}, and

Rules-Out(class:Chair) = {A}
Rules-Out(Material:Wood) = empty set
Rules-Out(Color:Brown) = {C}

Therefore, {class:Chair, Color:Brown} (i.e., "the brown chair") would be a successful referring expression for object B in this context.
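For concreteness, here is a sketch of the Rules-Out construction together with the greedy covering strategy that, as noted in the next section's footnote, Dale's algorithm is essentially equivalent to. The encoding and helper names are mine; greedy covering is fast but not guaranteed to find the shortest description (finding the true minimum cover is NP-Hard):

```python
# Same illustrative knowledge base as in the earlier sketch.
context = {
    "A": {("class", "Table"), ("Material", "Wood"), ("Color", "Brown")},
    "B": {("class", "Chair"), ("Material", "Wood"), ("Color", "Brown")},
    "C": {("class", "Chair"), ("Material", "Wood"), ("Color", "Black")},
}

def rules_out(component, excluded):
    """Members of Excluded that do not possess the component."""
    return {e for e in excluded if component not in context[e]}

def greedy_re(target, excluded):
    """Greedy set cover: repeatedly add the component that rules out
    the most remaining distractors."""
    uncovered, re_components = set(excluded), set()
    while uncovered:
        best = max(context[target],
                   key=lambda c: len(rules_out(c, uncovered)))
        gain = rules_out(best, uncovered)
        if not gain:
            return None   # no successful referring expression exists
        re_components.add(best)
        uncovered -= gain
    return re_components

print(greedy_re("B", {"A", "C"}))
# e.g. {('class', 'Chair'), ('Color', 'Brown')} -- "the brown chair"
```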
If description length is measured by number of components,7 finding the minimal length referring expression is equivalent to solving a minimum set cover problem, where Excluded is the set being covered, and the Rules-Out(Tj) are the covering sets. Unfortunately, finding a minimal set cover is an NP-Hard problem (Garey and Johnson 1979), and thus solving it is in general computationally intractable (assuming that P ≠ NP). Similar proofs will work for the other definitions of length mentioned above. On an intuitive level, the basic problem is that finding the shortest description requires searching for the global minimum of the length function, and this global minimum (like many global minima) may be very expensive to locate.

7 Dale's (1989) minimal distinguishing descriptions are, in the terminology of this paper, successful referring expressions that are maximal under Full Brevity when number of components is used as the measure of description length. Therefore, finding a minimal distinguishing description is an NP-Hard problem. The algorithm Dale used was essentially equivalent to the greedy heuristic for minimal set cover (Johnson 1974); as such it ran quickly, but did not always find a true minimal distinguishing description.

4.2. Local Brevity

The Local Brevity preference rule is a weaker interpretation of Grice's brevity submaxim. It states that it should not be possible to generate a shorter successful referring expression by replacing a set of components by a single new component. Formally, >>LB is the transitive closure of >>LB', where A >>LB' B if size(components(A) − components(B)) = 1,8 and length(A) < length(B). The best definition of length(A) is probably the number of open-class words in the surface realization of A.

Local brevity can be checked by selecting a potential new component, finding all minimal sets of old components whose combined length is greater than the length of the new component, performing the substitution, and checking if the result is a successful referring expression. This can be done in polynomial time if the number of minimal sets is polynomial in the length of the description, which will happen if (non-zero) upper and lower bounds are placed on the length of any individual component (e.g., the surface realization of every component must use at least one open-class word, but no more than some fixed number of open-class words).

8 This is a set formula, where "−" means set-difference and "size" means number of members. The formula requires A to have exactly one component that is not present in B; B can have an arbitrary number of components that are not present in A.

5. No Unnecessary Elements

The Gricean maxims of Quantity and Relevance prohibit utterances from containing elements that are unnecessary for fulfilling the speaker's communicative goals. The undesirability of unnecessary elements is further supported by the observation that humans find pleonasms (Cruse 1986) such as "a female mother" and "an unmarried bachelor" to be anomalous. The computational tractability of the no-unnecessary-elements principle depends on how element is defined: detecting unnecessary words in referring expressions is NP-Hard (Section 5.1), but unnecessary components can always be found in polynomial time (Section 5.2).

5.1. No Unnecessary Words

The No Unnecessary Words preference rule forbids referring expressions from containing unnecessary words. Formally, A >>UW B if A's surface form uses a subset of the words used by B's surface form. There are several variants, such as only considering open-class words, or requiring the words in B to be in the same order as the corresponding words in A. All of these variants make the generation problem NP-Hard. The formal proofs are in Reiter (1990b).

Intuitively, the basic problem is that any preference that is stated solely in terms of surface forms must deal with the possibility that new parses and semantic interpretations may arise when the surface form is modified.
This means that the only way a generation system can guarantee that an utterance satisfies the No Unnecessary Words rule is to generate all possible subsets of the surface form, and then run each subset through a parser and semantic interpreter to check if it happens to be a successful referring expression. The number of subsets of the surface form is exponential in the size of the surface form, so this process will take exponential time.

To illustrate the 'new parse' problem, consider two possible referring expressions:

4a) "the child holding a pumpkin"
4b) "the child holding a slice of pumpkin pie"

If utterances (4a) and (4b) were both successful referring expressions (i.e., the child had a pumpkin in one hand, and a slice of pumpkin pie in the other), then (4a) >>UW (4b) under any of the variants mentioned above. However, because utterance (4a) has a different syntactic structure than utterance (4b), the only way the generation system could discover that (4a) >>UW (4b) would be by constructing utterance (4b)'s surface form, removing the words "slice," "of," and "pie" from it, and analyzing the reduced surface form. This problem, of new parses and semantic interpretations being uncovered by modifications to the surface form, causes difficulties whenever a preference rule is stated solely in terms of the surface form. Accordingly, such preference rules should be avoided.

5.2. No Unnecessary Components

The No Unnecessary Components preference rule forbids referring expressions from containing unnecessary components. Formally, A >>UC B if A uses a subset of the components used by B.

Unnecessary components can be found in polynomial time by using a simple incremental algorithm that just removes each component in turn, and checks if what is left constitutes a successful referring expression. The key algorithmic difference between No Unnecessary Components and No Unnecessary Words is that this simple incremental algorithm will not work for the No Unnecessary Words preference rule. This is because there are cases where removing any single word from an utterance's surface form will leave an unsuccessful (or incoherent) referring expression (e.g., imagine removing just "slice" from utterance (4b)), but removing several words will uncover a new parse that corresponds to a successful referring expression. In contrast, if B is a successful referring expression, and there exists another successful referring expression A that satisfies components(A) ⊂ components(B) (and hence A is preferred over B under the No Unnecessary Components preference rule), then it will be the case that any referring expression C that satisfies components(A) ⊂ components(C) ⊂ components(B) will also be successful.
This means that the simple algorithm can always produce A from B by incremental steps that remove a single component at a time, because the intermediate descriptions formed in this process will always be successful referring expressions. Therefore, the simple incremental algorithm will always find unnecessary components, but may not always find unnecessary words.

6. Lexical Preference

If the attribute values and classifications used in the description are members of a taxonomy, then they can be realized at different levels of specificity. For example, the object in the parking lot outside the author's window might be called "a vehicle," "a motor vehicle," "a car," "a sports car," or "a Porsche."

The Lexical Preference rule assumes there is a lexical-preference hierarchy among the taxonomy's lexical classes (classes that can be realized with single lexical units). The rule states that utterances should use preferred lexical classes whenever possible. Formally, A >>L B if for every component in A, there is a component in B that has the same structure, and the lexical class used by the A component is equal to or lexically preferred over the lexical class used by the B component.

The lexical-preference hierarchy should, at minimum, incorporate the following preferences:

i) Lexical class A is preferred over lexical class B if A's realization uses a subset of the open-class words used in B's realization. For example, the class with realization "vehicle" is preferred over the class with realization "motor vehicle."

ii) Lexical class A is preferred over lexical class B if A is a basic-level class, and B is not. For example, if car was a basic-level class, then "a car" would be preferred over "a vehicle" or "a Porsche."9

In some cases these two preferences may conflict; this is discussed in Section 7.2.

Utterances that violate either preference (i) or preference (ii) may implicate unwanted implicatures. Preference rule (ii) has been discussed by Cruse (1977) and Hirschberg (1985). Preference rule (i) may be considered to be another application of the Gricean maxim of quantity, and is illustrated by the following utterances:

5a) "Wait for me by my car"
5b) "Wait for me by my sports car"

If utterances (5a) and (5b) were both successful referring expressions (e.g., if the speaker possessed only one car), then the use of utterance (5b) would implicate that the speaker wished to emphasize that his vehicle was a sports car, and not some other kind of car.

From an algorithmic point of view, referring expressions that are maximal under the lexical-preference criteria can be found in polynomial time if the following restriction is imposed on the lexical-preference hierarchy:

Restriction: If lexical class A is preferred over lexical class B, then A must either subsume B or be subsumed by B in the class taxonomy.

For example, it is acceptable for car to be preferred over vehicle or Porsche, but it is not acceptable for car to be preferred over gift (because car neither subsumes nor is subsumed by gift).

If the above restriction holds, a variant of the simple incremental algorithm of Section 5.2 may be used to implement lexical preference: the algorithm simply attempts each replacement that lexical preference suggests, and checks if this results in a successful referring expression, as sketched below. If the restriction does not hold, then the simple incremental algorithm may fail, and obeying the Lexical Preference rule is in fact NP-Hard (the formal proof is in Reiter (1990b)).
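Under the stated restriction, both rules reduce to the same "try one edit, keep it only if the result is still successful" loop. A minimal sketch of my own follows; `successful` is the checker sketched earlier, and `preferred_swaps` is a hypothetical hook returning the component replacements that the lexical-preference hierarchy suggests:

```python
def drop_unnecessary(re_components, target, excluded, successful):
    """Section 5.2: remove components one at a time, keeping a removal
    only if the remainder is still a successful referring expression."""
    changed = True
    while changed:
        changed = False
        for c in list(re_components):
            if successful(re_components - {c}, target, excluded):
                re_components = re_components - {c}
                changed = True
    return re_components

def apply_lexical_preference(re_components, target, excluded, successful,
                             preferred_swaps):
    """Section 6: attempt each (old, new) replacement suggested by the
    hierarchy, keeping a swap only if the result is still successful."""
    for old, new in preferred_swaps(re_components):
        candidate = (re_components - {old}) | {new}
        if successful(candidate, target, excluded):
            re_components = candidate
    return re_components
```

Each loop makes at most one pass per component or per suggested swap, which is where the polynomial-time bound comes from.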
7. Issues

7.1. The Impact of NP-Hard Preference Rules

It is difficult to precisely determine the computational expense of generating referring expressions that are maximal under the Full Brevity or No Unnecessary Words preference rules. The most straightforward algorithm that obeys Full Brevity (a similar analysis can be done for No Unnecessary Words) simply does an exhaustive search: it first checks if any one-component referring expression is successful, then checks if any two-component referring expression is successful, and so forth. Let L be the number of components in the shortest referring expression, and let N be the number of components that are potentially useful in a description, i.e., the number of members of Target-Components that rule out at least one member of Excluded. The straightforward full-brevity algorithm will then need to examine the following number of descriptions before it finds a successful referring expression:

$\sum_{i=1}^{L} \binom{N}{i}$

For the problem of generating a referring expression that identifies object B in the example context presented in Section 2, N is 3 and L is 2, so the straightforward brevity algorithm will take only 6 steps to find the shortest description. This problem is artificially simple, however, because N, the number of potential description components, is so small. In a more realistic problem, one would expect Target-Components to include size, shape, orientation, position, and probably many other attribute-value pairs as well, which would mean that N would probably be at least 10 or 20. L, the number of attributes in the shortest possible referring expression, is probably fairly small in most realistic situations, but there are cases where it might be at least 3 or 4 (e.g., consider "the upside-down blue cup on the second shelf"). For some example values of L and N in this range, the straightforward brevity algorithm will need to examine the following number of descriptions:

L = 3, N = 10: 175 descriptions
L = 4, N = 20: over 6000 descriptions
L = 5, N = 50: over 2,000,000 descriptions

The straightforward full-brevity algorithm, then, seems prohibitively expensive in at least some circumstances. Because finding the shortest description is NP-Hard, it seems likely (existing complexity-theoretic techniques are too weak to prove such statements) that all algorithms for finding the shortest description will have similarly bad performance in the worst case. It is possible, however, that there exist algorithms that have acceptable performance in almost all 'realistic' cases. Any such proposed algorithm, however, should be carefully analyzed to determine in what circumstances it will fail to find the shortest description or will take exponential time to run.

7.2. Conflicts Between Preference Rules

The assumption has been made in this paper that the preference rules do not conflict, i.e., that it is never the case that description A is preferred over description B by one preference rule, while description B is preferred over description A by another preference rule. This means, in particular, that if lexical class LC1 is preferred over lexical class LC2, then LC1's realization must not contain more open-class words than LC2's realization; otherwise, the Lexical Preference and Local Brevity preference rules may conflict.10 This can be supported by psychological and linguistic findings that basic-level classes are almost always realized with single words (Rosch 1978; Berlin, Breedlove, and Raven 1973).

10 This assumes that the Local Brevity preference rule uses number of open-class words as its measure of description length. If number of components or number of lexical units is used as the measure of description length, then Local Brevity will never conflict with Lexical Preference. No other conflicts can occur between the No Unnecessary Components, Local Brevity, and Lexical Preference preference rules.
However, there are a few exceptions to this rule, i.e., there do exist a small number of basic-level categories that have realizations that require more than one open-class word. For example, Washing-Machine is a basic-level class for some people, and it has a realization that uses two open-class words. This leads to a conflict of the type mentioned above: basic-level Washing-Machine is preferred over non-basic-level Appliance, but Washing-Machine's realization contains more open-class words than Appliance's.

The presence of a basic-level class with a multi-word realization can also cause a conflict to occur between the two lexical-preference principles given in Section 6 (such conflicts are otherwise impossible). For example, Washing-Machine's realization contains a superset of the open-class words used in the realization of Machine, so the basic-level preference of Section 6 indicates that Washing-Machine should be lexically preferred over Machine, while the realization-subset preference indicates that Machine should be lexically preferred over Washing-Machine. The basic-level preference should take priority in such cases, so Washing-Machine is the true lexically-preferred class in this example.

7.3. Generalizability of Results

For the task of generating attributive descriptions as formalized in Reiter (1990a, 1990b), the Local Brevity, No Unnecessary Components, and Lexical Preference rules are effective at prohibiting utterances that carry unwanted conversational implicatures, and also can be incorporated into a polynomial-time generation algorithm, provided that some restrictions are imposed on the underlying knowledge base. The effectiveness and tractability of these preference rules for other generation tasks is an open problem that requires further investigation.

The Full Brevity and No Unnecessary Words preference rules are computationally intractable for the attributive description generation task (Reiter 1990b), and it seems likely that they will be intractable for most other generation tasks as well. Because global maxima are usually expensive to locate, finding the shortest acceptable utterance will probably be computationally expensive for most generation tasks. Because the 'new parse' problem arises whenever the preference function is stated solely in terms of the surface form, detecting unnecessary words will also probably be quite expensive in most situations.

8. Conclusion

Referring expressions and other object descriptions need to be brief, to avoid unnecessary elements, and to use lexically preferred classes; otherwise, they may carry unwanted and incorrect conversational implicatures. These principles can be formalized by requiring referring expressions to be maximal under the Local Brevity, No Unnecessary Components, and Lexical Preference preference rules. These preference rules can be incorporated into a polynomial-time algorithm for generating free-of-false-implicatures referring expressions, while some alternative preference rules (Full Brevity and No Unnecessary Words) make this generation task NP-Hard.
Acknowledgements

Many thanks to Robert Dale, Joyce Friedman, Barbara Grosz, Joe Marks, Warren Plath, Candy Sidner, Jeff Siskind, Bill Woods, and the anonymous reviewers for their help and suggestions. This work was partially supported by a National Science Foundation Graduate Fellowship, an IBM Graduate Fellowship, and a contract from U S WEST Advanced Technologies. Any opinions, findings, conclusions, or recommendations are those of the author and do not necessarily reflect the views of the National Science Foundation, IBM, or U S WEST Advanced Technologies.

References

Appelt, D. 1985 Planning English Referring Expressions. Cambridge University Press: New York.

Berlin, B.; Breedlove, D.; and Raven, P. 1973 General Principles of Classification and Nomenclature in Folk Biology. American Anthropologist 75:214-242.

Cruse, D. 1977 The pragmatics of lexical specificity. Journal of Linguistics 13:153-164.

Cruse, D. 1986 Lexical Semantics. Cambridge University Press: New York.

Dale, R. 1989 Cooking up Referring Expressions. In Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics.

Donnellan, K. 1966 Reference and Definite Descriptions. Philosophical Review 75:281-304.

Garey, M. and Johnson, D. 1979 Computers and Intractability: a Guide to the Theory of NP-Completeness. W. H. Freeman: San Francisco.

Grice, H. 1975 Logic and conversation. In P. Cole and J. Morgan (Eds.), Syntax and Semantics: Vol 3, Speech Acts, pg 43-58. Academic Press: New York.

Hirschberg, J. 1985 A Theory of Scalar Implicature. Report MS-CIS-85-56, LINC LAB 21. Department of Computer and Information Science, University of Pennsylvania.

Johnson, D. 1974 Approximation algorithms for combinatorial problems. Journal of Computer and Systems Sciences 9:256-278.

Kamp, H. 1975 Two Theories about Adjectives. In E. Keenan (Ed.), Formal Semantics of Natural Language, pg 123-155. Cambridge University Press: New York.

Reiter, E. 1990a Generating Descriptions that Exploit a User's Domain Knowledge. To appear in R. Dale, C. Mellish, and M. Zock (Eds.), Current Research in Natural Language Generation. Academic Press: New York.

Reiter, E. 1990b Generating Appropriate Natural Language Object Descriptions. Ph.D thesis. Aiken Computation Lab, Harvard University: Cambridge, Mass.

Rosch, E. 1978 Principles of Categorization. In E. Rosch and B. Lloyd (Eds.), Cognition and Categorization. Lawrence Erlbaum: Hillsdale, NJ.
Free Indexation: Combinatorial Analysis and A Compositional Algorithm*

Sandiway Fong
545 Technology Square, Rm. NE43-810, MIT Artificial Intelligence Laboratory, Cambridge MA 02139
Internet: [email protected]

Abstract

The principle known as 'free indexation' plays an important role in the determination of the referential properties of noun phrases in the principles-and-parameters language framework. First, by investigating the combinatorics of free indexation, we show that the problem of enumerating all possible indexings requires exponential time. Secondly, we exhibit a provably optimal free indexation algorithm.

* The author would like to acknowledge Eric S. Ristad, whose interaction helped to motivate much of the analysis in this paper. Also, Robert C. Berwick, Michael B. Kashket, and Tanveer Syeda provided many useful comments on earlier drafts. This work is supported by an IBM Graduate Fellowship.

1 Introduction

In the principles-and-parameters model of language, the principle known as 'free indexation' plays an important part in the process of determining the referential properties of elements such as anaphors and pronominals. This paper addresses two issues. (1) We investigate the combinatorics of free indexation. By relating the problem to the n-set partitioning problem, we show that free indexation must produce an exponential number of referentially distinct phrase structures given a structure with n (independent) noun phrases. (2) We introduce an algorithm for free indexation that is defined compositionally on phrase structures. We show how the compositional nature of the algorithm makes it possible to incrementally interleave the computation of free indexation with phrase structure construction. Additionally, we prove the algorithm to be an 'optimal' procedure for free indexation. More precisely, by relating the compositional structure of the formulation to the combinatorial analysis, we show that the algorithm enumerates precisely all possible indexings, without duplicates.

2 Free Indexation

Consider the ambiguous sentence:

(1) John believes Bill will identify him

In (1), the pronominal "him" can be interpreted as being coreferential with "John", or with some other person not named in (1), but not with "Bill". We can represent these various cases by assigning indices to all noun phrases in a sentence together with the interpretation that two noun phrases are coreferential if and only if they are coindexed, that is, if they have the same index. Hence the following indexings represent the three coreference options for pronominal "him":1

(2) a. John1 believes Bill2 will identify him1
b. John1 believes Bill2 will identify him3
c. *John1 believes Bill2 will identify him2

In the principles-and-parameters framework (Chomsky [3]), once indices have been assigned, general principles that state constraints on the locality of reference of pronominals and names (e.g. "John" and "Bill") will conspire to rule out the impossible interpretation (2c) while, at the same time, allowing the other two (valid) interpretations. The process of assigning indices to noun phrases is known as "free indexation," which has the following general form:

(4) Assign indices freely to all noun phrases.2

In such theories, free indexation accounts for the fact that we have coreferential ambiguities in language.
Other principles interact so as to limit the number of indexings generated by free indexation to those that are semantically well-formed.

1 Note that the indexing mechanism used above is too simplistic a framework to handle binding examples involving inclusion of reference such as:

(3) a. We1 think that I1 will win
b. We1 think that I2 will win
c. *We1 like myself1
d. John told Bill that they should leave

Richer schemes that address some of these problems, for example, by representing indices as sets of numbers, have been proposed. See Lasnik [9] for a discussion on the limitations of, and alternatives to, simple indexation. Also, Higginbotham [7] has argued against coindexation (a symmetric relation), and in favour of directed links between elements (linking theory). In general, there will be twice as many possible 'linkings' as indexings for a given structure. However, note that the asymptotic results of Section 3 obtained for free indexation will also hold for linking theory.

2 The exact form of (4) varies according to different versions of the theory. For example, in Chomsky [4] (pg. 59), free indexation is restricted to apply to A-positions at the level of S-structure, and to Ā-positions at the level of logical form.

In theory, since the indices are drawn from the set of natural numbers, there exists an infinite number of possible indexings for any sentence. However, we are only interested in those indexings that are distinct with respect to semantic interpretation. Since the interpretation of indices is concerned only with the equality (and inequality) of indices, there are only a finite number of semantically different indexings.3 For example, "John1 likes Mary2" and "John23 likes Mary4" are considered to be equivalent indexings. Note that the definition in (4) implies that "John believes Bill will identify him" has two other indexings (in addition to those in (2)):

(5) a. *John1 believes Bill1 will identify him1
b. *John1 believes Bill1 will identify him2

In some versions of the theory, indices are only freely assigned to those noun phrases that have not been coindexed through a rule of movement (Move-α) (see Chomsky [3] (pg. 331)). For example, in "Who1 did John see [NP t]1?", the rule of movement effectively stipulates that "Who" and its trace noun phrase must be coreferential. In particular, this implies that free indexation must not assign different indices to "who" and its trace element. For the purposes of free indexation, we can essentially 'collapse' these two noun phrases, and treat them as if they were only one. Hence, this structure contains only two independent noun phrases.4

3 In other words, there are only a finite number of equivalence classes on the relation 'same coreference relations hold.' This can easily be shown by induction on the number of indexed elements.

4 Technically, "who" and its trace are said to form a chain. Hence, the structure in question contains two distinct chains.

3 The Combinatorics of Free Indexation

In this section, we show that free indexation generates an exponential number of indexings in the number of independent noun phrases in a phrase structure. We achieve this result by observing that the problem of free indexation can be expressed in terms of a well-known combinatorial partitioning problem.

Consider the general problem of partitioning a set of n elements into m non-empty (disjoint) subsets.
For example, a set of four elements {w, x, y, z} can be partitioned into two subsets in the following seven ways:

{w}{x, y, z}   {x}{w, y, z}   {y}{w, x, z}   {z}{w, x, y}
{w, x}{y, z}   {w, y}{x, z}   {w, z}{x, y}

The number of partitions obtained thus is usually represented using the notation ${n \brace m}$ (Knuth [8]). In general, the number of ways of partitioning n elements into m sets is given by the following formula. (See Purdom & Brown [10] for a discussion of (6).)

(6) ${n+1 \brace m+1} = {n \brace m} + (m+1){n \brace m+1}$ for $n, m \ge 0$

The number of ways of partitioning n elements into zero sets, ${n \brace 0}$, is defined to be zero for n > 0 and one when n = 0. Similarly, ${0 \brace m}$, the number of ways of partitioning zero elements into m sets, is zero for m > 0 and one when m = 0.

We observe that the problem of free indexation may be expressed as the problem of assigning 1, 2, ..., n distinct indices to n noun phrases, where n is the number of noun phrases in a sentence. Now, the general problem of assigning m distinct indices to n noun phrases is isomorphic to the problem of partitioning n elements into m non-empty disjoint subsets. The correspondence here is that each partitioned subset represents a set of noun phrases with the same index. Hence, the number of indexings for a sentence with n noun phrases is:

(7) $B_n = \sum_{m=1}^{n} {n \brace m}$

(The quantity in (7) is commonly known as Bell's Exponential Number $B_n$; see Berge [2].) The recurrence relation in (6) has the following solution (Abramowitz [1]):

(8) ${n \brace m} = \frac{1}{m!} \sum_{k=0}^{m} (-1)^k \binom{m}{k} (m-k)^n$

Using (8), we can obtain a finite summation form for the number of indexings:

(9) $B_n = \sum_{m=1}^{n} \sum_{k=0}^{m} \frac{(-1)^k (m-k)^n}{k!\,(m-k)!}$

It can also be shown (Graham [6]) that $B_n$ is asymptotically equal to (10):

(10) $\dfrac{m_n^{\,n}\, e^{m_n - n - 1}}{\sqrt{\ln n}}$

where the quantity $m_n$ is given by:

(11) $m_n \ln m_n = n - \frac{1}{2}$

That is, (10) is both an upper and lower bound on the number of indexings. More concretely, to provide some idea of how fast the number of possible indexings increases with the number of noun phrases in a phrase structure, the following table exhibits the values of (9) for the first dozen values of n:

NPs  Indexings      NPs  Indexings
 1        1          7       877
 2        2          8      4140
 3        5          9     21147
 4       15         10    115975
 5       52         11    678570
 6      203         12   4213597
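These values can be reproduced directly from recurrence (6) and definition (7). The following short computation is my own check, not part of the original paper:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def S(n, m):
    # Boundary cases as defined above: S(0, 0) = 1, otherwise zero
    # when n = 0 or m = 0.
    if n == 0 or m == 0:
        return 1 if n == m == 0 else 0
    # Recurrence (6), rewritten for S(n, m) instead of S(n+1, m+1):
    # S(n, m) = S(n-1, m-1) + m * S(n-1, m)
    return S(n - 1, m - 1) + m * S(n - 1, m)

def bell(n):
    # Definition (7): sum over the number m of distinct indices used.
    return sum(S(n, m) for m in range(1, n + 1))

print([bell(n) for n in range(1, 13)])
# [1, 2, 5, 15, 52, 203, 877, 4140, 21147, 115975, 678570, 4213597]
```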
4 A Compositional Algorithm

In this section, we will define a compositional algorithm for free indexation that provably enumerates all and only the possible indexings predicted by the analysis of the previous section.

The PO-PARSER is a parser based on a principles-and-parameters framework with a uniquely flexible architecture ([5]). In this parser, linguistic principles such as free indexation may be applied either incrementally as bottom-up phrase structure construction proceeds, or as a separate operation after the complete phrase structure for a sentence is recovered. The PO-PARSER was designed primarily as a tool for exploring how to organize linguistic principles for efficient processing. This freedom in principle application allows one to experiment with a wide variety of parser configurations.

Perhaps the most obvious algorithm for free indexation is, first, to simply collect all noun phrases occurring in a sentence into a list. Then, it is easy to obtain all the possible indexing combinations by taking each element in the list in turn, and optionally coindexing it with each element following it in the list. This simple scheme produces each possible indexing without any duplicates, and works well in the case where free indexing applies after structure building has been completed.

The problem with the above scheme is that it is not flexible enough to deal with the case when free indexing is to be interleaved with phrase structure construction. Conceivably, one could repeatedly apply the algorithm to avoid missing possible indexings. However, this is very inefficient, that is, it involves much duplication of effort. Moreover, it may be necessary to introduce extra machinery to keep track of each assignment of indices in order to avoid the problem of producing duplicate indexings. Another alternative is to simply delay the operation until all noun phrases in the sentence have been parsed. (This is basically the same arrangement as in the non-interleaved case.) Unfortunately, this effectively blocks the interleaved application of other principles that are logically dependent on free indexation to assign indices. For example, this means that principles that deal with locality restrictions on the binding of anaphors and pronominals cannot be interleaved with structure building (despite the fact that these particular parser operations can be effectively interleaved).

An algorithm for free indexation that is defined compositionally on phrase structures can be effectively interleaved. That is, free indexing should be defined so that the indexings for a phrase is some function of the indexings of its sub-constituents. Then, coindexings can be computed incrementally for all individual phrases as they are built. Of course, a compositional algorithm can also be used in the non-interleaved case.

Basically, the algorithm works by maintaining a set of indices at each sub-phrase of a parse tree.5 Each index set for a phrase represents the range of indices present in that phrase. For example, "Whoi did Johnj see ti?" has the phrase structure and index sets shown in Figure 1.

[Figure 1: Index sets for "Who did John see?" -- the phrase structure [CP [NP whoi] [C' did [IP [NP Johnj] [VP see [NP ti]]]]], annotated with index set {i, j} at CP, C', and IP, {i} at "who", the trace, and VP, and {j} at "John"]

There are two separate tasks to be performed whenever two (or more) phrases combine to form a larger phrase.6 First, we must account for the possibility that elements in one phrase could be coindexed (cross-indexed) with elements from the other phrase. This is accomplished by allowing indices from one set to be (optionally) merged with distinct indices from the other set. For example, the phrases "[NP Johni]" and "[VP likes himj]" have index sets {i} and {j}, respectively. Free indexation must allow for the possibilities that "John" and "him" could be coindexed or maintain distinct indices. Cross-indexing accounts for this by optionally merging indices i and j. Hence, we obtain:

(12) a. Johni likes himi, i merged with j
b. Johni likes himj, i not merged with j

5 For expository reasons, we consider only pure indices. The actual algorithm keeps track of additional information, such as agreement features like person, number and gender, associated with each index. For example, irrespective of configuration, "Mary" and "him" can never have the same index.
(c) Eifher merge indices z and y or do noth- ing. (d) Repeat from step (la) with ix_ - {z} in place of ix. Replace Ir with Iv - {y} if and y have been merged. 2. Index Set Propagation: Ip = Ix O Iv. The nondeterminism in step (lc) of cross- indexing will generate all and only all (i.e. with- out duplicates) the possible indexings. We will show this in two parts. First, we will argue that eSome rea£lers may realize that the algorithm must have an additional step in cases where the larger phrase itself may be indexed, for instance, as in [NPi[NP, John's ] mother]. In such cases, the third step is slCmply to merge the singleton set consisting of the index of the larger phrase with the result of cross- indexing in the first step. (For the above example, the extra step is to just merge {i} with {j}.) For exposi- tory reasons, we will ignore such cases. Note that no loss of generality is implied since a structure of the form [NPI [NPj... ~.. -]... ~...] can be can always be handled as [P1 [NPi][P2[NPj... o¢...].../~...]]. rThe algorithm generalizes to n-ary branching us- ing iteration. For example, a ternary branching struc- ture such as [p X Y Z] would be handled in the same way as [p X[p, Y Z]]. SNote that ix and iv are defined purely for no- tational convenience. That is, the algorithm directly operates on the elements of Ix and Iy. 108 / N P k / ~ N Pj Y Pi Figure 2 Right-branching tree the above algorithm cannot generate duplicate in- dexings: That is, the algorithm only generates distinct indexings with respect to the interpreta- tion of indices. As shown in the previous section, the combinatorics of free-indexlng indicates that there are only B, possible indexings. Next, we will demonstrate that the algorithm generates ex- actly that number of indexings. If the algorithm satisfies both of these conditions, then we have proved that it generates all the possible indexings exactly once. 1. Consider the definition of cross-indexing, ix represents those indices in X that do not ap- pear in Y. (Similarly for iv.) Also, whenever two indices are merged in step (lb), they are 'removed' from ix and iv before the next it- eration. Thus, in each iteration, z and y from step (lb) are 'new' indices that have not been merged with each other in a previous itera- tion. By induction on tree structures, it is easy to see that two distinct indices cannot be merged with each other more than once. Hence, the algorithm cannot generate dupli- cate indexings. 2. We now demonstrate why the algorithm gen- erates exactly the correct number of index- ings by means of a simple example. Without loss of generality, consider the right-branching phrase scheme shown in Figure 2. Now consider the decision tree shown in Fig- ure 3 for computing the possible indexings of the right-branching tree in a bottom-up fash- ion. Each node in the tree represents the index set of the combined phrase depending on whether the noun phrase at the same level is cross- NPs gPi i= NPj i= NPk Decision Tree k i=k i,j• { {i,k} {i,j} {~j} {i,j,k} : : : : Figure 3 Decision tree 1 1 2 1 2 2 2 3 r',, B. b. B. b.. 122232232233334 : : : : : Figure 4 Condensed decision tree indexed or not. For example, {i} and {i, j} on the level corresponding to NPj are the two possible index sets for the phrase Pij. The path from the root to an index set contains arcs indicating what choices (either to coin- dex or to leave free) must have been made in order to build that index set. 
Next, let us just consider the cardinality of the index sets in the decision tree, and expand the tree one more level (for NP~) as shown in Figure 4. Informally speaking, observe that each deci- sion tree node of cardinality i 'generates' i child nodes of cardinality i plus one child node of cardinality i + 1. Thus, at any given level, if the number of nodes of cardinality m is cm, and the number of nodes of cardinality m- 1 is c,,-1, then at the next level down, there will be mcm + c,n-1 nodes of cardinality m. Let c(n,m) denote the number of nodes at level n with cardinality m. Let the top level of the decision tree be level 1. Then: (13) c(n+l, re+l) = c(n, m)+(m+l)c(n, re+l) Observe that this recurrence relation has the same form as equation (6). Hence the al- gorithm generates exactly the same number of indexings as demanded by combinatorial analysis. 5 Conclusions This paper has shown that free indexation pro- duces an exponential number of indexings per phrase structure. This implies that all algorithms that compute free indexation, that is, assign in- dices, must also take at least exponential time. In this section, we will discuss whether it is possible for a principle-based parser to avoid the combina- torial 'blow-up' predicted by analysis. First, let us consider the question whether the 'full power' of the free indexing mechanism is nec- essary for natural languages. Alternatively, would it be possible to 'shortcut' the enumeration pro- cedure, that is, to get away with producing fewer than B, indexings? After all, it is not obvious that a sentence with a valid interpretation can be constructed for every possible indexing. However, it turns out (at least for small values of n; see Figures 5 and 6 below) that language makes use of every combination predicted by analysis. This implies, that all parsers must be capable of pro- ducing every indexing, or else miss valid interpre- tations for some sentences. There are B3 = 5 possible indexings for three noun phrases. Figure 5 contains example sen- tences for each possible indexing. 9 Similarly, there are fifteen possible indexings for four noun phrases. The corresponding examples are shown in Figure 6. Although it may be the case that a parser must be capable of producing every possible indexing, it does not necessarily follow that a parser must enumerate every indexing when parsing a parlicu- lar sentence. In fact, for many cases, it is possible to avoid exhaustively exploring the search space of possibilities predicted by combinatorial analy- sis. To do this, basically we must know, a priori, what classes of indexings are impossible for a given sentence. By factoring in knowledge about restric- tions on the locality of reference of the items to be indexed (i.e. binding principles), it is possible to explore the space of indexings in a controlled fash- ion. For example, although free indexation implies that there are five indexings for "John thought [s Tom forgave himself ] ", we can make use of the fact that "himself" must be coindexed with an el- ement within the subordinate clause to avoid gen- STo make the boundary cases match, just define c(0, 0) to be 1, and let c(0, m) = 0 and c(n, 0) = 0 for m > 0 and n > 0, respectively. 9PRO is an empty (non-overt) noun phrase element. 
5 Conclusions

This paper has shown that free indexation produces an exponential number of indexings per phrase structure. This implies that all algorithms that compute free indexation, that is, assign indices, must also take at least exponential time. In this section, we will discuss whether it is possible for a principle-based parser to avoid the combinatorial 'blow-up' predicted by analysis.

First, let us consider the question whether the 'full power' of the free indexing mechanism is necessary for natural languages. Alternatively, would it be possible to 'shortcut' the enumeration procedure, that is, to get away with producing fewer than Bn indexings? After all, it is not obvious that a sentence with a valid interpretation can be constructed for every possible indexing. However, it turns out (at least for small values of n; see Figures 5 and 6 below) that language makes use of every combination predicted by analysis. This implies that all parsers must be capable of producing every indexing, or else miss valid interpretations for some sentences.

There are B3 = 5 possible indexings for three noun phrases. Figure 5 contains example sentences for each possible indexing.⁹ Similarly, there are fifteen possible indexings for four noun phrases. The corresponding examples are shown in Figure 6.

Figure 5: Example sentences for B3
(111)  John1 wanted PRO1 to forgive himself1
(112)  John1 wanted PRO1 to forgive him2
(121)  John1 wanted Mary2 to forgive him1
(122)  John1 wanted Mary2 to forgive herself2
(123)  John1 wanted Mary2 to forgive him3

Figure 6: Example sentences for B4
(1111)  John1 persuaded himself1 that he1 should give himself1 up
(1222)  John1 persuaded Mary2 PRO2 to forgive herself2
(1112)  John1 persuaded himself1 PRO1 to forgive her2
(1221)  John1 persuaded Mary2 PRO2 to forgive him1
(1223)  John1 persuaded Mary2 PRO2 to forgive him3
(1233)  John1 wanted Bill2 to ask Mary3 PRO3 to leave
(1122)  John1 wanted PRO1 to tell Mary2 about herself2
(1211)  John1 wanted Mary2 to tell him1 about himself1
(1121)  John1 wanted PRO1 to tell Mary2 about himself1
(1232)  John1 wanted Bill2 to tell Mary3 about himself2
(1123)  John1 wanted PRO1 to tell Mary2 about Tom3
(1213)  John1 wanted Mary2 to tell him1 about Tom3
(1231)  John1 wanted Mary2 to tell Tom3 about him1
(1234)  John1 wanted Mary2 to tell Tom3 about Bill4

9. PRO is an empty (non-overt) noun phrase element.

Although it may be the case that a parser must be capable of producing every possible indexing, it does not necessarily follow that a parser must enumerate every indexing when parsing a particular sentence. In fact, for many cases, it is possible to avoid exhaustively exploring the search space of possibilities predicted by combinatorial analysis. To do this, basically we must know, a priori, what classes of indexings are impossible for a given sentence. By factoring in knowledge about restrictions on the locality of reference of the items to be indexed (i.e. binding principles), it is possible to explore the space of indexings in a controlled fashion. For example, although free indexation implies that there are five indexings for "John thought [s Tom forgave himself ]", we can make use of the fact that "himself" must be coindexed with an element within the subordinate clause to avoid generating indexings in which "Tom" and "himself" are not coindexed.¹⁰ Note that the early elimination of ill-formed indexings depends crucially on a parser's ability to interleave binding principles with structure building. But, as discussed in Section 4, the interleaving of binding principles logically depends on the ability to interleave free indexation with structure building. Hence the importance of a formulation of free indexation, such as the one introduced in Section 4, which can be effectively interleaved.

10. This leaves only two remaining indexings: (1) where "John" is coindexed with "Tom" and "himself", and (2) where "John" has a separate index. Similarly, if we make use of the fact that "Tom" cannot be coindexed with "John", we can pare the list of indexings down to just one (the second case).
LICENSING AND TREE ADJOINING GRAMMAR IN GOVERNMENT BINDING PARSING

Robert Frank*
Department of Computer and Information Sciences
University of Pennsylvania
Philadelphia, PA 19104
email: [email protected]

Abstract

This paper presents an implemented, psychologically plausible parsing model for Government Binding theory grammars. I make use of two main ideas: (1) a generalization of the licensing relations of [Abney, 1986] allows for the direct encoding of certain principles of grammar (e.g. Theta Criterion, Case Filter) which drive structure building; (2) the working space of the parser is constrained to the domain determined by a Tree Adjoining Grammar elementary tree. All dependencies and constraints are localized within this bounded structure. The resultant parser operates in linear time and allows for incremental semantic interpretation and determination of grammaticality.

1 Introduction

This paper aims to provide a psychologically plausible mechanism for putting the knowledge which a speaker has of the syntax of a language, the competence grammar, to use. The representation of knowledge of language I assume is that specified by Government Binding (GB) Theory introduced in [Chomsky, 1981]. GB, as a competence theory, emphatically does not specify the nature of the language processing mechanism. In fact, "proofs" that transformational grammar is inadequate as a linguistic theory due to various performance measures are fundamentally flawed since they suppose a particular connection between the grammar and parser [Berwick and Weinberg, 1984]. Nonetheless, it seems desirable to maintain a fairly direct connection between the linguistic competence and its processing. Otherwise, claims of the psychological reality of this particular conception of competence become essentially vacuous since they cannot be falsified but for the data on which they are founded, i.e. grammaticality judgments. Thus, in building a model of language processing, I would like to posit as direct a link as is possible between linguistic competence and the operations of the parser while still maintaining certain desirable computational properties.

*I would like to thank the following for their valuable discussion and suggestions: Naoki Fukui, Jamie Henderson, Aravind Joshi, Tony Kroch, Mitch Marcus, Michael Niv, Yves Schabes, Mark Steedman, Enric Vallduví. This work was partially supported by ARO Grants DAAL03-89-C0031 PRI and DAAG29-84-K-0061 and DARPA grant N00014-85-K-0018. The author is supported by a Unisys doctoral fellowship.

What are the computational properties necessary for psychological plausibility? Since human syntactic processing is an effortless process, we should expect that it take place efficiently, perhaps in linear time since sentences do not become more difficult to process simply as a function of their length. Determinism, as proposed by Marcus [1980], seems desirable as well. In addition, the mechanism should operate in an incremental fashion. Incrementality is evidenced in the human language processor in two ways. As we hear a sentence, we build up semantic representations without waiting until the sentence is complete. Thus, the semantic processor should have access to syntactic representations prior to an utterance's completion. Additionally, we are able to perceive ungrammaticality in sentences almost immediately after the ill-formedness occurs. Thus, our processing mechanism should mimic this early detection of ungrammatical input.
Unfortunately, a parser with the most transparent relationship to the grammar, a "parsing as theorem proving" approach as proposed by [Johnson, 1988] and [Stabler, 1990], does not fare well with respect to our computational desiderata. It suffers from the legacy of the computational properties of first order theorem proving, most notably undecidability, and is thus inadequate for our purposes. The question, then, is how much we must retreat from this direct instantiation so that we can maintain the requisite properties. In this paper, I attempt to provide
Finally, 112 IP NP will v p S M ~ M ry tomorrow ~T...~ Figure 1: Abney's Licensing Relations in Clausal Struc- ture (S = subjecthood, F = functional selection, M = mod- ification, T = theta) theta assignment is lexical in that it is the properties of the the theta assigner which determine what theta assignment relations obtain. Licensing will have the same property; it is the licenser that determines how many and what sort of elements it licenses. Each licensing relation is a 3-tuple (D, Cat, Type). D is the direction in which licensing occurs. Cat is the syntac- tic category of the element licensed by this relation. Type specifies the linguistic function accomplished by this li- censing relation. This can be either functional selection, subjecthood, modification or theta-assignment. Functional selection is the relation which obtains between a func- tional head and the element for which it subcategorizes, i.e. between C and IP, I and VP, D and NP. Subjecthood is the relation between a head and its "subject". Moditica- tion holds between a head and adjunct. Theta assignment occurs between a head and its subeategnrized elements. Figure 1 gives an example of the licensing relations which might obtain in a simple clause. Parsing with these li- censing relations simply consists of determining, for each lexieal item as it is read in a single left to right pass, where it is licensed in the previously constructed structure or whether it licenses the previous structure. We can now re-examine Abney's claim that these licens- ing relations allow him to retain "the spirit of the abstract grammar." Since licensing relations talk only of very lo- cal relationships, that occurring between sisters, this sys- tem cannot enforce the constraints of binding, control, and ECP among others. Abney notes this and suggests that his licensing should be seen as a single module in a parsing system. One would hope, though, that principles which have their roots in licensing, such as those of theta and case theory, could receive natural treatments. Unfortu- nately, this is not true. Consider the theta criterion. While this licensing system is able to encode the portion of the constraint that requires theta roles to be assigned uniquely, it fails to guarantee that all NPs (arguments) receive a theta role. This is crucially not the case since NPs are some- times licensed not by them but by subject licensing. Thus, the following pair will be indistinguishable: i. It seems that the pigeon is dead ii. * Joe seems that the pigeon is dead Both It and Joe will be appropriately licensed by a subject licensing relation associated with seems. The case filter also cannot be expressed since objects of ECM verbs are "licensed" by the lower clause as subject, yet also require case. Thus, the following distinction cannot accounted for: i. Carol asked Ben to swat the fly ii. * Carol tried Ben to swat the fly Here, in order to get the desired syntactic structure (with Ben in the lower clause in both cases), Ben will need to be licensed by the inflectional element to. Since such a licensing relation need be unique, the case assigning prop- erties of the matrix verbs will be irrelevant. What seems to have happened is that we have lost the modularity of the the syntactic relations constrained by grammatical princi- ples. Everything has been conltated onto one homoge- neous licensing structure. 3 Generalized Licensing In order to remedy these deficiencies, I propose a system of Generalized Licensing. 
3 Generalized Licensing

In order to remedy these deficiencies, I propose a system of Generalized Licensing. In this system, every node is assigned two sets of licensing relations: gives and needs. Gives are similar to Abney's licensing relations: they are satisfied locally and determined lexically. Needs specify the ways in which a node must be licensed.¹ A need of type theta, for example, requires a node to be licensed by a theta relation. In the current formulation, needs differ from gives in that they are always directionally unspecified. We can now represent the theta criterion by placing theta gives on a theta assigner for each argument and theta needs on all DPs. This encodes both that all theta roles must be assigned and that all arguments must receive theta roles.

In Generalized Licensing, we allow a greater vocabulary of relation types: case, theta assignment, modification, functional selection, predication, f-features, etc. We can then explicitly represent many of the relations which are posited in the grammar and preserve the modularity of the theory. As a result, however, certain elements can and must be multiply licensed. DPs, for instance, will have needs for both theta and case as a result of the case filter and theta criterion. We therefore relax the requirement that all nodes be uniquely licensed. Rather, we demand that all gives and needs be uniquely "satisfied." The uniqueness requirement in Abney's relations is now pushed down to the level of individual gives and needs. Once a give or need is satisfied, it may not participate in any other licensing relationships.

1. These bear some similarity to the anti-relations of Abney, but are used in a rather different fashion.

One further generalization which I make concerns the positioning of gives and needs. In Abney's system, licensing relations are associated with lexical heads and applied to maximal projections of other heads. Phrase structure is thus entirely parasitic upon the reconstruction of licensing structure. I propose to have an independent process of lexical projection. A lexical item projects to the correct number of levels in its maximal projection, as determined by theta structure, f-features, and other lexical properties.² Gives and needs are assigned to each of these nodes. As with Abney's system, licensing takes place under a strict notion of government (sisterhood). However, the projection process allows licensing relations determined by a head to take place over a somewhat larger domain than sisterhood to the head. A DP's theta need resulting from the theta criterion, for example, is present only at the maximal projection level. This is the node which stands in the appropriate structural relation to a theta give. As a result of this projection process, though, we must explicitly represent structural relations during parsing.

The reader may have noticed that multiple needs on a node might not all be satisfiable in one structural position. Consider the case of a DP subject which possesses both theta and case needs. The S-structure subject of the sentence receives its theta role from the verb, yet it receives its case from the tense/agreement morpheme heading IP. This is impossible, though, since given the structural correlate of the licensing relation, the DP would then be directly dominated both by IP and by VP. Yet, it cannot be in either one of these positions alone, since we will then have unsatisfied needs and hence an ill-formed structure.
Thus, our representation of grammatical principles and the constraints on give and need satisfaction force us into adopting a general notion of chain and more specifically the VP-internal subject hypothesis. A chain consists of a list of nodes (α1, ..., αn) such that they share gives and needs and each αi c-commands αi+1. The first element in the chain, α1, the head, is the only element which can have phonological content. Others must be empty categories. Now, since the elements of the chain can occupy different structural positions, they may be governed and hence licensed by distinct elements. In the simple sentence:

[IP Johni tns/agr [V' ti smile]]

the trace node which is an argument of smile forms a chain with the DP John. In its V'-internal position, the theta need is satisfied by the theta give associated with the V. In subject position, the case need is satisfied by the case give on the I' projection of the inflectional morphology.

2. I assume the relativized X-bar theory proposed in [Fukui and Speas, 1986].

Now, how might we parse using these licensing relations? Abney's method is not sufficient since a single instance of licensing no longer guarantees that all of a node's licensing constraints are satisfied. I propose a simple mechanism, which generalizes Abney's approach: We proceed left to right, project the current input token to its maximal projection p and add the associated gives and needs to each of the nodes. These are determined by examination of information in the lexical entries (such as using the theta grid to determine theta gives), examination of language specific parameters (using head directionality in order to determine directionality of gives, for example), and consultation of UG parameters (for instance, as a result of the case filter, every DP maximal projection will have an associated case need). The parser then attempts to combine this projection with previously built structure in one of two ways. We may attach p as the sister of a node n on the right frontier of the developing structure, when p is licensed by n either by a give in n and/or a need in the node p. Another possibility is that the previously built structure is attached as sister to a node m, dominated by the maximal projection p, by satisfying a give in m and/or a need on the root of the previously built structure. In the case of multiple attachment possibilities, we order them according to some metric such as the one proposed by Abney, and choose the most highly ranked option.

As structure is built, nodes in the tree with unsatisfied gives and needs may become closed off from the right frontier of the working structure. In such positions, they will never become satisfied. In the case of a need in an internal node n which is unsatisfied, we posit the existence of an empty category m, which will be attached later to the structure such that (n, m) form a chain. We posit an element to have been moved into a position exactly when it is licensed at that position yet its needs are not completely satisfied. After positing the empty category, we push it onto the trace stack.³ When a node has an unsatisfied give and no longer has access to the right frontier, we must posit some element, not phonologically represented in the input, which satisfies that give relation. If there is an element on the top of the trace stack which can satisfy this give, we pop it off the stack and attach it.
Of course, if the trace has any remaining needs, it is returned to the trace stack since its new position is isolated from the right frontier. If no such element appears on top of the trace stack, we posit a non-trace empty category of the appropriate type, if one exists in the language.⁴

3. Note that the use of this stack to recover filler-gap structures forbids non-nested dependencies, as in [Fodor, 1978].
4. Such a simplistic approach to determining whether a trace or non-trace empty category should be inserted is clearly not correct. For instance, in "tough movement"

Alvini is tough PRO to feed ti

the proposed mechanism will insert the trace of Alvin in subject position rather than PRO. It remains for future work to determine the exact mechanism by which such decisions are made.

Let's try this mechanism on the sentence "Harry laughs." The first token received is Harry and is projected to DP. No gives are associated with this node, but theta and case needs are inserted into the need set as a result of the theta criterion and the case filter. Next, tns/agr is read and projected to I'', since it possesses f-features (cf. [Fukui and Speas, 1986]). Associated with the I° node is a rightward functional selection give of value V. On the I' node is a leftward nominative case give, from the f-features, and a leftward subject give, as a result of the Extended Projection Principle. The previously constructed DP is attached as sister to the I' node, thereby satisfying the subject and case gives of the I' as well as the case need of the DP. We are thus left with the structure in figure 2.⁵

[Figure 2: Working space after "Harry tns/agr": the DP Harry's case need is satisfied, its theta need (theta, ?, ?) still open; I' carries the give (left, case, nominative, 1) and I° the give (right, function-select, VP, ?)]

5. In the examples which follow, gives are shown as 4-tuples (D, Type, Val, SatBy) where D is the direction, Type is the type of licensing relation, Val is the licensing relation value and SatBy is the node which satisfies the give (marked as ? if the relation is as yet unsatisfied). Needs are 3-tuples (Type, Val, SatBy) where these are as in the gives. For purposes of readability, I remove previously satisfied gives and needs from the figure. Of course, such information persists in the parser's representation.

Next, we see that the theta need of the DP is inaccessible from the right frontier, so we push an empty category DP whose need set contains this unsatisfied theta need onto the trace stack. The next input token is the verb laugh. This is projected to a single bar level. Since laugh assigns an external theta role, we insert a leftward theta give to a DP into the V' node. This verbal projection is attached as sister to I°, satisfying the functional selection give of I. However, the theta give in V' remains unsatisfied and since it is leftward, is inaccessible. We therefore need to posit an empty category. Since the DP trace on top of the trace stack will accept this give, the trace stack is popped and the trace is attached via Chomsky-adjunction to the V' node, yielding the structure in figure 3. Since this node forms a chain with the subject DP, the theta need on the subject DP is now satisfied. We have now reached the end of our input. The resulting structure is easily seen to be well-formed since all gives and needs are satisfied.

[Figure 3: Working space after "Harry tns/agr laugh"]
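The give/need bookkeeping of footnote 5 can be sketched as a toy data structure. The class, field layout, and example values below are my own Python illustration of the state in Figure 2, not Frank's implementation.

class Node:
    def __init__(self, label, gives=(), needs=()):
        self.label = label
        self.gives = [list(g) for g in gives]   # (D, Type, Val, SatBy)
        self.needs = [list(n) for n in needs]   # (Type, Val, SatBy)

    def unsatisfied(self):
        # a give or need is open while its SatBy slot is still '?'
        return [r for r in self.gives + self.needs if r[-1] == "?"]

# working space after "Harry tns/agr" (cf. Figure 2): the subject DP's
# case need is already satisfied by I'; only its theta need remains
dp = Node("DP(Harry)",
          needs=[("case", "nominative", "I'"), ("theta", "?", "?")])
i_bar = Node("I'",
             gives=[("left", "case", "nominative", "DP"),
                    ("left", "subject", "DP", "DP")])
assert dp.unsatisfied() == [["theta", "?", "?"]]
assert i_bar.unsatisfied() == []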
We have adopted a very particular view of traces: their positions in the structure must be independently motivated by some other licensing relation. Note, then, that we cannot analyze long distance dependencies through successive cyclic movement. There is no licensing relation which will cause the intermediate traces to exist. Ordinarily these traces exist only to allow a well-formed derivation, i.e. not ruled out by subjacency or by a barrier to antecedent government. Thus, we need to account for constraints on long distance movement in another manner. We will return to this in the next section.

The mechanism I have proposed allows a fairly direct encoding for some of the principles of grammar such as case theory, theta theory, and the extended projection principle. However, many other constraints of GB, such as the ECP, control theory, binding theory, and bounding theory, cannot be expressed perspicuously through licensing. Since we want our parser to maintain a fairly direct connection with the grammar, we need some additional mechanism to ensure the satisfaction of these constraints.

Recall, again, the computational properties we wanted to hold of our parsing model: efficiency and incrementality. The structure building process I have described has worst case complexity O(n²) since the set of possible attachments can grow linearly with the input. While not enormously computationally intensive, this is greater than the linear time bound we desire. Also, checking for satisfaction of non-licensing constraints over unboundedly large structures is likely to be quite inefficient. There is also the question of when these other constraints are checked. To accord with incrementality, they must be checked as soon as possible, and not function as post-processing "filters." Unfortunately, it is not easily determinable when a given constraint can apply such that further input will not change the status of the satisfaction of a constraint. We do not want to rule a structure ungrammatical simply because it is incomplete. Finally, it is unclear how we might incorporate this mechanism which builds an ever larger syntactic structure into a model which performs semantic interpretation incrementally.

4 Limiting the Domain with TAG

These problems with our model are solved if we can place a limit on the size of the structures we construct. The number of licensing possibilities would be bounded, yielding linear time for structure construction. Also, constraint checking could be done in a constant amount of processing. Unfortunately, the productivity of language requires us to handle sentences of unbounded length and thus linguistic structures of unbounded size. TAG provides us with a way to achieve this paradise.

TAG accomplishes linguistic description by factoring recursion from local dependencies [Joshi, 1985]. It posits a set of primitive structures, the elementary trees, which may be combined through the operations of adjunction and substitution. An elementary tree is a minimal non-recursive syntactic tree, a predication structure containing positions for all arguments. I propose that this is the projection of a lexical head together with any of the associated functional projections of which it is a complement. For instance, a single elementary tree may contain the projection of a V along with the I and C projections in which it is embedded.⁶ Along the frontier of these trees may appear terminal symbols (i.e. lexical items) or non-terminals.

6. This definition of TAG elementary trees is consistent with the Lexicalized TAG framework [Schabes et al., 1988] in that the lexical head may be seen as the anchor of the elementary trees. For further details and consequences of this proposal on elementary tree well-formedness, see [Frank, 1990].
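The two composition operations, substitution and adjunction, are defined in the next paragraph; they can be sketched on a toy bracketed-list encoding of trees. The encoding, helper names, and foot-node convention (a '*' suffix) are my own assumptions, not the paper's.

def substitute(tree, site, initial):
    # replace a frontier non-terminal of type `site` with `initial`
    label, *kids = tree
    if not kids:
        return initial if label == site else tree
    return [label] + [substitute(k, site, initial) for k in kids]

def adjoin(tree, site, aux):
    # splice auxiliary tree `aux` in at the topmost internal `site` node;
    # the displaced subtree lands at the foot node, written "site*"
    label, *kids = tree
    if label == site and kids:
        return substitute(aux, site + "*", tree)
    return [label] + [adjoin(k, site, aux) for k in kids]

ip = ["IP", ["DP"], ["I'", ["I"], ["VP"]]]
aux = ["I'", ["I"], ["V'", ["V"], ["I'*"]]]   # an I'-recursive auxiliary tree
assert adjoin(ip, "I'", aux) == \
    ["IP", ["DP"], ["I'", ["I"], ["V'", ["V"], ["I'", ["I"], ["VP"]]]]]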
The substitution operation is the insertion of one elementary tree at a non-terminal of the same type as the root on the frontier of another elementary tree. Adjunction allows the insertion of one elementary tree of a special kind, an auxiliary tree, at a node internal to another (cf. figure 4). In auxiliary trees, there is a single distinguished non-terminal on the frontier of the tree, the foot node, which is identical in type to the root node. Only adjunctions, and not substitutions, may occur at this node.

[Figure 4: Adjunction of auxiliary tree β into elementary tree α to produce γ]

TAG has proven useful as a formalism in which one can express linguistic generalizations since it seems to provide a sufficient domain over which grammatical constraints can be stated [Kroch and Joshi, 1985] [Kroch and Santorini, 1987]. Kroch, in two remarkable papers [1986] and [1987], has shown that even constraints on long distance dependencies, which intuitively demand a more "global" perspective, can be expressed using an entirely local (i.e. within a single elementary tree) formulation of the ECP, and allows for the collapsing of the CED with the ECP. This analysis does not utilize intermediate traces; instead the link between filler and gap is "stretched" upon the insertion of intervening structure during adjunctions. Thus, we are relieved of the problem that intermediate traces are not licensed, since we do not require their existence.

Let us suppose a formulation of GB in which all principles not enforced through generalized licensing are stated over the local domain of a TAG elementary tree. Now, we can use the model described above to create structures corresponding to single elementary trees. However, we restrict the working space of the parser to contain only a single structure of this size. If we perform an attachment which violates this "memory limitation," we are forced to reduce the structure in our working space. We will do this in one of two ways, corresponding to the two mechanisms which TAG provides for combining structure. Either we will undo a substitution or undo an adjunction. However, all chains are required to be localized in individual elementary trees. Once an elementary tree is finished, non-licensing constraints are checked and it is sent off for semantic interpretation. This is the basis for my proposed parsing model. For details of the algorithm, see [Frank, 1990]. This mechanism operates in linear time and deterministically, while maintaining coarse grained (i.e. clausal) incrementality for grammaticality determination and semantic interpretation.

Consider this model on the raising sentence "Harry seemed to kiss Sally." We begin as before with "Harry tns/agr" yielding the structure in figure 2. Before we receive the next token of input, however, we see that the working structure is larger than the domain of an elementary tree, since the subject DP constitutes an independent predication from the one determined by the projection of I. We therefore unsubstitute the subject DP and send it off to constraint checking and semantic interpretation. At this point, we push a copy of the subject DP node onto the trace stack due to its unsatisfied theta need.
We continue with the verb seem which projects to V' and attaches as sister to I, satisfying the functional selection give, yielding the structure in figure 5. There remains only one elementary tree in working space so we need not perform any domain reduction. Next, to projects to I' since it lacks f-features to assign to its specifier. This is attached as object of seem as in figure 6.

[Figure 5: Working space after "Harry tns/agr seem"]
[Figure 6: Working space after "Harry tns/agr seem to"]

At this point, we must again perform a domain reduction operation since the upper and lower clauses form two separate elementary trees. Since the subject DP remains on the trace stack, it cannot yet be removed. All dependencies must be resolved within a single elementary tree. Hence, we must unadjoin the structure recursive on I' shown in figure 7, leaving the structure in figure 8 in the working space. This structure is sent off for constraint checking and semantic interpretation.

[Figure 7: Result of unadjunction: the I'-recursive auxiliary tree containing seem]
[Figure 8: Working space after unadjunction]

We continue with kiss, projecting and attaching it as functionally selected sister of I and popping the DP from the trace stack to serve as external argument. Finally, we project and attach the DP Sally as sister of V, receiving both theta role and case in this position. This DP is unsubstituted in the same manner as the subject and is sent off for further processing. We are left finally with the structure in figure 9, all of whose gives and needs are satisfied, and we are finished.

[Figure 9: Working structure after the entire sentence]

This model also handles control constructions, bare infinitives, ECM verbs and binding of anaphors, modification, genitive DPs and others. Due to space constraints, these are not discussed here, but see [Frank, 1990].

5 Problems and Future Work

The parsing model which I have presented here is still rather preliminary. There are a number of areas which will require further development before this can be considered complete.

I have assumed that the process of projection is entirely determined from lexical lookup. It is clear, though, that lexical ambiguity abounds and that the assignment of gives and needs to the projections of input tokens is not determinate. An example of such indeterminacy has to do with the assignment to argument maximal projections of theta needs as a result of the theta criterion. DPs need not always function as arguments, as I have been assuming. This problem might be solved by allowing for the statement of disjunctive constraints or a limited form of parallelism. If the duration of such parallelism could be tightly constrained, we might be able to retain the efficient nature of the current model. Other strategies for resolving such indeterminacies using statistical reasoning or hard coded rules or templates might also be possible, but these constructs are not the sort of grammatical knowledge we have been considering here and would entail further abstraction from the competence grammar.

Another problem with the parser has to do with the incompleteness of the algorithm. Sentences such as

Boris knew that Tom ate lunch

will not be parsed even though there exist well-formed sets of elementary trees which can derive them. The problem results from the fact that the left to right processing strategy we have adopted is a bit too strict. The complementizer that will be attached as object of know, but Tom is not then licensed by any node on the right frontier. Ultimately, this DP is licensed by the tns/agr morpheme in the lower clause whose IP projection is licensed through functional selection by C. Similarly, the parser would have great difficulty handling head final languages. Again, these problems might be solved using extra-grammatical devices, such as the attention shifting of [Marcus, 1980] or some template matching mechanism, but this would entail a process of "compiling out" of the grammar that we have been trying to avoid.
Finally, phonologically empty heads and head movement cause great difficulties for this mechanism. Heads play a crucial role in this "project and attach" scheme. Therefore, we must find a way of determining when and where heads occur when they are either dislocated or not present in the input string at all, perhaps in a similar manner to the mechanism for movement of maximal projections I have proposed above.

6 Conclusion

In this paper, I have sketched a psychologically plausible model for the use of GB grammars. The currently implemented parser is a bit too simple to be truly robust, but the general approach presented here seems promising. Particularly interesting is that the computationally motivated use of TAG to constrain processing locality provides us with insight on the nature of the meta-grammar of possible grammatical constraints. Thus, if grammatical principles are stated over such a bounded domain, we can guarantee the existence of a perspicuous model for their use, thereby lending credence to the cognitive reality of this competence grammar.

References

[Abney, 1986] Steven Abney. Licensing and parsing. In Proceedings of NELS 16, Amherst, MA.
[Berwick and Weinberg, 1984] Robert Berwick and Amy Weinberg. The Grammatical Basis of Linguistic Performance. MIT Press, Cambridge, MA.
[Chomsky, 1981] Noam Chomsky. Lectures on Government and Binding. Foris, Dordrecht.
[Fodor, 1978] Janet D. Fodor. Parsing strategies and constraints on transformations. Linguistic Inquiry, 9.
[Frank, 1990] Robert Frank. Computation and Linguistic Theory: A Government Binding Theory Parser Using Tree Adjoining Grammar. Master's thesis, University of Pennsylvania.
[Fukui and Speas, 1986] Naoki Fukui and Margaret Speas. Specifiers and projection. In Naoki Fukui, T. Rappaport, and E. Sagey, editors, MIT Working Papers in Linguistics 8, MIT Department of Linguistics.
[Johnson, 1988] Mark Johnson. Parsing as deduction: the use of knowledge of language. In The MIT Parsing Volume, 1987-88, MIT Center for Cognitive Science.
[Joshi, 1985] Aravind Joshi. How much context-sensitivity is required to provide reasonable structural descriptions: tree adjoining grammars. In D. Dowty, L. Karttunen, and A. Zwicky, editors, Natural Language Processing: Psycholinguistic, Computational and Theoretical Perspectives, Cambridge University Press.
[Kroch, 1986] Anthony Kroch. Unbounded dependencies and subjacency in a tree adjoining grammar. In A. Manaster-Ramer, editor, The Mathematics of Language, John Benjamins.
[Kroch, 1987] Anthony Kroch. Asymmetries in long distance extraction in a tree adjoining grammar. Manuscript, University of Pennsylvania.
[Kroch and Joshi, 1985] Anthony Kroch and Aravind Joshi.
The Linguistic Relevance of Tree Adjoining Grammar. Technical Report MS-CS-85-16, University of Pennsylvania Department of Computer and Information Sciences. To appear in Linguistics and Philosophy.
[Kroch and Santorini, 1987] Anthony Kroch and Beatrice Santorini. The derived constituent structure of the West Germanic verb raising construction. In R. Freidin, editor, Proceedings of the Princeton Conference on Comparative Grammar, MIT Press, Cambridge, MA.
[Marcus, 1980] Mitchell Marcus. A Theory of Syntactic Recognition for Natural Language. MIT Press, Cambridge, MA.
[Schabes et al., 1988] Yves Schabes, Anne Abeillé, and Aravind K. Joshi. Parsing strategies with 'lexicalized' grammars: application to tree adjoining grammars. In COLING Proceedings, Budapest.
[Stabler, 1990] Edward Stabler. Implementing government binding theories. In Levine and Davis, editors, Formal Linguistics: Theory and Implementation. Forthcoming.
A SIMPLIFIED THEORY OF TENSE REPRESENTATIONS AND CONSTRAINTS ON THEIR COMPOSITION

Michael R. Brent
MIT Artificial Intelligence Lab
545 Technology Square
Cambridge, MA 02139
[email protected]

ABSTRACT

This paper proposes a set of representations for tenses and a set of constraints on how they can be combined in adjunct clauses. The semantics we propose explains the possible meanings of tenses in a variety of sentential contexts. It also supports an elegant constraint on tense combination in adjunct clauses. These semantic representations provide insights into the interpretations of tenses, and the constraints provide a source of syntactic disambiguation that has not previously been demonstrated. We demonstrate an implemented disambiguator for a certain class of three-clause sentences based on our theory.

1 Introduction

This paper proposes a set of representations for tenses and a set of constraints on how they can be combined. These representations provide insights into the interpretation of tenses, and the constraints provide a source of syntactic disambiguation that has not previously been demonstrated.

The sentences investigated in this paper contain multiple clauses connected by temporal/causal connectives, words like once, by the time, when, and before. (1) shows that the tenses of multi-clause sentences affect their acceptability.

(1) a. * Rachel won the game {when / once / before} Jon arrives
    b. OK Rachel will win the game {when / once / before} Jon arrives

This raises several important questions. Which tense combinations are acceptable and which are not? Why do they have the status they do? How can observations like (1) be used to leverage problems like syntactic disambiguation and knowledge representation? The representations and constraints proposed here answer these questions. Specifically, they provide explanations in terms of the meanings of the tenses. We propose an explanatory theory and demonstrate an implementation which successfully disambiguates a class of three-clause sentences.

The issues raised by (1) are significant for computational linguistics on several accounts. First, an understanding of the constraints on tense combinations can be used to support syntactic disambiguation. For example, consider the alternative parses shown textually in (2) and graphically in Figure 1.

(2) a. OK [s Jon will learn [s that he won] when Rachel arrives]
       Read as: When Rachel arrives, Jon will learn that he won
    b. * Jon will learn [s that he won when Rachel arrives]
       Read as: Jon will learn that, when Rachel arrives, he won

The first parse in both (2) and Figure 1, where the adjunct clause starting with when is attached high, is fine; the second, where it is attached low, is unacceptable. Figure 1 demonstrates our parser discriminating between the acceptable and unacceptable parses of (2). The details of the representation cannot be understood until later, but it can be seen that different compositions of the tenses in the two parses result in marking the top node of the second parse as bad. The contrast between example (2) and example (3) shows that the preferred attachment depends on the tenses of the clauses.
(3) a. * [s Jon will learn [s that he had won] by the time Rachel arrived]
       Read as: By the time Rachel arrived, Jon will learn that he had won
    b. OK Jon will learn [s that he had won by the time Rachel arrived]
       Read as: Jon will learn that by the time Rachel arrived he had won

[Figure 1: The output of our parser on the sentence in (2). The restrictions on tense combination disambiguate this sentence, shown by the asterisk with which our program marks the second parse as unacceptable. Note that the restrictions on the complement clauses are different from those on adjunct clauses. The former are not discussed in this paper, but see Hornstein (1990).]

Examples (2) and (3) show that there are interesting interactions among tenses, and that a good theory of these interactions would be useful for syntactic disambiguation. Such a theory, and an implementation of a disambiguator based on it, are the subjects of this paper.

In addition to its potential for syntactic disambiguation, a theory of these temporal adjunction phenomena may guide the construction of model-theoretic interpretations of the temporal and causal relations among events. Finally, people clearly have a lot of knowledge about the interaction among tenses. By making this knowledge explicit, we are likely to open new, unforeseen avenues to improving the performance of natural language processing devices.

1.1 Context

The subjects of tense and temporal representation have generated a great deal of interest in artificial intelligence, computational linguistics, linguistics, and philosophy. Work in these areas addresses a variety of interesting questions which can be broadly divided into two types: questions about representing the temporal knowledge conveyed by natural language, and questions about representing the role of tense in sentential grammar. The former questions have often been addressed by attempting to construct a model-theoretic semantics of certain temporally significant linguistic constructions. Important work in this area includes Dowty (1979), Allen (1984), Dowty (1986), Hinrichs (1986), Moens (1987), and Hinrichs (1988). Much of the recent work in this area has used some version of Reichenbach's (1947) representation of tenses as a starting point.¹ The questions about the role of tense in sentential grammar, and in particular about its effect on the acceptability of various sentence types, have been addressed by a different set of researchers.
This work, which also uses Reichenbach as a starting point, is well represented by Hornstein (1990) and Comrie (1985), and the works cited therein. In this paper, we focus on how tenses affect the acceptability of sentences, but we attempt to explain their effect in terms of their interpretations. While we explain certain observations about the acceptability of sentences in terms of interpretations, we do not attempt to develop a theory of the temporal interpretation of natural language.²

Earlier attempts to explain the phenomena under study here include Hornstein (1977), Hornstein (1981), Yip (1986), and Hornstein (1990). In the current paper, we attempt to remove some semantic underdetermination and some theoretical redundancy that we have found in these works. Section 5 provides a more detailed comparison with Yip (1986) and Hornstein (1990). Along with Hornstein and Yip, Harper and Charniak (1987) also propose a set of rules to account for the acceptability of tense combinations in adjunct constructions. However, their primary interest is in representing the temporal knowledge that can be conveyed by natural language. As a result, they explicitly choose not to use their semantic system to construct an explanation for their adjunction rules; rather they propose their adjunction rules as syntactic descriptions. By contrast, the current paper focuses primarily on developing a semantic explanation of tense compatibility.

Although we do not offer specific variations on the model-theoretic approach, we hope that our work will further it indirectly. At a minimum, since many model theoretic approaches use Reichenbach's (1947) tense representations, our insights into those representations may be significant. Further, we hope that our constrained rules for composing those individual tense structures will provide a richer set of representations on which model theoretic approaches can be built.

1. Hinrichs, 1986; Harper and Charniak, 1987; Hinrichs, 1988; Moens and Steedman, 1988; Nakhimovsky, 1988; Passoneau, 1988; and Webber, 1988.
2. In particular, the important issue of tense as discourse anaphor is not addressed. (See Hinrichs, 1986; Moens, 1987; Hinrichs, 1988; Nakhimovsky, 1988; and Webber, 1988.) Further, we do not have a theory of the interaction of temporal interpretation with aspect. (See Dowty, 1979; Dowty, 1986; Moens, 1987; Moens and Steedman, 1988; Nakhimovsky, 1988; and Passoneau, 1988.)

1.2 Preview

The remainder of this paper proceeds as follows. Section 2 introduces the representations for individual tenses. Section 3 presents the method of composing tenses from different clauses, and a general constraint that applies to such composition.³ Section 4 demonstrates the computer program implementing this theory. Section 5 steps back from the technical details to assess the contributions of this paper and compare it to closely related works. Finally, Section 6 sums up the conclusions drawn throughout the paper.⁴

3. Brent (1989) presents two additional constraints on tense composition.
4. While English alone has been studied in detail, preliminary investigation supports the expectation that the theory will extend to Romance and Germanic languages. One of the most obvious differences between Romance and Germanic languages is addressed in Brent (1989).

2 The Representation of Individual Tenses

In order to construct a theory explaining which tenses can be combined we need a representation of the tenses. The representation used here is a variant of that used by Hornstein (1990), who bases it on Comrie (1985). It is a Neo-Reichenbachian representation (Reichenbach, 1966) in that its simple tense structures (STSs) relate the following three entities: the time of the event named by the verb, denoted by "E"; the time of speech, denoted by "S"; and a reference time, denoted by "R". The reference time R is used to locate an event with respect to another event in sentences like (1b) above. (A mechanism for connecting tenses via the R point will be
The reference time R is used to locate an event with re- spect to another event in sentences like (lb) above. (A mechanism for connecting tenses via the 1% point will be 3Brent (1989) presents two additional constraints on tense composition. 4While English alone has been studied in detail, prelimi- nary investigation supports the expectation that the theory will extend to Romance and Germanic languages. One of the most obvious difference between Romance and Germanic languages is addressed in Brent (1989). 121 X_Y Y_X X,Y Y,X Table 1: Notation for possible relations between time points X and Y Tense Name Simple Tense Example VP Structure past present future past perfect present perfect future perfect E,R R_S S,R R,E S-R R,E E_R R-S E_R S,R E_R S_R Jon WOn Jon wins, is winning Jon will win Jon had won Jon has won Jon will have won Table 2: The six STSs expressible in English ver- bal morphology detailed in Section 3.) Each STS consists of a relation between S mad R and one between R and E; S and E are not directly related. For any directly related time points X and Y, at most one of four possible relations holds be- tween them. These are written as in Table 1. Although we use the same notation as Hornstein (1990), we view it as merely notation for fundamentally semantic relations, whereas he appears to view the syntax as primary. For the purposes of constraining tense combination there appear to be six basic tenses 5 (Table 2). We assign STS representations to tenses as shown in Table 2. One of the main contributions of this paper over previous attempts will be its ability to completely determine the assignments of Table 2 in terms of the semantics of the representations and the meanings of actual tenses. The assignment of STSs to tenses shown in Table 2 can be derived from the possible interpretations of vari- ous tenses. Before arguing that Table 2 can be derived, we note that it is at least consistent with the interpre- tations of the tenses. Suppose that underscore is inter- preted as temporal precedence and comma as simultane- ity (As in Hornstein, 1990. Under this interpretation the various tense structures correspond to the evident mean- ings of the tenses. For example, the STS of the past tense is "E,R R.S." That is, the event referred to by the clause is simultaneous'with some reference point R, which pre- cedes the time of speech (E = R < S). It follows that the event precedes the time of speech, which corresponds to the evident meaning of the past tense. On the other hand, the proposed semantics for comma and underscore cannot completely determine the assignments shown in Table 2, because Table 2 distinguishes X,Y and Y,X, 5The constraints on tense combination appear to be en- tirely independent of whether or not the tensed verb bears progressive morphology. but the semantics does not assign them distinct mean- ings. That situation is remedied by introducing a new and slightly more complex interpretation for comma, as described in (4). (4) Interpretation of "X,Y': a. Y does not precede X. b. X is simultaneous with Y, in the absence of evidence that X precedes Y. (Such evidence can come from other tenses, adverbs, or con- nectives, as described below.) c. X precedes Y, in the presence of supporting evidence from other tenses, adverbs, or con- nectives. The reinterpretation of comma as precedence due to the presence of an adverb is illustrated in (5). 
The reinterpretation of comma as precedence due to the presence of an adverb is illustrated in (5).

(5) I {leave / am leaving} for LA {OK tomorrow / * yesterday}

Although leave is in the present tense, it is interpreted as a future because of the adverb tomorrow. The fact that adverbs can cause the present tense to be reinterpreted as a future but not as a past indicates that its STS must be S,R R,E, not any of the permutations like S,R E,R. If the present had S,R E,R as its STS then E,R could be reinterpreted such that E < R = S, a past. Similar arguments can be made for the other STSs in Table 2. Further, evidence that both tenses from other clauses and temporal/causal connectives can cause comma to be reinterpreted as precedence will be presented below.

Note that (4) does not mean that "X,Y" is interpreted as "X is prior to or simultaneous with Y". Rather, a particular occurrence of "X,Y" always has exactly one of the following two interpretations: 1) X is simultaneous with Y; 2) X is prior to Y. "X,Y" is never ambiguous between the two.⁶

6. This is different from Yip (1986), where comma is crucially interpreted as ambiguous between the two readings.

3 Causal/Temporal Adjunct Clauses

In this section we introduce a composition operation on STSs, and a major constraint on that composition. It is important to keep in mind that we are discussing only causal/temporal adjunct clauses. In particular, we are not considering complement clauses, as in "Rachel knows that Jon played the fool yesterday."

3.1 Tense Composition and Semantic Consistency

When one clause is adjoined to another by a temporal/causal connective like once, by the time, when, or before, the acceptability of the resulting sentence depends in part on the tenses of the two clauses. This is demonstrated by (1). In fact, of the 36 possible ordered pairs of tenses only nine are acceptable when put in adjunct constructions like (1). (The nine acceptable tense pairs are listed in Table 3.) 20 of the 27 unacceptable ones, but none of the nine acceptable ones, have the following character: their adjunct-clause SR relation is inconsistent with their matrix-clause SR relation, and cannot be reinterpreted according to (4) in a way that makes it consistent. This can be understood in terms of the merging of the adjunct SR relation with that of the matrix, yielding a combined tense structure (CTS) that has only the matrix SR relation. Besides explaining the acceptability status of many CTSs, the idea of merging the adjunct SR relation into that of the matrix makes sense in terms of the representational schema. In particular, the idea that the adjunct's R point should be identified with that of the matrix through causal/temporal adjunction is consistent with the representational schema, which uses R as a reference point for relating one event to another. Furthermore, since "S" is a deictic point representing the time of speech (more accurately, the time of proposition), and since both clauses represent propositions made in the same context, it makes sense that they should have the same S point. Once the S and R points of the adjunct clause have been identified with those of the matrix clause, it makes sense that sentences where the matrix asserts one order for the shared S and R points while the adjunct asserts another order would be irregular. Before attempting to formalize these intuitively appealing ideas, let us consider an example.
The notation for CTSs is as follows: the STS of the matrix clause is written above that of the adjunct clause and, if possible, the identified S and R points are aligned and connected by vertical bars, as shown in (6).⁷

(6)  S_R R,E    FUTURE (WIN)
     |   |
     S,R R,E    PRESENT (ARRIVE)

7. All tense structures shown in typewriter face are actual output from our program. When they are reported as the tense structure for a particular sentence, then the program generated them in response to that sentence. For more on the implementation, see Section 4.

(6) is the CTS for sentence (1b). Although the SR relation for the present tense adjunct is not identical to that of the future tense matrix clause, the adjunct can be reconciled with that of the matrix clause if the S,R is interpreted as precedence, S < R. Notice that sentence (1b) is, in fact, interpreted such that the arriving occurs in the future, even though the verb is in the present tense. Because of the two possible interpretations of the comma relation proposed in (4), a single representation accounts for the possibility of interpreting the present as a future. Further, by making the (still informal) restriction on tense composition a semantic one, we use the same mechanism to account for tense compatibility.

Now consider an unacceptable example. (1a) has the CTS shown in (7).

(7)  E,R R_S    PAST (WIN)
       |  |
     R,E S,R    PRESENT (ARRIVE)    * violates: ACIR

Note how the matrix clause asserts that the (shared) R point precedes the (shared) S point, while the adjunct clause asserts that the R point is simultaneous with the S point. The adjunct clause could be reinterpreted according to (4) such that the R point follows the S point, but this would not help: the assertions on the two levels would still be inconsistent. In general, if the SR relations on the matrix and adjunct tiers of the CTS do not have the same left-to-right order then their meanings cannot be reconciled.⁸

8. This is shown in greater detail in Brent (1989). Also, note that Hornstein (1990) takes this condition on the form of the CTSs as primary instead of reducing it to their meanings. For discussion of the differences, see Section 5.

We have proposed that the adjunct SR relation must be consistent with the matrix SR relation, argued that this constraint is intuitively appealing and consonant with the representational system as a whole, and shown an example. Despite the intuitive appeal, there are two hypotheses here that should be made explicit: first, that the SR relation of the adjunct clause is merged with that of the matrix when temporal/causal adjuncts are interpreted; and second, that CTSs containing contradictory assertions as a result of that merger are experienced as unacceptable, not merely implausible. We codify those two hypotheses as follows:

Adjunct Clause Information Restriction (ACIR): "Adjunct clauses that introduce new SR information into the CTS are unacceptable."
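As a sanity check on the counting claim above ("20 of the 27 unacceptable ones"), the ACIR over SR relations can be sketched in a few lines. This Python is my own illustration, not the authors' Lisp program; the SR table simply transcribes Table 2.

SR = {"past": "R_S", "past-perfect": "R_S",
      "present": "S,R", "present-perfect": "S,R",
      "future": "S_R", "future-perfect": "S_R"}

def acir_ok(matrix, adjunct):
    # the adjunct's SR relation must add no new information: it may be
    # identical to the matrix's, or a comma relation coerced by (4c) to
    # match a matrix underscore with the same left-to-right order
    m, a = SR[matrix], SR[adjunct]
    return a == m or (a == "S,R" and m == "S_R")

assert acir_ok("future", "present")      # (1b) / (6)
assert not acir_ok("past", "present")    # (1a) / (7): violates ACIR
# the ACIR alone rules out 20 of the 36 ordered tense pairs
assert sum(not acir_ok(m, a) for m in SR for a in SR) == 20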
(Our program prints out the default Emat-Eadj comparison for valid CTSs, but they have been suppressed up to now. In addition, Table 3 lists all tense combinations that yield acceptable CTSs according to the Emat-Eadj ordering of their default interpretation.)

8 This is shown in greater detail in Brent (1989). Also, note that Hornstein (1990) takes this condition on the form of the CTSs as primary instead of reducing it to their meanings. For discussion of the differences, see Section 5.

(8) a. Jon had won the game when Rachel arrived
    b. ( E_R R_S PAST-PERFECT
          |   |   E(m)<E(a)
         E,R R_S PAST )

               matrix          adjunct
Emat < Eadj    past perf.      past
               present perf.   present
               future perf.    present
Eadj < Emat    past            past perf.
               present         present perf.
               future          present perf.
Eadj = Emat    past            past
               present         present
               future          present

Table 3: Legal tense combinations, arranged by apparent Eadj-Emat deduction

Sentence (8a) does indeed imply that the matrix event (Jon's winning) occurred before the adjunct event (Rachel's arriving). If the comma in "Emat,R" could be reinterpreted as temporal precedence then, instead of Emat < R = Eadj, we would have Emat < R and Eadj < R; Emat and Eadj would be incomparable. Brent (1989) proposed a constraint ruling out CTSs that do not yield an Emat-Eadj comparison. The reason for that proposal was the unacceptability9 of sentences like (9). Now consider the following reformulation of that constraint:

(9) a. Jon had won the game when Rachel had arrived
    b. ( E_R R_S PAST-PERFECT
          |   |   * violates: interpretation
         E_R R_S PAST-PERFECT )

9 For present purposes it does not matter whether sentences like (9) are regarded as strictly ungrammatical or merely reliably infelicitous.

Interpretation Constraint: "An acceptable interpretation of a CTS must yield an Emat-Eadj comparison."

This reformulation allows the same constraint both to narrow the possible interpretations of constructions like (8) and to explain the problematic status of constructions like (9). Reexamining (8), "Eadj,R" cannot be reinterpreted because to do so would violate the Interpretation Constraint; "Emat_R" cannot be reinterpreted because underscore has only the precedence interpretation. Thus (8) has only a single interpretation.

Now consider CTSs with "Emat,R" and "Eadj,R", as in (10c). Their default interpretation will be Emat = R = Eadj. But by picking appropriate temporal/causal connectives or pragmatic contexts we can force either comma to be reinterpreted, yielding Eadj < R = Emat as in (10a), or Emat < R = Eadj as in (10b).10 Of course, the Interpretation Constraint prevents both commas from being simultaneously reinterpreted.

(10) a. OK Jon quit his job after Rachel left him
     b. OK Rachel left Jon before he quit his job
     c. ( E,R R_S PAST
            |  |   E(m)=E(a)
          E,R R_S PAST )

We have shown that the interpretation of comma offered in (4) provides a flexibility in the interpretation of CTSs that is required by data such as (10). Further, it restricts the interpretation of constructions like (8), where one of the clauses is in a perfect tense. Although we cannot fully explore the interpretive range of such perfect constructions here, the restriction on them has intuitive appeal.

4 The Computer Model

This section describes our implementation of the theory described above. The implementation serves two purposes. First, we use it as a tool to verify the behavior of the theory and explore the effects of variations in it.
Second, the implementation demonstrates the use of our tense theory in syntactic disambiguation.

Our program operates on parse trees, building complex tense structures out of simple ones and determining whether or not those CTSs are acceptable, according to the constraints on tense combination. This program was linked to a simple feature-grammar parser, allowing it to take sentences as input.11 In addition to building the CTS for a sentence, the program lists the apparent Emat-Eadj relation for the CTSs it accepts, and the constraints violated by the CTSs it rejects. Its behavior on several of the examples from Section 1 is shown below.

Examples (1a) and (1b) show the effects of the Adjunct Clause Information Restriction on the acceptability of sentences.

;;; (1a) * Rachel won the game when Jon arrives
(compute-tense-structures
  (parse '(Rachel +ed win the game when Jon +s arrive)))

( E,R R_S PAST (WIN)
    |   |
    * violates: ACIR
  R,E S,R PRESENT (ARRIVE))

10 See also Moens and Steedman, 1988 regarding when clauses.
11 Because morphology is quite distant from our interest in tense, the parser has no morphological component. Instead, input sentences have their tense morphemes, such as +ed, separated and preposed. A morphological parser could easily return the components in this order. +ed represents the past-tense morpheme, +s the present-tense morpheme, and +en the past participle morpheme.

;;; (1b) ok Rachel will win the game when Jon arrives
(compute-tense-structures
  (parse '(Rachel will win the game when Jon +s arrive)))

( S_R R,E FUTURE (WIN)
    |  |  |  E(m)=E(a)
  S,R R,E PRESENT (ARRIVE))

Examples (2) and (3) show how a sentence with two possible adjunction sites for the adjunct clause can produce two CTSs. The unacceptability of the CTSs resulting from one of the adjunction sites disambiguates the sentences. In sentence (2) it is high attachment, to the matrix clause, that is acceptable; in sentence (3), low attachment to the complement clause. Figure 1, page 2, shows the two possible parses of (2) output by our program. One of them is automatically labeled ungrammatical with an asterisk on its CTS. Note that the composition of tenses from subcategorized complement clauses, as opposed to adjunct clauses, is not investigated here, but rather adopted from Hornstein (1990).

5 Discussion

In this section we compare the preceding solutions to the temporal/causal adjunction problem with those offered in Yip (1986) and Hornstein (1990).

5.1 Semantics of Simple Tense Structures

Two other works, Yip (1986) and Hornstein (1990), have developed theories of the effect of tense on the acceptability of temporal/causal adjunct constructions. Both of these are at least partially rooted in the meanings of the tenses, and both use representations for simple tense structures that are similar to the ones used here. However, they both have difficulty in justifying the assignment of STSs to tenses.

Yip assumes that comma is ambiguous between < and =. Notice that this is different from the default interpretation suggested here, whereby a given comma in a given tense structure has exactly one interpretation at any one time. Yip's assumptions are critical for the explanatory power of his argument, which won't go through using a default interpretation. According to Yip's interpretation, "Jon is running" and "Jon runs" ought to be ambiguous between the present and the future, but they clearly are not. Both describe events or sets of events that necessarily must include the time of speech.
This problem is exacerbated by Yip's proposal that the present tense be assigned two STSs, one equiva- lent to "S,R R,E", the one used here, and the other "E,R R,S". This proposal, along with the ambiguous interpre- tation of comma, would predict that the present tense could be interpreted as meaning the same thing as nearly any other tense. For example, the present could be inter- preted as equivalent to the past perfect, if both commas in its "E,R R,S" STS received the reading E < R < S. Hornstein (1990) uses the simultaneity interpreta- tion of comma exclusively in assigning STSs to tenses. Thus there is no semantic reason, in Hornstein's model, why the present tense should have "S,R R,E" rather than "S,R E,R". Furthermore, reinterpretation of comma is not invoked to explain the fact that the present tense is reinterpreted as referring to the future when it is ad- joined to a future clause or modified by a future adverb. Instead, a syntactic rewrite rule that changes X,Y to X_Y under these conditions is used. However, in the absence of semantic constraint, it is not clear why that rule is better than one that switches order too, rewrit- ing Y,X to X.Y. This alternative rewrite rule would be consistent with the observations if every X,Y in every STS were switched to Y,X. Since X,Y and Y,X are in- terpreted in the same way in Hornstein's theory, there is no reason not to make these two changes. That is to say, Hornstein's theory does not explain why the STSs and the rewrite rule are the way they are, rather than some other way. Yip could not correctly derive his STS/tense map- ping from the meanings of the tenses because he allowed each STS to have too many different meanings in the simple, unmodified situations. Even so, these meanings were too narrow for his constraint on adjunction, so he was forced to propose that the present has two STSs. This only made the underdetermination of the mean- ings of simple sentences worse. Hornstein, on the other hand, did not allow enough variation in the meanings of the simple tense structures. As a result, many of his possible STSs had equivalent meanings, and there was no way to prefer one over the other. This was exacer- bated by the fact that he used non-semantic constraints on adjunction, reducing the amount of constraint that the acceptability data on adjunctions could provide for the assignment of STSs to tenses. This paper takes an intermediate position. Comma is interpreted as simul- taneity in the unmodified case, but can be interpreted as precedence in appropriate environments. Since the con- straints on adjunction are semantically based, the inter- pretations of adjunct constructions provide evidence for the assignments of STSs to tenses that we use. 5.2 Semantics of Combined Tense Structures In addition to allowing semantics to uniquely de- termine the assignment of STSs to tenses, our default- based interpretation of comma explains a problem ac- knowledged in Hornstein (1990). If comma is inter- preted as strict simultaneity, as Hornstein initially pro- poses, then the structure in (10c) must be interpreted as Emat = R = Eadj. However, as noted above, neither sentence (10a) nor sentence (lOb) has this interpretation. Hornstein alludes to a different form of reinterpretation 125 of ER to account for examples like (10). However, his mechanism for the interpretation of Ernat - Eadj order- ing in CTSs is unrelated to his semantics for STSs or his constraints on their combination. 
Our explanation, by contrast, uses the same mechanism, the default-based se- mantics of comma, in every portion of the theory. Rein- terpretation of comma in the SR relation accounts for the compatibility of the present tense with future adverbs and future matrix clauses. Reinterpretation of comma in ER relations accounts for the flexible interpretation of sentences like those in (10). 6 Conclusions This paper describes two contributions to the the- ory of temporal/causal adjunction beyond those of Yip (1986), Brent (1989), and Hornstein (1990). First, we propose the asymmetric, default-based interpretation of comma described in (4). This leads to a uniform, seman- tically based theory explaining the assignments of STSs to tenses shown in Table 2, the incompatibility of many tense pairs in causal/temporal adjunction, and the in- terpretations of combined tense structures in a variety of situations. In particular, the default based interpre- tation of comma has benefits both in the interpretation of SR relations (adverbs and clausal adjuncts) and ER relations (event order in CTSs). Few of the theoretical observations or hypotheses presented in this paper con- stitute radical departures from previous assaults on the same problem. Rather, this paper has worked out incon- sistencies and redundancies in earlier attempts. Besides theoretical work, we presented a computer implementa- tion and showed that it can be used to do structural disambiguation of a certain class of sentences. Although our contribution to syntactic disambiguation only solves a small part of that huge problem, we expect that a series of constrained syntactic/semantic theories of the kind proposed hear will yield significant progress. Finally, the adjustments we have suggested to the interpretation of comma in both simple tense structures and combined tense structures should contribute to the work of the many researchers using Reichenbachian rep- resentations. In particular, constrained combination of tense structures ought to provide a richer set of represen- tations on which to expand model-theoretic approaches to interpretation. Acknowledgments Thanks to Bob Berwick and Norbert Hornstein for their detailed readings and invaluable comments on many ver- sions of this work. References [Allen, 1984] J. Allen. Towards a General Theory of Ac- tion and Time. AI Journal, 23(2), 1984. [Brent, 1989] M. Brent. Temporal/Causal Connectives: Syntax and Lexicon. In Proceedings of the 11th Annual Conference of the Cognitive Science Society. Cognitive Science Society, 1989. [Comrie, 1985] B. Comrie. Tense. Cambridge Textbooks in Linguistics. Cambridge U. Press, New York, NY, 1985. [Dowty, 1979] D. Dowty. Word Meaning and Montague Grammar. Synthese Language Library. D. Reidel, Boston, 1979. [Dowty, 1986] D. Dowty. The effects of aspectual class on the temporal structure of discourse: Semantics or pragmatics? Linguistics and Philosophy, 9:37-61, 1986. [Harper and Charniak, 1987] M. Harper and E. Char- niak. Time and tense in english. In ??th Annual Proceedings of the Association for Comp. Ling., pages 3-9. Association for Comp. Ling., 1987. [tIinrichs, 1986] E. Hinrichs. Temporal anaphora in dis- courses of english. Linguistics and Philosophy, 9:63- 82, 1986. [Hinrichs, 1988] E. Hinrichs. Tense, quantifiers, and con- text. Comp. Ling., 9(2), 1988. [Hornstein, 1977] N. Hornstein. Towards a theory of tense. Linguistic Inquiry, 8:521-557, 1977. [Hornstein, 1981] N. Hornstein. The Study of Meanin 9 in Natural Language. Longman, New York, 1981. 
[Hornstein, 1990] N. Hornstein. As Time Goes By: Tense and Universal Grammar. MIT Press, Cambridge, MA, 1990.
[Moens and Steedman, 1988] M. Moens and M. Steedman. Temporal Ontology and Temporal Reference. Comp. Ling., 14(2), 1988.
[Moens, 1987] M. Moens. Tense, Aspect, and Temporal Reference. PhD thesis, University of Edinburgh, Centre for Cognitive Science, 1987.
[Nakhimovsky, 1988] A. Nakhimovsky. Aspect, aspectual class, and the temporal structure of narrative. Comp. Ling., 14(2), 1988.
[Passoneau, 1988] R. Passoneau. A computational model of the semantics of tense and aspect. Comp. Ling., 14(2), 1988.
[Reichenbach, 1966] H. Reichenbach. The Elements of Symbolic Logic. The Free Press, New York, 1966.
[Webber, 1988] B. Webber. Tense as a discourse anaphor. Comp. Ling., 14(2), 1988.
[Yip, 1986] K. Yip. Tense, aspect, and the cognitive representation of time. In Proceedings of the ACL. Association for Comp. Ling., 1986.
SOLVING THEMATIC DIVERGENCES IN MACHINE TRANSLATION Bonnie Doff* M.I.T. Artificial Intelligence Laboratory 545 Technology Square, Room 810 Cambridge, MA 02139, USA internet: [email protected] ABSTRACT Though most translation systems have some mechanism for translating certain types of divergent predicate-argument structures, they do not provide a genera] procedure that takes advantage of the relationship between lexical-semantic struc- ture and syntactic structure. A divergent predicate-argument structure is one in which the predicate (e.g., the main verb) or its arguments (e.g., the subject and object) do not have the same syntactic ordering properties for both the source and target language. To account for such ordering differ- ences, a machine translator must consider language-specific syntactic idiosyncrasies that distinguish a target language ¢rom a source language, while making use of lexical-semantic uniformities that tie the two languages together. This pa- per describes the mechanisms used by the UNITRAN ma- chine translation system for mapping an underlying lexical- conceptual structure to a syntactic structure (and vice ¢erea), and it shows how these mechanisms coupled with a set of gen- eral linking routines solve the problem of thematic divergence in machine translation. 1 INTRODUCTION There are a number of different divergence types that arise during the translation of a source language to a tar- get language. Figure 1 shows some of these divergences with respect to Spanish, English, and German. 1 We will look at each of these traditionally diflicnlt di- vergence types in turn. The first divergence type is a structural divergence in that the verbal object is real- ized as a noun phrase (John) in English and as a prepo- sitional phrase (a Juan) in Spanish. The second diver, *This paper describes research done at the Artificial In- telligence Laboratory of the Massachusetts Institute of Tech- nology. Support for this research has been provided by NSF Grant DCR-85552543 under a Presidential Young Investiga- tor's Award to Professor Robert C. Berwick. Useful guidance and commentary during this research were provided by Bob Berwick, Noam Chomsky, Bruce Dawson, Ken Hale, Mike Kashket, Jeff Siskind, and Patrick Winston. The author is also indebted to three anonymous reviewers for their aid in reshaping this paper into its current form. 1Many sentences may fit into these divergence classes, not just the ones listed here. Also, a single sentence may exhibit any or all of these divergences. Divergence Translation Type Ezample Structural Conflational Lexical Categorial Thematic I saw John Via Juan (I saw to John) I like Mary Ich habe Marie gem (I have Mary likingly) I stabbed John Yo le di pufialadas a Juan (I gave knife-wounds to John) I am hungry Ieh habe Hunger (I have hunger) I like Mary Maria me gusta a mf (Mary pleases me) Figure 1: Divergence Types in Machine Translation gence is conttational. Conflation is the incorporation of necessary participants (or arguments) of a given action. Here, English uses the single word like for the two Ger- man words haben (have) and gem (likingly); this is be- cause the manner argument (i.e., the likingly portion of the lexical token) is incorporated into the main verb in English. The third divergence type is a lcxical diver- gence as illustrated in the stab example by the choice of a different lexical word dar (literally give) for the word stab. 
The fourth divergence type is categoria] in that the predicate is adjectival (hungry) in English but nominal (hunger) in German. Finally, the fifth divergence type is a thematic divergence: the object (Mary) of the En- glish sentence is translated as the subject (Maria) in the Spanish sentence. The final divergence type, thematic divergence, is the one that will be the focus of this paper. We will look at 127 how the UNITRAN system [Doff, 1987, 1990] solves the thematic divergence problem by mapping an underlying lexical-conceptual structure to a syntactic structure (and vice versa) on the basis of a set of general linking routines and their associated mechanisms. The other divergences are also handled by the UNITRAN system, but these are discussed in [Doff, 1990]. It turns out there ate two types of thematic diver- gences that show up in the translation of a source lan- guage to a target language: the first type consists of a reordering of arguments for a given predicate; and the second type consists of a reordering of predicates with respect to their arguments or modifiers. We will look at examples of each of these types in turn. In the first case, an example is the reversal of the sub- ject with an object as in the English-Spanish example of gustar-like shown in figure 1. The predicate-argument structures axe shown here: 2 [,-MAx IN-MAX Maria] [V-MAX [V-1 [V-MIN me gusts] [P-MAX a rmq]]] (1) [I-MAX IN-MAX 1] [V-MAX [`'I [`" M~N me] [N~AX Mary]]]] Here the subject Marls has reversed places with the ob- ject mr. The result is that the object mi turns into the subject I, and the subject Marls turns into the object Mary. The reverse would be true if translation went in the opposite direction. An example of the second case of thematic divergence (not shown in figure 1) is the promotion of a comple- ment up to the main verb, and the demotion of the main verb into an adjunct position (or v/ce versa). By promo- tion, we mean placement "higher up" in the syntactic structure, and by demotion, we mean placement "lower down" in the syntactic structure. This situation arises in the translation of the Spanish sentence Juan suele ir a easa into the English sentence John usually goes home: (2) [X-MAX [~-MAX Juan] [`'-MAX [V-* [V-Mm suele] [,,-MAX ir] b-MAX a casa]]]]] [z-MAx [N-u.x John] Iv.MAX [V.X [v-i USually Iv.raN goes]] IN.MAX home]]]] Here the main verb soler takes ir as a complement; but, in English, the ir predicate has been placed into a higher position as the main verb go, and soler is placed into a lower position as the adjunct usually associated with the main verb. The reverse would be true if translation went in the opposite direction. MOlten times a native speaker of Spanish will invert the subject to post-verbal position: [I-MAX el IV-MAX [V-1 [V-Mm me gusta] [P-MAX aml]]] IN-MAX Maria]i]. However, this does not affect the internal/external reversal scheme described here since inversion takes place indepen- dently after thematic divergences have been handled. Another example of the second case of thematic di- vergence is the demotion of the main verb into a com- plement position, and the promotion of an adjunct up to the main verb (or vice versa). This situation arises in the translation of the German sentence Ich esse gem into the English sentence I like eating: [I.MAX IN-MAX Ich] IV-MAX IV-! 
[V-S [V-MTN esse] gem]]]] (3) [X-M~x C~-MAx ~[] [,'-MAX [V.~ [`'-~ ~e] [V-M~X eating]]]] Here the main verb essen takes gem as an adjunct; but, in English, gem has been placed into a higher po- sition as the main verb like, and the essen predicate has been placed into a lower position as the complement eating of the main verb. The reverse would be true if translation went in the opposite direction, a This paper will show how the system uses three mech- anisms along with a set of general linking routines (to be defined) to solve thematic divergences such as those that have been presented. The next section introduces the terminology and mechanisms that are used in the solution of these divergences, and, in so doing, it will provide a brief glimpse of how thematic divergences are tackled. Section 3 discusses other approaches (and their shortcomings) in light of the thematic divergence prob- lem. Finally, section 4 presents a general solution for the problem of thematic divergences, showing in more detail how a set of general linking routines and their associ- ated mechanisms provide the appropriate mapping from source to target language. 2 TERMINOLOGY AND MECHANISMS Before we examine thematic divergences and how they are solved, we must first look at the terminology and mechanisms used throughout this paper: 4 sit might be argued that a "direct" translation is possible for each of these three examples: (It) Mary pleases me (21) John is accustomed to going home (3,) I eat -~"ins]y The problem with taking a direct approach is that it is not general enough to handle a wide range of cases. For example, gem can be used in conjunction with haben to mean like: Ich babe Marie gem ('I like Mary'). The literal translation, I have Mary likingly, is not only stylistically unattractive, but it is not a valid translation for this sentence. In addition, the direct-mapping approach is not bidirectional in the general case. Thus, even if we did take (1,), (2,), and (3,) to be the translations for (1), (2), and (3), we would not be able to apply the same direct mapping on the English sentences of (1), (2), and (3) (translating in the opposite direction) because we would still need to translate like and usually into Spanish and German. It is clear that we need some type of uniform method for translating thematic divergences. 4The terms complement, specifier, and adjunct have not been defined; roughly, these correspond to syntactic object, 128 Definition 1: An LCS is a lexical conceptual structure conforming to a modified version of Jack- endoff's well-formedness rules [Jackendoff, 1983]. For example, I like Mary is represented as: [State BEIdeat ([Tsi~s REFERENT], [Place ATIdeat ([~ka, m/:FERENT], [Th'-, PERSOI~])], [, ..... LIKINGLY])] The mapping that solves thematic divergences is de- fined in terms of the RLCS, the CLCS, the syntactic structure, and the markers that specify internal/external and promotion/demotion information. These markers, or mechanisms, are specified as follows: MechAnism 1: The :INT and :EXT markers are override position markers that determine where the internal and external arguments will be po- sitioned for a given lexical root word. Definition 2: An RLCS is an uninstantiated LCS that is associated with a root word definition in the lexicon (i.e., an LCS with unfilled variable po- sitions). For example, an RLCS associated with the word like is: [Sta*, BEId,~, ([Thla, X], [Place ATIdoa, ([Thing X], [Thing "Y])], [M ..... 
LIKINGLY])]

Definition 3: A CLCS is a composed (instantiated) LCS that is the result of combining two or more RLCS's by means of unification (roughly). This is the interlingua or language-independent form that is the pivot between the source and target language. For example, if we compose the RLCS for like with the RLCS's for I ([Thing REFERENT]) and Mary ([Thing PERSON]), we get the CLCS corresponding to I like Mary (as shown in definition 1).

Definition 4: An Internal Argument Position is a syntactic complement for a lexical word of category V, N, A, P, I, or C.5

Definition 5: An External Argument Position is a syntactic specifier of N for a lexical word of category N or a specifier of I for a lexical word of category V.

Definition 6: An Adjunct Argument Position is a syntactic modifier that is neither internal nor external with respect to a lexical word.

Each word entry in the lexicon is associated with an RLCS, whose variable positions may have certain restrictions on them such as internal/external and promotion/demotion information (to be described). The CLCS is the structure that results from combining the lexical items of a source-language sentence into a single underlying pivot form.

4 (cont.) ... subject, and modifier, respectively. For a more detailed description of these and some of the other definitions here, see [Dorr, 1990].
5 V, N, A, P, I, and C stand for Verb, Noun, Adjective, Preposition, Inflection, and Complementizer, respectively.

For example, the lexical entry for gustar is an RLCS that looks like the RLCS for like (see definition 2) except that it includes the :INT and :EXT markers:

[State BEIdent ([Thing X :INT],
   [Place ATIdent ([Thing X], [Thing Y :EXT])],
   [Manner LIKINGLY])]

During the mapping from the CLCS (shown in definition 1) to the syntactic structure, the RLCS for gustar (or like) is matched against the CLCS, and the arguments are positioned according to the specification associated with the RLCS.6 Thus, the :INT and :EXT markers account for the syntactic distinction between Spanish and English by realizing the [Thing REFERENT] node of the CLCS (corresponding to X in the RLCS) as the internal argument mí in Spanish, but as the external argument I in English; and also by realizing the [Thing PERSON] node of the CLCS (corresponding to Y in the RLCS) as the external argument María in Spanish, but as the internal argument Mary in English. Note that the :INT and :EXT markers show up only in the RLCS. The CLCS does not include any such markers as it is intended to be a language-independent representation for the source- and target-language sentence.

Mechanism 2: The :PROMOTE marker associated with an RLCS H' places a restriction on the complement P' of the head H'.7 This restriction forces P' to be promoted in the CLCS as the head P. H is then dropped into a modifier position of the CLCS, and the logical subject of P is inherited from the CLCS associated with the syntactic subject of H'.8 For example, the lexical entry for soler contains a :PROMOTE marker that is associated with the RLCS:

[Manner HABITUALLY :PROMOTE]

Thus, in the above formula H' corresponds to soler, and P' corresponds to the complement of soler. The :PROMOTE marker forces the syntactic complement P' to be promoted into head

6 The lexical-selection procedure that maps the CLCS to the appropriate RLCS (for like or gustar) is not described in detail here (see [Dorr, 1990]).
Roughly, lexical selection is a unification-like process that matches the CLCS to the RLCS templates in the lexicon, and chooses the associated lexical words accordingly.

position as P in the CLCS, and the head H' to be demoted into modifier position as H in the CLCS. So, in example (2) of the last section, the resulting CLCS is:9

[Event GOLoc ([Thing PERSON],
   [Path TOLoc ([Place ATLoc ([Thing PERSON], [Place HOME])])],
   [Manner HABITUALLY])]

Here the RLCS for soler, [Manner HABITUALLY], corresponds to H and the RLCS for ir, [Event GO ...], corresponds to P. In the translation to English, [Manner HABITUALLY] is not promoted, so it is realized as an adjunct usually of the main verb go.

Mechanism 3: The :DEMOTE marker associated with an RLCS P places a restriction on the head H' of the adjunct P'. This restriction forces H' to be demoted into an argument position of the CLCS, and the logical subject of H to be inherited from the logical subject of P. For example, the lexical entry for gern contains a :DEMOTE marker that is associated with the Y argument in the RLCS:

[State BECirc ([Thing X],
   [Place ATCirc ([Thing X], [Event Y :DEMOTE])],
   [Manner LIKINGLY])]

Thus, in the above formula, P' corresponds to gern and H' corresponds to the syntactic head that takes gern as an adjunct. The :DEMOTE marker forces the head H' to be demoted into an argument position as H in the CLCS, and the adjunct P' to be promoted into head position as P in the CLCS. So in example (3) of the last section, the resulting CLCS is:10

[State BECirc ([Thing REFERENT],
   [Place ATCirc ([Thing REFERENT],
      [Event EAT ([Thing REFERENT], [Thing FOOD])])],
   [Manner LIKINGLY])]

Here the RLCS for gern, [State BECirc ...], corresponds to P and the RLCS for essen, [Event EAT ...], corresponds to H. In the translation to English, [State BECirc ...] is not demoted, so it is realized as the main verb like that takes eating as its complement.

7 In general, a syntactic argument u' is the canonical syntactic realization (CSR) of the corresponding CLCS argument u. The CSR function is a modified version of a routine proposed in [Chomsky, 1986]. See [Dorr, 1990] for a more detailed discussion of this function.
8 The logical subject is the highest/left-most argument in the CLCS.

Now that we have looked briefly at the mechanisms involved in solving thematic divergences in UNITRAN, we will look at how other approaches have attempted to solve this problem.

3 PREVIOUS APPROACHES

In tackling the more global problem of machine translation, many people have addressed different pieces of the thematic divergence problem, but no single approach has yet attempted to solve the entire space of thematic divergence possibilities. Furthermore, the pieces that have been solved are accounted for by mechanisms that are not general enough to carry over to other pieces of the problem, nor do they take advantage of cross-linguistic uniformities that can tie seemingly different languages together.

Gretchen Brown has provided a model of German-English translation that uses lexical semantic structures [Brown, 1974]. The work is related to the model developed for UNITRAN since both use a form of conceptual structure as the basis of translation. While this approach goes a long way toward solving a number of translation problems (especially compound noun disambiguation), it falls short of providing a systematic solution to the thematic divergence problem.
This is largely because the conceptual structure does not serve as a common repre- sentation for the source and target languages. Instead, it is used as a point of transfer, and as such, it is forced to encode certain language-specific idiosyncrasies such as the syntactic positioning of conceptual arguments. In terms of the representations used in UNITRAN, this approach is analogous to using a language-to-language mapping from the RLCS's of the source language to the RLCS's of the target language without using an interme- diate language-independent structure as a pivot form. In sit should be noted that promotion and demotion struc- truces are inverses of each other. Thus, although this CLCS looks somewhat "English-like," it is possible to represent the CLCS as something that looks somewhat "Spanish-like:" [State Beclze ([Thing PERSON], [Place ATcirc ([Thing PI~RSOiN], [Event GOLoc ([Thing PERSON], [Path TOLo© ([Place ATLoc ([Thing PERSON], [Place HOME])])])])], [M ..... HABITUALLY])] In this case, we would need to use the :DEMOTE marker (see mechanism 3) instead of the :PROMOTE marker, but this marker would be used in the RLCS associated with usually instead of the RLCS associated with soler. The justification for using the "English-like" version for this example is that the [Manner HABITUALLY] constituent is generally thought of as an aspcctual clement associated with a predicate (e.g., in German, the sentence would be Ich gehe gewJhnlich nach Hause ('I go usually home')); this constituent cannot be used as a predicate in its own right. Thus, the compli- cated "Spanish-like" predicate-argument structure is not a likely conceptual representation for constructions that use [Manner HABITUALLY]. 1°The default object being eaten is [Thing FOOD], although this is not syntactically realized in this example. this approach, there is no single language-independent mechanism that links the conceptual representation to the syntactic structure; thus, it is necessary to hand- code the rules of thematic divergence for English and German, and all divergence generalizations are lost. In 1982, Lytinen and Schank developed the MOP- TRANS Spanish-English system based on conceptual de- pendency networks [Lytinen & Schank, 1982]. 11 This approach is related to the UNITRAN model of transla- tion in that it uses an interlingual representation as the pivot from source to target language. The key distinc- tion is that the approach lacks a generalized linking to syntax. For example, there is no systematic method for determining which conceptual argument is the subject and which is the object. This means that there is no uniform mechanism for handling divergences such as the subject-object reversal of example (1). The LMT system is a logic-based English-German ma- chine translator based on a modular logical grammar [McCord, 1989]. McCord specifically addresses the prob- lem of thematic divergence in translating the sentence Mir gef~llt der Waged (I like the car). However, the so- lution that he offers is to provide a "transfer entry" that interchanges the subject and object positions. There are two problems with this approach. First it relies specifi- cally on this object-initial ordering, even though the sen- tence is arguably more preferable with a subject-initial ordering Der Wagen gef~llt mir; thus, the solution is dependent on syntactic ordering considerations, and will not work in the general case. 
Second, the approach does not attempt to tie this particular type of thematic divergence to the rest of the space of thematic divergence possibilities; thus, it cannot uniformly translate a conceptually similar sentence Ich fahre das Wagen gern (I like to drive the car).

4 THEMATIC DIVERGENCES

In section 1, we introduced some examples of thematic divergences, and in section 2 we described some of the mechanisms that are used to solve these divergences. Now that we have looked at other machine translation approaches with respect to the thematic divergence problem, we will look at the solution that is used in the UNITRAN system.

Recall that there are two types of thematic divergences:

1. Different argument positionings with respect to a given predicate.
2. Different predicate positionings with respect to arguments or modifiers.

The first type covers the case of argument positions that diverge; it is accounted for by the :INT and :EXT markers. The second type covers the case of predicate positions that diverge; it is accounted for by the :PROMOTE and :DEMOTE markers. Together, these two types of divergences account for the entire space of thematic divergences, since all participants must be one of these two (either an argument, or a predicate, or both).

11 Several researchers have worked within this framework including Goldman [1974], Schank & Abelson [1977], and many others.

In both cases of thematic divergence, it is assumed that there is a CLCS that is derived from a source-language RLCS that is isomorphic to the corresponding target-language RLCS (i.e., the variables in the 2 RLCS's map to the same positions, though they may be labeled differently). Furthermore, it is assumed that thematic divergence arises only in cases where there is a logical subject.

A CLCS with logical subject w, non-subject arguments z1, z2, ..., zk, ..., zn, and modifiers n1, n2, ..., nl, ..., nm will look like the structure shown in (4), where the dominating head P is a typed primitive (e.g., BECirc):

(4) [P w, z1, z2, ..., zk, ..., zn, n1, n2, ..., nl, ..., nm]

In order to derive the syntactic structure from the CLCS, we need a mapping or linking rule between the CLCS positions and the appropriate syntactic positions. Roughly, this linking rule is stated as follows:

General Linking Routine G:
(a) Map the logical subject to the external argument position.
(b) Map the non-logical-subjects to internal argument positions.
(c) Map modifiers to adjunct positions.
(d) Map the dominating head to the phrasal head position.

G is used for the second half of translation (i.e., mapping to the target-language structure); we also need an inverse routine that maps syntactic positions of the source-language structure to the CLCS positions:

Inverse Linking Routine G-1:
(a) Map the external argument to the logical subject position.
(b) Map the internal arguments to non-logical-subject positions.
(c) Map adjuncts to modifier positions.
(d) Map the phrasal head to the dominating head node.

In terms of the representation shown in (4), the G and G-1 mappings would be defined as shown in figure 2.12,13,14 Note that w', z1', ..., zk', ..., zn', and n1', ..., nl', ..., nm' are the source-language realizations of the corresponding CLCS tokens w, z1, ..., zk, ..., zn, and n1, ..., nl, ..., nm; similarly, w'', z1'', ..., zk'', ..., zn'', and n1'', ..., nl'', ..., nm'' are target-language realizations of the same CLCS tokens.
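As a rough illustration of how the linking routine and the override markers interact, consider the following sketch (Python; UNITRAN itself is implemented in Commonlisp, and the function and field names here are our own invention, not the system's). It applies steps (a) and (b) of G, letting an :INT or :EXT restriction from the selected RLCS override the default placement, as in the gustar/like divergence of example (1).

# Hypothetical sketch of General Linking Routine G with the
# :INT/:EXT override mechanism (Mechanism 1).

def link(clcs, overrides=None):
    """Map a CLCS [P w, z1..zn, n1..nm] to syntactic positions.
    `overrides` maps an argument to ':INT' or ':EXT', as dictated by
    the RLCS chosen for the head during lexical selection."""
    overrides = overrides or {}
    positions = {'head': clcs['head'], 'external': None,
                 'internal': [], 'adjuncts': list(clcs['mods'])}
    # (a) logical subject -> external position, unless :INT overrides.
    if overrides.get(clcs['subj']) == ':INT':
        positions['internal'].append(clcs['subj'])
    else:
        positions['external'] = clcs['subj']
    # (b) non-subjects -> internal positions, unless :EXT overrides.
    for z in clcs['args']:
        if overrides.get(z) == ':EXT':
            positions['external'] = z
        else:
            positions['internal'].append(z)
    # (c) modifiers -> adjuncts; (d) dominating head -> phrasal head.
    return positions

# The same CLCS linked with two lexica, as in example (1):
clcs = {'head': 'BEIdent', 'subj': 'REFERENT',
        'args': ['PERSON'], 'mods': []}
print(link(clcs))                                           # English like
print(link(clcs, {'REFERENT': ':INT', 'PERSON': ':EXT'}))   # Spanish gustar

With no overrides, REFERENT surfaces externally (I) and PERSON internally (Mary); with the gustar markers the placements reverse, which is exactly the divergence shown in figure 4.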
This assumes that there is only one external argument and zero or more internal arguments. We will now look zc.:%...~ ] n,..=n,...%,] [Y-MAX~'[[X-M'N'p'] ' ' ' ' ' ' 4 s S'' ,,~'" ~,~ f~. -1 • • % • ..,, .. -.. ~,~; • -,. ,.. ,... ) II II # II II IS [Y-MAX ~/] [[X-MIN? ]Zl...Zk...Zn] TI, I...Y~I..OFI, m] Figure 2: Mapping From Source to Target via the CLCS at a formal description of how each type of thematic di- vergence is manifested. We will then See how the general linking routines described here take the syntactic mech- anisms into account in order to derive the appropriate result. 4.1 Divergent Argument Posltionings In order to account for the thematic revcrsa3 that shows up in the gustar-l~e example of (1), we must have a mechanism for mapping CLCS axgumcnts to different syntactic positions. In terms of the CLCS, we need to allow the syntactic realization of the logical subject w and the syntactic realization of a non-subject argument (say zk) to switch places between the source and target language. Figure 3 shows how this type of argument reversal is achieved. The :INT and :EXT markets axe used in the RLCS specifications as override markers for the G and G-I routines: the :INT marker is used to map the logi- ca3 subject of the CLCS to an internal syntactic position (and vice versa). Thus, steps (a) and (b) of ~ and g-z are activated differently if the RLCS associated with the phrasal head contains either of the :INT or :EXT over- ride mechanisms. Note that the CLCS is the same for 12The convention adopted in this paper is to use ul for the source-language realization, and url for the target-language realization for a CLCS argument u. 13Adjunction has been placed to the right at the maximal level. However, this is not the general case. A parameter setting determines the side and level at which a particu- lar adjunct will occur (as discussed in [Doff, 1990]). The configuration shown corresponds to the spec-initial/head- initial case. The other three possible configurations are: [Y-MA~ ~' Ix-, ~' ~'...~' [X-M~ ~"]] m' ..... ~,'], [Y-MAX IX-1 [X-MIN PI] Zl! g2f....Znl ] '~! I"~11 , .... am'], and [Y.~Ax [x-, z,, ~, ...-.., Ix.MxN ~"]] ~' m', .... n,,,,]. Finally, the order of the zit's and nfl's is not being addressed here; this is determined by independent principles also dis- cussed in [Dorr, 1990~. Regardless of these syntactic vari- ations, the ~ and ~- routines operate uniformly because they are language-independent. For simplicity, the spec- inltlal/head-initial configuration will be used for the rest of this paper. X~In addition to realization of arguments, the dominating CLCS head (~P) must also be realized as a lexical word (PI in the SOVLrce language and ~P, in the target language). The syntactic category of this lexical word is X, and the maximal projection is Y-MAX. In general, Y = X unless X is a Verb (in which case, Y is the Inflection category). 132 RLCS entry for~)l: [p (w :IN~),Z,, (z k :~xz),...,z, ~,,...,~,...,~.,. ] RLCS entry for p#.. ['P w, z,,...,z,,...,~.,,~ ,,...,,~,,...,,,. ] I [Y-MA~Z~[[X-MIN I " ' I ' , • p ]~,,,...z; ] ,~,...,~,...,~ ] .... }0' [P ~,z,,...,zk,...,~.,~,,...,~,...,~. ] q ll II # "q ll II II II II [Y-MAX ~O [[X-MINP ]ZI...Zk...Z] nl...~l..."m] Figure 3: Mapping From Source to Target for Divergent Arguments RLCS entry for gustar: [BE [X :IN'P] [AT IX] [Y :EXTI] LIKINGLY] RLCS entry for like: [BE [X] [AT [X] [Y]] LIKINGLY] [I-MAX [N-MAX Marlsa~ - ........ ... 
[V-MAX [V-1 [V-MIN me gusta]', [P-MAX a ml~]]] ', ~0" J [BE [RZFERBNT] [AT [REFERENT] [PERSON]] LIKINOLY] ' ) [I-MAX [N-MAX I] Iv [V-MAX [Vol [V-MIN like] [N-MAX Mary]]]] Figure 4: Translation of Mar{a me gusta a m~ both the source and target language; only the RLCS's in the lexica3 entries need to include language-specific in- formation in order to account for thematic divergences. Now using the ~ and ~-1 routines and the overriding :INT and :EXT mechanisms, we can show how to ac- count for the thematic divergence of example (1). Figure 4 shows the mapping from Spanish to English for example (1). is'Is Because the Spanish RLCS includes the :INT and :EXT markers, the G-z routine activates steps (a) and (b) differently: the external argu- ment Marfa is mapped to a non-logical-subject position [Thins PERSON], and the internal argument mlis mapped to the logical subject position [Thi, g REFERENT]. By lSBecause of space limitations, we will illustrate the three examples (I), (2), and (3) in one direction only. However, it should be clear that the thematic dlvergcnces are solved going in the opposite direction as well since the g and g-1 mappings are reversible. 18A shorthand notation is being used for the RLCS's and the CLCS. See section 2 for a description of the actual rep- resentations used by the system. contrast, the English RLCS does not include any spe- cial markers. Thus, the G routine activates steps (a) and (b) normally: the logical subject [Thi.g REFERENT] is mapped to the external argument I, and the non- logical-subject [Thl,s PERSON] is mapped to the internal position Mary. Now we have seen how argument positioning diver- gences are solved during the translation processJ ¢ In the next section, we will look at how we account for the second part of thematic divergences: different predicate positionings. 4.2 Divergent Predicate Positionings In the last section, we concentrated primarily on the- matic interchange of arguments. In this section, we will concentrate on thematic interchange of predicates. In so doing, we will have accounted for the entire space of thematic divergences. There are two ways to be in a predicate-argument rela- tionship: the first is by complementation, and the second is by adjunction. That is, syntactic phrases include base- generated complements and base-generated adjuncts, both of which participate in a predicate-argument struc- ture (where the predicate is the head that subcategori~.es for the base-generated complement or adjunct), ts In order to show how predicate divergences are solved, we must enumerate all possible source- language/target-language predicate positionings with respect to arguments z~, z2,..., zk, ..., z,+ and mod- ifiers nt, n~,..., nz, ..., n~. In terms of the syn- tactic structure, we must examine all the possible positionings for syntactic head 7~t with respect to its complements zzt, z~t,...,zht,...,znt and adjuncts rill, n2 I, ... ,nil,... , nrnl. xrIt should be noted that the solution presented here (as well as that of the next section) does not appeal to an already- coded set of conceptual "frames." Rather, the syntactic structures are derived procedurally on the basis of two pieces of information: lexical entries (i.e., the RLCS's) and the re- sult of composing the RLCS's into a single unit (i.e., the CLCS). It would not be possible to map declarativelp, i.e., from a set of static source-language frames to a set of static target-language frames. 
This is because the ~ and ~-1 rou- tines are intended to operate recursively: an argument that occurs in a divergent phrasal construction might itself be a divergent phrasal construction. For example, in the sentence le saele gustar leer a Jnan ('John usually likes to read'), there is a simultaneous occurrence of two types of divergences: the verb soler exhibits a predicate positioning divergence with respect to its complement gustar leer a Juan, which itself ex- hibits an argument positioning divergence. The procedural mappings described here are crucial for handling such cases. iSWe have left out the possibility of a base-generated spec- ifier as a participant in the predicate-argument relationship. Of course, the specifier is an argument to the predicate, but it turns out that the syntactic specifier, which corresponds to the logical subject in the LCS, has a special status, and does not participate in predicate divergences in the same way as syntactic complements and adjuncts. This will be illustrated shortly. 133 RLCS entry for~l: [P ] RLCS entry for nil; [n I :PROMOTE] RLCS entry for ~t~ ['P,o,~,,...,z+,...,z,,,n~,...,n,,...,,+. ] (~) Y-MAX I I I I I I I I tO [[X-MIN RI]~ ZI...Z k...Zn] 1"1, I ...~m] r w,z~,...,z+,...,z,,rt,,...,n,+,...,n,,, ] %~,~ %%% -.. ,. ~ tUII ~ ~ II II II U It" "~ I ; [Y-MAX [[X-MINP ]Z,...Zk...Z ] n...~lt...t1,,,,] RLCS entry forPl: (b) [P RLCS entry for'P t t!~tt I I I t [Y-MAx w [[X-M,N Z,...Z] i S } G" ] w [IX-Mere/" IZc..Zv..Zl "l"""t'"'%J Figure 5: Mapping From Source to Target for Divergent Predicates There are a large number of possible positionings that exhibit predicate divergences, but only two of them arise in natural languageJ 9 It turns out that the soler- usually example of (2) and the gem-like example of (3) are representative of the space of possibilities of predi- cate divergences. The source-language/target-language predicate positionings for these two cases are represented as shown in figure 5. Part (a) of this figure accounts for the translation of usually to soler (or vice versa), and part (b) accounts for the translation of like to gem (or vice versa). The ~ and ~-1 routines do not take into account the predicate divergences that were just presented. As in the case of argument divergences, predicate divergences re- quire override markers. The :PROMOTE marker is used to map a modifier of the CLCS to a syntactic head posi- tion (and vice versa). The :DEMOTE marker is used to map a non-subject argument of the CLCS to a syntac- tic head position (and vice versa). Thus, steps (c), and 19 There is not enough space to elaborate on this claim here. See [Doff, 1990] for a detailed discussion of what the possible positionings are, and which ones make sense in the context of linguistic structure. RLCS entry for ir : [GO /Xl [To [AT [Xl [Villi RLCS entry for go: log IX] [TO [AT [Xl [YIIII RLCS entry for soler: [HABITUALLY :PROMOTE] RLCS entry for usually. [HABITUALLY] {I-MAX IN-MAX Juan] IV-MAX [V-MIN suele] ....... . l {V-MAX [V-MIN ir][P-MAX ~,¢~a]]]]l~ "~ [GO [PERSON] [TO [AT [PERSON] [HOME]]] HABITUALLY {I-MAX {N-MAX John] ~." "'. at ,, {V-MAX [v-, [V.I usually [V-MIN goesl] [N-~AX home]]]] RLCS entry for geru: {BID [X] [AT [X] [V :DEMOTE]] LIKINGLY] RLCS entry for/{ke: [BE [X] [AT [X] [Y]] LIKINGLY] {I-MAX IN-MAX IC~I] {V-MAX [V-I[V-I[V-MIN esse] gern]]]] "I [BE {REFERENT] [AT [REFERENT] [EAT [REFERENT] {FOOD]]] %%LIKINGLY]• ~, -*x ~ ~" "" 1 {I-MAX {N-MAX I] .... " Iv-MAx iv. 
[V-MIN ~kel [V-MAX~ati-gllll Figure 6: Translation of Juan suele ira casa (d) of the ~ and ~-1 routines axe activated differently if the RLCS associated with the phrasal head contains the :PROMOTE override marker, and steps (b) and (d) of these routines axe activated differently if a phrasal adjunct contains the :DEMOTE override marker. Now using the ~ and G-t routines and the overriding :PROMOTE and :DEMOTE mechanisms, we can show how to account for the thematic divergences of exam- ples (2) and (3) (see figures 6 and 7, respectively). In figure 6, the Spanish RLCS for soler includes the :PROMOTE marker. Thus, steps (c) and (d) of f -1 are overridden: the internal argument ira casa is promoted into the dominating head position [B,o,, GOt.el; and the phrasal head suele is mapped into a modifier position [M ..... HABITUALLY]. By contrast, the English RLCS does not include any special markers. Thus, the G rou- tine activates steps (c) and (d) normally: the dominating head [E,o., GOL.c] is mapped into the phrasal head goes; and the modifier [M ..... HABITUALLY] is mapped into an adjunct position usually. In figure 7, the German RLCS for gem includes the :DEMOTE marker (associated with the variable Y). Thus, steps (b) and (d) of ~-1 are overridden: the phrasal head esse is demoted into a non-logical-subject position [E,,n, EAT]; and the adjunct gem is mapped into the dominating head position Is,,,, BEtide]. By contrast, the English RLCS does not include any special mark- ers. Thus, the G routine activates steps (b) and (d) normally: the dominating head Is,.. BEoI,©] is mapped into the phrasal head like; and the non-logical-subject [E,,n, EAT] is mapped into the internal position eating. 5 SUMMARY This paper has presented a solution to the problem of thematic divergences in machine translation. The so- lution has been implemented in UNITRAN, a bidirec- tional system currently operating on Spanish, English, and German, running in Commonlisp on a Symbolics 3600 series machine. We have seen that the procedures involved are general enough to operate uniformly across different languages and divergence types. Furthermore, the entire space of thematic divergence possibilities is 134 Figure 7: Translation of Ich habe Marie gem covered in this approach without recourse to language- specific routines or transfer rules. In addition to the- matic divergences, the system handles the other diver- gence types shown in figure 1, and it is expected that additional divergence types will be handled by means of equally principled methods. 6 REFERENCES [Brown, 1974] Gretchen Brown, "Some Problems in German to English Machine Translation," MAC Technical Report 142, Massachusetts Institute of Technology, Cambridge, MA, 1974. [Chomsky, 1986] NoRm A. Chomsky, Knowledge of Language: Its Nature, Origin and Use, MIT Press, Cambridge, MA, 1986. {Doff, 1987] Bonnie J. Dorr, "UNITRAN: A Principle-Based Approach to Machine Translation," AI Technical Report 1000, Master of Science thesis, Department Electrical En- gineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, 1987. [Dorr, 1990] Bonnie J. Doff, "Lexical Conceptual Structure and Machine Translation," Ph.D. thesis, Department of Elec- trical Engineering and Computer Science, Massachusetts In- stitute of Technology, Cambridge, MA, 1990. [Goldman, 1974] Nell M. Goldman, "Computer Generation of Natural Language from a Deep Conceptual Base," Ph.D thesis, Computer Science Department, Stanford University, Stanford, CA, 1974. 
[Jackendoff, 1983] Ray S. Jackendoff, Semantics and Cognition, MIT Press, Cambridge, MA, 1983.
[Lytinen & Schank, 1982] Steven Lytinen and Roger Schank, "Representation and Translation," Technical Report 234, Department of Computer Science, Yale University, New Haven, CT, 1982.
[McCord, 1989] Michael C. McCord, "Design of LMT: A Prolog-Based Machine Translation System," Computational Linguistics, 15:1, 33-52, 1989.
[Schank & Abelson, 1977] Roger C. Schank and Robert Abelson, Scripts, Plans, Goals, and Understanding, Lawrence Erlbaum Associates, Inc., Hillsdale, NJ, 1977.
A SYNTACTIC FILTER ON PRONOMINAL ANAPHORA FOR SLOT GRAMMAR

Shalom Lappin and Michael McCord
IBM T.J. Watson Research Center
P.O. Box 704
Yorktown Heights, NY 10598
E-mail: Lappin/[email protected]

ABSTRACT

We propose a syntactic filter for identifying non-coreferential pronoun-NP pairs within a sentence. The filter applies to the output of a Slot Grammar parser and is formulated in terms of the head-argument structures which the parser generates. It handles control and unbounded dependency constructions without empty categories or binding chains, by virtue of the unificational nature of the parser. The filter provides constraints for a discourse semantics system, reducing the search domain to which the inference rules of the system's anaphora resolution component apply.

1. INTRODUCTION

In this paper we present an implemented algorithm which filters intra-sentential relations of referential dependence between pronouns and putative NP antecedents (both full and pronominal NP's) for the syntactic representations provided by an English Slot Grammar parser (McCord 1989b). For each parse of a sentence, the algorithm provides a list of pronoun-NP pairs where referential dependence of the first element on the second is excluded by syntactic constraints. The coverage of the filter has roughly the same extension as conditions B and C of Chomsky's (1981, 1986) binding theory. However, the formulation of the algorithm is significantly different from the conditions of the binding theory, and from proposed implementations of its conditions. In particular, the filter formulates constraints on pronominal anaphora in terms of the head-argument structures provided by Slot Grammar syntactic representations rather than the configurational tree relations, particularly c-command, on which the binding theory relies. As a result, the statements of the algorithm apply straightforwardly, and without special provision, to a wide variety of constructions which recently proposed implementations of the binding theory do not handle without additional devices. Like the Slot Grammar whose output it applies to, the algorithm runs in Prolog, and it is stated in essentially declarative terms.

In Section 2 we give a brief description of Slot Grammar, and the parser we are employing. The syntactic filter is presented in Section 3, first through a statement of six constraints, each of which is sufficient to rule out coreference, then through a detailed description of the algorithm which implements these constraints. We illustrate the algorithm with examples of the lists of non-coreferential pairs which it provides for particular parses. In Section 4 we compare our approach to other proposals for syntactic filtering of pronominal anaphora which have appeared in the literature. We discuss Hobbs' algorithm, and we take up two recent implementations of the binding theory. Finally, in Section 5 we discuss the integration of our filter into other systems of anaphora resolution. We indicate how it can be combined with a VP anaphora algorithm which we have recently completed. We also outline the incorporation of our algorithm into LODUS (Bernth 1989), a system for discourse representation.

2. SLOT GRAMMAR

The original work on Slot Grammar was done around 1976-78 and appeared in (McCord 1980). Recently, a new version (McCord 1989b) was developed in a logic programming framework, in connection with the machine translation system LMT (McCord 1989a,c,d).
Slot Grammar is lexicalist and dependency-oriented. Every phrase has a head word (with a given word sense and morphosyntactic features). The constituents of a phrase besides the head word (also called the modifiers of the head) are obtained by "filling" slots associated with the head. Slots are symbols like subj, obj and iobj representing grammatical relations, and are associated with a word (sense) in two ways. The lexical entry for the word specifies a set of complement slots (corresponding to arguments of the word sense in logical form); and the grammar specifies a set of adjunct slots for each part of speech. A complement slot can be filled at most once, and an adjunct slot can by default be filled any number of times.

The phenomena treated by augmented phrase structure rules in some grammatical systems are treated modularly by several different types of rules in Slot Grammar. The most important type of rule is the (slot) filler rule, which gives conditions (expressed largely through unification) on the filler phrase and its relations to the higher phrase.

Filler rules are stated (normally) without reference to conditions on order among constituents. But there are separately stated ordering rules.¹ Slot/head ordering rules state conditions on the position (left or right) of the slot (filler) relative to the head word. Slot/slot ordering rules place conditions on the relative left-to-right order of (the fillers of) two slots.

A slot is obligatory (not optional) if it must be filled, either in the current phrase or in a raised position through left movement or coordination. Adjunct slots are always optional. Complement slots are optional by default, but they may be specified to be obligatory in a particular lexical entry, or they may be so specified in the grammar by obligatory slot rules. Such rules may be unconditional or be conditional on the characteristics of the higher phrase. They also may specify that a slot is obligatory relative to the filling of another slot. For example, the direct object slot in English may be declared obligatory on the condition that the indirect object slot is filled by a noun phrase.

One aim of Slot Grammar is to develop a powerful language-independent module, a "shell", which can be used together with language-dependent modules, reducing the effort of writing grammars for new languages. The Slot Grammar shell module includes the parser, which is a bottom-up chart parser. It also includes most of the treatment of coordination, unbounded dependencies, controlled subjects, and punctuation. And the shell contains a system for evaluating parses, extending Heidorn's (1982) parse metric, which is used not only for ranking final parses but also for pruning away unlikely partial analyses during parsing, thus reducing the problem of parse space explosion. Parse evaluation expresses preferences for close attachment, for choice of complements over adjuncts, and for parallelism in coordination.

Although the shell contains most of the treatment of the above phenomena (coordination, etc.), a small part of their treatment is necessarily language-dependent.
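As a concrete illustration of the slot machinery and of a language-specific obligatory-slot rule such as the English direct-object rule mentioned above, the following is a minimal sketch; the predicate names (lex_slots/2, obligatory/2, slot_filled/3, phrase_category/2) and clause syntax are our own illustrative inventions, not the actual Slot Grammar rule language.

    % Hypothetical lexical entries pairing a word sense with its
    % complement slot frame (adjunct slots come from the grammar).
    lex_slots(sleep, [subj(n)]).
    lex_slots(give,  [subj(n), obj, iobj]).

    % Hypothetical conditional obligatory-slot rule: the direct
    % object slot is obligatory when the indirect object slot is
    % filled by a noun phrase.  slot_filled/3 and phrase_category/2
    % are assumed helpers over the parse in progress.
    obligatory(obj, Phrase) :-
        slot_filled(iobj, Phrase, Filler),
        phrase_category(Filler, np).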
A (language-specific) grammar can include for instance (1) rules for coordinating feature structures that override the defaults in the shell; (2) declarations of slots (called extraposer slots) that allow left extraposition of other slots out of their fillers; (3) language-specific rules for punctuation that override defaults; and (4) language-specific controls over parse evaluation that override defaults.

Currently, Slot Grammars are being developed for English (ESG) by McCord, for Danish (DSG) by Arendse Bernth, and for German (GSG) by Ulrike Schwall. ESG uses the UDICT lexicon (Byrd 1983, Klavans and Wacholder 1989), having over 60,000 lemmas, with an interface that produces slot frames. The filter algorithm has so far been successfully tested with ESG and GSG. (The adaptation to German was done by Ulrike Schwall.)

The algorithm applies in a second pass to the parse output, so the important thing in the remainder of this section is to describe Slot Grammar syntactic analysis structures. A syntactic structure is a tree; each node of the tree represents a phrase in the sentence and has a unique head word. Formally, a phrase is represented by a term

    phrase(X,H,Sense,Features,SlotFrame,Ext,Mods)

where the components are as follows. (1) X is a logical variable called the marker of the phrase. Unifications of the marker play a crucial role in the filter algorithm. (2) H is an integer representing the position of the head word of the phrase. This integer identifies the phrase uniquely, and is used in the filter algorithm as the way of referring to phrases. (3) Sense is the word sense of the head word. (4) Features is the feature structure of the head word and of the phrase. It is a logic term (not an attribute-value list), which is generally rather sparse in information, showing mainly the part of speech and inflectional features of the head word. (5) SlotFrame is the list of complement slots, each slot being in the internal form slot(Slot,Ob,X), where Slot is the slot name, Ob shows whether it is an obligatory form of Slot, and X is the slot marker. The slot marker is unified (essentially) with the marker of the filler phrase when the slot is filled, even remotely, as in left movement or coordination. Such unifications are important for the filter algorithm. (6) Ext is the list of slots that have been extraposed or raised to the level of the current phrase. (7) The last component Mods represents the modifiers (daughters) of the phrase, and is of the form mods(LMods,RMods), where LMods and RMods are the lists of left modifiers and right modifiers, respectively. Each member of a modifier list is of the form Slot:Phrase, where Slot is a slot and Phrase is a phrase which fills Slot. Modifier lists reflect surface order, and a given slot may appear more than once (if it is an adjunct).

¹The distinction between slot filler rules and ordering constraints parallels the difference between Immediate Dominance Rules and Linear Precedence Rules in GPSG. See Gazdar et al. (1985) for a characterization of ID and LP rules in GPSG. See (McCord 1989b) for more discussion of the relation of Slot Grammar to other systems.

    Who did John say wanted to try to find him?

    subj(n)             who(X2)                noun
    top                 do1(X1,X3,X4)          verb
    subj(n)             John(X3)               noun
    auxcmp(inf(bare))   say(X4,X3,X9,u)        verb
    obj(fin)            want(X9,X2,X2,X12)     verb
    preinf              preinf(X12)            preinf
    comp(en|inf|ing)    try(X12,X2,X13)        verb
    preinf              preinf(X13)            preinf
    obj(inf)            find(X13,X2,X14,u,u)   verb
    obj(fin)            he(X14)                noun

    Figure 1.
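By way of illustration, a phrase term for a simple sentence might look roughly as follows. This is a hand-constructed sketch of the representation just described, not actual ESG output; the feature terms are simplified and the sense names are our own.

    % A hand-built phrase/7 term for "John sleeps".  The subject
    % slot's marker X3 has been unified with the marker of the
    % filler phrase for "John".
    example_phrase(
        phrase(_X1, 2, sleep,
               verb(fin),                       % simplified feature term
               [slot(subj(n), obligatory, X3)], % complement slot frame
               [],                              % no extraposed slots
               mods([subj(n):phrase(X3, 1, john, noun(prop), [], [],
                                    mods([],[]))],
                    []))).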
Thus modifier lists are not attribute-value lists.

In Figure 1, a sample parse tree is shown, displayed by a procedure that uses only one line per node and exhibits tree structure lines on the left. In this display, each line (representing a node) shows (1) the tree connection lines, (2) the slot filled by the node, (3) the word sense predication, and (4) the feature structure. The feature structure is abbreviated here by a display option, showing only the part of speech. The word sense predication consists of the sense name of the head word with the following arguments. The first argument is the marker variable for the phrase (node) itself; it is like an event or state variable for verbs. The remaining arguments are the marker variables of the slots in the complement slot frame (u signifies "unbound"). As can be seen in the display, the complement arguments are unified with the marker variables of the filler complement phrases. Note that in the example the marker X2 of the 'who' phrase is unified with the subject variables of 'want', 'try', and 'find'. (There are also some unifications created by adjunct slot filling, which will not be described here.)

For the operation of the filter algorithm, there is a preliminary step in which pertinent information about the parse tree is represented in a manner more convenient for the algorithm. As indicated above, nodes (phrases) themselves are represented by the word numbers of their head words. Properties of phrases and relations between them are represented by unit clauses (predications) involving these integers (and other data), which are asserted into the Prolog workspace. Because of this "dispersed" representation with a collection of unit clauses, the original phrase structure for the whole tree is first grounded (variables are bound to unique constants) before the unit clauses are created.

As an example of this clausal representation, the clause hasarg(P,X) says that phrase P has X as one of its arguments; i.e., X is the slot marker variable for one of the complement slots of P. For the above sample parse, then, we would get the clauses

    hasarg(5,'X2').  hasarg(5,'X12').

as information about the 'want' node (5). As another example, the clause phmarker(P,X) is added when phrase P has marker X. Thus for the above sample, we would get the unit clause

    phmarker(1,'X2').

An important predicate for the filter algorithm is argm, defined by

    argm(P,Q) <- phmarker(P,X) & hasarg(Q,X).

This says that phrase P is an argument of phrase Q. This includes remote arguments and controlled subjects, because of the unifications of marker variables performed by the Slot Grammar parser. Thus for the above parse, we would get

    argm(1,5).  argm(1,7).  argm(1,9).

showing that 'who' is an argument of 'want', 'try', and 'find'.

3. THE FILTER

    The Filter Algorithm

    A.      nonrefdep(P,Q) <- refpair(P,Q) & ncorefpair(P,Q).
    A.1.    refpair(P,Q) <- pron(P) & noun(Q) & P=/Q.
    B.      ncorefpair(P,Q) <- nonagr(P,Q) & /.
    B.1.    nonagr(P,Q) <- numdif(P,Q) | typedif(P,Q) | persdif(P,Q).
    C.      ncorefpair(P,Q) <- proncom(P,Q) & /.
    C.1.    proncom(P,Q) <-
      a.        argm(P,H) &
      b.        (argm(Q,H) & /
      c.         | ¬pron(Q) &
      d.           cont(Q,H) &
      e.           (¬subclcont(Q,T) | gt(Q,P)) &
      f.           (¬det(Q) | gt(Q,P))).
    C.2.    cont_i(P,Q) <- argm(P,Q) | adjunct(P,Q).
    C.2.1.  cont(P,Q) <- cont_i(P,Q).
    C.2.2.  cont(P,Q) <- cont_i(P,R) & R=/Q & cont(R,Q).
    C.3.    subclcont(P,Q) <- subconj(Q) & cont(P,Q).
    D.      ncorefpair(P,Q) <- prepcom(Q,P) & /.
    D.1.    prepcom(Q,P) <- argm(Q,H) & adjunct(R,H) & prep(R) & argm(P,R).
    E.      ncorefpair(P,Q) <- npcom(P,Q) & /.
    E.1.    npcom(Q,P) <- adjunct(Q,H) & noun(H) &
                (argm(P,H) | adjunct(R,H) & prep(R) & argm(P,R)).
    F.      ncorefpair(P,Q) <- nppcom(P,Q) & /.
    F.1.    nppcom(P,Q) <- adjunct(P,H) & noun(H) & ¬pron(Q) & cont(Q,H).

    Figure 2.

In preparation for stating the six constraints, we adopt the following definitions. The agreement features of an NP are its number, person and gender features. We will say that a phrase P is in the argument domain of a phrase N iff P and N are both arguments of the same head. We will also say that P is in the adjunct domain of N iff N is an argument of a head H, P is the object of a preposition PREP, and PREP is an adjunct of H. P is in the NP domain of N iff N is the determiner of a noun Q and (i) P is an argument of Q, or (ii) P is the object of a preposition PREP and PREP is an adjunct of Q.

The six constraints are as follows. A pronoun P is not coreferential with a noun phrase N if any of the following conditions holds.

I. P and N have incompatible agreement features.
II. P is in the argument domain of N.
III. P is in the adjunct domain of N.
IV. P is an argument of a head H, N is not a pronoun, and N is contained in H.
V. P is in the NP domain of N.
VI. P is the determiner of a noun Q, and N is contained in Q.

The algorithm which implements I-VI defines a predicate nonrefdep(P,Q) which is satisfied by a pair whose first element is a pronoun and whose second element is an NP on which the pronoun cannot be taken as referentially dependent, by virtue of the syntactic relation between them. The main clauses of the algorithm are shown in Figure 2. Rule A specifies that the main goal nonrefdep(P,Q) is satisfied by <P,Q> if this pair is a referential pair (refpair(P,Q)) and a non-coreferential pair (ncorefpair(P,Q)). A.1 defines a refpair <P,Q> as one in which P is a pronoun, Q is a noun (either pronominal or non-pronominal), and P and Q are distinct. Rules B, C, D, E, and F provide a disjunctive statement of the conditions under which the non-coreference goal ncorefpair(P,Q) is satisfied, and so constitute the core of the algorithm. Each of these rules concludes with a cut to prevent unnecessary backtracking which could generate looping.

Rule B, together with B.1, identifies the conditions under which constraint I holds. In the following example sentences, the pairs consisting of the second and the first coindexed expressions in 1a-c (and in 1c also the pair <'I','she'>) satisfy nonrefdep(P,Q) by virtue of rule B.

1a. John_i said that they_i came.
 b. The woman_i said that he_i is funny.
 c. I_i believe that she_i is competent.

The algorithm identifies <'they','John'> as a nonrefdep pair in 1a, which entails that 'they' cannot be taken as coreferential with 'John'. However, (the referent of) 'John' could of course be part of the reference set of 'they', and in suitable discourses LODUS could identify this possibility.

Rule C states that <P,Q> is a non-coreferential pair if it satisfies the proncom(P,Q) predicate. This holds under two conditions, corresponding to disjuncts C.1.a-b and C.1.a,c-f. The first condition specifies that the pronoun P and its putative antecedent Q are both arguments of the same phrasal head, and so implements constraint II. This rules out referential dependence in 2a-b.

2a. Mary_i likes her_i.
 b. She_i likes her_i.
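To see how rules A and C.1.a-b fire on 2a, the following is a minimal, hand-built sketch of the clausal representation for "Mary likes her" together with the relevant query. The facts, and the stub definitions for the agreement tests (which simply fail for this pair), are our own illustration, not actual parser output; the rules of Figure 2 are assumed to be loaded.

    % Hand-built unit clauses for "Mary likes her"
    % (word numbers: Mary=1, likes=2, her=3).
    noun(1).  noun(3).  pron(3).
    phmarker(1, x1).  phmarker(3, x2).
    hasarg(2, x1).    hasarg(2, x2).   % 'likes' takes both as arguments

    % Stub agreement tests, assumed to fail here.
    numdif(_, _)  :- fail.
    typedif(_, _) :- fail.
    persdif(_, _) :- fail.

    % argm/2 as defined in the text.
    argm(P, Q) :- phmarker(P, X), hasarg(Q, X).

    % ?- nonrefdep(3, 1).
    % succeeds via C.1.a-b: argm(3,2) and argm(1,2) both hold, so
    % 'her' cannot be referentially dependent on 'Mary'.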
Given the fact that Slot Grammar unifies the argument and adjunct variables of a head with the phrases which fill these variable positions, it will also exclude coreference in cases of control and unbounded dependency, as in 3a-c.

3a. John_i seems to want to see him_i.
 b. Which man_i did he_i see?
 c. This is the girl_i John said she_i saw.

The second disjunct C.1.a,c-f covers cases in which the pronoun is an argument which is higher up in the head-argument structure of the sentence than a non-pronominal noun. This disjunct corresponds to condition IV. C.2-C.2.2 provide a recursive definition of containment within a phrase. This definition uses the relation of immediate containment, cont_i(P,Q), as the base of the recursion, where cont_i(P,Q) holds if P is either an argument or an adjunct (modifier or determiner) of a head Q. The second disjunct blocks coreference in 4a-c.

4a. He_i believes that the man_i is amusing.
 b. Who_i did he_i say John kissed?
 c. This is the man_i he_i said John_i wrote about.

The wh-phrase in 4b and the head noun of the relative clause in 4c unify with variables in positions contained within the phrase (more precisely, the verb which heads the phrase) of which the pronoun is an argument. Therefore, the algorithm identifies these nouns as impossible antecedents of the pronoun.

The two final conditions of the second disjunct, C.1.e and C.1.f, describe cases in which the antecedent of a pronoun is contained in a preceding adjunct clause, and cases in which the antecedent is the determiner of an NP which precedes a pronoun, respectively. These clauses prevent such structures from satisfying the non-coreference goal, and so permit referential dependence in 5a-b.

5a. After John_i sang, he_i danced.
 b. John_i's mother likes him_i.

Notice that because a determiner is an adjunct of an NP and not an argument of the verb of which the NP is an argument, rule C.1 also permits coreference in 6.

6. His_i mother likes John_i.

However, C.1.a,c-e correctly excludes referential dependence in 7, where the pronoun is an argument which is higher than a noun adjunct.

7. He_i likes John_i's mother.

The algorithm permits backwards anaphora in cases like 8, where the pronoun is not an argument of a phrase H to which its antecedent Q bears the cont(Q,H) relation.

8. After he_i sang, John_i danced.

D-D.1 block coreference between an NP which is the argument of a head H, and a pronoun that is the object of a preposition heading a PP adjunct of H, as in 9a-c. These rules implement constraint III.

9a. Sam_i spoke about him_i.
 b. She_i sat near her_i.
 c. Who_i did he_i ask for?

Finally, E-E.1 and F realize conditions V and VI, respectively, in NP internal non-coreference cases like 10a-c.

10a. His_i portrait of John_i is interesting.
  b. John_i's portrait of him_i is interesting.
  c. His_i description of the portrait by John_i is interesting.

Let us look at three examples of actual lists of pairs satisfying the nonrefdep predicate which the algorithm generates for particular parse trees of Slot Grammar. The items in each pair are identified by their words and word numbers, corresponding to their sequential position in the string. When the sentence "Who did John say wanted to try to find him?" is given to the system, the parse is as shown in Figure 1 above, and the output of the filter is:

    Noncoref pairs: he.10 - who.1
Thus < "him','who' > is identified as a non-core- ferential pair, while coreference between 'John' and 'him is allowed. In Figure 3, the algorithm correctly lists < 'him ,'Bill > (6-3) as a non-coreferential pair, while permitting 'him' to take "John' as an ante- cedent. In Fi~c~ure 4, it correctly excludes corefer- ence between him and 'John' (he.6-John.1), and allows him to be referentially dependent upon "Bill'. John expected Bill to impress him. I I subj(n) John(X3) noun top expect(Xl,X3,X4,X5) verb obj Bill(X4) noun preinf preinf(X5) preinf comp(inf) impress(XS,X4,X6) verb obj he(X6) noun Noncoref pairs : he.6 - Bill.3 Coreference analysis time = 5 msec. complement clause subiect, tlowever, in Figure 4, the infinitival clause IS an adjunct of 'lectured' mid requires matrix subject control. 4. EXISTING PROPOSALS FOR CON- STRAINING PRONOMINAL ANAPHORA We will discuss three suggestions which have been made in the computational literature for syntactically constraining the relationship be- tween a pronoun and its set of possible antece. dents intra-sententially. The first is Hobbs (1978) Algorithm, which performs a breadth-first, left-to-right search of the tree containing the pro- noun for possible antecedents. The search is re- stricted to paths above the first NP or S node containing the pronoun, and so the pronoun cannot be boundby an antecedent in its minimal governing category. If no antecedents are found within the same tree as the pronoun, the trees of the previous sentences in the text are searched in order of proximity. There are two main .difficul- ties with this approach. First, it cannot be ap- plied to cases of control in infinitival clauses, like those given in Figures 3 and 4, or to unbounded dependencies, like those in Figure 1 and in ex- amples 3b-c and 4b-c, without significant modifi- cation. Figure 3. John lectured Bill to impress him. ! subj(n) John(X3) noun • top lecture(Xl,X3,X4) verb [ obj Bill(X4) noun ~ preinf preinf(X5) preinf vnfvp impress(X5,X3,X6) verb obj he(X6) noun Noncoref pairs: he.6 - John.l Coreference analysis time = 5 msec. Figure 4. It makes this distinction by virtue of the differ- ences between the roles of the two infinitival clauses in these sentences. In Fi~gtjre 3, the infin- itival clause is a complement o1 "expected, and this verb is marked for object control of the Second, the algorithm is inefficient in design and violates modularity by virtue of the fact that it computes both intra-sentential constraints on pronoriainal anaphora and inter-sentential ante- cedent possibilities each time it is invoked for a new pronoun in a tree. Our system computes the set ofpronoun-NP pairs for which coreference is syntactically excluded in a single pass on a parse tree. This set provides the input to a semantic- pragmatic discourse module which determines anaphora by inference and preference rules. The other two proposals are presented in Correa (1988), and in lngria and Stallard (1989). Both of these models are implementations oI Chomsky's Binding theory which make use of Government Binding type parsers. They employ essentially the same strategy. This involves com- puting the set of possible antecedents of an ana- phor as the NP s which c-command the anaphor within a minimal domain (its minimal govet:ning category). 2 The minimal domain of an NP is characterized as the first S, or the first NP without a possessive subiect, in which it is contained. 
The possible intra-sentential antecedents of a pronoun are the set of NP's in the tree which are not included within this minimal domain.

²See Reinhart (1976) and (1983) for alternative definitions of c-command, and discussions of the role of this relation in determining the possibilities of anaphora. See Lappin (1985) for additional discussion of the connection between c-command and distinct varieties of pronominal anaphora. See Chomsky (1981), (1986a) and (1986b) for alternative definitions of the notions 'government' and 'minimal governing category'.

This approach does sustain modularity by computing the set of possible antecedents for all pronouns within a tree in a single pass operation, prior to the application of inter-sentential search procedures. The main difficulty with the model is that because constraints on pronominal anaphora are stated entirely in terms of configurational relations of tree geometry, specifically, in terms of c-command and minimal dominating S and NP domains, control and unbounded dependency structures can only be handled by additional and fairly complex devices. It is necessary to generate empty categories for PRO and trace in appropriate positions in parse trees. Additional algorithms must be invoked to specify the chains of control (A-binding) for PRO, and operator (A-bar) binding for trace, in order to link these categories to the constituents which bind them. The algorithm which computes possible antecedents for anaphors and pronouns must be formulated so that it identifies the head of such a chain as non-coreferential with a pronoun or anaphor (in the sense of the binding theory) if any element of the chain is excluded as a possible antecedent.

Neither empty categories nor binding chains are required in our system. In Slot Grammar parse representations, wh-phrases, heads of relative clauses, and NP's which control the subjects of infinitival clauses are unified with the variables corresponding to the roles they bind in argument positions. Therefore, the clauses of the algorithm apply to these constructions directly, and without additional devices or stipulations.³

5. THE INTEGRATION OF THE FILTER INTO OTHER SYSTEMS OF ANAPHORA RESOLUTION

We have recently implemented an algorithm for the interpretation of intra-sentential VP anaphora structures like those in 11a-c.

11a. John arrived, and Mary did too.
  b. Bill read every book which Sam said he did.
  c. Max wrote a letter to Bill before Mary did to John.

The VP anaphora algorithm generates a second tree which copies the antecedent verb into the position of the head of the elliptical VP. It also lists the new arguments and adjuncts which the copied verb inherits from its antecedent. We have integrated our filter on pronominal anaphora into this algorithm, so that the filter applies to the interpreted trees which the algorithm generates. Consider 12.

12. John likes him, and Bill does too.

If the filter applies only to the parse of 12, it will identify only <'him','John'> as a non-coreferential pair, given that the pair <'him','Bill'> doesn't satisfy any of the conditions of the filter algorithm. However, when the filter is applied to the interpreted VP anaphora tree of 12, the filter algorithm correctly identifies both pronoun-NP pairs, as shown in the VP output of the algorithm for 12 given in Figure 5.

    John likes him, and Bill does too.

    Antecedent Verb-Elliptical Verb Pairs:  like.2 - do1.7
    Elliptical Verb-New Argument Pairs:     like.7 - he.3

    Interpreted VP anaphora tree:
    subj    John(X9)            noun
    lconj   like(X8,X9,X10)     verb
    obj     he(X10)             noun
    top     and(X1,X8,X11)      verb
    subj    Bill(X12)           noun
    rconj   like(X11,X12,X10)   verb
    vadv    too(X11)            adv

    Non-Coreferential Pronoun-NP Pairs: he.3 - John.1, he.3 - Bill.6
    Coreference analysis time = 70 msec.

    Figure 5.

Our filter also provides input to a discourse understanding system, LODUS, designed and implemented by A. Bernth, and described in (Bernth 1988, 1989). LODUS creates a single discourse structure from the analyses of the Slot Grammar parser for several sentences. It interprets each sentence analysis in the context consisting of the discourse processed so far, together with domain knowledge, and it then embeds it into the discourse structure. The process of interpretation consists in applying rules of inference which encode semantic and pragmatic (knowledge-based) relations among lexical items, and discourse structures. The filter reduces the set of possible antecedents which the anaphora resolution component of LODUS considers for pronouns. For example, this component will not consider 'the cat' or 'that' as possible antecedents for either occurrence of 'it' in the second sentence in 13, but only 'the mouse' in the first sentence of this discourse. This is due to the fact that our filter lists the excluded pairs together with the parse tree of the second sentence.

13. The mouse ran in. The cat that saw it ate it.

Thus, the filter significantly reduces the search space which the anaphora resolution component of LODUS must process. The interface between our filter and LODUS embodies the sort of modular interaction of syntactic and semantic-pragmatic components which we see as important to the successful operation and efficiency of any anaphora resolution system.

³In fact, a more complicated algorithm with approximately the same coverage as our filter can be formulated for a parser which produces configurational surface trees without empty categories and binding chains, if the parser provides deep grammatical roles at some level of representation. The first author has implemented such an algorithm for the PEG parser. For a general description of PEG, see Jensen (1986). The current version of PEG provides information on deep grammatical roles by means of second pass rules which apply to the initial parse record structure. The algorithm employs both c-command and reference to deep grammatical roles.

ACKNOWLEDGMENTS

We are grateful to Arendse Bernth, Martin Chodorow, and Wlodek Zadrozny for helpful comments and advice on proposals contained in this paper.

REFERENCES

Bernth, A. (1988) Computational Discourse Semantics, Doctoral Dissertation, U. Copenhagen and IBM Research.
Bernth, A. (1989) "Discourse Understanding in Logic", Proc. North American Conference on Logic Programming, pp. 755-771, MIT Press.
Byrd, R. J. (1983) "Word Formation in Natural Language Processing Systems," Proceedings of IJCAI-VIII, pp. 704-706.
Chomsky, N. (1981) Lectures on Government and Binding, Foris, Dordrecht.
Chomsky, N. (1986a) Knowledge of Language: Its Nature, Origin, and Use, Praeger, New York.
Chomsky, N. (1986b) Barriers, MIT Press, Cambridge, Mass.
Correa, N. (1988) "A Binding Rule for Government-Binding Parsing", COLING '88, Budapest, pp. 123-129.
Gazdar, G., E. Klein, G. Pullum, and I. Sag (1985) Generalized Phrase Structure Grammar, Blackwell, Oxford.
Heidorn, G. E. (1982) "Experience with an Easily Computed Metric for Ranking Alternative Parses," Proceedings of the Annual ACL Meeting, 1982, pp. 82-84.
Hobbs, J. (1978) "Resolving Pronoun References", Lingua 44, pp. 311-338.
Ingria, R. and D. Stallard (1989) "A Computational Mechanism for Pronominal Reference", Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, Vancouver, pp. 262-271.
Jensen, K. (1986) "PEG: A Broad-Coverage Computational Syntax of English," Technical Report, IBM T.J. Watson Research Center, Yorktown Heights, NY.
Klavans, J. L. and Wacholder, N. (1989) "Documentation of Features and Attributes in UDICT," Research Report RC14251, IBM T.J. Watson Research Center, Yorktown Heights, NY.
Lappin, S. (1985) "Pronominal Binding and Coreference", Theoretical Linguistics 12, pp. 241-263.
McCord, M. C. (1980) "Slot Grammars," Computational Linguistics, vol. 6, pp. 31-43.
McCord, M. C. (1989a) "Design of LMT: A Prolog-based Machine Translation System," Computational Linguistics, vol. 15, pp. 33-52.
McCord, M. C. (1989b) "A New Version of Slot Grammar," Research Report RC 14506, IBM Research Division, Yorktown Heights, NY 10598.
McCord, M. C. (1989c) "A New Version of the Machine Translation System LMT," to appear in Proc. International Scientific Symposium on Natural Language and Logic, Springer Lecture Notes in Computer Science, and in J. Literary and Linguistic Computing.
McCord, M. C. (1989d) "LMT," Proceedings of MT Summit II, pp. 94-99, Deutsche Gesellschaft für Dokumentation, Frankfurt.
Reinhart, T. (1976) The Syntactic Domain of Anaphora, Doctoral Dissertation, MIT, Cambridge, Mass.
Reinhart, T. (1983) Anaphora, Croom Helm, London.
ACQUIRING CORE MEANINGS OF WORDS, REPRESENTED AS JACKENDOFF-STYLE CONCEPTUAL STRUCTURES, FROM CORRELATED STREAMS OF LINGUISTIC AND NON-LINGUISTIC INPUT

Jeffrey Mark Siskind*
M.I.T. Artificial Intelligence Laboratory
545 Technology Square, Room NE43-800b
Cambridge MA 02139
617/253-5659
internet: [email protected]

Abstract

This paper describes an operational system which can acquire the core meanings of words without any prior knowledge of either the category or meaning of any words it encounters. The system is given as input a description of sequences of scenes, along with sentences which describe the [EVENTS] taking place as those scenes unfold, and produces as output a lexicon, consisting of the category and meaning of each word in the input, that allows the sentences to describe the [EVENTS]. It is argued that each of the three main components of the system, the parser, the linker and the inference component, makes only linguistically and cognitively plausible assumptions about the innate knowledge needed to support tractable learning. The paper discusses the theory underlying the system, the representations and algorithms used in the implementation, the semantic constraints which support the heuristics necessary to achieve tractable learning, the limitations of the current theory, and the implications of this work for language acquisition research.

1 Introduction

Several natural language systems have been reported which learn the meanings of new words [5, 7, 1, 16, 17, 13, 14]. Many of these systems (in particular [5, 7, 1]) learn the new meanings based upon expectations arising from the morphological, syntactic, semantic and pragmatic context of the unknown word in the text being processed. For example, if such a system encounters the sentence "I woke up yesterday, turned off my alarm clock, took a shower, and cooked myself two grimps for breakfast" [5], it might conclude that grimps is a noun which represents a type of food. Such systems succeed in learning new words only when the context offers sufficient constraint to narrow down the possible meanings to make the acquisition unambiguous. Accordingly, such a theory accounts only for the type of learning which arises when an adult encounters an unknown word while reading a text comprised mostly of known words. It cannot explain the kind of learning which a young child performs during the early stages of language acquisition, when it starts out knowing the meanings of few if any words.

In this paper, I present a new theory which can account for the language learning which a child exhibits. In this theory, the learner is presented with a training session consisting of a sequence of scenarios. Each scenario contains both linguistic and non-linguistic (i.e. visual) information. The non-linguistic information for each scenario consists of a time-ordered sequence of scenes, each depicted via a conjunction of true and negated atomic formulas describing that scene. Likewise, the linguistic information for each scenario consists of a time-ordered sequence of sentences. Initially, the learner knows nothing about the words comprising the sentences in the training session, neither their lexical category nor their meaning.

*Supported by an AT&T Bell Laboratories Ph.D. scholarship. Part of this research was performed while the author was visiting Xerox PARC as a research intern and as a consultant.
From the two correlated sources of input, the linguistic and the non-linguistic, the learner can infer the set of possible lexicons (i.e. the possible categories and meanings of the words in the linguistic input) which allow the linguistic input to describe or account for the non-linguistic input. This inference is accomplished by applying a compositional semantics linking rule in reverse and then performing some constraint satisfaction.

This theory has been implemented in a working computer program. The program succeeds and is tractable because of a small number of judicious semantic constraints and a small number of heuristics which order and eliminate much of the search. This paper explains the general theory as well as the implementation details which make it work. In addition, it discusses some limitations in the current theory, among which is one which prevents it from converging on a single definition of some words.

2 Background

In [15], Rayner et al. describe a system which can determine the lexical category of each word in a corpus of sentences. They observe that while in the original formulation, a definite clause grammar [12] normally defines a two-argument predicate parser(Sentence,Tree), with the lexicon represented directly in the clauses of the grammar, an alternative formulation would allow the lexicon to be represented explicitly as an additional argument to the parser relation, yielding a three-argument predicate parser(Sentence,Tree,Lexicon). This three-argument relation can be used to learn lexical category information by the technique summarized in Figure 1. Here, a query is formed containing a conjunction of calls to the parser, one for each sentence in the corpus. All of the calls share a common Lexicon, while in each call, the Tree is left unbound. The Lexicon is initialized with an entry for each word appearing in the corpus, where the lexical category of each such initial entry is left unbound. The purpose of this initial lexicon is to enforce the constraint that each word in the corpus be assigned a unique lexical category. This restriction, the monosemy constraint, will play an important role in the work we describe later. The result of issuing the query in the above example is a lexicon, with instantiated lexical categories for each lexical entry, such that with that lexicon, all of the words in the corpus can be parsed. Note that there could be several such lexicons, each produced by backtracking.

In this paper we extend the results of Rayner et al. to the learning of representations of word meanings in addition to lexical category information. Our theory is implemented in an operational computer program called MAIMRA.¹ Unlike Rayner et al.'s system, which is given only a corpus of sentences as input, MAIMRA is given two correlated streams of input, one linguistic and one non-linguistic, the latter modeling the visual context in which the former were uttered. This is intended to more closely model the kind of learning exhibited by a child with no prior lexical knowledge. The task faced by MAIMRA is illustrated in Figure 2. MAIMRA does not attempt to solve the perception problem; both the linguistic and non-linguistic input are presented in symbolic form to MAIMRA. Thus, the session given in Figure 2 would be presented to MAIMRA as the following two input pairs:

    (BE(cup, AT(John)) ∧ ¬BE(cup, AT(Mary)));
    (BE(cup, AT(Mary)) ∧ ¬BE(cup, AT(John)))
        The cup slid from John to Mary.
    (BE(cup, AT(Mary)) ∧ ¬BE(cup, AT(Bill)));
    (BE(cup, AT(Bill)) ∧ ¬BE(cup, AT(Mary)))
        The cup slid from Mary to Bill.

MAIMRA attempts to infer both category and meaning information from input such as this.

3 Architecture

MAIMRA operates as a collection of modules which mutually constrain various mental representations. The organization of these modules is illustrated in Figure 3. Conceptually, each of the modules is non-directional; each module simply constrains the values which may appear concurrently on each of its inputs. Thus the parser enforces a relation between a time-ordered sequence of sentences and a corresponding time-ordered sequence of syntactic structures or parse trees which are licensed by the lexical category information from a lexicon. The linker imposes compositional semantics on the parse trees produced by the parser, relating the meanings of individual words found in the lexicon to the meanings of entire utterances, through the mediation of the syntactic structures consistent with the parser. Finally, the inference component relates a time-ordered sequence of observations from the non-linguistic input to a time-ordered sequence of semantic structures which in some sense explain the non-linguistic input.

¹MAIMRA is the Aramaic word for word.
This last alternative, corresponding to language acquisition, is what interests us here. Of the five mental representations used by MAIMRA, only three are externally visible, namely the linguistic input, the non-linguistic input and the lexicon. Syntactic and semantic structures exist only internal to MAIMRA and are not externally visible. When using the cognitive architecture from Figure 3 for learning, the values of two of the mental rep- resentations, namely the sentences and the observa- tions, are deterministic, since they are fixed as input. The remaining three representations may be nonde- terministic; there may be multiple lexicons, syntac- tic structure sequences and semantic structure se- quences which are consistent with the fixed input. In general, each of the three modules alone provides only limited constraint on the possible values for each of the mental representations. Thus taken alone, sig- nificant nondeterminism is introduced by each mod- ule in isolation. Taken together however, the mod- ules offer much greater constraint on the mutually consistent values for the mental representations, thus reducing the amount of nondeterminism. Much of the success of MAIMRA hinges on efficient ways of representing this nondeterminism. Conceptually, MAIMRA could have been imple- mented using techniques similar to Rayner et. al.'s system. Such a naive implementation would directly reflect the architecture given in Figure 3 and is il- lustrated in Figure 4. The predicate aaimra would represent the conjunction of constraints introduced by the parser, linker and in:ference modules, ul- timately constraining the mutually consistent val- ues for sentence and observation sequences and the lexicon. Learning a lexicon would be accomplished by forming a conjunction of queries to maimra, one for each scenario, where a single Lexicon is shared among the conjoined queries. This lexi- con is a list of lexical entries, each of the form entry(Word,Category,Meaning). The monosemy constraint is enforced by initializing the Lexicon to contain a single entry for each word, each entry hav- ing unbound Category and Heaning slots. The re- sult of processing such a query would be bindings for those Category and Heaning slots which allow the Sentences to explain the Observations. The naive implementation is too inefficient to be practical. This inefficiency results from two sources: inefficient representation of nondeterministic values and non-directional computation. Nondeterministic mental representations are expressed in the naive im- plementation via backtracking. Expressing nonde- terminism this way requires that substructure shared across different alternatives for a mental representa- tion be multiplied out. For example, if MAIMRA is given as input, a sequence of two sentences $1; S~, where the first sentence has n parses and the sec- ond m parses, then there would be m x n distinct values for the parse tree sequence produced by the parser for this sentence sequence. Each such parse tree sequence would be represented as a distinct backtrack possibility by the naive implementation. The actual implementation instead represents this nondeterminism explicitly as AND/OR trees and ad- ditionally factors out much of the shared common substructure to reduce the size of the mental rep- resentations and the time needed to process them. As noted previously, the individual modules them- selves offer little constraint on the mental represen- tations. 
A given sentence sequence corresponds to many parse tree sequences which in turn corresponds to an even greater number of semantic structure se- quences. Most of these are filtered out, only at the end by the inference component, because they do not correspond to the non-linguistic input. Rather then have these modules operate as non-directed sets of constraints, direction-specific algorithms are used which are tailored to producing the factored mental representations in an efficient order. First, the in- ference component is called to produce all semantic structure sequences which correspond to the observa- tion sequence. Then, the parser is called to produce 146 maiDra (Sentences, Lexicon, Observations ) : - parser (Sentences, Synt act icStructures, Lexicon), linker (Trees, ConceptualStructures, Lexicon), inference (ConceptualStructures, Observat ions). 7- Lexicon - [entry(the,_,_), entry(cup .... ), entry (slid .... ), entry(from .... ), entry (john .... ), entry (to .... ) , entry (mary .... ), entry(bill .... )], mainLra( [ [the, cup, slid, from, john, to ,mary] ], Lexicon, be (cup, at ( j ohn) ) R'be ( cup (at (mary)) ) : be (cup, at (mary) ) R'be (cup (at (john) ) ) ), maimra ( [ [the, cup, slid, from,mary, to ,bill] ], Lexicon, be ( cup, at (mary)) R-be (cup (at (bill)) ) ; be (cup, at (bill)) R-be (cup (at (mary) ) ) ). =~ Lexicon - [entry (the, det, noSemant ics), entry (cup, n, cup), entry(slid,v,go(x, [from(y) ,to(z)]), entry (from, p, at (x)), entry(john,n, j ohn), entry (to ,p, at (x)), entry (mary,n, mary), entry(bill,n,bill)]. Figure 4: A naive implementation of the cognitive architecture from Figure 3 using techniques similar to those used by Rayner et. al. in [15]. all syntactic structure sequences which correspond to the sentence sequence. Finally, the linking com- ponent is run in reverse to produce meanings of lex- ical items by correlating the syntactic and semantic structure sequences previously produced. The de- tails of the factored representation, and the algo- rithms used to create it, will be discussed in Sec- tion 5. Several of the mental representations used by MAIMRA require a method for representing semantic information. We have chosen Jackendoff's theory of conceptual structure, presented in [6], as our model for semantic representation. It should be stressed that although we represent conceptual structure via a decomposition into primitives much in the same way as does Schank[18], unlike both Schank and Jackendoff, we do not claim that any particular such decompositional theory is adequate as a basis for ex- pressing the entire range of human thought and the meanings of even most words in the lexicon. Clearly, much of human experience is well beyond formaliza- tion within the current state of the art in knowledge representation. We are only concerned with repre- senting and learning the meanings of words describ- ing simple spatial movements of objects within the visual field of the learner. For this limited task, a primitive decompositional theory such as Jackend- off's seems adequate. Conceptual structures appear within three of the mental representations used by MAIMrtA. First, the semantic structures produced by the linker, as mean- ings of entire utterances, are represented as either conceptual structure [STATE] or [EVENT] descrip- tions. Second, the observation sequence comprising the non-linguistic input is represented as a conjunc- tion of true and negated [STATE] descriptions. Only [STATE] descriptions appear in the observation se- quence. 
It is the function of the inference component to infer the possible [EVENT] descriptions which account for the observed [STATE] sequences. Finally, meaning components of lexical entries are represented as fragments of conceptual structure which contain variables. The conceptual structure fragments are combined by the linker, filling in the variables with other fragments, to produce the variable-free conceptual structures representing the meanings of whole utterances from the meanings of their constituent words.

4 Learning Constraints

Each of the three modules implements some linguistic or cognitive theory, and accordingly makes some assumptions about what knowledge is innate and what can be learned. Additionally, each module currently implements only a simple theory and thus has limitations on the linguistic and cognitive phenomena that it can account for. This section discusses the innateness assumptions and limitations of each
It's most severe limitation is a lack of subcategorization; the grammar allows nouns, verbs and prepositions to take any number of com- plements of any kind. This causes the grammar to severely overgenerate and results in a high degree of non-determinism in the representation of syntactic structure. It is interesting that despite the use of a highly ambiguous grammar, the combination of the parser with the linker and inference component, to- gether with the non-linguistic context, provide suffi- cient constraint for the system to learn words quickly with few training scenarios. This gives evidence that many of the constraints normally assumed to be im- posed by syntax, actually result from the interplay of multiple modules in a broad cognitive system. 4.2 The Linker The linking component of MAIMRA implements a single linking rule which is assumed to be innate. This rule is best illustrated by way of the exam- ple given in Figure 6. Linking proceeds in a bottom up fashion from the leaves of the parse tree towards its root. Each node in the parse tree is annotated with a fragment of conceptual structure. The anno- tation of leaf nodes comes from the meaning entry for that word in the lexicon. Every non-leaf node has a distinguished daughter called the head. Knowledge of which daughter node is the head for any given phrasal category is assumed to be innate. For the grammar used by MAIMRA, this information is indi- cated in Figure 5 by the categories enclosed in boxes. The annotation of a non-leaf node is formed by copy- ing the annotation of its head daughter node, which may contain variables, and filling some of its variable slots with the annotation of the remaining non-head daughters. Note that this is a nondeterministic pro- cess; there is no stipulation of which variables get linked to which complements. Because of this non- determinism, there can be many linkings associated 148 with any given lexicon and parse tree. In addition to this linking ambiguity, existence of multiple lexi- cal entries with different meanings for the same word can cause meaning ambiguity. A given variable may appear multiple times in a fragment of conceptual structure. The linking rule stipulates that when a variable is linked to an argu- ment, all instances of the same variable get linked to that argument as well. Additionally, the linking rule maintains the constraint that the annotation of the root node, as well as any node which is a sister to a head, must be variable free. Linkings which violate this constraint are discarded. There must be at least as many distinct variables in the conceptual struc- ture annotating the head as there are sisters of the head. Again, if there are insufficient variables in the head the partial linking is discarded. There may be more, however, which means that the annotation of the parent will contain variables. This is acceptable if the parent is not itself a sister to a head. MAIMRA imposes two additional constraints on the linking process. First, meanings of lexical items must have some semantic content; they can not be simply a variable. Second, the functor of a con- ceptual structure fragment can not be a variable. In other words, it is not possible to have a frag- ment FROM(z(John)) which would link with AT to produce FROM(AT(John)). These constraints help reduce the space of possible lexicons and sup- port search pruning heuristics which make learning faster. In summary, the linking component makes use of six pieces of knowledge which are assumed to be in- nate. 1. 
1. The linking rule.
2. The head category associated with each phrasal category.
3. The requirement that the root semantic structure be variable-free.
4. The requirement that conceptual structure fragments associated with sisters of heads be variable-free.
5. The requirement that no lexical item have empty semantics.
6. The requirement that no conceptual structure fragment contain variable functors.

There are at least two limitations in the theory of linking discussed above. First, there is no attempt to give an adequate semantics for the categories DET, AUX and COMP. Currently, the linker assumes that nodes labeled with these categories have no conceptual structure annotation. Furthermore, DET, AUX and COMP nodes which are sisters to a head are not linked to any variable in the conceptual structure annotating the head. Second, while the above linking rule can account for predication, it cannot account for the semantics of adjuncts. This shortcoming results not just from limitations in the linking rule but also from the fact that Jackendoff's conceptual structure is unable to represent adjunct information.

4.3 The Inference Component

The inference component imposes the constraint that the linguistic input must "explain" the non-linguistic input. This notion of explanation is assumed to be innate and comprises four principles. First, each sentence must describe some subsequence of scenes. Everything the teacher says must be true in the current non-linguistic context of the learner. The teacher cannot say something which is either false or unrelated to the visual field of the learner. Second, while the teacher is constrained to making only true statements about the visual field of the learner, the teacher is not required to state everything which is true; some non-linguistic data may go undescribed. Third, the order of the linguistic description must match the order of occurrence of the non-linguistic [EVENTS]. This is necessary because the language fragment handled by MAIMRA does not support tense and aspect. It also adds substantial constraint to the learning process. Finally, sentences must describe non-overlapping scene sequences. Of these principles, the first two seem very reasonable. The third is in accordance with the evidence that children acquire tense and aspect later in the language learning process. Only the fourth principle is questionable. The motivation for the fourth principle is that it enables the use of the inference algorithm discussed in Section 5. More recent work, beyond the scope of this paper, suggests using a different inference algorithm which does not require this principle.

The above four learning principles make use of the notion of a sentence "describing" a sequence of scenes. The notion of description is expressed via the set of inference rules given in Figure 7. Each rule enables the inference of the [EVENT] or [STATE] description on its right-hand side from a sequence of [STATE] descriptions which match the pattern on its left-hand side. For example, Rule 1 states that if there is a sequence of scenes which can be divided into two concatenated subsequences of scenes, such that each subsequence contains at least one scene, and in every scene in that first subsequence, x is at
For example, Rule 1 states that if there is a sequence of scenes which can be divided into two concatenated subsequences of scenes, such that each subsequence contains at least one scene, and in every scene in that first subsequence, x is at 149 NP cup DET N cup I The cup S GO(cup, [FROM(AT(John)), TO(AT(Mary))]) VP GO(z, [FROM(AT(John)), TO(AT(Mary))I) V PP PP GO(x, [y, z]) FROM(AT(John)) TO(AT(Mary)) P NP P NP slid FROM(AT(x)) John TO(AT(x)) Mary I I I I N N from John to Mary • I I John Mary Figure 6: An example of the linking rule used by MAIMRA showing the derivation of conceptual structure for the sentence The cup slid from John to Mary from the conceptual structure meanings of the individual words, along with a syntactic structure for the sentence. y and not at z, while in every scene in the second subsequence, x is at z but not at y, then we can de- scribe that entire sequence of scenes by saying that x went on a path from y to z. This rule does not stip- ulate that other things can't be true in those scenes embodying an [EVENT] of type GO, just that at a minimum, the conditions on the right hand side must hold over that scene sequence. In general, any given observation may entail multiple descriptions, each describing some subsequence of scenes which may overlap with other descriptions. MAIMRA currently assumes that these inference rules are innate. This seems tenable as these rules are very low level and are probably implemented by the vision system. Nonetheless, current work is focus- ing on removing the innateness requirement of these rules from the inference component. One severe limitation of the current set of inference rules is the lack of rules for describing the causality incorporated in the CAUSE and LET primitive con- ceptual functions. One method we have considered is to use rules like: CAUSE(w, GO(x, [FROM(y), TO(z)])) (BE(w, y) A BE(x, y) A -,BE(x, z))+; (BE(x, z) A -~BE(x, y))+. This states that w caused z to move from y to z if w was at the same location y, as x was, at the start of the motion. This is clearly unsatisfactory. One would like to incorporate a more accurate notion of causality such as that discussed in [9]. Unfortunately, it seems that Jackendoff's conceptual structures are not expressive enough to support the more complex notions of causality. This is another area for future work. 5 Implementation As mentioned previously, MAIMRA uses directed al- gorithms, rather than non-directed constraint pro- cessing, to produce a lexicon. When processing a scenario, MAIMRA first applies the inference compo- nent to the non-linguistic input to produce semantic structures. Then, it applies the parser to the linguis- tic input to produce syntactic structures. Finally, it applies the linking component in reverse, to both the syntactic structures and semantic structures, to produce a lexicon as output. This process is best illustrated by way of an example. 
GO(x, [FROM(y), TO(z)])     ← (BE(x, y) ∧ ¬BE(x, z))+; (BE(x, z) ∧ ¬BE(x, y))+   (1)
GO(x, FROM(y))              ← (BE(x, y) ∧ ¬BE(x, z))+; (BE(x, z) ∧ ¬BE(x, y))+   (2)
GO(x, TO(z))                ← (BE(x, y) ∧ ¬BE(x, z))+; (BE(x, z) ∧ ¬BE(x, y))+   (3)
GO(x, [ ])                  ← (BE(x, y) ∧ ¬BE(x, z))+; (BE(x, z) ∧ ¬BE(x, y))+   (4)
STAY(x, y)                  ← BE(x, y); (BE(x, y))+                               (5)
STAY(x, [ ])                ← BE(x, y); (BE(x, y))+                               (6)
GOExt(x, [FROM(y), TO(z)])  ← (BE(x, y) ∧ BE(x, z) ∧ y ≠ z)+                      (7)
GOExt(x, FROM(y))           ← (BE(x, y) ∧ BE(x, z) ∧ y ≠ z)+                      (8)
GOExt(x, TO(z))             ← (BE(x, y) ∧ BE(x, z) ∧ y ≠ z)+                      (9)
BE(x, y)                    ← BE(x, y)+                                           (10)
ORIENT(x, [FROM(y), TO(z)]) ← ORIENT(x, [FROM(y), TO(z)])+                        (11)
ORIENT(x, FROM(y))          ← (ORIENT(x, [FROM(y), TO(z)]) ∨ ORIENT(x, FROM(y)))+ (12)
ORIENT(x, TO(y))            ← (ORIENT(x, [FROM(y), TO(z)]) ∨ ORIENT(x, TO(y)))+   (13)

Figure 7: The inference rules used by the inference component of MAIMRA to infer [EVENTS] from [STATES].

Consider the following input scenario.

(BE(cup, AT(John)));
(BE(cup, AT(Mary)) ∧ ¬BE(cup, AT(John)));
(BE(cup, AT(Mary)));
(BE(cup, AT(Bill)) ∧ ¬BE(cup, AT(Mary)));
The cup slid from John to Mary.;
The cup slid from Mary to Bill.

This scenario contains four scenes and two sentences. First, frame axioms are applied to the scene sequence, yielding a sequence of scene descriptions containing all of the true [STATE] descriptions pertaining to those scenes, and only those true [STATE] descriptions.

BE(cup, AT(John)); BE(cup, AT(Mary)); BE(cup, AT(Mary)); BE(cup, AT(Bill))

Given a scenario with n sentences and m scenes, find all possible ways of partitioning the m scenes into sequences of n partitions, where the partitions each contain a contiguous subsequence of scenes, but where the partitions themselves do not overlap and need not be contiguous. If we abbreviate the above sequence of four scenes as a; b; c; d, then partitioning for a scenario containing two sentences produces the following disjunction:

{[a]; ([b] ∨ [c] ∨ [d] ∨ [b; c] ∨ [c; d] ∨ [b; c; d])} ∨
{([b] ∨ [a; b]); ([c] ∨ [d] ∨ [c; d])} ∨
{([c] ∨ [b; c] ∨ [a; b; c]); [d]}.

Next, apply the inference rules from Figure 7 to each partition in the resulting disjunctive formula, replacing each partition with a disjunction of all [EVENTS] and [STATES] which can describe that partition. For our example, this results in the replacements given in Figure 8. The disjunction that remains after these replacements describes all possible sequences comprised of two [EVENTS] or [STATES] that can explain the input scene sequence. Notice how non-determinism is managed with a factored representation produced directly by the algorithm.

After the inference component produces the semantic structure sequences corresponding to the non-linguistic input, the parser produces the syntactic structure sequences corresponding to the linguistic input. A variant of the CKY algorithm [8, 19] is used to produce factored parse trees. Finally, the linker is applied in reverse to each corresponding parse-tree/semantic-structure pair. This inverse linking process is termed fracturing. Fracturing is a recursive process applied to a parse tree fragment and a conceptual structure fragment. At each step, the conceptual structure fragment is assigned to the root node of the parse tree fragment. If the root node of the parse tree has n non-head daughters, then compute all possible ways of extracting n variable-free subexpressions from the conceptual structure fragment and assigning them to the non-head daughters, leaving distinct variables behind as place holders.
The residue after subexpression extraction is assigned to the head daughter. Fracturing is applied recursively to the conceptual structures assigned to daughters of the root node of the parse tree fragment, along with their annotations. The results of these recursive calls are then conjoined together. Finally, a disjunction is formed over each possible way of performing the subexpression extraction.

[a] ⇒ BE(cup, AT(John))
[b], [c] ⇒ BE(cup, AT(Mary))
[d] ⇒ BE(cup, AT(Bill))
[a; b], [a; b; c] ⇒ (GO(cup, [FROM(AT(John)), TO(AT(Mary))]) ∨ GO(cup, FROM(AT(John))) ∨ GO(cup, TO(AT(Mary))) ∨ GO(cup, [ ]))
[b; c] ⇒ (BE(cup, AT(Mary)) ∨ STAY(cup, AT(Mary)))
[c; d], [b; c; d] ⇒ (GO(cup, [FROM(AT(Mary)), TO(AT(Bill))]) ∨ GO(cup, FROM(AT(Mary))) ∨ GO(cup, TO(AT(Bill))) ∨ GO(cup, [ ])).

Figure 8: The replacements resulting from the application of the inference rules from Figure 7 to the example given in the text.

This process is illustrated by the following example. Consider fracturing the conceptual structure fragment

GO(x, [FROM(AT(John)), TO(AT(Mary))])

along with a VP node with a head daughter labeled V and two sister daughters labeled PP. This produces the set of possible extractions shown in Figure 9. The fracturing recursion terminates when a lexical item is fractured. This returns a lexical entry triple comprising the word, its category and a representation of its meaning. The end result of the fracturing process is a monotonic Boolean formula over definition triples which concisely represents the set of all possible lexicons which allow the linguistic input from a scenario to explain the non-linguistic input. Such a factored lexicon (arising when processing a scenario similar to the second scenario of the training session given in Figure 2) is illustrated in Figure 10.

The disjunctive lexicon produced by the fracturing process may contain lexicons which assign more than one meaning to a given word. We incorporate a monosemy constraint to rule out such lexicons. Conceptually, this is done by converting the factored disjunctive lexicon to disjunctive normal form and removing lexicons which contain more than one lexical entry for the same word. Computationally, a more efficient way of accomplishing the same task is to view the factored disjunctive lexicon as a monotonic Boolean formula Φ whose propositions are lexical entries. We conjoin Φ with all conjunctions of the form ¬(αᵢ ∧ βⱼ) where the αᵢ and βⱼ are both distinct lexical entries for the same word that appear in Φ. The resulting formula is no longer monotonic. Satisfying assignments for this formula correspond to conjunctive lexicons which meet the monosemy constraint. The satisfying assignments can be found using well known constraint satisfaction techniques such as truth maintenance systems [10, 11]. While the problem of finding satisfying assignments for a Boolean formula (i.e. SAT) is NP-complete, our experience is that in practice, the SAT problems generated by MAIMRA are easy to solve and that the fracturing process of generating the SAT problems takes far more time than actually solving them.

The monosemy constraint may seem a bit restrictive. It can be relaxed somewhat by allowing up to n alternate meanings for a word by conjoining in conjunctions of the form

¬(αᵢ₁ ∧ ⋯ ∧ αᵢₙ₊₁)

where each of the αᵢⱼ are distinct lexical entries for the same word that appear in Φ, instead of the pairwise conjunctions used previously.
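The subexpression-extraction step at the heart of fracturing can also be made concrete. The following Python fragment is an illustrative sketch, not MAIMRA's implementation (the tuple encoding of conceptual structure terms is an assumption): it enumerates the ways of pulling one variable-free subexpression out of a term, leaving a fresh variable behind as a place holder. Figure 9 below shows the effect of applying this basic move once per non-head daughter.

    # Illustrative sketch only. Terms are tuples such as
    # ("GO", "?0", ("PATH", ("FROM", ("AT", "John")), ("TO", ("AT", "Mary")))),
    # where strings beginning with "?" are variables. Functor positions are
    # never extracted, matching the ban on variable functors.

    from itertools import count

    def is_variable(term):
        return isinstance(term, str) and term.startswith("?")

    def variable_free(term):
        if is_variable(term):
            return False
        if isinstance(term, tuple):
            return all(variable_free(t) for t in term[1:])
        return True

    def extractions(term, fresh=None):
        """Yield (residue, extracted) pairs: every way of replacing one
        variable-free subexpression of term by a fresh variable."""
        fresh = count(1) if fresh is None else fresh   # ?0 is assumed used
        if variable_free(term):
            yield ("?%d" % next(fresh), term)          # extract whole term
        if isinstance(term, tuple):
            for i in range(1, len(term)):              # skip the functor
                for residue, extracted in extractions(term[i], fresh):
                    yield (term[:i] + (residue,) + term[i + 1:], extracted)

    term = ("GO", "?0",
            ("PATH", ("FROM", ("AT", "John")), ("TO", ("AT", "Mary"))))
    for residue, extracted in extractions(term):
        print(residue, "|", extracted)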
Residue (annotates head V)           First PP             Second PP
GO(x, [y, z])                        FROM(AT(John))       TO(AT(Mary))
GO(x, [y, z])                        TO(AT(Mary))         FROM(AT(John))
GO(x, [FROM(y), z])                  AT(John)             TO(AT(Mary))
GO(x, [FROM(y), z])                  TO(AT(Mary))         AT(John)
GO(x, [FROM(AT(y)), z])              John                 TO(AT(Mary))
GO(x, [FROM(AT(y)), z])              TO(AT(Mary))         John
GO(x, [y, TO(z)])                    FROM(AT(John))       AT(Mary)
GO(x, [y, TO(z)])                    AT(Mary)             FROM(AT(John))
GO(x, [FROM(y), TO(z)])              AT(John)             AT(Mary)
GO(x, [FROM(y), TO(z)])              AT(Mary)             AT(John)
GO(x, [FROM(AT(y)), TO(z)])          John                 AT(Mary)
GO(x, [FROM(AT(y)), TO(z)])          AT(Mary)             John
GO(x, [y, TO(AT(z))])                FROM(AT(John))       Mary
GO(x, [y, TO(AT(z))])                Mary                 FROM(AT(John))
GO(x, [FROM(y), TO(AT(z))])          AT(John)             Mary
GO(x, [FROM(y), TO(AT(z))])          Mary                 AT(John)
GO(x, [FROM(AT(y)), TO(AT(z))])      John                 Mary
GO(x, [FROM(AT(y)), TO(AT(z))])      Mary                 John

Figure 9: A recursive step of the fracturing process illustrating all possible subexpression extractions from the conceptual structure fragment given in the text, and their assignments to non-head daughters. The center column contains fragments annotating the first PP while the rightmost column contains fragments annotating the second PP. The leftmost column shows the residue which annotates the head. Each row is one distinct possible extraction.

(AND (DEFINITION CUP N CUP)
     (OR (AND (OR (AND (DEFINITION MARY N (AT MARY))
                       (DEFINITION TO P (TO ?0)))
                  (AND (DEFINITION MARY N MARY)
                       (DEFINITION TO P (TO (AT ?0)))))
              (OR (AND (OR (AND (DEFINITION JOHN N (AT JOHN))
                                (DEFINITION FROM P (FROM ?0)))
                           (AND (DEFINITION JOHN N JOHN)
                                (DEFINITION FROM P (FROM (AT ?0)))))
                       (DEFINITION SLID V (GO ?0 (PATH ?1 ?2))))
                  (AND (DEFINITION JOHN N JOHN)
                       (DEFINITION FROM P (AT ?0))
                       (DEFINITION SLID V (GO ?0 (PATH ?1 (FROM ?2)))))))
         (AND (DEFINITION MARY N MARY)
              (DEFINITION TO P (AT ?0))
              (OR (AND (OR (AND (DEFINITION JOHN N (AT JOHN))
                                (DEFINITION FROM P (FROM ?0)))
                           (AND (DEFINITION JOHN N JOHN)
                                (DEFINITION FROM P (FROM (AT ?0)))))
                       (DEFINITION SLID V (GO ?0 (PATH ?1 (TO ?2)))))
                  (AND (DEFINITION JOHN N JOHN)
                       (DEFINITION FROM P (AT ?0))
                       (DEFINITION SLID V (GO ?0 (PATH (FROM ?1) (TO ?2)))))))))

Figure 10: A portion of the disjunctive lexicon which results from processing a scenario similar to the second scenario of the training session given in Figure 2.

6 Discussion

When presented with a training session³ much like that given in Figure 2, MAIMRA converges to a unique lexicon within six scenarios and several minutes of CPU time. It is not, however, able to converge to a unique meaning for the word enter when given scenarios of the form:

(BE(John, AT(outside)) ∧ ¬BE(John, IN(room)));
(BE(John, IN(room)) ∧ ¬BE(John, AT(outside)))
John entered the room.

It turns out that there is no way to force MAIMRA to realize that the sentence describes the entire scenario and not just the first or last scene alone. Thus MAIMRA does not rule out the possibility that enter might mean "to be somewhere." The reason MAIMRA is successful with the session from Figure 2 is that the empty semantics constraint rules out associating the sentences with just the first or last scene because the semantic structures representing those scene subsequences have too little semantic material to distribute among the words of the sentence. One way around this problem would be for MAIMRA to attempt to choose the lexicon which maximizes the amount of non-linguistic data which is accounted for. Future work will investigate this issue further.

We make three claims as a result of this work.
First, this work demonstrates that the combination of syntactic, semantic and pragmatic modules, each incorporating cognitively plausible innateness assumptions, offers sufficient constraint for learning word meanings with no prior lexical knowledge in the context of non-linguistic input. This offers a general framework for explaining meaning acquisition. Second, appropriate choices of representation and algorithms allow efficient implementation within the general framework. While no claim is being made that children employ the mechanisms described here, they nonetheless can be used to construct useful engineered systems which learn language. The third claim is more bold. Most language acquisition research operates under a tacit assumption that children acquire individual pieces of knowledge about language by experiencing single short stimuli in isolation. This is often extended to an assumption that knowledge of language is acquired by discovering distinct cues in the input, each cue elucidating one parameter setting in a parameterized linguistic theory. We will call this assumption the local learning hypothesis. This is in contrast to our approach, where knowledge of language is acquired by finding data consistent across longer correlated sessions. Our approach requires the learner to do some puzzle solving or constraint satisfaction.⁴ It is normally believed that the latter approach is not cognitively plausible. The evidence for this is that children seem to have short "input buffers." The limited size of the input buffers is taken to imply that only short isolated stimuli can take part in inferring each new language fact. MAIMRA demonstrates that despite a short input buffer with the ability of retaining only one scenario at a time, it is nonetheless possible to produce a disjunctive representation which supports constraint solving across multiple scenarios. We believe that without cross-scenario constraint solving, it is impossible to account for meaning acquisition and thus the local learning hypothesis is wrong. Our approach offers a viable alternative to the local learning hypothesis consistent with the observed short input buffer effect.

³ Although not strictly required by either the theory or the implementation, we currently incorporate into the training session given to MAIMRA an initial lexicon telling it that 'John,' 'Mary' and 'Bill' are nouns, 'from' and 'to' are prepositions and 'the' is a determiner. This is to reduce the combinatorics of generating ambiguous parses. Category information is not given for any other words, nor is meaning information given for any words occurring in the training session. In theory it would be possible to efficiently bootstrap the categories for these words as well, via a longer training session containing a few shorter sentences to constrain the possible categories for these words. We have not done so yet, however.

⁴ We are not claiming that such puzzle solving is conscious. It is likely that constraint satisfaction, if done by children or adults, is a very low level subconscious cognitive function not subject to introspective observation.

7 Related Work

While most prior computational work on meaning acquisition focuses on contextual learning by scanning texts, some notable work has pursued a path similar to that described here, attempting to learn from correlated linguistic and non-linguistic input. In [16, 17], Salveter describes a system called MORAN. The non-linguistic component of each scenario presented to MORAN consists of a sequence of exactly two scenes, where each scene is described by a conjunction of atomic formulas.
The linguistic component of each scenario is a preparsed case frame analysis of a single sentence describing the state change occurring between those two scenes. From each scenario in isolation, MORAN infers what Salveter calls a Conceptual Meaning Structure (CMS) which attempts to capture the essence of the meaning of the verb in the sentence. This CMS is a subset of the two scenes identifying the portion of the scenes referred to by the sentence, with the arguments of the atomic formulas linked to noun phrases replaced by variables labeled with the syntactic positions those noun phrases fill in the sentence. The process of inferring CMSs involves two processes reminiscent of tasks performed by MAIMRA, namely the figure/ground distinction, whereby the inference component suggests possible subsets of the non-linguistic input as being referred to by the linguistic input (as distinct from the part which is not referred to), and the fracturing process, whereby verb meanings are constructed by extracting out arguments from whole sentence meanings. MORAN's variants of these tasks are much simpler than the analogous tasks performed by MAIMRA. First, the figure/ground distinction is easier since each scenario presented to MORAN contains but a single sentence and a pair of scenes. MORAN need not figure out which subsequence of scenes corresponds to each sentence. Second, the linguistic input comes to MORAN preparsed, which relies on preexisting knowledge of the lexical categories of the words in the sentence. MORAN does not acquire category information, and furthermore does not deal with any ambiguity that might arise from the parsing process or the figure/ground distinction. Finally, the training session presented to MORAN relies on a subtle implicit link between the objects in the world and linguistic tokens used to refer to them. Part of the difficulty faced by MAIMRA is discerning that the linguistic token John refers to the conceptual structure fragment John. MORAN is given that information a priori by lacking a formal distinction between the notion of a linguistic token and conceptual structure. Given this information, the fracturing process becomes trivial. MORAN therefore does not exhibit the cross-scenario correlational behavior attributed to MAIMRA and in fact learns every verb meaning with just a single training scenario. This seems very implausible as a model of child language acquisition. In contrast to MAIMRA, MORAN is able to learn polysemous senses for verbs, one for each scenario provided for a given verb. MORAN focuses on extracting out the common substructure for polysemous meanings, attempting to maximize commonality between different word senses and build a catalog of higher level conceptual building blocks, a task not attempted by MAIMRA.

In [13, 14], Pustejovsky describes a system called TULLY, which also operates in a fashion similar to MAIMRA and MORAN, learning word meanings from pairs of linguistic and non-linguistic input. Like MORAN, the linguistic input given to TULLY for each scenario is a single parsed sentence. The non-linguistic input given along with that parsed sentence is a predicate calculus description of three parts of a single event: its beginning, middle and end.
From this input, TULLY derives a Thematic Mapping Index, a data structure representing the θ-roles borne by each of the arguments to the main predicate. Like MORAN, the task faced by TULLY is much simpler than that faced by MAIMRA, since TULLY is presented with unambiguous parsed input, is given the correspondence between nouns and their referents and is given the correspondence between a single sentence and the semantic representation of the event described by that sentence. TULLY does not learn lexical categories, does not have to determine figure/ground partitioning of non-linguistic input and implausibly learns verb meanings from single scenarios without any cross-scenario correlation. Multiple scenarios for the same verb cause TULLY to generalize to the least common generalization of the individual instances. TULLY, however, goes beyond MAIMRA in trying to account for the acquisition of a variety of markedness features for θ-roles including [+motion], [+abstract], [±direct], [±partitive] and [±animate].

8 Conclusion

The MAIMRA system successfully learns word meanings with no prior lexical knowledge of any words. It works by applying syntactic, semantic and pragmatic constraints to correlated linguistic and non-linguistic input. In doing so, it more accurately reflects the type of learning performed by children, in contrast to previous lexical acquisition systems which focus on learning unknown words encountered while reading texts. Although each module implements a weak theory, and in isolation offers only limited constraint on possible mental representations, the collective constraint provided by the combination of modules is sufficient to reduce the nondeterminism to a manageable level. It demonstrates that with a reasonable set of assumptions about innate knowledge, combined with appropriate representations and algorithms, tractable learning is possible with short training sessions and limited processing. Though there may be disagreement as to the linguistic and cognitive plausibility of some of the innateness assumptions, and while the particular syntactic, semantic and pragmatic theories currently incorporated into MAIMRA may be only approximations to reality, nonetheless, the general framework shows promise of explaining how children acquire word meanings. In particular, it offers a viable alternative to the local learning hypothesis which can explain how children acquire meanings that require correlation of experience across many input scenarios, with only limited size input buffers. Future work will attempt to address these potential shortcomings and will focus on supporting more robust acquisition of a broader class of word meanings.

ACKNOWLEDGMENTS

I would like to thank Peter Szolovits, Patrick Winston and Victor Zue for giving me the freedom to embark on this project and encouraging me to elaborate on it; AT&T Bell Laboratories for supporting this work through a Ph.D. scholarship; Johan de Kleer, Kris Halvorsen and everybody at Xerox PARC for listening to half-baked versions of this work prior to completion; Bob Berwick, Barbara Grosz, David McAllester and George Lakoff for many interesting discussions; and Ron Rivest for pushing me to complete this paper.

References

[1] Robert C. Berwick. Learning word meanings from examples. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence, pages 459-461, 1983.

[2] Noam Chomsky.
Lectures on Government and Binding, volume 9 of Studies in Generative Grammar. Foris Publications, 1981.

[3] Noam Chomsky. Some Concepts and Consequences of the Theory of Government and Binding, volume 6 of Linguistic Inquiry Monographs. The M. I. T. Press, Cambridge, Massachusetts and London, England, 1982.

[4] Noam Chomsky. Barriers, volume 13 of Linguistic Inquiry Monographs. The M. I. T. Press, Cambridge, Massachusetts and London, England, 1986.

[5] Richard H. Granger, Jr. FOUL-UP: a program that figures out meanings of words from context. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence, pages 172-178, 1977.

[6] Ray Jackendoff. Semantics and Cognition. The M. I. T. Press, Cambridge, Massachusetts and London, England, 1983.

[7] Paul Jacobs and Uri Zernik. Acquiring lexical knowledge from text: A case study. In Proceedings of the Seventh National Conference on Artificial Intelligence, pages 739-744, August 1988.

[8] T. Kasami. An efficient recognition and syntax algorithm for context-free languages. Scientific Report AFCRL-65-758, Air Force Cambridge Research Laboratory, Bedford MA, 1965.

[9] George Lakoff and Mark Johnson. Metaphors We Live By. The University of Chicago Press, 1980.

[10] David Allen McAllester. Solving SAT problems via dependency directed backtracking. Unpublished manuscript received directly from author.

[11] David Allen McAllester. An outlook on truth maintenance. A. I. Memo 551, M. I. T. Artificial Intelligence Laboratory, August 1980.

[12] Fernando C. N. Pereira and David H. D. Warren. Definite clause grammars for language analysis--a survey of the formalism and a comparison with augmented transition networks. Artificial Intelligence, 13(3):231-278, 1980.

[13] James Pustejovsky. On the acquisition of lexical entries: The perceptual origin of thematic relations. In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, pages 172-178, July 1987.

[14] James Pustejovsky. Constraints on the acquisition of semantic knowledge. International Journal of Intelligent Systems, 3(3):247-268, 1988.

[15] Manny Rayner, Åsa Hugosson, and Göran Hagert. Using a logic grammar to learn a lexicon. Technical Report R88001, Swedish Institute of Computer Science, 1988.

[16] Sharon C. Salveter. Inferring conceptual graphs. Cognitive Science, 3(2):141-166, 1979.

[17] Sharon C. Salveter. Inferring building blocks for knowledge representation. In Wendy G. Lehnert and Martin H. Ringle, editors, Strategies for Natural Language Processing, chapter 12, pages 327-344. Lawrence Erlbaum Associates, 1982.

[18] Roger C. Schank. The fourteen primitive actions and their inferences. Memo AIM-183, Stanford Artificial Intelligence Laboratory, March 1973.

[19] D. H. Younger. Recognition and parsing of context-free languages in time O(n³). Information and Control, 10(2):189-208, 1967.
STRUCTURE AND INTONATION IN SPOKEN LANGUAGE UNDERSTANDING*

Mark Steedman
Computer and Information Science, University of Pennsylvania
200 South 33rd Street
Philadelphia PA 19104-6389 ([email protected])

ABSTRACT

The structure imposed upon spoken sentences by intonation seems frequently to be orthogonal to their traditional surface-syntactic structure. However, the notion of "intonational structure" as formulated by Pierrehumbert, Selkirk, and others, can be subsumed under a rather different notion of syntactic surface structure that emerges from a theory of grammar based on a "Combinatory" extension to Categorial Grammar. Interpretations of constituents at this level are in turn directly related to "information structure", or discourse-related notions of "theme", "rheme", "focus" and "presupposition". Some simplifications appear to follow for the problem of integrating syntax and other high-level modules in spoken language systems.

One quite normal prosody (1b, below) for an answer to the following question (a) intuitively imposes the intonational structure indicated by the brackets (stress, marked in this case by raised pitch, is indicated by capitals):

(1) a. I know that Alice prefers velvet. But what does MAry prefer?
    b. (MAry prefers) (CORduroy).

Such a grouping is orthogonal to the traditional syntactic structure of the sentence.

Intonational structure nevertheless remains strongly constrained by meaning. For example, contours imposing bracketings like the following are not allowed:

(2) #(Three cats)(in ten prefer corduroy)

* I am grateful to Steven Bird, Julia Hirschberg, Aravind Joshi, Mitch Marcus, Janet Pierrehumbert, and Bonnie Lynn Webber for comments and advice. They are not to blame for any errors in the translation of their advice into the present form. The research was supported by DARPA grant no. N0014-85-K0018, and ARO grant no. DAAL03-89-C0031.

Halliday [6] observed that this constraint, which Selkirk [14] has called the "Sense Unit Condition", seems to follow from the function of phrasal intonation, which is to convey what will here be called "information structure" - that is, distinctions of focus, presupposition, and propositional attitude towards entities in the discourse model. These discourse entities are more diverse than mere nounphrase or propositional referents, but they do not include such non-concepts as "in ten prefer corduroy."

Among the categories that they do include are what Wilson and Sperber and E. Prince [13] have termed "open propositions". One way of introducing an open proposition into the discourse context is by asking a Wh-question. For example, the question in (1), What does Mary prefer? introduces an open proposition. As Jackendoff [7] pointed out, it is natural to think of this open proposition as a functional abstraction, and to express it as follows, using the notation of the λ-calculus:

(3) λx [(prefer' x) mary']

(Primes indicate semantic interpretations whose detailed nature is of no direct concern here.) When this function or concept is supplied with an argument corduroy', it reduces to give a proposition, with the same function argument relations as the canonical sentence:

(4) (prefer' corduroy') mary'

It is the presence of the above open proposition rather than some other that makes the intonation contour in (1)b felicitous. (That is not to say that its presence uniquely determines this response, nor that its explicit mention is necessary for interpreting the response.)
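The treatment of the open proposition as a functional abstraction is easy to state concretely. The following Python fragment is purely illustrative (the term encoding is an assumption of this sketch, not the paper's): the open proposition of (3) is a one-place function, and applying it to the rheme corduroy' yields the closed proposition of (4).

    # Illustrative sketch of (3) and (4). Interpretations are encoded as
    # nested tuples; this encoding is an assumption of the sketch.

    # (3): the open proposition  \x.(prefer' x) mary'
    open_proposition = lambda x: (("prefer", x), "mary")

    # (4): supplying the argument corduroy' reduces it to a proposition
    assert open_proposition("corduroy") == (("prefer", "corduroy"), "mary")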
These observations have led linguists such as Selkirk to postulate a level of "intonational structure", independent of syntactic structure and related to information structure. The theory that results can be viewed as in Figure 1.

[Figure 1: Architecture of Standard Metrical Phonology. Surface Structure is related on one side to a logical form expressing argument structure, and on the other, via a separate level of Intonational Structure, to a logical form expressing information structure and to Phonological Form.]

The involvement of two apparently uncoupled levels of structure in natural language grammar appears to complicate the path from speech to interpretation unreasonably, and to thereby threaten a number of computational applications in speech recognition and speech synthesis.

It is therefore interesting to observe that all natural languages include syntactic constructions whose semantics is also reminiscent of functional abstraction. The most obvious and tractable class are Wh-constructions themselves, in which exactly the same fragments that can be delineated by a single intonation contour appear as the residue of the subordinate clause. Another and much more problematic class of fragments results from coordinate constructions. It is striking that the residues of wh-movement and conjunction reduction are also subject to something like a "sense unit condition". For example, strings like "in ten prefer corduroy" are not conjoinable:

(5) *Three cats in twenty like velvet, and in ten prefer corduroy.

Since coordinate constructions have constituted another major source of complexity for theories of natural language grammar, and also offer serious obstacles to computational applications, it is tempting to think that this conspiracy between syntax and prosody might point to a unified notion of structure that is somewhat different from traditional surface constituency.

COMBINATORY GRAMMARS

Combinatory Categorial Grammar (CCG, [16]) is an extension of Categorial Grammar (CG). Elements like verbs are associated with a syntactic "category" which identifies them as functions, and specifies the type and directionality of their arguments and the type of their result:

(6) prefers := (S\NP)/NP : prefer'

The category can be regarded as encoding the semantic type of their translation, which in the notation used here is identified by the expression to the right of the colon. Such functions can combine with arguments of the appropriate type and position by functional application:

(7)  Mary      prefers      corduroy
     NP      (S\NP)/NP         NP
             --------------------->
                    S\NP
     -----------------------------<
                     S

Because the syntactic types are identical to the semantic types, apart from directionality, the derivation also builds a compositional interpretation, (prefer' corduroy') mary', and of course such a "pure" categorial grammar is context free. Coordination might be included in CG via the following rule, allowing constituents of like type to conjoin to yield a single constituent of the same type:

(8) X conj X ⇒ X

(9)  I       loathe       and       detest       velvet
     NP    (S\NP)/NP     conj     (S\NP)/NP        NP
           ------------------------------------&
                       (S\NP)/NP

(The rest of the derivation is omitted, being the same as in (7).) In order to allow coordination of contiguous strings that do not constitute constituents, CCG generalises the grammar to allow certain operations on functions related to Curry's combinators [3].
For example, functions may nondeterministically compose, as well as apply, under the following rule:

(10) Forward Composition:
     X/Y : F   Y/Z : G   ⇒   X/Z : λx F(Gx)

The most important single property of combinatory rules like this is that they have an invariant semantics. This one composes the interpretations of the functions that it applies to, as is apparent from the right hand side of the rule.¹ Thus sentences like I suggested, and would prefer, corduroy can be accepted, via the following composition of two verbs (indexed as B, following Curry's nomenclature) to yield a composite of the same category as a transitive verb. Crucially, composition also yields the appropriate interpretation for the composite verb would prefer:

(11)  suggested      and      would        prefer
     (S\NP)/NP      conj    (S\NP)/VP      VP/NP
                            ---------------------->B
                                  (S\NP)/NP
     ----------------------------------------------&
                     (S\NP)/NP

Combinatory grammars also include type-raising rules, which turn arguments into functions over functions-over-such-arguments. These rules allow arguments to compose, and thereby take part in coordinations like I suggested, and Mary prefers, corduroy. They too have an invariant compositional semantics which ensures that the result has an appropriate interpretation. For example, the following rule allows the conjuncts to form as below (again, the remainder of the derivation is omitted):

(12) Subject Type-raising:
     NP : y   ⇒   S/(S\NP) : λF Fy

(13)  I          suggested      and      Mary        prefers
      NP        (S\NP)/NP      conj       NP        (S\NP)/NP
     -------->T                         -------->T
     S/(S\NP)                           S/(S\NP)
     -------------------->B            ---------------------->B
            S/NP                               S/NP
     -----------------------------------------------------&
                             S/NP

This apparatus has been applied to a wide variety of coordination phenomena (cf. [4], [15]).

INTONATION AND CONTEXT

Examples like the above show that combinatory grammars embody a view of surface structure according to which strings like Mary prefers are constituents. It follows, according to this view, that they must also be possible constituents of non-coordinate sentences like Mary prefers corduroy, as in the following derivation:

(14)  Mary       prefers      corduroy
      NP        (S\NP)/NP        NP
     -------->T
     S/(S\NP)
     -------------------->B
            S/NP
     ------------------------------->
                    S

(See [9], [18] and [19] for a discussion of the obvious problems for parsing written text that the presence of such "spurious" (i.e. semantically equivalent) derivations engenders, and for some ways they might be overcome.) An entirely unconstrained combinatory grammar would in fact allow any bracketing on a sentence, although the grammars we actually write for configurational languages like English are heavily constrained by local conditions. (An example might be a condition on the composition rule that is tacitly assumed below, forbidding the variable Y in the composition rule to be instantiated as NP, thus excluding constituents like *[ate the]VP/N.)

¹ The rule uses the notation of the λ-calculus in the semantics, for clarity. This should not obscure the fact that it is functional composition itself that is the primitive, not the λ operator.
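The three combinatory operations used in derivations (7)-(14) — forward/backward application, forward composition (>B), and subject type-raising (>T) — are small enough to state as executable rules. The following Python sketch is illustrative only (the tuple encoding of categories is an assumption, not the paper's implementation); it reproduces the category bookkeeping of derivation (14).

    # Illustrative sketch of the category combinatorics in (7)-(14).
    # A category is an atom like "S" or a tuple ("/", X, Y) / ("\\", X, Y),
    # standing for X/Y and X\Y. Each rule returns None if it does not apply.

    def fapply(f, a):
        """Forward application: X/Y  Y  =>  X."""
        if isinstance(f, tuple) and f[0] == "/" and f[2] == a:
            return f[1]

    def bapply(a, f):
        """Backward application: Y  X\\Y  =>  X."""
        if isinstance(f, tuple) and f[0] == "\\" and f[2] == a:
            return f[1]

    def fcompose(f, g):
        """Forward composition (>B): X/Y  Y/Z  =>  X/Z."""
        if (isinstance(f, tuple) and f[0] == "/" and
                isinstance(g, tuple) and g[0] == "/" and f[2] == g[1]):
            return ("/", f[1], g[2])

    def traise(np):
        """Subject type-raising (>T): NP  =>  S/(S\\NP)."""
        if np == "NP":
            return ("/", "S", ("\\", "S", "NP"))

    # Derivation (14): (Mary prefers) corduroy
    tv = ("/", ("\\", "S", "NP"), "NP")          # (S\NP)/NP, e.g. "prefers"
    mary_prefers = fcompose(traise("NP"), tv)    # S/NP
    assert fapply(mary_prefers, "NP") == "S"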
The claim of the present paper is simply that particular surface structures that are induced by the specific combinatory grammar that are postulated to explain coordination in English subsume the intonational structures that are postulated by Pierrehumbert et al. to explain the possible intonation contours for sentences of English. More specifically, the claim is that in spoken utterance, intonation helps to determine which of the many possible bracketings permitted by the combinatory syntax of English is intended, and that the interpretations of the constituents that arise from these derivations, far from being "spurious", are related to distinctions of discourse focus among the concepts and open propositions that the speaker has in mind.

The proof of this claim lies in showing that the rules of combinatory grammar can be made sensitive to intonation contour, which limits their application in spoken discourse. We must also show that the major constituents of intonated utterances like (1)b, under the analyses that are permitted by any given intonation, correspond to the information structure of the context to which the intonation is appropriate, as in (a) in the example (1) with which the paper begins. This demonstration will be quite simple, once we have established the following notation for intonation contours.

I shall use a notation which is based on the theory of Pierrehumbert [10], as modified in more recent work by Selkirk [14], Beckman and Pierrehumbert [1], [11], and Pierrehumbert and Hirschberg [12]. I have tried as far as possible to take my examples and the associated intonational annotations from those authors. The theory proposed below is in principle compatible with any of the standard descriptive accounts of phrasal intonation. However, a crucial feature of Pierrehumbert's theory for present purposes is that it distinguishes two subcomponents of the prosodic phrase, the pitch accent and the boundary.² The first of these tones or tone-sequences coincides with the perceived major stress or stresses of the prosodic phrase, while the second marks the righthand boundary of the phrase. These two components are essentially invariant, and all other parts of the intonational tune are interpolated. Pierrehumbert's theory thus captures in a very natural way the intuition that the same tune can be spread over longer or shorter strings, in order to mark the corresponding constituents for the particular distinction of focus and propositional attitude that the melody denotes. It will help the exposition to augment Pierrehumbert's notation with explicit prosodic phrase boundaries, using brackets. These do not change her theory in any way: all the information is implicit in the original notation.

Consider for example the prosody of the sentence Fred ate the beans in the following pair of discourse settings, which are adapted from Jackendoff [7, pp. 260]:

(15) Q: Well, what about the BEAns? Who ate THEM?
     A: FRED ate the BEAns.
        ( H* L ) (  L+H* LH% )

(16) Q: Well, what about FRED? What did HE eat?
     A: FRED ate the BEAns.
        ( L+H* LH% ) ( H* LL% )

In these contexts, the main stressed syllables on both Fred and the beans receive a pitch accent, but a different one. In the former example, (15), there is a prosodic phrase on Fred made up of the pitch accent which Pierrehumbert calls H*, immediately followed by an L boundary. There is another prosodic phrase having the pitch accent called L+H* on beans, preceded by null or interpolated tone on the words ate the, and immediately followed by a boundary which is written LH%. (I base these annotations on Pierrehumbert and Hirschberg's [12, ex. 33] discussion of this example.)³ In the second example (16) above, the two tunes are reversed: this time the tune with pitch accent L+H* and boundary LH% is spread across a prosodic phrase Fred ate, while the other tune with pitch accent H* and boundary LL% is carried by the prosodic phrase the beans (again starting with an interpolated or null tone).⁴

The meaning that these tunes convey is intuitively very obvious. As Pierrehumbert and Hirschberg point out, the latter tune seems to be used to mark some or all of that part of the sentence expressing information that the speaker believes to be novel to the hearer. In traditional terms, it marks the "comment" - more precisely, what Halliday called the "rheme". In contrast, the L+H* LH% tune seems to be used to mark some or all of that part of the sentence which expresses information which in traditional terms is the "topic" - in Halliday's terms, the "theme".⁵ For present purposes, a theme can be thought of as conveying what the speaker assumes to be the subject of mutual interest, and this particular tune marks a theme as novel to the conversation as a whole, and as standing in a contrastive relation to the previous one. (If the theme is not novel in this sense, it receives no tone in Pierrehumbert's terms, and may even be left out altogether.)⁶ Thus in (16), the L+H* LH% phrase including this accent is spread across the phrase Fred ate.⁷ Similarly, in (15), the same tune is confined to the object of the open proposition ate the beans, because the intonation of the original question indicates that eating beans as opposed to some other comestible is the new topic.⁸

COMBINATORY PROSODY

The L+H* LH% intonational melody in example (16) belongs to a phrase Fred ate ... which corresponds under the combinatory theory of grammar to a grammatical constituent, complete with a translation equivalent to the open proposition λx[(ate' x) fred']. The combinatory theory thus offers a way to derive such intonational phrases, using only the independently motivated rules of combinatory grammar, entirely under the control of appropriate intonation contours like L+H* LH%.⁹

² For the purposes of this abstract, I am ignoring the distinction between the intonational phrase proper, and what Pierrehumbert and her colleagues call the "intermediate" phrase, which differ in respect of boundary tone-sequences.
³ I continue to gloss over Pierrehumbert's distinction between "intermediate" and "intonational" phrases.
⁴ The reason for notating the latter boundary as LL%, rather than L, is again to do with the distinction between intonational and intermediate phrases.
⁵ The concepts of theme and rheme are closely related to Grosz et al's [5] concepts of "backward looking center" and "forward looking center".
⁶ Here I depart slightly from Halliday's definition. The present paper also follows Lyons [8] in rejecting Halliday's claim that the theme must necessarily be sentence-initial.
⁷ An alternative prosody, in which the contrastive tune is confined to Fred, seems equally coherent, and may be the one intended by Jackendoff. I believe that this alternative is informationally distinct, and arises from an ambiguity as to whether the topic of this discourse is Fred or What Fred ate. It too is accepted by the rules below.
⁸ Note that the position of the pitch accent in the phrase has to do with a further dimension of information structure within both theme and rheme, which we might identify as "focus". I ignore this dimension here.
It is extremely simple to make the existing combinatory grammar do this. We interpret the two pitch accents as functions over boundaries, of the following types:¹⁰

(17) L+H* := Theme/Bh
     H*   := Rheme/Bl

- that is, as functions over boundary tones into the two major informational types, the Hallidean "theme" and "rheme". The reader may wonder at this point why we do not replace the category Theme by a functional category, say Utterance/Rheme, corresponding to its semantic type. The answer is that we do not want this category to combine with anything but a complete rheme. In particular, it must not combine with a function into the category Rheme by functional composition. Accordingly we give it a non-functional category, and supply the following special purpose prosodic combinatory rules:

(18) Theme Rheme ⇒ Utterance
     Rheme Theme ⇒ Utterance

We next define the various boundary tones as arguments to these functions, as follows:

(19) LH% := Bh
     LL% := Bl
     L   := Bl

(As usual, we ignore for present purposes the distinction between intermediate- and intonational-phrase boundaries.) Finally, we accomplish the effect of interpolation of other parts of the tune by assigning the following polymorphic category to all elements bearing no tone specification, which we will represent as the tone 0:

(20) 0 := X/X

Syntactic combination can then be made subject to the following simple restriction:

(21) The Prosodic Constituent Condition: Combination of two syntactic categories via a syntactic combinatory rule is only allowed if their prosodic categories can also combine. (The prosodic and syntactic combinatory rules need not be the same.)

This principle has the sole effect of excluding certain derivations for spoken utterances that would be allowed for the equivalent written sentences. For example, consider the derivations that it permits for example (16) above. The rule of forward composition is allowed to apply to the words Fred and ate, because the prosodic categories can combine (by functional application):

(22)     Fred                      ate ...
      ( L+H*                       LH% )
      NP : fred'              (S\NP)/NP : ate'
      Theme/Bh                     Bh
      ---------->T
      S/(S\NP) : λP[P fred']
      Theme/Bh
      ------------------------------------>B
      S/NP : λx[(ate' x) fred']
      Theme

The category X/X of the null tone allows intonational phrasal tunes like the L+H* LH% tune to spread across any sequence that forms a grammatical constituent according to the combinatory grammar. For example, if the reply to the same question What did Fred eat? is FRED must have eaten the BEANS, then the tune will typically be spread over Fred must have eaten ..., as in the following (incomplete) derivation, in which much of the syntactic and semantic detail has been omitted in the interests of brevity:

(23)   Fred        must        have       eaten ...
     ( L+H*                                LH% )
      NP       (S\NP)/VP     VP/VPen     VPen/NP
     Theme/Bh     X/X          X/X          Bh
     -------->T
     Theme/Bh
     ------------------->B
          Theme/Bh
     ------------------------------>B
              Theme/Bh
     ------------------------------------------>B
                     Theme

⁹ I am grateful to Steven Bird for discussions on the following proposal.
¹⁰ An alternative (which would actually be closer to Pierrehumbert and Hirschberg's own proposal to compositionally assemble discourse meanings from more primitive elements of meaning carried by each individual tone) would be to make the boundary tone the function and the pitch accent an argument.
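The Prosodic Constituent Condition of (21) can be read as a filter layered over the syntactic combinatorics. The Python fragment below is a sketch under assumed encodings, extending the earlier category sketch (none of these names come from the paper): a syntactic combination is licensed only if the paired prosodic categories also combine, here by functional application or by interpolation of the null tone.

    # Illustrative sketch of the Prosodic Constituent Condition (21).
    # Prosodic categories: "Theme", "Rheme", "Bh", "Bl", ("/", X, "Bh"),
    # or "X/X" for the polymorphic null tone of rule (20).

    def prosodic_combine(left, right):
        """Return the prosodic category of the combination, or None."""
        if left == "X/X":                      # null tone interpolates
            return right
        if right == "X/X":
            return left
        if isinstance(left, tuple) and left[0] == "/" and left[2] == right:
            return left[1]                     # e.g. Theme/Bh + Bh => Theme
        return None

    def licensed(syntactic_ok, left_prosody, right_prosody):
        """Rule (21): allow a syntactic combination only if the prosodic
        categories combine too."""
        return (syntactic_ok and
                prosodic_combine(left_prosody, right_prosody) is not None)

    # Fred (Theme/Bh) may compose with ate (Bh), as in derivation (22) ...
    assert licensed(True, ("/", "Theme", "Bh"), "Bh")
    # ... but a completed Theme cannot combine with a bare Bh boundary.
    assert not licensed(True, "Theme", "Bh")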
>B Theme The rest of the derivation of (16) is completed as follows, using the first rule in ex. (18): (24) Fred ate the beans ( L+H* LH• ) ( H* LL% ) ......................................... IP:fred' (S\IIP)/IIP:ate' IP/I: the' l:beans' Theae/Bh Bh X/I Rheae ......... >T .................. > S/(S\|P) : IP:the' beans' ~P[P fred'] lUteme Theme/Sh ....................... )B S/IP: ~i[(ate ~ X) fred'] Thame ................................. ) S: ate' (the' beans') fred' Utterance The division of the utterance into an open proposition constituting the theme and an argument constituting the rheme is appropriate to the context established in (16). Moreover, the theory permits no other deriva- tion for this intonation contour. Of course, repeated application of the composition rule, as in (23), would allow the L+H* LH% contour to spread further, as in (FRED must have eaten)(the BEANS). In contrast, the parallel derivation is forbidden by the prosodic constituent condition for the alternative intonation contour on (15). Instead, the following derivation, excluded for the previous example, is now allowed: (25) Fred ate the beans ( II* L ) ( L+II* LI~ ) ......................................... BP:fred' (S\|P)/llP:ate' IP/|:the' I:beans' P.hme XlX XIX Theme ........ >T .................. > SI(sklP) : IP:the' beans' ~P[P fred'] Theme Rheme ................................ ) SkiP:eat' (the' beans') Theme ........................................ ) S: ear'(the' beams') ~red' Utterance No other analysis is allowed for (25). Again, the derivation divides the sentence into new and given in- formation consistent with the context given in the ex- ample. The effect of the derivation is to annotate the entire predicate as an L+H* LH%. It is emphasised that this does not mean that the tone is spread, but that the whole constituent is marked for the corresponding discourse function -- roughly, as contrastive given, or theme. The finer grain information that it is the ob- ject that is contrasted, while the verb is given, resides in the tree itself. Similarly, the fact that boundary se- quences are associated with words at the lowest level of the derivation does not mean that they are part of the word, or specified in the lexicon, nor that the word is the entity that they are a boundary of. It is 14 prosodic phrases that they bound, and these also are defined by the tree. All the other possibilities for combining these two contours on this sentence are shown elsewhere [17] to yield similarly unique and contextually appropriate interpretations. Sentences like the above, including marked theme and rheme expressed as two distinct intona- tionalAntermediate phrases are by that token unam- biguous as to their information structure. However, sentences like the following, which in Pierrehum- berts' terms bear a single intonational phrase, are much more ambiguous as to the division that they convey between theme and rheme: (26) I read a book about CORduroy ( H* LL% ) Such a sentence is notoriously ambiguous as to the open proposition it presupposes, for it seems equally apropriate as a response to any of the following ques- tions: (27) a. What did you read a book about? b. What did you read? c. What did you do? Such questions could in suitably contrastive contexts give rise to themes marked by the L+H* LH% tune, bracketing the sentence as follows: (28) a. (1 read a book about)(CORduroy) b. (I read)(a book about CORduroy) c. 
     c. (I)(read a book about CORduroy)

It seems that we shall miss a generalisation concerning the relation of intonation to discourse information unless we extend Pierrehumbert's theory very slightly, to allow null intermediate phrases, without pitch accents, expressing unmarked themes. Since the boundaries of such intermediate phrases are not explicitly marked, we shall immediately allow all of the above analyses for (26). Such a modification to the theory can be introduced by the following rule, which nondeterministically allows certain constituents bearing the null tone to become a theme:

(29) Σ : X/X ⇒ Theme

The symbol Σ is a variable ranging over syntactic categories that are (leftward- or rightward-looking) functions into S.¹¹ The rule is nondeterministic, so it correctly continues to allow a further analysis of the entire sentence as a single Intonational Phrase conveying the Rheme. (Such an utterance is the appropriate response to yet another open-proposition establishing question, What happened?.)

With this generalisation, we are in a position to make the following claim:

(30) The structures demanded by the theory of intonation and its relation to contextual information are the same as the surface syntactic structures permitted by the combinatory grammar.

A number of corollaries follow, such as the following:

(31) Anything which can coordinate can be an intonational constituent, and vice versa.

CONCLUSION

The pathway between phonological form and interpretation can now be viewed as in Figure 2.

[Figure 2: Architecture of a CCG-based Prosody. Logical Form (= Argument Structure) derives from Surface Structure (= Intonation Structure = Information Structure), which in turn derives from Phonological Form.]

Such an architecture is considerably simpler than the one shown earlier in Figure 1. Phonological form maps via the rules of combinatory grammar directly onto a surface structure, whose highest level constituents correspond to intonational constituents, annotated as to their discourse function. Surface structure therefore subsumes intonational structure. It also subsumes information structure, since the translations of those surface constituents correspond to the entities and open propositions which constitute the topic or theme (if any) and the comment or rheme. These in turn reduce via functional application to yield canonical function-argument structure, or "logical form".

There may be significant advantages for automatic spoken language understanding in such a theory. Most obviously, where in the past parsing and phonological processing have tended to deliver conflicting structural analyses, and have had to be pursued independently, they now are seen to be in concert. That is not to say that intonational cues remove all local structural ambiguity. Nor should the problem of recognising cues like boundary tones be underestimated, for the acoustic realisation in the fundamental frequency F0 of the intonational tunes discussed above is entirely dependent upon the rest of the phonology - that is, upon the phonemes and words that bear the tune. It therefore seems most unlikely that intonational contour can be identified in isolation from word recognition.¹²

¹¹ The inclusion in the full grammar of further rules of type-raising in addition to the subject rule discussed above means that the set of categories over which Σ ranges is larger than it is possible to reveal in the present paper. (For example, it includes object complements.) See the earlier papers and [17] for discussion.
What the isomorphism between syntactic structure and intonational structure does mean is that simply structured modular processors which use both sources of information at once can be more easily devised. Such an architecture may reasonably be expected to simplify the problem of resolving local structural ambiguity in both domains. For example, a syntactic analysis that is so closely related to the structure of the signal should be easier to use to "filter" the ambiguities arising from lexical recognition.

However, it is probably more important that the constituents that arise under this analysis are also semantically interpreted. The interpretations are directly related to the concepts, referents and themes that have been established in the context of discourse, say as the result of a question. These discourse entities are in turn directly reducible to the structures involved in knowledge-representation and inference. The direct path from speech to these higher levels of analysis offered by the present theory should therefore make it possible to use more effectively the much more powerful resources of semantics and domain-specific knowledge, including knowledge of the discourse, to filter low-level ambiguities, using larger grammars of a more expressive class than is currently possible. While vast improvements in purely bottom-up word recognition can be expected to continue, such filtering is likely to remain crucial to successful speech processing by machine, and appears to be characteristic of all levels of human processing, for both spoken and written language.

¹² This is no bad thing. The converse also applies: intonation contour affects the acoustic realisation of words, particularly with respect to timing. It is therefore likely that the benefits of combining intonational recognition and word recognition will be mutual.

REFERENCES

[1] Beckman, Mary and Janet Pierrehumbert: 1986, 'Intonational Structure in Japanese and English', Phonology Yearbook, 3, 255-310.

[2] Chomsky, Noam: 1970, 'Deep Structure, Surface Structure, and Semantic Interpretation', in D. Steinberg and L. Jakobovits, Semantics, CUP, Cambridge, 1971, 183-216.

[3] Curry, Haskell and Robert Feys: 1958, Combinatory Logic, North Holland, Amsterdam.

[4] Dowty, David: 1988, 'Type raising, functional composition, and non-constituent coordination', in Richard T. Oehrle, E. Bach and D. Wheeler (eds), Categorial Grammars and Natural Language Structures, Reidel, Dordrecht, 153-198.

[5] Grosz, Barbara, Aravind Joshi, and Scott Weinstein: 1983, 'Providing a Unified Account of Definite Noun Phrases in Discourse', Proceedings of the 21st Annual Conference of the ACL, Cambridge MA, July 1983, 44-50.

[6] Halliday, Michael: 1967, Intonation and Grammar in British English, Mouton, The Hague.

[7] Jackendoff, Ray: 1972, Semantic Interpretation in Generative Grammar, MIT Press, Cambridge MA.

[8] Lyons, John: 1977, Semantics, vol. II, Cambridge University Press.

[9] Pareschi, Remo, and Mark Steedman: 1987, 'A lazy way to chart parse with categorial grammars', Proceedings of the 25th Annual Conference of the ACL, Stanford, July 1987, 81-88.

[10] Pierrehumbert, Janet: 1980, The Phonology and Phonetics of English Intonation, Ph.D dissertation, MIT. (Dist. by Indiana University Linguistics Club, Bloomington, IN.)

[11] Pierrehumbert, Janet, and Mary Beckman: 1989, Japanese Tone Structure, MIT Press, Cambridge MA.
[12] Pierrehumbert, Janet, and Julia Hirschberg: 1987, 'The Meaning of Intonational Contours in the Interpretation of Discourse', ms. Bell Labs.

[13] Prince, Ellen F.: 1986, 'On the syntactic marking of presupposed open propositions', Papers from the Parasession on Pragmatics and Grammatical Theory at the 22nd Regional Meeting of the Chicago Linguistic Society, 208-222.

[14] Selkirk, Elisabeth: 1984, Phonology and Syntax, MIT Press, Cambridge MA.

[15] Steedman, Mark: 1985a, 'Dependency and Coordination in the Grammar of Dutch and English', Language, 61, 523-568.

[16] Steedman, Mark: 1987, 'Combinatory grammars and parasitic gaps', Natural Language & Linguistic Theory, 5, 403-439.

[17] Steedman, Mark: 1989, Structure and Intonation, ms. U. Penn.

[18] Vijay-Shanker, K. and David Weir: 1990, 'Polynomial Time Parsing of Combinatory Categorial Grammars', Proceedings of the 28th Annual Conference of the ACL, Pittsburgh, June 1990.

[19] Wittenburg, Kent: 1987, 'Predictive Combinators: a Method for Efficient Processing of Combinatory Grammars', Proceedings of the 25th Annual Conference of the ACL, Stanford, July 1987, 73-80.
TYPES IN FUNCTIONAL UNIFICATION GRAMMARS

Michael Elhadad
Department of Computer Science
Columbia University, Box 1910
New York, NY 10027
Internet: [email protected]

ABSTRACT

Functional Unification Grammars (FUGs) are popular for natural language applications because the formalism uses very few primitives and is uniform and expressive. In our work on text generation, we have found that it also has annoying limitations: it is not suited for the expression of simple, yet very common, taxonomic relations and it does not allow the specification of completeness conditions. We have implemented an extension of traditional functional unification. This extension addresses these limitations while preserving the desirable properties of FUGs. It is based on the notions of typed features and typed constituents. We show the advantages of this extension in the context of a grammar used for text generation.

1 INTRODUCTION

Unification-based formalisms are increasingly used in linguistic theories (Shieber, 1986) and computational linguistics. In particular, one type of unification formalism, functional unification grammar (FUG), is widely used for text generation (Kay, 1979, McKeown, 1985, Appelt, 1985, Paris, 1987, McKeown & Elhadad, 1990) and is beginning to be used for parsing (Kay, 1985, Kasper, 1987). FUG enjoys such popularity mainly because it allies expressiveness with a simple economical formalism. It uses very few primitives, has a clean semantics (Pereira & Shieber, 1984, Kasper & Rounds, 1986, Elhadad, 1990), is monotonic, and grants equal status to function and structure in the descriptions.

We have implemented a functional unifier (Elhadad, 1988) covering all the features described in (Kay, 1979) and (McKeown & Paris, 1987). Having used this implementation extensively, we have found all these properties very useful, but we also have met with limitations. The functional unification (FU) formalism is not well suited for the expression of simple, yet very common, taxonomic relations. The traditional way to implement such relations in FUG is verbose, inefficient and unreadable. It is also impossible to express completeness constraints on descriptions. In this paper, we present several extensions to the FU formalism that address these limitations. These extensions are based on the formal semantics presented in (Elhadad, 1990). They have been implemented and tested on several applications.

We first introduce the notion of typed features. It allows the definition of a structure over the primitive symbols used in the grammar. The unifier can take advantage of this structure in a manner similar to (Ait-Kaci, 1984). We then introduce the notion of typed constituents and the FSET construct. It allows the declaration of explicit constraints on the set of admissible paths in functional descriptions. Typing the primitive elements of the formalism and the constituents allows a more concise expression of grammars and better checking of the input descriptions. It also provides more readable and better documented grammars.

Most work in computational linguistics using a unification-based formalism (e.g., (Sag & Pollard, 1987, Uszkoreit, 1986, Karttunen, 1986, Kay, 1979, Kaplan & Bresnan, 1982)) does not make use of explicit typing. In (Ait-Kaci, 1984), Ait-Kaci introduced ψ-terms, which are very similar to feature structures, and introduced the use of type inheritance in unification. ψ-terms were intended to be general-purpose programming constructs.
We base our extension for typed features on this work but we also add the notion of typed constituents and the ability to express completeness constraints. We also integrate the idea of typing with the particulars of FUGs (notion of constituent, NONE, ANY and CSET constructs) and show the relevance of typing for linguistic applications.

2 TRADITIONAL FUNCTIONAL UNIFICATION ALGORITHM The Functional Unifier takes as input two descriptions, called functional descriptions or FDs, and produces a new FD if unification succeeds and failure otherwise. An FD describes a set of objects (most often linguistic entities) that satisfy certain properties. It is represented by a set of pairs [a:v], called features, where a is an attribute (the name of the property) and v is a value, either an atomic symbol or recursively an FD. An attribute a is allowed to appear at most once in a given FD F, so that the phrase "the a of F" is always non-ambiguous (Kay, 1979). It is possible to define a natural partial order over the set of FDs. An FD X is more specific than the FD Y if X contains at least all the features of Y (that is, X ⊇ Y). Two FDs are compatible if they are not contradictory on the value of an attribute. Let X and Y be two compatible FDs. The unification of X and Y is by definition the most general FD that is more specific than both X and Y. For example, the unification of {year:88, time:{hour:5}} and {time:{mns:22}, month:10} is {year:88, month:10, time:{hour:5, mns:22}}. When properties are simple (all the values are atomic), unification is therefore very similar to the union of two sets: X ∪ Y is the smallest set containing both X and Y. There are two problems that make unification different from set union: first, in general, the union of two FDs is not a consistent FD (it can contain two different values for the same label); second, values of features can be complex FDs. The mechanism of unification is therefore a little more complex than suggested, but the FU mechanism is abstractly best understood as a union operation over FDs (cf. (Kay, 1979) for a full description of the algorithm). Note that contrary to structural unification (SU, as used in Prolog for example), FU is not based on order and length. Therefore, {a:1, b:2} and {b:2, a:1} are equivalent in FU but not in SU, and {a:1} and {b:2, a:1} are compatible in FU but not in SU (FDs have no fixed arity) (cf. (Knight, 1989, p.105) for a comparison of SU vs. FU).

TERMINOLOGY: We introduce here terms that constitute a convenient vocabulary to describe our extensions. In the rest of the paper, we consider the unification of two FDs that we call input and grammar. We define L as a set of labels or attribute names and C as a set of constants, or simple atomic values. A string of labels (that is, an element of L*) is called a path, and is noted <l1...ln>. A grammar defines a domain of admissible paths, A ⊆ L*. A defines the skeleton of well-formed FDs.
• An FD can be an atom (element of C) or a set of features. One of the most attractive characteristics of FU is that non-atomic FDs can be abstractly viewed in two ways: either as a flat list of equations or as a structure equivalent to a directed graph with labeled arcs (Karttunen, 1984). The possibility of using a non-structured representation removes the emphasis that has traditionally been placed on structure and constituency in language.
• The meta-FDs NONE and ANY are provided to refer to the status of a feature in a description rather than to its value.
[label:NONE] indicates that label cannot have a ground value in the FD resulting from the unification. [label:ANY] indicates that label must have a ground value in the resulting FD. Note that NONE is best viewed as imposing constraints on the definition of A: an equation <l1...ln> = NONE means that <l1...ln> ∉ A.
• A constituent of a complex FD is a distinguished subset of features. The special label CSET (Constituent Set) is used to identify constituents. The value of CSET is a list of paths leading to all the constituents of the FD. Constituents trigger recursion in the FU algorithm. Note that CSET is part of the formalism, and that its value is not a valid FD. A related construct of the formalism, PATTERN, implements ordering constraints on the strings denoted by the FDs.
Among the many unification-based formalisms, the constructs NONE, ANY, PATTERN, CSET and the notion of constituent are specific to FUGs. A formal semantics of FUGs covering all these special constructs is presented in (Elhadad, 1990).

3 TYPED FEATURES

A LIMITATION OF FUGS: NO STRUCTURE OVER THE SET OF VALUES: In FU, the set of constants C has no structure. It is a flat collection of symbols with no relations between each other. All constraints among symbols must be expressed in the grammar. In linguistics, however, grammars assume a rich structure between properties: some groups of features are mutually exclusive; some features are only defined in the context of other features.

Noun --+-- Pronoun --+-- Question
       |             +-- Personal
       |             +-- Demonstrative
       |             +-- Quantified
       +-- Proper
       +-- Common ---+-- Count
                     +-- Mass

Figure 1: A system for NPs

Let's consider a fragment of grammar describing noun-phrases (NPs) (cf. Figure 1) using the systemic notation given in (Winograd, 1983). Systemic networks, such as this one, encode the choices that need to be made to produce a complex linguistic entity. They indicate how features can be combined or whether features are inconsistent with other combinations. The configuration illustrated by this fragment is typical, and occurs very often in grammars.¹ The schema indicates that a noun can be either a pronoun, a proper noun or a common noun. Note that these three features are mutually exclusive. Note also that the choice between the features {question, personal, demonstrative, quantified} is relevant only when the feature pronoun is selected. This system therefore forbids combinations of the type {pronoun, proper} and {common, personal}. The traditional technique for expressing these constraints in a FUG is to define a label for each non-terminal symbol in the system. The resulting grammar is shown in Figure 2.²

¹We have implemented a grammar similar to (Winograd, 1983, appendix B) containing 111 systems. In this grammar, more than 40% of the systems are similar to the one described here.
²ALT indicates that the lists that follow are alternative noun types.

((cat noun)
 (alt (((noun pronoun)
        (pronoun ((alt (question personal
                        demonstrative quantified)))))
       ((noun proper))
       ((noun common)
        (common ((alt (count mass))))))))

Figure 2: A faulty FUG for the NP system

((alt (((noun pronoun)
        (common NONE)
        (pronoun ((alt (question personal
                        demonstrative quantified)))))
       ((noun proper)
        (pronoun NONE)
        (common NONE))
       ((noun common)
        (pronoun NONE)
        (common ((alt (count mass))))))))

The input FD describing a personal pronoun is then:
((cat noun) (noun pronoun) (pronoun personal))

Figure 3: A correct FUG for the NP system
This grammar is, however, incorrect, as it allows combinations of the type ((noun proper) (pronoun question)) or even worse ((noun proper) (pronoun zouzou)). Because unification is similar to union of feature sets, a feature (pronoun question) in the input would simply get added to the output. In order to enforce the correct constraints, it is therefore necessary to use the meta-FD NONE (which prevents the addition of unwanted features) as shown in Figure 3. There are two problems with this corrected FUG implementation. First, both the input FD describing a pronoun and the grammar are redundant and longer than needed. Second, the branches of the alternations in the grammar are interdependent: you need to know in the branch for pronouns that common nouns can be sub-categorized and what the other classes of nouns are. This interdependence prevents any modularity: if a branch is added to an alternation, all other branches need to be modified. It is also an inefficient mechanism as the number of pairs processed during unification is O(n^d) for a taxonomy of depth d with an average of n branches at each level.

TYPED FEATURES: The problem thus is that FUGs do not gracefully implement mutual exclusion and hierarchical relations. The system of nouns is a typical taxonomic relation. The deeper the taxonomy, the more problems we have expressing it using traditional FUGs. We propose extracting hierarchical information from the FUG and expressing it as a constraint over the symbols used. The solution is to define a subsumption relation over the set of constants C. One way to define this order is to define types of symbols, as illustrated in Figure 4. This is similar to ψ-terms defined in (Ait-Kaci, 1984). Once types and a subsumption relation are defined, the unification algorithm must be modified. The atoms X and Y can be unified if they are equal OR if one subsumes the other. The result is the most specific of X and Y. The formal semantics of this extension is detailed in (Elhadad, 1990). With this new definition of unification, taking advantage of the structure over constants, the grammar and the input become much smaller and more readable, as shown in Figure 4. There is no need to introduce artificial labels. The input FD describing a pronoun is a simple ((cat personal-pronoun)) instead of the redundant chain down the hierarchy ((cat noun) (noun pronoun) (pronoun personal)).

(define-type noun (pronoun proper common))
(define-type pronoun (personal-pronoun question-pronoun
                      demonstrative-pronoun quantified-pronoun))
(define-type common (count-noun mass-noun))

The grammar becomes:
((cat noun)
 (alt (((cat pronoun)
        (cat ((alt (question-pronoun personal-pronoun
                    demonstrative-pronoun quantified-pronoun)))))
       ((cat proper))
       ((cat common)
        (cat ((alt (count-noun mass-noun))))))))

And the input:
((cat personal-pronoun))

Figure 4: Using typed features

Type declarations:
(define-constituent determiner
  (definite distance demonstrative possessive))

Input FD describing a determiner:
(determiner ((definite yes)
             (distance far)
             (demonstrative no)
             (possessive no)))

Figure 5: A typed constituent

Because values can now share the same label CAT, mutual exclusion is enforced without adding any pair [l:NONE].³ Note that it is now possible to have several pairs [a:vi] in an FD F, but that the phrase "the a of F" is still non-ambiguous: it refers to the most specific of the vi.

³In this example, the grammar could be a simple flat alternation ((cat ((alt (noun pronoun personal-pronoun ... common mass-noun count-noun))))), but this expression would hide the structure of the grammar.
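As a concrete picture of this modified unification of atoms, the following Python sketch (illustrative only, not Elhadad's Common Lisp implementation; the table and function names are invented) mirrors the define-type declarations of Figure 4:

    # Illustrative sketch of typed atomic unification; not the FUF
    # implementation.  TYPES mirrors the define-type declarations of
    # Figure 4: each symbol maps to its immediate subtypes.
    TYPES = {
        'noun': ['pronoun', 'proper', 'common'],
        'pronoun': ['personal-pronoun', 'question-pronoun',
                    'demonstrative-pronoun', 'quantified-pronoun'],
        'common': ['count-noun', 'mass-noun'],
    }

    def subsumes(x, y):
        """x subsumes y iff x == y or y lies below x in the hierarchy."""
        return x == y or any(subsumes(s, y) for s in TYPES.get(x, []))

    def unify_atoms(x, y):
        """Atoms unify iff they are equal or one subsumes the other;
        the result is the more specific symbol, None signals failure."""
        if subsumes(x, y):
            return y
        if subsumes(y, x):
            return x
        return None

    assert unify_atoms('noun', 'personal-pronoun') == 'personal-pronoun'
    assert unify_atoms('proper', 'common') is None   # mutual exclusion

Under this regime the flat input ((cat personal-pronoun)) can unify directly with the (cat pronoun) branch of the grammar.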
Finally, the fact that there is a taxonomy is explicitly stated in the type definition section whereas it used to be buried in the code of the FUG. This taxonomy is used to document the grammar and to check the validity of input FDs.

4 TYPED CONSTITUENTS: THE FSET CONSTRUCT

A natural extension of the notion of typed features is to type constituents: typing a feature restricts its possible values; typing a constituent restricts the possible features it can have. Figure 5 illustrates the idea. The define-constituent statement allows only the four given features to appear under the constituent determiner. This statement declares what the grammar knows about determiners. Define-constituent is a completeness constraint as defined in LFGs (Kaplan & Bresnan, 1982); it says what the grammar needs in order to consider a constituent complete. Without this construct, FDs can only express partial information. Note that expressing such a constraint (a limit on the arity of a constituent) is impossible in the traditional FU formalism. It would be the equivalent of putting a NONE in the attribute field of a pair, as in NONE:NONE. In general, the set of features that are allowed under a certain constituent depends on the value of another feature. Figure 6 illustrates the problem. The fragment of grammar shown defines what inherent roles are defined for different types of processes (it follows the classification provided in (Halliday, 1985)). We also want to enforce the constraint that the set of inherent roles is "closed": for an action, the inherent roles are agent, medium and benef and nothing else. This constraint cannot be expressed by the standard FUG formalism. A define-constituent makes it possible, but nonetheless not very efficient: the set of possible features under the constituent inherent-roles depends on the value of the feature process-type. The first part of Figure 6 shows how the correct constraint can be implemented with define-constituent only: we need to exclude all the roles that are not defined for the process-type.

Without FSET:
(define-constituent inherent-roles
  (agent medium benef carrier attribute processor phenomenon))
((cat clause)
 (alt (((process-type action)
        (inherent-roles ((carrier NONE) (attribute NONE)
                         (processor NONE) (phenomenon NONE))))
       ((process-type attributive)
        (inherent-roles ((agent NONE) (medium NONE) (benef NONE)
                         (processor NONE) (phenomenon NONE))))
       ((process-type mental)
        (inherent-roles ((agent NONE) (medium NONE) (benef NONE)
                         (carrier NONE) (attribute NONE)))))))

With FSET:
((cat clause)
 (alt (((process-type action)
        (inherent-roles ((FSET (agent medium benef)))))
       ((process-type attributive)
        (inherent-roles ((FSET (carrier attribute)))))
       ((process-type mental)
        (inherent-roles ((FSET (processor phenomenon))))))))

Figure 6: The FSET Construct

Note that the problems are very similar to those encountered on the pronoun system: explosion of NONE branches, interdependent branches, long and inefficient grammar. To solve this problem, we introduce the construct FSET (feature set). FSET specifies the complete set of legal features at a given level of an FD. FSET adds constraints on the definition of the domain of admissible paths A. The syntax is the same as CSET.
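One plausible realization of the construct (an illustrative sketch with invented names, not the FUF code; it anticipates the unification-by-intersection behaviour spelled out just below) treats an FSET value as the set of features licensed at one level of an FD:

    # Sketch of the FSET construct.  Two FSETs unify by keeping only
    # the features licensed by both, and an FD is admissible when
    # every feature it actually defines falls inside the licensed set.
    def unify_fsets(fset1, fset2):
        return fset1 & fset2

    def check_fset(fd, fset):
        return all(attr in fset for attr in fd)

    inherent_roles = {'medium': 'book'}
    licensed = unify_fsets({'agent', 'medium', 'benef'},  # action processes
                           {'medium'})                    # middle verbs
    assert licensed == {'medium'}
    assert check_fset(inherent_roles, licensed)
    assert not check_fset({'agent': 'john'}, licensed)    # agent not licensed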
Note that all the features specified in FSET do not need to appear in an FD: only a subset of those can appear. For example, to define the class of middle verbs (e.g., "to shine", which accepts only a medium as inherent role and no agent), the following statement can be unified with the fragment of grammar given in Figure 6:

((verb ((lex "shine")))
 (process-type action)
 (voice-class middle)
 (inherent-roles ((FSET (medium)))))

The feature (FSET (medium)) can be unified with (FSET (agent medium benef)) and the result is (FSET (medium)). Typing constituents is necessary to implement the theoretical claim of LFG that the number of syntactic functions is limited. It also has practical advantages. The first advantage is good documentation of the grammar. Typing also allows checking the validity of inputs as defined by the type declarations. The second advantage is that it can be used to define more efficient data-structures to represent FDs. As suggested by the definition of FDs, two types of data-structures can be used to internally represent FDs: a flat list of equations (which is more appropriate for a language like Prolog) and a structured representation (which is more natural for a language like Lisp). When all constituents are typed, it becomes possible to use arrays or hash-tables to store FDs in Lisp, which is much more efficient. We are currently investigating alternative internal representations for FDs (cf. (Pereira, 1985, Karttunen, 1985, Boyer, 1988, Hirsh, 1988) for discussions of data-structures and compilation of FUGs).

5 CONCLUSION

Functional Descriptions are built from two components: a set C of primitives and a set L of labels. Traditionally, all structuring of FDs is done using strings of labels. We have shown in this paper that there is much to be gained by delegating some of the structuring to a set of primitives. The set C is no longer a flat set of symbols, but is viewed as a richly structured world. The idea of typed unification is not new (Ait-Kaci, 1984), but we have integrated it for the first time in the context of FUGs and have shown its linguistic relevance. We have also introduced the FSET construct, not previously used in unification, endowing FUGs with the capacity to represent and reason about complete information in certain situations. The structure of C can be used as a meta-description of the grammar: the type declarations specify what the grammar knows, and are used to check input FDs. It allows the writing of much more concise grammars, which perform more efficiently. It is a great resource for documenting the grammar. The extended formalism described in this paper is implemented in Common Lisp using the Union-Find algorithm (Elhadad, 1988), as suggested in (Huet, 1976, Ait-Kaci, 1984, Escalada-Imaz & Ghallab, 1988), and is used in several research projects (Smadja & McKeown, 1990, Elhadad et al., 1989, McKeown & Elhadad, 1990, McKeown et al., 1991). The source code for the unifier is available to other researchers. Please contact the author for further details. We are investigating other extensions to the FU formalism, and particularly, ways to modify control over grammars: we have developed indexing schemes for more efficient search through the grammar and have extended the formalism to allow the expression of complex constraints (set union and intersection). We are now exploring ways to integrate these later extensions more tightly to the FUG formalism.
ACKNOWLEDGMENTS This work was supported by DARPA under contract #N00039-84-C-0165 and NSF grant IRT-84-51438. I would like to thank Kathy McKeown for her guidance on my work and precious comments on earlier drafts of this paper. Thanks to Tony Weida, Frank Smadja and Jacques Robin for their help in shaping this paper. I also want to thank Bob Kasper for originally suggesting using types in FUGs.

REFERENCES

Ait-Kaci, Hassan. (1984). A Lattice-theoretic Approach to Computation Based on a Calculus of Partially Ordered Type Structures. Doctoral dissertation, University of Pennsylvania. UMI #8505030.
Appelt, Douglass E. (1985). Planning English Sentences. Studies in Natural Language Processing. Cambridge, England: Cambridge University Press.
Boyer, Michel. (1988). Towards Functional Logic Grammars. In Dahl, V. and Saint-Dizier, P. (Eds.), Natural Language Programming and Logic Programming, II. Amsterdam: North Holland.
Elhadad, Michael. (1988). The FUF Functional Unifier: User's Manual. Technical Report CUCS-408-88, Columbia University.
Elhadad, Michael. (1990). A Set-theoretic Semantics for Extended FUGs. Technical Report CUCS-020-90, Columbia University.
Elhadad, Michael, Seligmann, Doree D., Feiner, Steve and McKeown, Kathleen R. (1989). A Common Intention Description Language for Interactive Multi-media Systems. Presented at the Workshop on Intelligent Interfaces, IJCAI 89. Detroit, MI.
Escalada-Imaz, G. and M. Ghallab. (1988). A Practically Efficient and Almost Linear Unification Algorithm. Artificial Intelligence, 36, 249-263.
Halliday, Michael A.K. (1985). An Introduction to Functional Grammar. London: Edward Arnold.
Hirsh, Susan. (1988). P-PATR: A Compiler for Unification-based Grammars. In Dahl, V. and Saint-Dizier, P. (Eds.), Natural Language Understanding and Logic Programming, II. Amsterdam: North Holland.
Huet, George. (1976). Resolution d'Equations dans des langages d'ordre 1,2,...,ω. Doctoral dissertation, Universite de Paris VII, France.
Kaplan, R.M. and J. Bresnan. (1982). Lexical-functional grammar: A formal system for grammatical representation. In The Mental Representation of Grammatical Relations. Cambridge, MA: MIT Press.
Karttunen, Lauri. (July 1984). Features and Values. Coling84. Stanford, California: COLING, 28-33.
Karttunen, Lauri. (1985). Structure Sharing with Binary Trees. Proceedings of the 23rd annual meeting of the ACL. ACL, 133-137.
Karttunen, Lauri. (1986). Radical Lexicalism. Technical Report CSLI-86-66, CSLI - Stanford University.
Kasper, Robert. (1987). Systemic Grammar and Functional Unification Grammar. In Benson & Greaves (Eds.), Systemic Functional Perspectives on discourse: selected papers from the 12th International Systemic Workshop. Norwood, NJ: Ablex.
Kasper, Robert and William Rounds. (June 1986). A Logical Semantics for Feature Structures. Proceedings of the 24th meeting of the ACL. Columbia University, New York, NY: ACL, 257-266.
Kay, M. (1979). Functional Grammar. Proceedings of the 5th meeting of the Berkeley Linguistics Society. Berkeley Linguistics Society.
Kay, M. (1985). Parsing in Unification grammar. In Dowty, Karttunen & Zwicky (Eds.), Natural Language Parsing. Cambridge, England: Cambridge University Press.
Knight, Kevin. (March 1989). Unification: a Multidisciplinary Survey. Computing Surveys, 21(1), 93-124.
McKeown, Kathleen R. (1985). Text Generation: Using Discourse Strategies and Focus Constraints to Generate Natural Language Text. Studies in Natural Language Processing.
Cambridge, England: Cambridge University Press.
McKeown, Kathleen and Michael Elhadad. (1990). A Contrastive Evaluation of Functional Unification Grammar for Surface Language Generators: A Case Study in Choice of Connectives. In Cecile L. Paris, William R. Swartout and William C. Mann (Eds.), Natural Language Generation in Artificial Intelligence and Computational Linguistics. Kluwer Academic Publishers. (to appear, also available as Technical Report CUCS-407-88, Columbia University).
McKeown, Kathleen R. and Paris, Cecile L. (July 1987). Functional Unification Grammar Revisited. Proceedings of the ACL conference. ACL, 97-103.
McKeown, K., Elhadad, M., Fukumoto, Y., Lim, J., Lombardi, C., Robin, J. and Smadja, F. (1991). Natural Language Generation in COMET. In Dale, R., Mellish, C. and Zock, M. (Eds.), Proceedings of the second European Workshop on Natural Language Generation. To appear.
Paris, Cecile L. (1987). The Use of Explicit User Models in Text Generation: Tailoring to a User's Level of Expertise. Doctoral dissertation, Columbia University.
Pereira, Fernando. (1985). A Structure Sharing Formalism for Unification-based Formalisms. Proceedings of the 23rd annual meeting of the ACL. ACL, 137-144.
Pereira, Fernando and Stuart Shieber. (July 1984). The Semantics of Grammar Formalisms Seen as Computer Languages. Proceedings of the Tenth International Conference on Computational Linguistics. Stanford University, Stanford, CA: ACL, 123-129.
Sag, I.A. and Pollard, C. (1987). Head-driven phrase structure grammar: an informal synopsis. Technical Report CSLI-87-79, Center for the Study of Language and Information.
Shieber, Stuart. (1986). CSLI Lecture Notes. Vol. 4: An Introduction to Unification-Based Approaches to Grammar. Chicago, IL: University of Chicago Press.
Smadja, Frank A. and McKeown, Kathleen R. (1990). Automatically Extracting and Representing Collocations for Language Generation. Proceedings of the 28th annual meeting of the ACL. Pittsburgh: ACL.
Uszkoreit, Hans. (1986). Categorial Unification Grammars.
Winograd, Terry. (1983). Language as a Cognitive Process. Reading, MA: Addison-Wesley.
1990
20
DEFAULTS IN UNIFICATION GRAMMAR Gosse Bouma Research Institute for Knowledge Systems Postbus 463, 6200 AL Maastricht, The Netherlands e-mail: [email protected]

ABSTRACT Incorporation of defaults in grammar formalisms is important for reasons of linguistic adequacy and grammar organization. In this paper we present an algorithm for handling default information in unification grammar. The algorithm specifies a logical operation on feature structures, merging with the non-default structure only those parts of the default feature structure which are not constrained by the non-default structure. We present various linguistic applications of default unification.

1. INTRODUCTION

MOTIVATION. There are two, not quite unrelated, reasons for incorporating default mechanisms into a linguistic formalism. First, linguists have often argued that certain phenomena are described most naturally with the use of rules or other formal devices that make use of a notion of default (see, for instance, Gazdar 1987). The second reason is that the use of defaults simplifies the development of large and complex grammars, in particular, the development of lexicons for such grammars (Evans & Gazdar 1988). The latter suggests that the use of defaults is of particular relevance for those brands of Unification Grammar (UG) that are lexicalist, that is, in which the lexicon is the main source of grammatical information (such as Categorial Unification Grammar (Uszkoreit 1986, Calder et al. 1988) and Head-driven Phrase Structure Grammar (Pollard & Sag 1987)). We propose a method for incorporating defaults into UG, in such a way that it both extends the linguistic adequacy of UG and supports the formulation of rules, templates and lexical entries for many unification-based theories. In the next section, we define default unification, a logical operation on feature structures. It is defined for a language, FML*, which is in many respects identical to the language FML as defined in Kasper & Rounds (1986). Next, we come to linguistic applications of default unification. A linguistic notation is introduced, which can be used to describe a number of linguistically interesting phenomena, such as feature percolation, coordination, and many aspects of inflectional morphology. Furthermore, it can be used in the sense of Flickinger et al. (1985) to define exceptions to rules, non-monotonic specialization of templates or irregular lexical entries.

BACKGROUND. There are several proposals which hint at the possibility of adding default mechanisms to the linguistic formalisms and theories just mentioned. The fact that GPSG (Gazdar et al., 1985) makes heavy use of defaults has led to some research concerning the compatibility of GPSG with a formalism such as PATR-II (Shieber 1986a) and concerning the logical nature of the mechanisms used in GPSG (Evans 1987). Shieber (1986a) proposes an operation add conservatively, which adds information of a feature structure A to a feature structure B, in as far as this information is not in conflict with information in B. Suggestions for similar operations can be found in Shieber (1986b:59-61) (the overwrite option of PATR-II) and Kaplan (1987) (priority union). Flickinger et al. (1985) argue for the incorporation of default inheritance mechanisms in UG as an alternative for the template system of PATR-II. A major problem with attempts to define an operation such as default unification for complex feature structures is that there are at least two ways to think about this operation.
It can be defined as an operation which is like ordinary unification, with the exception that in case of a unification failure, the value of the non-default feature structure takes precedence (Kaplan 1987, Shieber 1986a). Another option is not to rely on unification failure, but to remove default information about a feature f already if the non-default feature structure constrains the contents of f in some way. This view underlies most of the default mechanisms used in GPSG.¹ The distinction between the two approaches is especially relevant for reentrant feature values. The definition presented in the next section is defined as an operation on arbitrary feature structures, and thus it is more general than the operations add conservatively or overwrite, in which only one sentence at a time (say, <X0 head> = <X1 head> or <subject case> = nominative) is added to a feature description. An obvious advantage of our approach is that overwriting a structure F with F' is equivalent to adding F as default information to F'. Default unification, as defined below, follows the approach in which default information is removed if it is constrained in the non-default structure. This decision is to a certain extent linguistically motivated (see section 3), but perhaps more important is the fact that we wanted to avoid the following problem. For arbitrary feature structures, there is not always a unique way to resolve a unification conflict, nor is it necessarily the case that one solution subsumes other solutions. Consider for instance the examples in (1).

(1)    default          non-default
 a.    <f> = a          <f> = <g>
       <g> = b
 b.    <f> = <g>        <f> = a
                        <g> = b

To resolve the conflict in (a), either one of the equations could be removed. In (b), either the fact that <g> = b or the reentrancy could be removed (in both cases, this would remove the implicit fact that <f> = b). An approach which only tries to remove the sources of a unification conflict will thus be forced to make arbitrary decisions about the outcome of the default unification procedure. At least for the purposes of grammar development, this seems to be an undesirable situation.¹

¹Actually, in GPSG both notions of default unification are used. In Shieber's (1986a) formulation of the Foot Feature Principle, for example, the operation add conservatively (which normally relies on unification failure) is restricted to features that are free (i.e. uninstantiated and not covarying with some other feature).
¹However, in Evans' (1987) version of Feature Specification Defaults, it is simply allowed that a category description has more than one 'stable expansion'.

2. DESCRIPTION OF THE ALGORITHM

THE LANGUAGE FML*. Default unification is defined in terms of a formal language for feature structures, based on Kasper & Rounds' (1986) language FML. FML* does not contain disjunction, however, and furthermore, equations of the form l : φ (where φ is an arbitrary formula) are replaced by equations of the form <p> : α (where α is atomic or NIL or TOP).

(2) φ ∈ FML* ::=
      NIL
      TOP
      a                    a ∈ A (the set of atoms)
      <p> : α              p ∈ L* (L the set of labels) and α ∈ A ∪ {TOP, NIL}
      [<p1>,...,<pn>]      each pi ∈ L*
      φ ∧ ψ                φ, ψ ∈ FML*

We assume that feature structures are represented as directed acyclic graphs (dags). The denotation D(φ) of a formula φ is the minimal element w.r.t. subsumption² in the set of dags that satisfy it.
The conditions under which a dag D satisfies a formula of FML* (where D/<p> is the dag that is found if we follow the path p through the dag D) are as follows:

(3) SEMANTICS OF FML*
 a. D ⊨ NIL                 always
 b. D ⊨ TOP                 never
 c. D ⊨ a                   if D = a
 d. D ⊨ <p> : α             if D/<p> is defined³ and D/<p> ⊨ α
 e. D ⊨ φ ∧ χ               if D ⊨ φ and D ⊨ χ
 f. D ⊨ [<p1>,...,<pn>]     if the values of all pi (1 ≤ i ≤ n) are equivalent.

²A dag D subsumes a dag D' if the set of formulae satisfying D' contains the set of formulae satisfying D (Eisele & Dörre, 1988: 287).
³D/<l> is defined iff l ∈ Dom(D). D/<lp> is defined iff D/<l> and D'/<p> are defined, where D' = D/<l>.

NORMAL FORM REQUIREMENTS. Default unification should be a semantically well-behaved operation, that is, the result of this operation should depend only on the denotation of the formulas involved. Since default unification is a non-monotonic operation, however, in which parts of the default information may disappear, and since there are in general many formulae denoting the same dag, establishing this is not completely trivial. In particular, we must make sure that the formula which provides the default information is in the following normal form:

(4) FML* Normal Form
A formula φ is in FML* NF iff:
 a. ∀E in φ, <p1p2> : a in φ : <p1> ∈ E → ∀p3 ∈ E : <p3p2> : a in φ
 b. ∀E1, E2 in φ : <p1p2> ∈ E2, <p1> ∈ E1 → ∀p3 ∈ E1 : <p3p2> ∈ E2
 c. ∀E in φ, there is no <p> ∈ E such that <pl> is realized in φ.
 d. ∀E in φ, there is no <p> ∈ E such that <p> : a (a ∈ A) is in φ.

(5) A path <pl> is realized in φ iff <pr> is defined in D(φ) (l, r ∈ L) (cf. Eisele & Dörre, 1988: 288).

For every formula φ in FML*, there is a formula φ' in FML* NF which is equivalent to it w.r.t. unification, that is, for which the following holds:

(6) ∀χ ∈ FML* : φ ∧ χ = TOP ⟺ φ' ∧ χ = TOP

Note that this does not imply that φ and φ' have the same denotation. The two formulae below, for example, are equivalent w.r.t. unification, yet denote different dags:

(7) a. <f> : a ∧ [<f>,<g>]
    b. <f> : a ∧ <g> : a

For conditions (4a,b), it is easy to see that (6) holds (it follows, for instance, from the equivalence laws (21) and (22) in Kasper & Rounds, 1986: 261). Condition (4c) can be met by replacing every occurrence of an equivalence class [<p1>,...,<pn>] in a formula φ by a conjunction of equivalences [<p1l>,...,<pnl>] for every <pil> (1 ≤ i ≤ n) realized in D(φ). For example, if L = {f,g}, (8b) is the NF of (8a).

(8) a. [<f>,<g>] ∧ <ff> : NIL
    b. [<ff>,<gf>] ∧ [<fg>,<gg>] ∧ <ff> : NIL

Condition (4d) can be met by eliminating equivalence classes of paths leading to an atomic value. Thus, (7b) is the NF of (7a). Note that the effect of (4c,d) is that the value of every path which is a member of some equivalence class is NIL. A default formula has to be in FML* NF for two reasons. First, all information which is implicit in a formula should be represented explicitly, so we can check easily which parts of a formula need to be removed to avoid potential unification conflicts with the non-default formula. This is guaranteed by (4a,b). Second, all reentrant paths should have NIL as value. This is guaranteed by (4c,d) and makes it possible to replace an equivalence class by a weaker set of equations, in which arbitrarily long extensions of the old pathnames may occur (if some path would have a value other than NIL, certain extensions could lead to inconsistent results).

LAWS FOR DEFAULT UNIFICATION. Default unification is an operation which takes two formulas as arguments, representing default and non-default information respectively.
The dag denoted by the resultant formula is subsumed by that of the non-default argument, but not necessarily by that of the default argument. The laws for default unification (defined as Default ⊕ Non-default = Result, where Default is in FML* NF) are listed below.

(9) DEFAULT UNIFICATION:
 a. φ ⊕ NIL = φ
    φ ⊕ TOP = TOP
    NIL ⊕ φ = φ
    TOP ⊕ φ = φ
 b. a ⊕ φ = φ
    φ ⊕ a = a
 c. <p> : α ⊕ φ = φ,           if D(φ) ⊨ <p'> : a, p' a prefix of p, a ∈ A,
                = φ,           if D(φ) ⊨ <pp'> : a,
                = φ,           if ∃p' ∈ E : D(φ) ⊨ E and p' is a prefix of p,
                = <p> : α ∧ φ, otherwise.
 d. E ⊕ φ = (E−E')//Σ ∧ φ, where E' is {<p> ∈ E | D(φ) ⊨ E'' and p' ∈ E''} ∪ {<p> ∈ E | D(φ) ⊨ <p'> : a} (p' a prefix of p, a ∈ A) and Σ is {<p'> | D(φ) ⊨ <pp'> : a, and p ∈ E}.
 e. (ψ ∧ χ) ⊕ φ = φ,                     if ψ ∧ χ = TOP,
               = (ψ ⊕ φ) ∧ (χ ⊕ φ),     otherwise.

This definition of default unification removes all default information which might lead to a unification conflict. Furthermore, it is designed in such a way that the order in which information is removed is irrelevant (note that otherwise the second case in (9e) would be invalid). The first two cases of (9c) are needed to remove all sentences <p> : α which refer to a path which is blocked or which cannot receive an atomic value in φ. The third case in (9c) is needed for situations such as (10).

(10) (<fg> : a ∧ <hg> : b) ⊕ [<f>,<h>]

In (9d), we first remove from an equivalence class all paths which have a prefix that is already in an equivalence class or which has an atomic value. The result of this step is E−E'. Next, we modify the equivalence class, so that it allows exceptions (i.e. the possibility of non-unifiable values) for all paths which are extensions of paths in E−E' and are defined in φ. We can think of modified equivalence classes as abbreviations for a set of (unmodified) equivalence classes:

(11) [<p1>,...,<pn>]//Σ = ψ, where ψ is the conjunction of all equivalence classes [<p1pl>,...,<pnpl>], such that pl is not defined in Σ, but pr is in Σ, for some l, r ∈ L.

An example should make this clearer:

(12) [<f>,<g>,<h>] ⊕ (<g> : a ∧ <fg> : b) = [<f>,<h>]//{<g>} ∧ (<g> : a ∧ <fg> : b)

The result of default unification in this case is that one element (<g>) is removed from the default equivalence class since it is constrained in the non-default information. Furthermore, the equivalence is modified, so that it allows for exceptions for the paths <fg> and <hg>. Applying the rule in (11), and assuming that L = {f,g,h}, we conclude that

(13) [<f>,<h>]//{<g>} = [<ff>,<hf>] ∧ [<fh>,<hh>].

Note that the replacement of modified equivalence classes by ordinary equivalence classes is always possible, and thus the result of (9d) is equivalent to a formula in FML*. Finally, (9e) says that, given a consistent default formula, the order in which default information is added to the non-default formula is unimportant.¹ (This does not hold for inconsistent default formulae, however, since default unification with the individual conjuncts might filter out enough information to make the resultant formula a consistent extension of the non-default formula, whereas TOP ⊕ φ = φ.) The monotonicity properties of default unification are listed below (where ≤ is subsumption):

(14) a. φ ≤ χ ⊕ φ (but not χ ≤ χ ⊕ φ)
     b. χ ≤ χ' → (χ ⊕ φ) ≤ (χ' ⊕ φ) (but not φ ≤ φ' → (χ ⊕ φ) ≤ (χ ⊕ φ'))

(14a) says that default unification is monotonic addition of information to the non-default information. (14b) says that the function as a whole is monotonic only w.r.t. the default argument: adding more default information leads to extensions of the result. Adding non-default information is non-monotonic, however, as this might cause more of the default information to get removed or overwritten.

¹This should not be confused with the (invalid) statement that ψ ⊕ (χ ⊕ φ) = χ ⊕ (ψ ⊕ φ).
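Restricted to flat FDs with atomic values, so that reentrancies (and hence laws (9d,e)) do not arise, the behaviour of ⊕ can be pictured in a few lines of Python. This is an illustrative sketch with invented names, not the Prolog implementation described below:

    # Sketch of default unification for flat FDs with atomic values
    # only, i.e. laws (9a-c) without reentrancies.
    def default_unify(default, nondefault):
        """Add to the non-default FD every default feature whose
        attribute the non-default FD leaves completely unconstrained;
        a default feature is dropped as soon as the attribute is
        constrained at all, whether or not the values would clash."""
        result = dict(nondefault)
        for attr, value in default.items():
            if attr not in nondefault:
                result[attr] = value
        return result

    defaults = {'num': 'sg', 'case': 'nom'}
    assert default_unify(defaults, {'case': 'acc'}) == \
           {'num': 'sg', 'case': 'acc'}          # overwriting a feature
    assert default_unify(defaults, {}) == defaults   # phi + NIL = phi

Adding a feature to the second argument can only remove material contributed by the first, which is exactly the non-monotonicity in the non-default argument noted in (14b).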
The laws in (9) prove that formulae containing the ⊕-operator can always be reduced to standard formulae of FML*. This implies that formulae using the ⊕-operator can still be interpreted as denoting dags. Furthermore, it follows that addition of default unification to a unification-based formalism should be seen only as a way to increase the expressive power of tools used in defining the grammar (and thus, according to Dörre et al. (1990), default unification would be an 'off line' extension of the formalism, that is, its effects can be computed at compile time).

A NOTE ON IMPLEMENTATION. We have implemented default unification in Prolog. Feature structures are represented by open-ended lists (containing elements of the form Label=Value), atoms and variables to represent complex feature structures, atomic values and reentrancies respectively (see Gazdar & Mellish, 1989). This representation has the advantage that it corresponds to FML* NF.

(15) a. [f=X, g=X | _Y]
     b. [f=a, g=a | _Y]
     c. [f=[h=a | X1], g=[h=a | X1] | _Y]
     d. [f=[h=a | X1], g=[h=_Z | X1] | _Y]

If we unify (15a) with [f=a | _Y1], we get (15b), in which the value of g has been updated as well. Thus, the requirements of (4a,b) are always met, and furthermore, the reentrancy as such between f and g is no longer visible (condition (4c)). If we unify (15a) with [f=[h=a | _X2] | _Y3], we get (15c), in which the variable X has been replaced by X1, which can be interpreted as ranging over all paths that are realized but not defined under f (condition (4d)). Note also that this representation has the advantage that we can define a reentrancy for all realized features, without having to specify the set of possible features or expanding the value of f into a list containing all these features. If we default unify (15a) with [f=[h=a | _X2] | _X3] as non-default information, for instance, the result is representable as (15d). The reentrancy for all undefined features under f is represented by X1. The constant NIL of FML* is represented as a Prolog variable (_Z in this case). Thus, the seemingly space-consuming procedure of bringing a formula into FML* NF and transforming the output of (9d) into FML* is avoided completely. The actual default unification procedure is a modified version of the merge operation defined in Dörre & Eisele (1986).

3. LINGUISTIC APPLICATIONS

Default unification can be used to extend the standard PATR-II (Shieber et al., 1983) methods for defining feature structures. In the examples, we freely combine default and non-default information (prefixed by '!') in template definitions.

(16) a. DET: ( !<cat arg> = N
               !<cat val> = NP
               <cat dir> = right
               <cat arg> = <cat val>
               <cat val num> = sg
               <cat val case> = nom ).
     b. NP: ( <cat> = noun
              <bar> = 2 ).
     c. N: ( <cat> = noun
             <bar> = 1 ).

(16) describes a fragment of Categorial Unification Grammar (Uszkoreit, 1986, Calder et al. 1988, Bouma, 1988). The corresponding feature structure for a definition such as (16a) is determined as follows: first, all default information and all non-default information is unified separately, which results in two feature structures (17a,b). The resulting two feature structures are merged by means of default unification (17c).

(17) a. [cat = [ dir = right
                 val = <1> [ num  = sg
                             case = nom ]
                 arg = <1> ]]
     b. [cat = [ val = [ cat = noun
                         bar = 2 ]
                 arg = [ cat = noun
                         bar = 1 ]]]
     c. [cat = [ dir = right
                 val = [1] [ *cat* = noun
                             *bar* = 2
                             num   = sg
                             case  = nom ]
                 arg = [1] [ *cat* = noun
                             *bar* = 1
                             num   = sg
                             case  = nom ]]]

In (17c) the equivalence <cat val> = <cat arg> had to be replaced by a weaker set of equivalences, which holds for all features under val or arg except cat and bar. We represent this by using []-bracketed indices instead of <>, and by marking the attributes which are exceptions in italics (here *cat* and *bar*). Two things are worth noticing. First of all, the unification of default information prior to merging it with the non-default information guarantees that all default information must be unifiable, and thus it eliminates the possibility of inheritance conflicts inside template definitions. Second, the distinction between default and non-default information is relevant only in definitions, not in the corresponding feature structures. This makes the use of the !-operator completely local: if a definition contains a template, we can replace this template by the corresponding feature structure and we do not need to worry about the fact that this template might contain the !-operator. The notation just introduced increases the expressive power of standard methods for the description of feature structures and can be used for an elegant treatment of several linguistic phenomena.

NON-MONOTONIC INHERITANCE OF INFORMATION IN TEMPLATES. The use of default unification enables us to use templates even in those cases where not all the information in the template is compatible with the information already present in the definition. German transitive verbs normally take an accusative NP as argument, but there are some verbs which take a dative or genitive NP as argument. This is easily accounted for by defining the case of the argument of these verbs and inheriting all other information from the template TV.

(18) a. TV: ( <cat val> = VP
              <cat arg> = NP
              <cat arg case> = acc ).
     b. helfen (to help): ( TV
              !<cat arg case> = dat ).
        gedenken (to commemorate): ( TV
              !<cat arg case> = gen ).

SPECIALIZATION OF REENTRANCIES. An important function of default unification is that it allows us to define exceptions to the fact that two reentrant feature structures always have to denote exactly the same feature structures. There is a wide class of linguistic constructions which seems to require such mechanisms. Specifiers in CUG can be defined as functors which take a constituent of category C as argument, and return a constituent of category C, with the exception that one or more specific feature values are changed (see Bach, 1983, Bouma, 1988). Examples of such categories are determiners (see (16a)), complementizers and auxiliaries.

(19) a. that: ( <cat val> = <cat arg>
                <cat arg> = S
                <cat arg vform> = fin
                !<cat arg comp> = none
                !<cat val comp> = that ).
     b. will: ( <cat val> = <cat arg>
                <cat arg> = VP
                <cat val> = VP
                !<cat arg vform> = bse
                !<cat val vform> = fin ).

Note that the equation <cat val> = <cat arg> will cause all additional features on the argument which are not explicitly mentioned in the non-default part of the definition to percolate up to the value. Next, consider coordination of NPs.

(20) X0 → X1 X2 X3
     ( <X2 cat> = conj
       <X0> = <X1>
       <X0> = <X3>
       <X0 cat> = np
       <X2 wform> = and
       !<X0 num> = plu
       !<X1 num> = NIL
       !<X3 num> = NIL ).

(20) could be used as a rule for conjunction of NPs in UG. It requires identity between the mother and the two coordinated elements. However, requiring that the three nodes be unifiable would be too strict.
The number of a conjoined NP is always plural and does not depend on the number of the coordinated NPs. Furthermore, the number of two coordinated elements need not be identical. The non-default information in (20) takes care of this. The effect of this statement is that adding the default information <X0> = <X1> and <X0> = <X3> will result in a feature structure in which X0, X1 and X3 are unified, except for their values for <num>. We are not interested in the num-values of the conjuncts, so they are set to NIL (which should be interpreted as in section 2). The num-value of the result is always plu.

INFLECTIONAL MORPHOLOGY. When seen from a CUG perspective, the categories of inflectional affixes are comparable to those of specifiers. The plural suffix -s for forming plural nouns can, for instance, be encoded as a function from (regular) singular nouns into identical, but plural, nouns. Thus, we get the following categorization:

(21) -s: ( <cat val> = <cat arg>
           <cat arg cat> = noun
           <cat arg class> = regular
           !<cat arg num> = sg
           !<cat val num> = plu ).

Again, all additional information present on the argument which is not mentioned in the non-default part of the definition is percolated up to the value automatically.

LEXICAL DEFAULTS. The lexical feature specification defaults of GPSG can also be incorporated. Certain information holds for most lexical items of a certain category, but not for phrases of this category. A unification-based grammar that includes a morphological component (see, for instance, Calder, 1989 and Evans & Gazdar, 1989) would probably list only (regular) root forms as lexical items. For regular nouns, for instance, only the singular form would be listed in the lexicon. Such information can be added to lexicon definitions by means of a lexical default rule:

(22) a. N ==> ( 3SG
                <class> = regular ).
     b. cow = N.
        sheep = ( N
                  <num> = NIL
                  <class> = irregular ).

The interpretation of A ==> B is as follows: if the definition D of a lexical item is unifiable with A, then extend D to B ⊕ D. Thus, the lexical entry cow would be extended with all the information in the default rule above, whereas the lexical entry for sheep would only be extended with the information that <person> = 3. Note that adding the default information to the template for N directly, and then overwriting it in the irregular cases, is not a feasible alternative, as this would force us to distinguish between the template N if used to describe nouns and the template N if used in complex categories such as NP/N or N/N (i.e. for determiners or adjectives it is not typically the case that they combine only with regular and singular nouns).

4. CONCLUSIONS

We have presented a general definition for default unification. The fact that it does not focus on the resolution of feature conflicts alone makes it possible to define default unification as an operation on feature structures, rather than as an operation adding one equation at a time to a given feature description. This generalization makes it possible to give a uniform treatment of such things as adding default information to a template, overwriting of feature values and lexical default rules. We believe that the examples in section 3 demonstrate that this is a useful extension of UG, as it supports the definition of exceptions, the formulation of more adequate theories of feature percolation, and the extension of UG with a morphological component.

REFERENCES

Bach, Emmon 1983 Generalized Categorial Grammars and the English Auxiliary.
In F. Heny and B. Richards (eds.) Linguistic Categories, Vol II, Dordrecht, Reidel.
Bouma, Gosse 1988 Modifiers and Specifiers in Categorial Unification Grammar, Linguistics, vol 26, 21-46.
Calder, Jonathan 1989 Paradigmatic Morphology. Proceedings of the fourth Conference of the European Chapter of the ACL, University of Manchester, Institute of Science and Technology, 58-65.
Calder, Jo; Klein, Ewan & Zeevat, Henk 1988 Unification Categorial Grammar: a concise, extendable grammar for natural language processing. Proceedings of Coling 1988, Hungarian Academy of Sciences, Budapest, 83-86.
Dörre, Jochen; Eisele, Andreas; Wedekind, Jürgen; Calder, Jo; Reape, Mike 1990 A Survey of Linguistically Motivated Extensions to Unification-Based Formalisms. ESPRIT Basic Research Action 3175, Deliverable R3.1.A.
Eisele, Andreas & Dörre, Jochen 1986 A Lexical-Functional Grammar System in Prolog. Proceedings of COLING 86, Institut für angewandte Kommunikations- und Sprachforschung, Bonn, 551-553.
Eisele, Andreas & Dörre, Jochen 1988 Unification of Disjunctive Feature Descriptions. Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics, State University of New York, Buffalo, NY, 286-294.
Evans, Roger 1987 Towards a Formal Specification of Defaults in GPSG. In E. Klein & J. van Benthem (eds.), Categories, Polymorphism and Unification. University of Edinburgh, Edinburgh / University of Amsterdam, Amsterdam, 73-93.
Evans, Roger & Gazdar, Gerald 1989 Inference in DATR. Proceedings of the fourth Conference of the European Chapter of the ACL, University of Manchester, Institute of Science and Technology, 66-71.
Flickinger, Daniel; Pollard, Carl & Wasow, Thomas 1985 Structure-Sharing in Lexical Representation. Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, University of Chicago, Chicago, Illinois, 262-267.
Gazdar, Gerald 1987 Linguistic Applications of Default Inheritance Mechanisms. In P. Whitelock, H. Somers, P. Bennett, R. Johnson, and M. McGee Wood (eds.), Linguistic Theory and Computer Applications. Academic Press, London, 37-68.
Gazdar, Gerald; Klein, Ewan; Pullum, Geoffrey; Sag, Ivan 1985 Generalized Phrase Structure Grammar. Blackwell, London.
Gazdar, Gerald & Mellish, Chris 1989 Natural Language Processing in Prolog. An Introduction to Computational Linguistics. Addison-Wesley, Reading, MA.
Kaplan, Ronald 1987 Three Seductions of Computational Psycholinguistics. In P. Whitelock, H. Somers, P. Bennett, R. Johnson, and M. McGee Wood (eds.), Linguistic Theory and Computer Applications. Academic Press, London, 149-188.
Kasper, Robert & Rounds, William 1986 A Logical Semantics for Feature Structures. Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics, Columbia University, New York, NY, 257-266.
Pollard, Carl & Sag, Ivan 1987 Information-Based Syntax and Semantics, vol I: Fundamentals. CSLI Lecture Notes 13, University of Chicago Press, Chicago.
Shieber, Stuart; Uszkoreit, Hans; Pereira, Fernando; Robinson, Jane & Tyson, Mabry 1983 The Formalism and Implementation of PATR-II. In B. Grosz & M. Stickel (eds.) Research on Interactive Acquisition and Use of Knowledge, SRI International, Menlo Park, CA.
Shieber, Stuart 1986a A Simple Reconstruction of GPSG. Proceedings of COLING 1986, Institut für angewandte Kommunikations- und Sprachforschung, Bonn, 211-215.
Shieber, Stuart 1986b An Introduction to Unification-based Approaches to Grammar. CSLI Lecture Notes 4, University of Chicago Press, Chicago.
Uszkoreit, Hans 1986 Categorial Unification Grammars. Proceedings of COLING 1986, Institut für angewandte Kommunikations- und Sprachforschung, Bonn, 187-194.
1990
21
EXPRESSING DISJUNCTIVE AND NEGATIVE FEATURE CONSTRAINTS WITH CLASSICAL FIRST-ORDER LOGIC. Mark Johnson, Cognitive and Linguistic Sciences, Box 1978, Brown University, Providence, RI 02912. [email protected]

ABSTRACT In contrast to the "designer logic" approach, this paper shows how the attribute-value feature structures of unification grammar and constraints on them can be axiomatized in classical first-order logic, which can express disjunctive and negative constraints. Because only quantifier-free formulae are used in the axiomatization, the satisfiability problem is NP-complete.

INTRODUCTION. Many modern linguistic theories, such as Lexical-Functional Grammar [1], Functional Unification Grammar [12], Generalized Phrase-Structure Grammar [6], Categorial Unification Grammar [20] and Head-driven Phrase-Structure Grammar [18], replace the atomic categories of a context-free grammar with a "feature structure" that represents the syntactic and semantic properties of the phrase. These feature structures are specified in terms of constraints that they must satisfy. Lexical entries constrain the feature structures that can be associated with terminal nodes of the syntactic tree, and phrase structure rules simultaneously constrain the feature structures that can be associated with a parent and its immediate descendants. The tree is well-formed if and only if all of these constraints are simultaneously satisfiable. Thus for the purposes of recognition a method for determining the satisfiability of such constraints is required: the nature of the satisfying feature structures is of secondary importance.

Most work on unification-based grammar (including the references cited above) has adopted a type of feature structure called an attribute-value structure. The elements in an attribute-value structure come in two kinds: constant elements and complex elements. Constant elements are atomic entities with no internal structure: i.e. they have no attributes. Complex elements have zero or more attributes, whose values may be any other element in the structure (including a complex element), and any element can be the value of zero, one or several attributes. Attributes are partial: it need not be the case that every attribute is defined for every complex element. The set of attribute-value structures partially ordered by the subsumption relation (together with an additional entity ⊤ that every attribute-value structure subsumes) forms a lattice, and the join operation on this lattice is called the unification operation [19].

Example: (from [16]). The attribute-value structure (1) has six complex elements labelled e1 ... e6 and two constant elements, singular and third. The complex element e1 has two attributes, subj and pred, the values of which are the complex elements e2 and e3 respectively.

(1) [dag diagram] e1: subj → e2, pred → e3;  e2: agr → e4;  e3: verb → e5;  e4: number → singular;  e5: agr → e6;  e6: person → third

(2) [dag diagram] e7: subj → e8, pred → e9;  e8: agr → e11;  e9: verb → e10;  e10: agr → e11

The unification of elements e1 of (1) and e7 of (2) results in the attribute-value structure (3), the minimal structure in the subsumption lattice which subsumes both (1) and (2).

(3) [dag diagram] the result of merging the corresponding elements of (1) and (2), in which the shared agr value carries both number singular and person third

If constraints on attribute-value structures are restricted to conjunctions of equality constraints (i.e.
requirements that the value of a path of attributes is equal to a constant or the value of another path), then the set of satisfying attribute-value structures is the principal filter generated by the minimal structure that satisfies the constraints. The generator of the satisfying principal filter of the conjunction of such constraints is the unification of the generators of the satisfying principal filters of each of the conjuncts. Thus the set of attribute-value structures that simultaneously satisfy a set of such constraints can be characterized by computing the unification of the generators of the corresponding principal filters, and the constraints are satisfiable iff the resulting generator is not ⊤ (i.e. ⊤ represents unification failure). Standard unification-based parsers use unification in exactly this way. When disjunctions and negations of constraints are permitted, the set of satisfying attribute-value structures does not always form a principal filter [11], so the simple unification-based technique for determining the satisfiability of feature structure constraints must be extended. Kasper and Rounds [11] provide a formal framework for investigating such constraints by reviving a distinction originally made (as far as I am aware) by Kaplan and Bresnan [10] between the language in which feature structure constraints are expressed and the structures that satisfy these constraints. Unification is supplanted by conjunction of constraints, and disjunction and negation appear only in the constraint language, not in the feature structures themselves (an exception is [3] and [2], where feature bundles may contain negative arcs). Research in this genre usually follows a general pattern: an abstract model for feature structures and a specialized language for expressing constraints on such structures are "custom-crafted" to treat some problematic feature constraint (such as negative feature constraints). Table 1 sketches some of the variety of feature structure models and constraint types that previous analyses have used.

Table 1: Constraint Languages and Feature Structure Models.

  Author                         | Model of Feature Structures              | Constraint Language Features
  Kaplan and Bresnan [10]        | Partial functions                        | Disjunction, negation, set-values
  Pereira and Shieber [17]       | Information Domain F = [A → F] + C       |
  Kasper and Rounds [11]         | Acyclic finite automata                  | Disjunction
  Moshier and Rounds [14]        | Forcing sets of acyclic finite automata  | Intuitionistic negation
  Dawar and Vijayashankar [3]    | Acyclic finite automata                  | Three truth values, negation
  Gazdar, Pullum, Carpenter,     | Category structures                      | Based on propositional modal logic
  Klein, Hukari and Levine [7]   |                                          |
  Johnson [9]                    | "Attribute-value structures"             | Classical negation, disjunction

This paper follows Kasper and Rounds and most proposals listed in Table 1 by distinguishing the constraint language from feature structures, and restricts disjunction and negation to the constraint language alone. It differs by not proposing a custom-built "designer logic" for describing feature structures, but instead uses standard first-order logic to axiomatize attribute-value structures and express constraints on them, including disjunctive and negative constraints.
The resulting system is a simplified version of Attribute-Value Logic [9] which does not allow values to be used as attributes (although it would be easy to do this). The soundness and completeness proofs in [9] and other papers listed in Table 1 are not required here because these results are well-known properties of first-order logic. Since both the axiomatization and the constraints are actually expressed in a decidable class of first-order formulae, viz. quantifier-free formulae with equality (the close relationship between quantifier-free formulae and attribute-value constraints was first noted in Kaplan and Bresnan [10]), the decidability of feature structure constraints follows trivially. In fact, because the satisfiability problem for quantifier-free formulae is NP-complete [15] and the relevant portion of the axiomatization and translation of constraints can be constructed in polynomial time, the satisfiability problem for feature constraints (including negation) is also NP-complete.

AXIOMATIZING ATTRIBUTE-VALUE STRUCTURES

This section shows how attribute-value structures can be axiomatized using first-order quantifier-free formulae with equality. In the next section we see that equality and inequality constraints on the values of the attributes can also be expressed as such formulae, so systems of these constraints can be solved using standard techniques such as the Congruence Closure algorithm [15], [5].

The elements of the attribute-value structure, both constant and complex, together with an additional element ⊥, constitute the domain of individuals of the intended interpretation. The attributes are unary partial functions over this domain (i.e. mappings from elements to elements) which are always undefined on constant elements. We capture this partiality by the standard technique of adding an additional element ⊥ to the domain to serve as the value 'undefined'. Thus a(x) = ⊥ if x does not have an attribute a; otherwise a(x) is the value of x's attribute a.

We proceed by specifying the conditions an interpretation must satisfy to be an attribute-value structure. Modelling attributes with functions automatically requires attributes to be single-valued, as required. Axiom schema (A1) describes the properties of constants: it expresses the requirement that constants have no attributes. Axiom schema (A2) requires that distinct constant symbols denote distinct elements in any satisfying model. Without (A2) it would be possible for two distinct constant elements, say singular and plural, to denote the same individual. (Such a schema is required because we are concerned with satisfiability rather than validity, as in e.g. logic programming.) Axiom schemata (A3) and (A4) state the properties of the "undefined value" ⊥: it has no attributes, and it is distinct from all of the constants (and from all other elements as well; this will be enforced by the translation of equality constraints). This completes the axiomatization. The axiomatization is finite iff the sets of attribute symbols and constant symbols are finite; in the intended computational and linguistic applications this is always the case. The claim is that any interpretation which satisfies all of these axioms is an attribute-value structure; i.e. (A1)-(A4) constitute a definition of attribute-value structures.
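As a quick illustration of what the axioms require, here is a small sketch (the names and representation are mine, not the paper's) that checks schemata (A1)-(A4) for the finite interpretation corresponding to (1), with each attribute encoded as a dictionary and missing values defaulting to the undefined element.

BOTTOM = "_|_"                      # the 'undefined' element
CONSTANTS = {"singular", "third"}

# The interpretation of (1): each attribute maps elements to elements;
# a missing key means the attribute's value is BOTTOM.
ATTRIBUTES = {
    "subj":   {"e1": "e2"},
    "pred":   {"e1": "e3"},
    "agr":    {"e2": "e4", "e5": "e6"},
    "verb":   {"e3": "e5"},
    "number": {"e4": "singular"},
    "person": {"e6": "third"},
}

def attr(a, x):
    return ATTRIBUTES[a].get(x, BOTTOM)

def is_attribute_value_structure():
    for a in ATTRIBUTES:
        for c in CONSTANTS:
            if attr(a, c) != BOTTOM:    # (A1): constants have no attributes
                return False
        if attr(a, BOTTOM) != BOTTOM:   # (A3): the undefined element is a sink
            return False
    # (A2) and (A4) hold automatically here: distinct Python strings
    # denote distinct individuals, and BOTTOM differs from every constant.
    return True

print(is_attribute_value_structure())   # True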
Example (continued): The interpretation corresponding to the attribute-value structure (1) has as its domain the set D = {e1, ..., e6, singular, third, ⊥}. The attributes denote functions from D to D. For example, agr denotes the function whose value is ⊥ except on e2 and e5, where its values are e4 and e6 respectively. It is straightforward to check that all the axioms hold in the three attribute-value structures given above.

In fact, any model for these axioms can be regarded as a (possibly infinite and disconnected) attribute-value feature structure, where the model's individuals are the elements or nodes, the unary functions define how attributes take their values, the constant symbols denote constant elements, and ⊥ is a sink state.

EXPRESSING CONSTRAINTS AS QUANTIFIER-FREE FORMULAE

Various notations are currently used to express attribute-value constraints: the constraint requiring that the value of attribute a of (the entity denoted by) x is (the entity denoted by) y is written as <x a> = y in PATR-II [19], as (x a) = y in LFG [10], and as x(a) = y in [9], for example. At the risk of further confusion we use another notation here, and write the constraint requiring that the value of attribute a of x is y as a(x) ≈ y. This notation emphasises the fact that attributes are modelled by functions, and simplifies the definition of '≈'.

Clearly, for an attribute-value structure to satisfy the constraint u ≈ v, u and v must denote the same element, i.e. u = v. However this is not a sufficient condition: num(x) ≈ num(y) is not satisfied if num(x) or num(y) is ⊥. Thus it is necessary that the arguments of '≈' denote identical elements distinct from the denotation of ⊥. Even though there are infinitely many instances of the schema in (A5) (since there are infinitely many terms), this is not problematic, since u ≈ v can be regarded as an abbreviation for u = v ∧ u ≠ ⊥. Thus equality constraints on attribute-value structures can be expressed with quantifier-free formulae with equality. We use classically interpreted boolean connectives to express conjunctive, disjunctive and negative feature constraints.

Example (continued): Suppose each variable xi denotes the corresponding ei, 1 ≤ i ≤ 11, of (1) and (2). Then subj(x1) ≈ x2, number(x4) ≈ singular and number(agr(x2)) ≈ number(x4) are true, for example. Since e8 and e11 are distinct elements, x8 ≈ x11 is false and hence x8 ≉ x11 is true. Thus '≉' means "not identical to" or "not unified with", rather than "not unifiable with". Further, since agr(x1) = ⊥, agr(x1) ≈ agr(x1) is false, even though agr(x1) = agr(x1) is true. Thus t ≈ t is not a theorem because of the possibility that t = ⊥.

SATISFACTION AND UNIFICATION

Given any two formulae φ and ψ, the set of models that satisfy both φ and ψ is exactly the set of models that satisfy φ ∧ ψ. That is, the conjunction operation can be used to describe the intersection of two sets of models, each of which is described by a constraint formula, even if these satisfying models do not form principal filters [11], [9]. Since conjunction is idempotent, associative and commutative, the satisfiability of a conjunction of constraint formulae is independent of the order in which the conjuncts are presented, irrespective of whether they contain negation. Thus the evaluation (i.e. simplification) of constraints containing negation can be freely interleaved with other constraints.
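The truth definition for '≈' is easy to operationalize. Below is a sketch (mine, with illustrative names) that evaluates constraint terms against the interpretation of (1), treating a(x) ≈ y as "both sides denote the same element and that element is not ⊥"; boolean combinations of constraints can then be evaluated with Python's own connectives.

BOTTOM = "_|_"
ATTRIBUTES = {
    "subj": {"e1": "e2"}, "pred": {"e1": "e3"},
    "agr": {"e2": "e4", "e5": "e6"}, "verb": {"e3": "e5"},
    "number": {"e4": "singular"}, "person": {"e6": "third"},
}
ASSIGNMENT = {f"x{i}": f"e{i}" for i in range(1, 7)}   # xi denotes ei

def attr(a, x):
    return ATTRIBUTES[a].get(x, BOTTOM)

def denote(term):
    if isinstance(term, str):              # a variable or a constant symbol
        return ASSIGNMENT.get(term, term)
    a, arg = term                          # ('agr', t) stands for agr(t)
    return attr(a, denote(arg))

def approx_eq(u, v):                       # the constraint u ≈ v
    du, dv = denote(u), denote(v)
    return du == dv and du != BOTTOM

print(approx_eq(("subj", "x1"), "x2"))                         # True
print(approx_eq(("number", ("agr", "x2")), ("number", "x4")))  # True
print(approx_eq(("agr", "x1"), ("agr", "x1")))                 # False: agr(x1) is undefined
print(not approx_eq(("number", "x4"), "plural"))               # a negated constraint: True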
Unification identifies or merges exactly the elements that the axiomatization implies are equal. The unification of two complex elements e and e' causes the unification of elements a(e) and a(e') for all attributes a that are defined on both e and e'. The constraint x ≈ x' implies a(x) ≈ a(x') in exactly the same circumstances, i.e. when a(x) and a(x') are both distinct from ⊥. Unification fails either when two different constant elements are to be unified, or when a complex element and a constant element are unified (i.e. constant-constant clashes and constant-complex clashes). The constraint x ≈ x' is unsatisfiable under exactly the same circumstances: x ≈ x' is unsatisfiable when x and x' are also required to satisfy x ≈ c and x' ≈ c' for distinct constants c, c', since c ≠ c' by axiom schema (A2); x ≈ x' is also unsatisfiable when x and x' are required to satisfy a(x) ≈ t and x' ≈ c' for any attribute a, term t and constant c', since a(c') = ⊥ by axiom schema (A3).

Since unification is a technique for determining the satisfiability of conjunctions of atomic equality constraints, the result of a unification operation is exactly the set of atomic consequences of the corresponding constraints. Since unification fails precisely when the corresponding constraints are unsatisfiable, failure of unification occurs exactly when the corresponding constraints are equivalent to False.

Example (continued): The sets of satisfying models for the formulae (1') and (2') are precisely the principal filters generated by (1) and (2) above.

(1') subj(x1) ≈ x2 ∧ agr(x2) ≈ x4 ∧ number(x4) ≈ singular ∧ pred(x1) ≈ x3 ∧ verb(x3) ≈ x5 ∧ agr(x5) ≈ x6 ∧ person(x6) ≈ third

(2') subj(x7) ≈ x8 ∧ agr(x8) ≈ x11 ∧ pred(x7) ≈ x9 ∧ verb(x9) ≈ x10 ∧ agr(x10) ≈ x11

Because the principal filter generated by the unification of e1 and e7 is the intersection of the principal filters generated by (1) and (2), it is also the set of satisfying models for the conjunction of (1') and (2') with the formula x1 ≈ x7, shown in (3').

(3') subj(x1) ≈ x2 ∧ agr(x2) ≈ x4 ∧ number(x4) ≈ singular ∧ pred(x1) ≈ x3 ∧ verb(x3) ≈ x5 ∧ agr(x5) ≈ x6 ∧ person(x6) ≈ third ∧ subj(x7) ≈ x8 ∧ agr(x8) ≈ x11 ∧ pred(x7) ≈ x9 ∧ verb(x9) ≈ x10 ∧ agr(x10) ≈ x11 ∧ x1 ≈ x7

The satisfiability of a formula like (3') can be shown using standard techniques such as the Congruence Closure Algorithm [15], [5]. In fact, using the substitutivity and transitivity of equality, (3') can be simplified to (3'').

(3'') subj(x1) ≈ x2 ∧ agr(x2) ≈ x4 ∧ number(x4) ≈ singular ∧ person(x4) ≈ third ∧ pred(x1) ≈ x3 ∧ verb(x3) ≈ x5 ∧ agr(x5) ≈ x4 ∧ x1 ≈ x7 ∧ x2 ≈ x8 ∧ x3 ≈ x9 ∧ x5 ≈ x10 ∧ x4 ≈ x6 ∧ x4 ≈ x11

It is easy to check that (3) is a satisfying model for both (3'') and the axioms for attribute-value structures.

The treatment of negative and disjunctive constraints is straightforward. Since negation is interpreted classically, the sets of satisfying models do not always form a filter (i.e. they are not always upward closed [16]). Nevertheless, the quantifier-free language itself is capable of characterizing exactly the set of feature structures that satisfy any boolean combination of constraints, so the failure of upward closure is not a fatal flaw of this approach. At a methodological level, I claim that after the mathematical consequences of two different interpretations of feature structure constraints have been investigated, such as the classical and intuitionistic interpretations of negation in feature structure constraints [14], it is primarily a linguistic question as to which is better suited to the description of natural language. I have been unable to find any linguistic analyses which can yield a set of constraints whose satisfiability varies under the classical and intuitionistic interpretations, so the choice between classical and intuitionistic negation may be moot.
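To show how such a satisfiability check might run, here is a sketch of a congruence-closure-style test for conjunctions of atomic '≈' constraints like (3'). It is a simplification written for this example: it detects constant-constant clashes via (A2) and attribute-applied-to-constant clashes via (A1)/(A3), but it is not a full decision procedure for arbitrary quantifier-free formulae.

import itertools

def subterms(t):
    yield t
    if isinstance(t, tuple):               # ('agr', u) stands for agr(u)
        yield from subterms(t[1])

def satisfiable(equations, constants):
    terms = set()
    for u, v in equations:
        terms |= set(subterms(u)) | set(subterms(v))
    rep = {t: t for t in terms}            # a simple union-find

    def find(t):
        while rep[t] != t:
            rep[t] = rep[rep[t]]
            t = rep[t]
        return t

    for u, v in equations:
        rep[find(u)] = find(v)
    changed = True
    while changed:                         # congruence: u ~ v implies a(u) ~ a(v)
        changed = False
        apps = [t for t in terms if isinstance(t, tuple)]
        for t1, t2 in itertools.combinations(apps, 2):
            if (t1[0] == t2[0] and find(t1[1]) == find(t2[1])
                    and find(t1) != find(t2)):
                rep[find(t1)] = find(t2)
                changed = True
    classes = {}
    for t in terms:
        classes.setdefault(find(t), set()).add(t)
    for members in classes.values():
        if len(members & constants) > 1:   # two distinct constants identified
            return False
    for t in terms:
        if isinstance(t, tuple) and classes[find(t[1])] & constants:
            return False                   # a(c) would be the undefined element
    return True

# (3'): the conjunction of (1'), (2') and x1 ~ x7.
eqs = [(("subj", "x1"), "x2"), (("agr", "x2"), "x4"),
       (("number", "x4"), "singular"), (("pred", "x1"), "x3"),
       (("verb", "x3"), "x5"), (("agr", "x5"), "x6"),
       (("person", "x6"), "third"), (("subj", "x7"), "x8"),
       (("agr", "x8"), "x11"), (("pred", "x7"), "x9"),
       (("verb", "x9"), "x10"), (("agr", "x10"), "x11"),
       ("x1", "x7")]
print(satisfiable(eqs, {"singular", "third"}))               # True
print(satisfiable(eqs + [(("number", "x4"), "third")],
                  {"singular", "third"}))                    # False: singular/third clash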
For reasons of space, the following example (based on Pereira's example [16] demonstrating a purported problem arising from the failure of upward closure with classical negation) exhibits only negative constraints.

Example: The conjunction of the formulae number(agr(x)) ≈ singular and agr(x) ≈ y ∧ ¬(pers(y) ≈ 3rd ∧ number(y) ≈ singular) can be simplified by substitution and transitivity of equality and boolean equivalences to (4').

(4') agr(x) ≈ y ∧ number(y) ≈ singular ∧ pers(y) ≉ 3rd

This formula is satisfied by the structure (4) when x denotes e and y denotes f. Note the failure of upward closure: e.g. (5) does not satisfy (4'), even though (4) subsumes (5).

[Diagram (4): an element e with an agr arc to f, where number(f) = singular. Diagram (5): the same structure with the additional arc pers(f) = 3rd.]

However, if (4') is conjoined with pers(agr(x)) ≈ 3rd, the resulting formula (6) is unsatisfiable, since it is equivalent to (6'), and 3rd ≉ 3rd is unsatisfiable.

(6) agr(x) ≈ y ∧ number(y) ≈ singular ∧ pers(y) ≉ 3rd ∧ pers(agr(x)) ≈ 3rd

(6') agr(x) ≈ y ∧ number(y) ≈ singular ∧ pers(y) ≈ 3rd ∧ 3rd ≉ 3rd

CONCLUSION

This paper has shown how attribute-value structures and constraints on them can be axiomatized in a decidable class of first-order logic. The primary advantage of this approach over the "designer logic" approach is that important properties of the logic of the feature constraint language, such as soundness, completeness, decidability and compactness, follow immediately, rather than having to be proven from scratch. A secondary benefit is that the substantial body of work on satisfiability algorithms for first-order formulae (such as ATMS-based techniques that can efficiently evaluate some disjunctive constraints [13]) can immediately be applied to feature structure constraints. Further, first-order logic can be used to axiomatize other types of feature structures in addition to attribute-value structures (such as "set-valued" elements) and express a wider variety of constraints than equality constraints (e.g. subsumption constraints). In general these extended systems cannot be axiomatized using only quantifier-free formulae, so their decidability may not follow directly as it does here. However the decision problem for sublanguages of first-order logic has been intensively investigated [4], and there are decidable classes of first-order formulae [8] that appear to be expressive enough to axiomatize an interesting variety of feature structures (e.g. function-free universally-quantified prenex formulae can express linguistically useful constraints on "set-valued" elements).

An objection that might be raised to this general approach is that classical first-order logic cannot adequately express the inherently "partial information" that feature structures represent. While the truth value of any formula with respect to a model (i.e. an interpretation and variable assignment function) is completely determined, in general there will be many models that satisfy a given formula, i.e. a formula only partially identifies a satisfying model (i.e. attribute-value structure). The claim is that this partiality suffices to describe the partiality of feature structures.

BIBLIOGRAPHY

1. Bresnan, J. The Mental Representation of Grammatical Relations. 1982. The MIT Press. Cambridge, Mass.

2. Dawar, A. and K. Vijayashanker. Three-Valued Interpretation of Negation in Feature Structure Descriptions.
University of Delaware Technical Report 90-03. 1989.

3. Dawar, A. and K. Vijayashanker. "A Three-Valued Interpretation of Negation in Feature Structures", in The 27th Annual Meeting of the Association for Computational Linguistics, Vancouver, 1989.

4. Dreben, B. and W. D. Goldfarb. The Decision Problem: Solvable Classes of Quantificational Formulas. 1979. Addison-Wesley. Reading, Mass.

5. Gallier, J. H. Logic for Computer Science. 1986. Harper and Row. New York.

6. Gazdar, G., E. Klein, G. Pullum and I. Sag. Generalized Phrase Structure Grammar. 1985. Blackwell. Oxford, England.

7. Gazdar, G., G. K. Pullum, R. Carpenter, E. Klein, T. E. Hukari and R. D. Levine. "Category Structures." Computational Linguistics. 14.1: 1-20, 1988.

8. Gurevich, Y. "The Decision Problem for Standard Classes." JSL. 41.2: 460-464, 1976.

9. Johnson, M. Attribute-Value Logic and the Theory of Grammar. CSLI Lecture Notes Series. 1988. University of Chicago Press. Chicago.

10. Kaplan, R. and J. Bresnan. "Lexical-functional grammar, a formal system for grammatical representation," in The Mental Representation of Grammatical Relations, Bresnan ed., 1982. The MIT Press. Cambridge, Mass.

11. Kasper, R. T. and W. C. Rounds. "A logical semantics for feature structures", in The Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics, Columbia University, New York, 1986, 257-266.

12. Kay, M. "Unification in Grammar," in Natural Language Understanding and Logic Programming, Dahl and Saint-Dizier ed., 1985. North Holland. Amsterdam, The Netherlands.

13. Maxwell, J. T., III and R. Kaplan. "An Overview of Disjunctive Constraint Satisfaction", in International Workshop on Parsing Technologies, Pittsburgh, PA, 1989, 18-27. Carnegie Mellon.

14. Moshier, M. D. and W. C. Rounds. "A logic for partially specified data structures", in The ACM Symposium on the Principles of Programming Languages, Munich, Germany, 1987, Association for Computing Machinery.

15. Nelson, G. and D. C. Oppen. "Fast Decision Procedures based on Congruence Closure." J. ACM. 27.2: 245-257, 1980.

16. Pereira, F. C. N. "Grammars and Logics of Partial Information", in The Proceedings of the International Conference on Logic Programming, Melbourne, Australia, 1987.

17. Pereira, F. C. N. and S. M. Shieber. "The semantics of grammar formalisms seen as computer languages", in COLING-84, Stanford University, 1984, 123-129. The Association for Computational Linguistics.

18. Pollard, C. and I. Sag. Information-based Syntax and Semantics, Volume 1. CSLI Lecture Notes. 1987. Chicago University Press. Chicago.

19. Shieber, S. M. An Introduction to Unification-based Approaches to Grammar. CSLI Lecture Notes Series. 1986. University of Chicago Press. Chicago.

20. Uszkoreit, H. "Categorial unification grammar", in COLING-86, 1986, 187-194.
LAZY UNIFICATION

Kurt Godden, Computer Science Department, General Motors Research Laboratories, Warren, MI 48090-9055, USA. CSNet: [email protected]

ABSTRACT

Unification-based NL parsers that copy argument graphs to prevent their destruction suffer from inefficiency. Copying is the most expensive operation in such parsers, and several methods to reduce copying have been devised with varying degrees of success. Lazy Unification is presented here as a new, conceptually elegant solution that reduces copying by nearly an order of magnitude. Lazy Unification requires no new slots in the structure of nodes, and only nominal revisions to the unification algorithm.

PROBLEM STATEMENT

Unification is widely used in natural language processing (NLP) as the primary operation during parsing. The data structures unified are directed acyclic graphs (DAG's), used to encode grammar rules, lexical entries and intermediate parsing structures. A crucial point concerning unification is that the resulting DAG is constructed directly from the raw material of its input DAG's, i.e. unification is a destructive operation. This is especially important when the input DAG's are rules of the grammar or lexical items. If nothing were done to prevent their destruction during unification, then the grammar would no longer have a correct rule, nor the lexicon a valid lexical entry, for the DAG's in question. They would have been transformed into the unified DAG as a side effect.

The simplest way to avoid destroying grammar rules and lexical entries by unification is to copy each argument DAG prior to calling the unification routine. This is sufficient to avoid the problem of destruction, but the copying itself then becomes problematic, causing severe degradation in performance. This performance drain is illustrated in Figure 1, where average parsing statistics are given for the original implementation of graph unification in the TASLINK natural language system. TASLINK was built upon the LINK parser in a joint project between GM Research and the University of Michigan. LINK is a descendent of the MOPTRANS system developed by Lytinen (1986). The statistics below are for ten sentences parsed by TASLINK. As can be seen, copying consumes more computation time than unification.

[Figure 1. Relative Cost of Operations during Parsing: a pie chart dividing parse time among Unification (roughly 20%), Copying (roughly 68%), and Other; copying takes the largest share.]
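To make the cost concrete, here is a minimal sketch (not the TASLINK code; the DAG representation and names are illustrative) of the eager strategy: both argument DAG's are deep-copied before a destructive merge, so the grammar survives, but every call pays for copying whole structures up front, even when unification fails immediately.

import copy

def destructive_unify(a, b):
    """Merge DAG b into DAG a in place; return a, or None on failure."""
    for label, value in b.items():
        if label not in a:
            a[label] = value
        elif isinstance(a[label], dict) and isinstance(value, dict):
            if destructive_unify(a[label], value) is None:
                return None
        elif a[label] != value:
            return None                  # constant clash
    return a

def eager_unify(rule_dag, input_dag):
    # Copy everything before unifying: this is where the 'Copying'
    # slice of Figure 1 goes, whether or not unification succeeds.
    return destructive_unify(copy.deepcopy(rule_dag),
                             copy.deepcopy(input_dag))

rule = {"cat": "np", "agr": {"num": "sg"}}
item = {"agr": {"num": "sg", "per": "3"}}
print(eager_unify(rule, item))  # {'cat': 'np', 'agr': {'num': 'sg', 'per': '3'}}
print(rule)                     # the rule is unchanged, at the price of two copies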
PAST SOLUTIONS

Improving the efficiency of unification has been an active area of research in unification-based NLP, where the focus has been on reducing the amount of DAG copying, and several approaches have arisen. Different versions of structure sharing were employed by Pereira (1985) as well as Karttunen and Kay (1985). In Karttunen (1986) structure sharing was abandoned for a technique allowing reversible unification. Wroblewski (1987) presents what he calls a non-destructive unification algorithm that avoids destruction by incrementally copying the DAG nodes as necessary.

All of these approaches to the copying problem suffer from difficulties of their own. For both Pereira and Wroblewski there are special cases involving convergent arcs--arcs from two or more nodes that point to the same destination node--that still require full copying. In Karttunen and Kay's version of structure sharing, all DAG's are represented as binary branching DAG's, even though grammar rules are more naturally represented as non-binary structures. Reversible unification requires two passes over the input DAG's, one to unify them and another to copy the result. Furthermore, in both successful and unsuccessful unification the input DAG's must be restored to their original forms, because reversible unification allows them to be destructively modified.

Wroblewski points out a useful distinction between early copying and over copying. Early copying refers to the copying of input DAG's before unification is applied. This can lead to inefficiency when unification fails, because only the copying up to the point of failure is necessary. Over copying refers to the fact that when the two input DAG's are copied, they are copied in their entirety. Since the resultant unified DAG generally has fewer total nodes than the two input DAG's, more nodes than necessary were copied to produce the result. Wroblewski's algorithm eliminates early copying entirely, but as noted above it can partially over copy on DAG's involving convergent arcs. Reversible unification may also over copy, as will be shown below.

LAZY UNIFICATION

I now present Lazy Unification (LU) as a new approach to the copying problem. In the following section I will present statistics which indicate that LU accomplishes nearly an order of magnitude reduction in copying compared to non-lazy, or eager unification (EU). These results are attained by turning DAG's into active data structures to implement the lazy evaluation of copying. Lazy evaluation is an optimization technique developed for the interpretation of functional programming languages (Field and Harrison, 1988), and has been extended to theorem proving and logic programming in attempts to integrate that paradigm with functional programming (Reddy, 1986).

The concept underlying lazy evaluation is simple: delay the operation being optimized until the value it produces is needed by the calling program, at which point the delayed operation is forced. These actions may be implemented by high-level procedures called delay and force. Delay is used in place of the original call to the procedure being optimized, and force is inserted into the program at each location where the results of the delayed procedure are needed. Lazy evaluation is a good technique for the copying problem in graph unification precisely because the overwhelming majority of copying is unnecessary. If all copying can be delayed until a destructive change is about to occur to a DAG, then both early copying and over copying can be completely eliminated.

The delay operation is easily implemented by using closures. A closure is a compound object that is both procedure and data. In the context of LU, the data portion of a closure is a DAG node. The procedural code within a closure is a function that processes a variety of messages sent to the closure. One may generally think of the encapsulated procedure as being a suspended call to the copy function. Let us refer to these closures as active nodes, as contrasted with a simple node not combined with a procedure in a closure. The delay function returns an active node when given a simple node as its argument. For now let us assume that delay behaves as the identity function when applied to an active node. That is, it returns an active node unchanged. As a mnemonic we will refer to the delay function as delay-copy-the-dag.

We now redefine DAG's to allow either simple or active nodes wherever simple nodes were previously allowed in a DAG. An active node will be notated in subsequent diagrams by enclosing the node in angle brackets.
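Here is a sketch of delay and force using Python closures in place of the paper's Lisp closures; the names follow the text but the code is illustrative. In this first version, force simply performs the whole suspended copy; the incremental, root-only version of copy-the-dag is sketched after Figure 2 below.

import copy

def delay_copy_the_dag(node):
    """Return an active node: a closure encapsulating a simple node."""
    if callable(node):                     # delay is the identity on active nodes
        return node
    def active_node(message):
        if message == "force":             # the suspended call to the copy function
            return copy.deepcopy(node)
        if message == "node":              # expose the encapsulated simple node
            return node
        raise ValueError(f"unknown message: {message}")
    return active_node

def force_delayed_copy(node):
    """Identity on simple nodes; fire the suspended copy on active nodes."""
    return node("force") if callable(node) else node

b = {"label": "b", "arcs": {}}
active_b = delay_copy_the_dag(b)           # no copying happens here...
b2 = force_delayed_copy(active_b)          # ...only when a change is imminent
print(b2 == b, b2 is not b)                # True True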
In LU the unification algorithm proceeds largely as it did before, except that at every point in the algorithm where a destructive change is about to be made to an active node, that node is first replaced by a copy of its encapsulated node. This replacement is mediated through the force function, which we shall call force-delayed-copy. In the case of a simple node argument, force-delayed-copy acts as the identity function, but when given an active node it invokes the suspended copy procedure with the encapsulated node as argument. Force-delayed-copy returns the DAG that results from this invocation.

To avoid copying an entire DAG when only its root node is going to be modified by unification, the copying function is also rewritten. The new version of copy-the-dag takes an optional argument to control how much of the DAG is to be copied. The default is to copy the entire argument, as one would expect of a function called copy-the-dag. But when copy-the-dag is called from inside an active node (by force-delayed-copy invoking the procedural portion of the active node), then the optional argument is supplied with a flag that causes copy-the-dag to copy only the root node of its argument. The nodes at the ends of the outgoing arcs from the new root become active nodes, created by delaying the original nodes in those positions. No traversal of the DAG takes place and the deeper nodes are only present implicitly through the active nodes of the resulting DAG. This is illustrated in Figure 2.

[Figure 2. Copy-the-dag on 'a' from Inside an Active Node: the eight-node DAG a becomes a2, whose outgoing arcs point at the active nodes <b>, <c>, and <d>.]

Here, DAG a was initially encapsulated in a closure as an active node. When a is about to undergo a destructive change by being unified with some other DAG, force-delayed-copy activates the suspended call to copy-the-dag with DAG a as its first argument and the message delay-arcs as its optional argument. Copy-the-dag then copies only node a, returning a2 with outgoing arcs pointing at active nodes that encapsulate the original destination nodes b, c, and d. DAG a2 may then be unified with another DAG without destroying DAG a, and the unification algorithm proceeds with the active nodes <b>, <c>, and <d>. As these sub-DAG's are modified, their nodes are likewise copied incrementally. Figure 3 illustrates this by showing DAG a2 after unifying <b>. It may be seen that as active nodes are copied one by one, the resulting unified DAG is eventually constructed.

[Figure 3. DAG a2 after Unifying <b>: the arc to <b> has been replaced by an arc to the copied node b2, while <c> and <d> remain active.]

One can see how this scheme reduces the amount of copying if, for example, unification fails at the active node <c>. In this case only nodes a and b will have been copied, and none of the nodes c, d, e, f, g, or h. Copying is also reduced when unification succeeds, this reduction being achieved in two ways.

First, lazy unification only creates new nodes for the DAG that results from unification. Generally this DAG has fewer total nodes than the two input DAG's. For example, if the 8-node DAG a in Figure 2 were unified with the 2-node DAG a-->i, then the resulting DAG would have only nine nodes, not ten. The result DAG would have the arc '-->i' copied onto the 8-node DAG's root. Thus, while EU would copy all ten original nodes, only nine are necessary for the result.
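The optional-argument behaviour of the rewritten copy-the-dag can be sketched as follows (again illustrative Python rather than the paper's Lisp): with the delay-arcs flag, only the root is copied and each outgoing arc is wrapped in a suspended copy, so forcing <b> later copies just b. As given, this version would re-copy a node shared by two arcs each time it is reached; the copy environment sketched below repairs exactly that, which is the node-splitting problem of Figure 6.

def delay(node, copier):
    if callable(node):
        return node
    return lambda: copier(node)            # a suspended call to copy-the-dag

def copy_the_dag(node, how="full"):
    if how == "delay-arcs":                # called from inside an active node
        return {"label": node["label"],
                "arcs": {a: delay(d, lambda n: copy_the_dag(n, "delay-arcs"))
                         for a, d in node["arcs"].items()}}
    return {"label": node["label"],        # default: copy the entire DAG
            "arcs": {a: d if callable(d) else copy_the_dag(d)
                     for a, d in node["arcs"].items()}}

e = {"label": "e", "arcs": {}}
a = {"label": "a",
     "arcs": {"1": {"label": "b", "arcs": {"3": e}},
              "2": {"label": "c", "arcs": {}}}}
a2 = copy_the_dag(a, "delay-arcs")         # copies node a only
print([callable(d) for d in a2["arcs"].values()])   # [True, True]
b2 = a2["arcs"]["1"]()                     # forcing <b>: copies b, delays e
print(b2["label"], callable(b2["arcs"]["3"]))       # b True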
Active nodes that remain in a final DAG represent the other savings for successful unification. Whereas EU copies all ten original nodes to create the 9-node result, LU would only create five new nodes during unification, resulting in the DAG of Figure 4. Note that the "missing" nodes e, f, g, and h are implicit in the active nodes and did not require copying. For larger DAG's, this kind of savings in node copying can be significant, as several large sub-DAG's may survive uncopied in the final DAG.

[Figure 4. Saving Four Node Copies with Active Nodes: the result DAG rooted at a2 retains the active nodes <b>, <c>, and <d>, so the nodes below them are never copied.]

A useful comparison with Karttunen's reversible unification may now be made. Recall that when reversible unification is successful, the resulting DAG is copied and the originals restored. Notice that this copying of the entire resulting DAG may overcopy some of the sub-DAG's. This is evident because we have just seen in LU that some of the sub-DAG's of a resulting DAG remain uncopied inside active nodes. Thus, LU offers less real copying than reversible unification.

Let us look again at DAG a in Figure 2 and discuss a potential problem with lazy unification as described thus far. Let us suppose that through unification a has been partially copied, resulting in the DAG shown in Figure 5, with active node <f> about to be copied.

[Figure 5. DAG 'a' Partially Copied: a2 now contains copied nodes (among them b2, c2, e2, and h2) alongside remaining active nodes such as <f> and <d>.]

Recall from Figure 2 that node f points at e. Following the procedure described above, <f> would be copied to f2, which would then point at active node <e>, which could lead to another node e3 as shown in Figure 6. What is needed is some form of memory to recognize that e was already copied once and that f2 needs to point at e2, not <e>.

[Figure 6. Erroneous Splitting of Node e into e2 and e3: without a record of earlier copies, f2 ends up pointing at a fresh copy e3 even though e had already been copied to e2.]

This memory is implemented with a copy environment, which is an association list relating original nodes to their copies. Before f2 is given an arc pointing at <e>, this alist is searched to see if e has already been copied. Since it has, e2 is returned as the destination node for the outgoing arc from f2, thus preserving the topography of the original DAG.

Because there are several DAG's that must be preserved during the course of parsing, the copy environment cannot be global but must be associated with each DAG for which it records the copying history. This is accomplished by encapsulating a particular DAG's copy environment in each of the active nodes of that DAG. Looking again at Figure 2, the active nodes for DAG a2 are all created in the scope of a variable bound to an initially empty association list for a2's copy environment. Thus, the closures that implement the active nodes <b>, <c>, and <d> all have access to the same copy environment. When <b> invokes the suspended call to copy-the-dag, this function adds the pair (b . b2) to the copy environment as a side effect before returning its value b2. When this occurs, <c> and <d> instantly have access to the new pair through their shared access to the same copy environment. Furthermore, when new active nodes are created as traversal of the DAG continues during unification, they are also created in the scope of the same copy environment. Thus, this alist is pushed forward deeper into the nodes of the parent DAG as part of the data portion of each active node.
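A sketch of the copy environment follows (a Python dict keyed by node identity standing in for the alist): every active node created for a given DAG closes over the same table, so a node copied once is reused rather than split.

def delay(node, env):
    """Create an active node sharing this DAG's copy environment."""
    if callable(node):
        return node
    def active_node():
        if id(node) in env:                # e was copied already: reuse e2
            return env[id(node)]
        new = {"label": node["label"] + "2",
               "arcs": {a: delay(d, env)   # children share the same environment
                        for a, d in node["arcs"].items()}}
        env[id(node)] = new                # record the pair (e . e2)
        return new
    return active_node

e = {"label": "e", "arcs": {}}
a = {"label": "a",
     "arcs": {"to-b": {"label": "b", "arcs": {"to-e": e}},
              "to-f": {"label": "f", "arcs": {"to-e": e}}}}

env = {}                                   # a's initially empty copy environment
a2 = delay(a, env)()                       # copy the root only
b2 = a2["arcs"]["to-b"]()                  # copying b records (b . b2)
e2 = b2["arcs"]["to-e"]()                  # copying e records (e . e2)
f2 = a2["arcs"]["to-f"]()
print(f2["arcs"]["to-e"]() is e2)          # True: f2 reuses e2, no splitting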
Returning to Figure 5, the pair (e . e2) was added to the copy environment being maintained for DAG a2 when e was copied to e2. Active node <f> was created in the scope of this list and therefore "remembers", at the time f2 is created, that it should point to the previously created e2 and not to a new active node <e>.

There is one more mechanism needed to correctly implement copy environments. We have already seen how some active nodes remain after unification. As intermediate DAG's are reused during the nondeterministic parsing and are unified with other DAG's, it can happen that some of these remaining active nodes become descendants of a root different from their original root node. As those new root DAG's are incrementally copied during unification, a situation can arise whereby an active node's parent node is copied and then an attempt is made to create an active node out of an active node. For example, let us suppose that the DAG shown in Figure 5 is a sub-DAG of some larger DAG. Let us refer to the root of that larger DAG as node n. As unification of n proceeds, we may reach a2 and start incrementally copying it. This could eventually result in c2 being copied to c3, at which point the system will attempt to create an outgoing arc for c3 pointing at a newly created active node over the already active node <f>. There is no need to try to create such a beast as <<f>>. Rather, what is needed is to assure that active node <f> be given access to the new copy environment for n, passed down to <f> from its predecessor nodes. This is accomplished by destructively merging the new copy environment with that previously created for a2 and surviving inside <f>. It is important that this merge be destructive in order to give all active nodes that are descendants of n access to the same information, so that the problem of node splitting illustrated in Figure 6 continues to be avoided.

It was mentioned previously how calls to force-delayed-copy must be inserted into the unification algorithm to invoke the incremental copying of nodes. Another modification to the algorithm is also necessary as a result of this incremental copying. Since active nodes are replaced by new nodes in the middle of unification, the algorithm must undergo a revision to effect this replacement. For example, in Figure 5, in order for <b> to be replaced by b2, the corresponding arc from a2 must be replaced. Thus as the unification algorithm traverses a DAG, it also collects such replacements in order to reconstruct the outgoing arcs of a parent DAG.

In addition to the message delay-arcs sent to an active node to invoke the suspended call to copy-the-dag, other messages are needed. In order to compare active nodes and merge their copy environments, the active nodes must process messages that cause the active node to return either its encapsulated node's label or the encapsulated copy environment.

EFFECTIVENESS OF LAZY UNIFICATION

Lazy Unification results in an impressive reduction in the amount of copying during parsing. This in turn reduces the overall slice of parse time consumed by copying, as can be seen by contrasting Figure 7 with Figure 1. Keep in mind that these charts illustrate proportional computations, not speed. The pie shown below should be viewed as a smaller pie, representing faster parse times, than that in Figure 1.

[Figure 7. Relative Cost of Operations with Lazy Unification: a pie chart over Unification, Copying, and Other; the two labelled segments read 45.78% and 18.67%, with copying's share much reduced from Figure 1.]

Lazy Unification copies less than 7% of the nodes copied under eager unification.
However, this is not a fair comparison with EU, because LU substitutes the creation of active nodes for some of the copying. To get a truer comparison of Lazy vs. Eager Unification, we must add together the number of copied nodes and active nodes created in LU. Even when active nodes are taken into account, the results are highly favorable toward LU, because again less than 7% of the nodes copied under EU are accounted for by active nodes in LU. Combining the active nodes with copies, LU still accounts for an 87% reduction over eager unification. Figure 8 graphically illustrates this difference for ten sentences.

[Figure 8. Comparison of Eager vs. Lazy Unification: a bar chart of the number of nodes (y-axis from 10000 to 40000) for eager copies, lazy copies, and active nodes over the ten sentences.]

From the time slice of eager copying shown in Figure 1, we can see that if LU were to incur no overhead, then an 87% reduction of copying would result in a faster parse of roughly 59%. The actual speedup is about 50%, indicating that the overhead of implementing LU is 9%. However, the 50% speedup does not consider the effects of garbage collection or paging, since they are system dependent. These effects will be more pronounced in EU than LU because in the former paradigm more data structures are created and referenced. In practice, therefore, LU performs at better than twice the speed of EU.

There are several sources of overhead in LU. The major cost is incurred in distinguishing between active and simple nodes. In our Common Lisp implementation, simple DAG nodes are defined as named structures and active nodes as closures. Hence, they are distinguished by the Lisp predicates DAG-P and FUNCTIONP. Disassembly on a Symbolics machine shows both predicates to be rather costly. (The functions TYPE-OF and TYPEP could also be used, but they are also expensive.)

Another expensive operation occurs when the copy environments in active nodes are searched. Currently, these environments are simple association lists which require sequential searching. As was discussed above, the copy environments must sometimes be merged. The merge function presently uses the UNION function. While a far less expensive destructive concatenation of copy environments could be employed, the union operation was chosen initially as a simple way to avoid creation of circular lists during merging.

All of these sources of overhead can and will be attacked by additional work. Nodes can be defined as a tagged data structure, allowing an inexpensive tag test to distinguish between active and inactive nodes. A non-sequential data structure could allow faster than linear searching of copy environments and more efficient merging. These and additional modifications are expected to eliminate most of the overhead incurred by the current implementation of LU. In any case, Lazy Unification was developed to reduce the amount of copying during unification, and we have seen its dramatic success in achieving that goal.

CONCLUDING REMARKS

There is another optimization possible regarding certain leaf nodes of a DAG. Depending on the application using graph unification, a subset of the leaf nodes will never be unified with other DAG's. In the TASLINK application these are nodes representing such features as third person singular. This observation can be exploited under both lazy and eager unification to reduce both copying and active node creation. See Godden (1989) for details.
It has been my experience that using lazy evaluation as an optimization technique for graph unification, while elegant in the end result, is slow in development time due to the difficulties it presents for debugging. This property is intrinsic to lazy evaluation (O'Donnell and Hall, 1988). The problem is that a DAG is no longer copied locally, because the copy operation is suspended in the active nodes. When a DAG is eventually copied, that copying is performed incrementally and therefore non-locally in both time and program space. In spite of this distributed nature of the optimized process, the programmer continues to conceptualize the operation as occurring locally, as it would occur in the non-optimized eager mode. As a result of this mismatch between the programmer's visualization of the operation and its actual execution, bugs are notoriously difficult to trace. The development time for a program employing lazy evaluation is, therefore, much longer than would be expected. Hence, this technique should only be employed when the possible efficiency gains are expected to be large, as they are in the case of graph unification. O'Donnell and Hall present an excellent discussion of these and other problems and offer insight into how tools may be built to alleviate some of them.

REFERENCES

Field, Anthony J. and Peter G. Harrison. 1988. Functional Programming. Reading, MA: Addison-Wesley.

Godden, Kurt. 1989. "Improving the Efficiency of Graph Unification." Internal technical report GMR-6928. General Motors Research Laboratories. Warren, MI.

Karttunen, Lauri. 1986. D-PATR: A Development Environment for Unification-Based Grammars. Report No. CSLI-86-61. Stanford, CA.

Karttunen, Lauri and Martin Kay. 1985. "Structure-Sharing with Binary Trees." Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics. Chicago, IL: ACL. pp. 133-136.

Lytinen, Steven L. 1986. "Dynamically Combining Syntax and Semantics in Natural Language Processing." Proceedings of the 5th National Conference on Artificial Intelligence. Philadelphia, PA: AAAI. pp. 574-578.

O'Donnell, John T. and Cordelia V. Hall. 1988. "Debugging in Applicative Languages." Lisp and Symbolic Computation, 1/2. pp. 113-145.

Pereira, Fernando C. N. 1985. "A Structure-Sharing Representation for Unification-Based Grammar Formalisms." Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics. Chicago, IL: ACL. pp. 137-144.

Reddy, Uday S. 1986. "On the Relationship between Logic and Functional Languages," in Doug DeGroot and Gary Lindstrom, eds. Logic Programming: Functions, Relations, and Equations. Englewood Cliffs, NJ: Prentice-Hall. pp. 3-36.

Wroblewski, David A. 1987. "Nondestructive Graph Unification." Proceedings of the 6th National Conference on Artificial Intelligence. Seattle, WA: AAAI. pp. 582-587.
ZERO MORPHEMES IN UNIFICATION-BASED COMBINATORY CATEGORIAL GRAMMAR

Chinatsu Aone, The University of Texas at Austin & MCC, 3500 West Balcones Center Dr., Austin, TX 78759 ([email protected])
Kent Wittenburg, MCC, 3500 West Balcones Center Dr., Austin, TX 78759 ([email protected])

ABSTRACT

In this paper, we report on our use of zero morphemes in Unification-Based Combinatory Categorial Grammar. After illustrating the benefits of this approach with several examples, we describe the algorithm for compiling zero morphemes into unary rules, which allows us to use zero morphemes more efficiently in natural language processing. (The work described here is implemented in Common Lisp and being used in the Lucy natural language understanding system at MCC.) Then, we discuss the question of equivalence of a grammar with these unary rules to the original grammar. Lastly, we compare our approach to zero morphemes with possible alternatives.

1. Zero Morphemes in Categorial Grammar

In English and in other natural languages, it is attractive to posit the existence of morphemes that are invisible on the surface but have their own syntactic and semantic definitions. In our analyses, they are just like any other overt morphemes except for having null strings (i.e. ""), and we call them zero morphemes. Most work in Categorial Grammar and related forms of unification-based grammars, on the other hand, takes the rule-based approach. That is, it assumes that there are unary rules that change features or categories of their arguments (cf. Dowty 1979, Hoeksema 1985, Wittenburg 1986, Wood 1987). Below, we will discuss the advantages of our zero morpheme approach over the rule-based approach.

Zero morphemes should be distinguished from so-called "gaps" in wh-questions and relative clauses in that zero morphemes are not traces or "place holders" of any other overt morphemes in a given sentence. There are at least two types of zero morphemes: zero morphemes at the morphology level and those at the syntax level. A zero morpheme at the morphology level applies to a free morpheme and forms an inflected word. Examples are the present tense zero morpheme (PRES) as in "I like+PRES dogs" and a singular zero morpheme (SG) as in "a dog+SG". These two are the counterparts of the third person singular present tense morpheme "+s" as in "John like+s dogs" and the plural morpheme "+s" as in "two dog+s", respectively.

(1) dog +SG
    N[num:null] N[num:sg]\N[num:null]

    dog +s
    N[num:null] N[num:pl]\N[num:null]

Notice that, unlike the rule-based approach, the declarative and compositional nature of the zero morpheme approach makes the semantic analysis easier, since each zero morpheme has its semantic definition in the lexicon and therefore can contribute its semantics to the whole interpretation just as an overt morpheme does. Also, the monotonicity of our "feature adding" approach, as opposed to a "default feature" approach (e.g., Gazdar 1987), is attractive in compositional semantics because it does not have to retract or override a semantic translation contributed by a word with a default feature. For example, "dog" in both "dog+SG" and "dog+s" contributes the same translation, and the suffixes "+SG" and "+s" just add the semantics of number to their respective head nouns. In addition, this approach helps reduce redundancy in the lexicon. For instance, we do not have to define in the lexicon a present-tense counterpart for each base verb.
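To illustrate the feature-adding idea, here is a small sketch (ours, not the authors' Common Lisp): the zero morpheme SG is an ordinary lexical entry whose string is null and whose category is N[num:sg]\N[num:null], and backward application then adds the number feature monotonically.

def unify_cat(c1, c2):
    if c1["cat"] != c2["cat"]:
        return None
    feats = dict(c1["feats"])
    for f, v in c2["feats"].items():
        if feats.setdefault(f, v) != v:      # [num:null] is a literal value here
            return None
    return {"cat": c1["cat"], "feats": feats}

def backward_apply(arg, functor):
    """Combine 'arg functor', where functor = (result, '\\', wanted)."""
    result, slash, wanted = functor
    return result if slash == "\\" and unify_cat(arg, wanted) else None

N = lambda num: {"cat": "N", "feats": {"num": num}}
dog = ("dog", N("null"))                     # dog := N[num:null]
SG  = ("",    (N("sg"), "\\", N("null")))    # SG := N[num:sg]\N[num:null]

print(backward_apply(dog[1], SG[1]))         # N[num:sg], as in derivation (1)
print(backward_apply(N("pl"), SG[1]))        # None: SG needs a number-less noun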
[Figure 1: Derivation of "a man the daughter of whom John liked". Lexical categories: a := NP/N; man := N; REL-MOD := (N\N)/S[rel:+]; the := NP/N; daughter := N; of := (N\N)/NP; whom := NP[rel:+]; John := NP; liked := (S\NP)/NP. Steps: of whom => N[rel:+]\N (apply>); daughter of whom => N[rel:+] (apply<); the daughter of whom => NP[rel:+] (apply>), lifted by LIFT to S[rel:+]/(S/NP); John liked => S/NP (type-raising and compose>); the daughter of whom John liked => S[rel:+] (apply>); REL-MOD then yields N\N (apply>), which combines with man (apply<) and a (apply>).]

Some zero morphemes at the syntax level are those which may apply to a constituent larger than a single word and change the categories or features of the constituent. They are like ordinary derivational or inflectional morphemes except that their application is not confined within a word boundary. In English, one example is the noun compounding zero morpheme (CPD), which derives a noun modifier from a noun. In Categorial Grammar, its syntactic type is (N/N)\N. (CPD is leftward-looking to parallel the definition of a hyphen as in "four-wheeler".) For instance, a noun compound "dog food" might have the following derivation. (Some compound nouns, e.g. "elephant garlic", are considered "idiomatic" single lexical entries, and they do not have a CPD morpheme.)

(2) dog CPD food
    N (N/N)\N N
    dog CPD => N/N (apply<)
    dog CPD food => N (apply>)

In knowledge-based or object-oriented semantics (cf. Hirst 1987), which our Lucy system uses, the treatment of compound nouns is straightforward when we employ a zero morpheme CPD. In Lucy, CPD has a list of translations in the semantic lexicon, each of which is a slot relation (a two-place predicate as its syntactic type) in the knowledge base. For example, for "dog food" CPD may be translated into (food-typically-eaten x y), where x must be an instance of class Animal and y that of Food. Thus, a translation of CPD is equivalent to a value bound to the "implicit relation" called nn that Hobbs and Martin (1987) introduce to resolve compound nouns in TACITUS. In our case, having CPD as a lexical item, we do not have to introduce such an implicit relation at the semantics level.

An analogous zero morpheme provides a natural analysis for relative clauses, deriving a noun modifier from S. This zero morpheme, which we call REL-MOD, plays an important role in an analysis of pied-piping, which seems difficult for other approaches such as Steedman (1987, 1988). (See Pollard (1988) for his criticism of Steedman's approach.) Steedman assumes that relative pronouns are type-raised already in the lexicon and have the noun-modifier type (N\N)/(S/NP). In Figure 1, we show a derivation of a pied-piping relative clause, "a man the daughter of whom John liked", using REL-MOD. (We assume that accusative wh-words are of basic NP type in the lexicon. A unary rule LIFT, which is similar to a type-raising rule, lifts any NP of basic type with the [rel:+] feature to a higher type NP, characteristic of fronted phrases. This feature is passed up by way of unification. We actually use predictive versions of combinators in our runtime system (Wittenburg 1987).)

Other zero morphemes at the syntax level are used to deal with optional words. We define a zero morpheme for an invisible morpheme that is a counterpart of the overt one. An example is an accusative relative pronoun, as in "a student (who) I met yesterday". Another example of this kind is "you" in imperative sentences.
Having a zero morpheme for the unrealized "you" makes parsing and the interpretation of imperative sentences straightforward. (Each sentence must have one of three mood features: declarative, interrogative, and imperative mood. They are added by the zero morphemes DECL, QUES, and IMP, respectively.)

(3) IMP IMP-YOU finish dinner
    S[mood:imp]/S NP[case:nom] (S\NP)/NP NP
    finish dinner => S\NP (apply>)
    IMP-YOU finish dinner => S (apply<)
    IMP IMP-YOU finish dinner => S[mood:imp] (apply>)

Analogous to the treatment of optional words, VP-ellipsis as in "Mary likes a dog, and Bill does too" is handled syntactically by defining a syntax-level zero morpheme for an elided verb phrase (called VP-ELLIPSIS). During the discourse process in Lucy, the antecedent of VP-ELLIPSIS is recovered (see Kameyama and Barnett (1989)).

(4) Bill does VP-ELLIPSIS
    NP (S\NP)/(S\NP) S\NP
    does VP-ELLIPSIS => S\NP (apply>)
    Bill does VP-ELLIPSIS => S (apply<)

Now to summarize the advantages of having zero morphemes. First, zero morphemes like PRES and SG reduce redundancy in the lexicon. Second, zero morphemes seem to be a natural way to express words that do not appear on the surface but have their overt counterparts (e.g., null accusative relative pronouns, VP-ellipsis). Third, since each zero morpheme has its own syntax and semantic interpretation in much the same way as overt morphemes, and since the semantic interpretations of binary rules that combine a zero morpheme with its argument (or functor) are kept as simple as they are in Categorial Grammar, semantic interpretations of sentences with zero morphemes are compositional and straightforward. Typically in the rule-based approach, the semantic operations of unary rules are more complicated: they might perform such operations as introducing or retracting semantic primitives that do not exist in the semantic lexicon. But with our zero morpheme approach, we can avoid such complication. Lastly, using the zero morpheme REL-MOD makes the analysis of pied-piping and preposition fronting of relative clauses in Categorial Grammar possible. In the following section, we propose an approach that keeps all these advantages of zero morphemes while maintaining the efficiency of the rule approach in terms of parsing.

2. Compiling Zero Morphemes

In natural language processing, simply proposing zero morphemes at each juncture in a given input string during parsing would be a nightmare of inefficiency. However, using the fact that there are only a few binary rules in Categorial Grammar and that each zero morpheme can combine with only a subset of these rules because of its directionality compatibility, we can pre-compile zero morphemes into equivalent unary rules and use the latter for parsing. Our approach is an extension of the predictive combinator compilation method discussed in Wittenburg (1987). The idea is that we first unify a zero morpheme M with the left or right daughter of each binary rule R.
If they unify, we create a specialized version of this binary rule, R', maintaining features of M acquired through unification. Then, we derive a unary rule out of this specialized binary rule and use it in parsing. Thus, if M is of type A/B, R is forward application, and M unifies with the left daughter of R, the compiling procedure is schematized as in Figure 2.

[Figure 2: Compiling a zero morpheme. Given a binary rule R: X/Y Y ==> X and a zero morpheme M of type A/B, unifying M with the left daughter X/Y yields the unary rule B ==> A.]

Now I shall describe the algorithm for compiling zero morphemes in Figure 3.

Figure 3: Algorithm for compiling zero morphemes.
Let M be a zero morpheme and R a binary rule. For each M in the grammar, do the following:
  For each binary rule R in the grammar:
    if the syntax graph of M unifies with the left daughter of R, then
      call the unified binary graph R', and
      make the right daughter of R' the daughter of a new unary rule R1;
      make the parent of R' the parent of R1.
    if the syntax graph of M unifies with the right daughter of R, then
      call the unified binary graph R';
      make the left daughter of R' the daughter of a new unary rule R1;
      make the parent of R' the parent of R1.

During this compiling process, the semantic interpretation of each resulting unary rule is also calculated from the interpretation of the binary rule and that of the zero morpheme. For example, if the semantics of M is M', given that the semantic interpretation of forward application is λfun.λarg.(fun arg), we get λarg.(M' arg) for the semantic interpretation of the compiled unary rule. (See Wittenburg and Aone (1989) for the details of the Lucy syntax/semantics interface.)

We also have a mechanism to merge two resulting unary rules into a new one. That is, if a unary rule R1 applies to some category A, giving A', and then a unary rule R2 applies to A', giving A'', we merge R1 and R2 into a new unary rule R3, which takes A as its argument and returns A''. For example, after compiling an IMP-rule and an IMP-YOU-rule from the zero morphemes IMP and IMP-YOU (cf. (3)), we could merge these two rules into one rule, the IMP+IMP-YOU rule. During parsing, we use the merged rule and deactivate the original two rules.

3. The Grammar with Compiled Zero Morphemes

The grammar with the resulting unary rules has the same generative capacity as the source grammar with zero morphemes in the lexicon, because these unary rules are derived by using only the zero morphemes and binary rules in the source grammar. Thus, a derivation which uses a unary rule can always be mapped to a derivation in the original grammar, and vice versa. For example, look at the following example of CPD-RULE vs. the zero morpheme CPD:

(5) a. dog food
       N N
       dog => N/N (cpd-rule)
       dog food => N (apply>)

    b. dog CPD food
       N (N/N)\N N
       dog CPD => N/N (apply<)
       dog CPD food => N (apply>)

Now, if we assume that we use Categorial Grammar with four binary rules, namely apply>, apply<, compose>, and compose<, as Steedman (1987) does, we can predict, among 8 possibilities (4 rules and the 2 daughters for each rule), the maximum number of unary rules that we derive from a zero morpheme according to its syntactic type. (Zero morphemes do not combine with the wh-word type-raising rule LIFT, which is the only unary rule in our grammar besides the compiled unary rules from zero morphemes.) If a zero morpheme is of type A/B, it unifies with the left daughters of apply>, apply< and compose> and with the right daughters of apply> and compose>. Thus, there are 5 possible unary rules for this type of zero morpheme. If a zero morpheme is of type A\B, there are also 5 possibilities: it unifies with the left daughters of apply< and compose<, and the right daughters of apply>, apply< and compose<. If a zero morpheme is of basic type, there are only 2 possibilities; it unifies only with the left daughter of apply< and the right daughter of apply>.
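A sketch of the Figure 3 compilation over a toy rule set follows (the representation and names are illustrative: categories are encoded as tuples, variables as '?'-prefixed strings, and the zero morpheme categories here are variable-free). Each successful unification of the zero morpheme with a daughter of a binary rule yields a unary rule from the remaining daughter to the parent.

def walk(t, s):
    while isinstance(t, str) and t in s:
        t = s[t]
    return t

def unify(x, y, s):
    x, y = walk(x, s), walk(y, s)
    if x == y:
        return s
    if isinstance(x, str) and x.startswith("?"):
        return {**s, x: y}
    if isinstance(y, str) and y.startswith("?"):
        return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and x[0] == y[0]:
        for xi, yi in zip(x[1:], y[1:]):
            s = unify(xi, yi, s)
            if s is None:
                return None
        return s
    return None

def substitute(t, s):
    t = walk(t, s)
    if isinstance(t, tuple):
        return (t[0], substitute(t[1], s), substitute(t[2], s))
    return t

def compile_zero_morpheme(m, binary_rules):
    """Unary rules (daughter ==> parent) derived from zero morpheme m."""
    unary = []
    for left, right, parent in binary_rules:
        for daughter, other in ((left, right), (right, left)):
            s = unify(m, daughter, {})
            if s is not None:
                unary.append((substitute(other, s), substitute(parent, s)))
    return unary

# apply> : X/Y Y ==> X        apply< : Y X\Y ==> X
RULES = [(("/", "?X", "?Y"), "?Y", "?X"),
         ("?Y", ("\\", "?X", "?Y"), "?X")]
CPD = ("\\", ("/", "N", "N"), "N")          # CPD := (N/N)\N
for daughter, parent in compile_zero_morpheme(CPD, RULES):
    print(daughter, "==>", parent)
# The match against apply<'s right daughter yields the cpd-rule of (5a):
#     N ==> ('/', 'N', 'N'),  i.e.  N ==> N/N
# (the matches where CPD is the other rule's bare argument also produce rules)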
Furthermore, in our English grammar, we have been able to constrain the number of unary rules by pre-specifying for compilation which rules to unify a given zero morpheme with. (In fact, we use more than two kinds of composition rules for the compilation of the morphology-level zero morphemes, e.g. PRES in (1); but this does not cause any "rule proliferation" problem, for this reason.) We add such compiler flags in the definition of each zero morpheme. We can do this for the morphology-level zero morphemes because they are never combined with anything other than their root morphemes by binary rules, and because we know on which side of a root morpheme a given zero affix appears and what the possible syntactic types of the root morpheme are. As for zero morphemes at the syntax level, we can ignore composition rules when compiling zero morphemes which are in islands to "extraction", since these rules are only necessary in extraction contexts. CPD, REL-MOD and IMP-YOU are such syntax-level zero morphemes. Additional facts about English have allowed us to specify only one binary rule for each syntax-level zero morpheme in our English grammar. An example of a zero morpheme definition is shown below.

(6) (defzeromorpheme PRES
      :syntax S[tns:pres]\S[tns:null]
      :compile-info (:binary-rule compose< :daughter R))

4. Comparison in View of Parsing Zero Morphemes

In this section, we compare our approach to zero morphemes to alternative ways, from the parsing point of view. Since we do not know of any other comparable approach which specifically included zero morphemes in natural language processing, we compare ours to possible approaches which are analogous to those which tried to deal with gaps. For example, in Bear and Karttunen's (1979) treatment of wh-question and relative pronoun gaps in Phrase Structure Grammar, a gap is proposed at each vertex during parsing if there is a wh-question word or a relative pronoun in the stack. We can use an analogous approach for zero morphemes, but clearly this will be extremely inefficient. It is more so because 1) there is no restriction such as that there should be only one zero morpheme within an S clause, and 2) the stack is useless because zero morphemes are independent morphemes and are not "bound" to other morphemes comparable to wh-words.

Shieber (1985) proposes a more efficient approach to gaps in the PATR-II formalism, extending Earley's algorithm by using restriction to do top-down filtering. While an approach to zero morphemes similar to Shieber's gap treatment is possible, we can see one advantage of ours: our approach does not depend on what kind of parsing algorithm we choose. It can be top-down as well as bottom-up.

5. Conclusion

Hoeksema (1985) argues for the rule-based approach over the zero morpheme approach, pointing out that the postulation of zero morphemes requires certain arbitrary decisions about their position in the word or in the sentence. While we admit that such arbitrariness exists in some zero morphemes we have defined, we believe the advantages of positing zero morphemes, as discussed in Section 1, outweigh this objection. Our approach combines the linguistic advantages of the zero morpheme analysis with the efficiency of a rule-based approach. Our use of zero morphemes is not restricted to the traditional zero-affix domain. We use them, for example, to handle optional words and VP-ellipsis, extending the coverage of our grammar in a natural way.

ACKNOWLEDGEMENTS

We would like to thank Megumi Kameyama and Michael O'Leary for their help.

REFERENCES

Bear, John and Lauri Karttunen. 1979. PSG: A Simple Phrase Structure Parser. In R. Bley-Vroman and S. Schmerling (eds.), Texas Linguistic Forum, No. 15.

Dowty, David. 1979. Word Meaning and Montague Grammar. D. Reidel Publishing Company.
Gazdar, Gerald. 1987. Linguistic Applications of Default Inheritance Mechanisms. In P. Whitelock et al. (eds.), Linguistic Theory and Computer Applications. Academic Press.

Hirst, Graeme. 1987. Semantic Interpretation and the Resolution of Ambiguity. Cambridge University Press.

Hobbs, Jerry and Paul Martin. 1987. Local Pragmatics. In Proceedings of IJCAI-87.

Hoeksema, Jack. 1985. Categorial Morphology. Garland Publishing, Inc., New York & London.

Kameyama, Megumi and Jim Barnett. 1989. VP Ellipsis with Distributed Knowledge Sources. MCC Technical Report number ACT-HI-145-89.

Pollard, Carl. 1988. Categorial Grammar and Phrase Structure Grammar: An Excursion on the Syntax-Semantics Frontier. In R. Oehrle et al. (eds.), Categorial Grammars and Natural Language Structures. D. Reidel Publishing Company.

Shieber, Stuart. 1985. Using Restriction to Extend Parsing Algorithms for Complex-Feature-Based Formalisms. In Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, 145-152.

Shieber, Stuart. 1986. An Introduction to Unification-Based Approaches to Grammar. CSLI, Stanford University, California.

Steedman, Mark. 1985. Dependency and Coordination in the Grammar of Dutch and English. Language 61:523-568.

Steedman, Mark. 1987. Combinatory Grammars and Parasitic Gaps. Natural Language and Linguistic Theory 5:403-439.

Steedman, Mark. 1988. Combinators and Grammars. In R. Oehrle et al. (eds.), Categorial Grammars and Natural Language Structures. D. Reidel Publishing Company.

Wittenburg, Kent. 1986. Natural Language Parsing with Combinatory Categorial Grammar in a Graph-Unification-Based Formalism. Doctoral dissertation, The University of Texas, Austin.

Wittenburg, Kent. 1987. Predictive Combinators: A Method for Efficient Processing of Combinatory Categorial Grammars. In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics.

Wittenburg, Kent and Chinatsu Aone. 1989. Aspects of a Categorial Syntax/Semantics Interface. MCC Technical Report number ACT-HI-143-89.

Wood, Mary. 1987. Paradigmatic Rules for Categorial Grammars. CCL/UMIST Report, Centre for Computational Linguistics, University of Manchester Institute of Science and Technology.
THE LIMITS OF UNIFICATION

Robert J. P. Ingria
BBN Systems and Technologies Corporation
10 Moulton Street, Mailstop 6/4C
Cambridge, MA 02138
Internet: [email protected]

ABSTRACT

Current complex-feature based grammars use a single procedure--unification--for a multitude of purposes, among them, enforcing formal agreement between purely syntactic features. This paper presents evidence from several natural languages that unification--variable-matching combined with variable substitution--is the wrong mechanism for effecting agreement. The view of grammar developed here is one in which unification is used for semantic interpretation, while purely formal agreement involves only a check for non-distinctness--i.e. variable-matching without variable substitution.

1 Introduction

In recent years, a great deal of attention has been devoted to complex-feature based grammar formalisms--i.e. grammar formalisms in which syntactic elements are not atomic symbols, but rather complex elements, such as value-attribute or term structures; see Shieber (1986) for an overview. Typically such formalisms use a single mechanism--variable substitution--for all purposes, and the most widely used variable substitution mechanism is unification.1 Such complex-feature based grammars, then, are viewed as systems in which partial feature structures are built up, by the process of unification, into successively more specified structures. While it is formally elegant to use a single mechanism for a number of purposes, this theoretical elegance is realized in practice only if the mechanism does not require the other modules of the system to be complicated to achieve this "elegance". Currently, unification is used for at least four purposes:

• to enforce formal agreement between purely syntactic features
• to "percolate" features between a pre-terminal category and the phrase which it heads
• to pass features between a dislocated element--such as a WH-phrase--and its trace
• to build up semantic representations

This paper will focus on the use of unification to enforce agreement and will present evidence from several natural languages which argues against its use in the case of purely formal syntactic features: when such features are lexically or morphologically underspecified, they remain so, even under agreement, contrary to the predictions of a system using unification for agreement. Moreover, it is worthwhile stressing at the outset that the main argument of this paper is not that there are certain constructions that present a problem for unification and, hence, require some technical solution. The point is much stronger: even if some elaborate analysis can be devised that allows unification to be used to effect agreement, this would be the wrong tack to take. Rather, the argument will go, using unification to effect agreement is incorrect both for theoretical reasons--it presents a view of language which is contradicted by the facts--and for practical reasons--using unification to effect agreement can impede a system's robustness and transportability.

1. In the rest of this paper, for convenience I will use the term "unification" instead of "variable substitution", since it is the most commonly used type of variable substitution, but it should be borne in mind that the point being made here holds for variable substitution in general.

2 The Paradox

A typical paradigm that is presented to show the almost transparent application of unification to agreement phenomena is the following:
(1) a. The sheep is ready.
    b. The sheep are ready.
    c. The sheep is there.
    d. The sheep are there.
    e. *The sheep that is ready are there.

Sentences (1a) through (1d) are taken to indicate that "sheep" is underspecified with regard to number; it can be either singular or plural. (1e), on the other hand, shows that "sheep" cannot be both singular and plural at the same time. In the relative clause, "is" is marked as singular, and "sheep", interpreted as its subject via the relative connector "that", must also be singular. On the other hand, "are" in the matrix clause is marked as plural, and "sheep", its subject, must also be plural. Under a unification analysis, these facts are explained in the following way: "sheep" is syntactically unspecified for the feature number. The process of subject-verb agreement is effected by unification. Therefore, when "sheep" appears as the subject of a finite verb, unification will fix its number as singular or plural (unless the finite verb itself is ambiguous). (1e) is ungrammatical, then, since the values singular and plural cannot unify, and the fact that "sheep" must agree with both "is" and "are" in number would require their unification. This illegal feature configuration is shown in (2).

(2) [V [num:sg]] → [N [num:{sg,pl}]] ← [V [num:pl]]
      ("is")          ("sheep")           ("are")

Here, the arrows indicate the notional flow of information under agreement, but have no theoretical status. They indicate that agreement between "sheep" and "is" would set "sheep"'s number feature to singular, while agreement with "are" would set it to plural. More generally, the unification approach to agreement rules out the following configuration:

(3) *[X [F:α]] → [Y [F:{α,β}]] ← [Z [F:β]]

Here [F:x] denotes feature F with value x, and [F:{x,y}] indicates feature F with value either x or y, x and y distinct. Thus, this schema indicates that a category which is specified for the values α and β for feature F cannot simultaneously agree in this feature with categories that specify distinct values for F. In the rest of this section, I will show cases of constructions which match this schema but are still grammatical.

2.1 Case 1: German Free Relatives

In German, as Groos and van Riemsdijk (1979) demonstrate, free relative clauses require that the relative pronoun agree in Case both with the position of the relative clause as a whole and also with the position with which the relative pronoun is construed (i.e. with the gap which the relative pronoun fills). This is shown in (4) and (5), where the matrix verb and the verb in the free relative are annotated with the Case the relative pronoun must bear in that clause.

(4) a. Wer nicht stark ist, muss klug sein.
       who not strong is must clever be
       NOM             NOM       NOM
       'Whoever isn't strong must be clever.'
    b. *Wen Gott schwach geschaffen hat, muss klug sein.
        ACC               ACC             NOM
       *Wer Gott schwach geschaffen hat, muss klug sein.
        NOM               ACC             NOM
        who God weak created has must clever be
       'Who(m)ever God has created weak must be clever.'

(5) a. Ich nehme, wen du mir empfiehlst.
       I take who you me recommend
       ACC       ACC        ACC
       'I take whomever you recommend to me.'
    b. *Ich nehme, wen du vertraust.
        ACC        ACC      DAT
       *Ich nehme, wem du vertraust.
        ACC        DAT      DAT
        I take who you trust
       'I take whomever you trust.'

Assuming that "Case assignment" is actually a form of agreement between a verb and a noun phrase that it governs, the data in (4)-(5) seems to fit nicely into a unification approach; a sketch of such an account is given below.
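To make the unification account concrete, here is a toy fragment (our illustration, not the paper's implementation; the representations and names are assumed): agreement simply unifies the Case value an NP carries with the Case value each governing verb demands, destructively fixing underspecified values.

    def agree(np_case, verb_case):
        # Unification of atomic Case values: succeed iff they are equal;
        # a truly unspecified NP has its value filled in (substituted).
        if np_case is None:
            return verb_case
        return np_case if np_case == verb_case else 'FAIL'

    # (4a): 'wer' is NOM; both clauses demand NOM
    c = agree('NOM', 'NOM')     # -> 'NOM'
    c = agree(c, 'NOM')         # -> 'NOM'   grammatical

    # (5b): 'wen' is ACC; the matrix demands ACC, 'vertraust' demands DAT
    c = agree('ACC', 'ACC')     # -> 'ACC'
    c = agree(c, 'DAT')         # -> 'FAIL'  *Ich nehme, wen du vertraust

The trouble, as the next paragraphs show, comes with a form whose Case is a disjunction: once the first agreement substitutes one disjunct for the whole value, a second, distinct demand can no longer be met, so the unification account predicts every instance of schema (3) to be ungrammatical.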
However, the neuter free relative pronoun was, which is both nominative and accusative, can seemingly agree with both nominative and accusative Case assigning elements at the same time:

(6) Was du mir gegeben hast, ist prächtig.
    What you me given have is wonderful
    NOM/ACC    ACC          NOM
    'What you have given to me is wonderful.'

(7) Ich habe gegessen was noch übrig war.
    I have eaten what still left was
    ACC        NOM/ACC         NOM
    'I ate what was left.'

Note that sentences (6) and (7) are precisely instances of schema (3), just as (1e) is. Hence, if the explanation of the ungrammaticality of (1e) is correct, we should expect (6) and (7) to be ungrammatical.

(8) a. [V [case:ACC]] → [N [case:{N,A}]] ← [V [case:NOM]]
       ("gegeben")        ("was")            ("ist")
    b. [V [case:ACC]] → [N [case:{N,A}]] ← [V [case:NOM]]
       ("gegessen")       ("was")            ("war")

A possible solution to this seeming paradox, which still uses unification to effect agreement, is the following.2 Assume that Case in German is not a single-valued feature, but rather an array of the different Cases of the language, each of which takes on one of the values T or NIL. We can then handle the data above with the following feature specifications. (The (a) representations use a "path" notation, consisting of attribute-value pairs, like that in Shieber (1986); the (b) representations use a term notation, with positional features, like that in Definite Clause Grammars (Pereira and Warren (1980)).)

(9) wer:  a. [case: [nom: T] [gen: NIL] [dat: NIL] [acc: NIL]]
          b. (CASE T NIL NIL NIL)

(10) wem: a. [case: [nom: NIL] [gen: NIL] [dat: T] [acc: NIL]]
          b. (CASE NIL NIL T NIL)

(11) wen: a. [case: [nom: NIL] [gen: NIL] [dat: NIL] [acc: T]]
          b. (CASE NIL NIL NIL T)

(12) was: a. [case: [nom: T] [gen: NIL] [dat: NIL] [acc: T]]
          b. (CASE T NIL NIL T)

Assuming that a verb is only specified for the Case it assigns and is unspecified for the others, the Case specifications for verbs that take nominal complements would be:

(13) geschaffen, nehme, empfiehlst, gegeben, gegessen:
     a. [case: [acc: T]]
     b. (CASE ?val ?val ?val T)

(14) vertraust: a. [case: [dat: T]]
                b. (CASE ?val ?val T ?val)

Similarly, the Case specification for nominative Case assignment, whether this is a property of syntactic structures or of particular lexical items, would be:

(15) a. [case: [nom: T]]
     b. (CASE T ?val ?val ?val)

This solution works, then, because was, and no other free relative pronoun, specifies the value T for more than one element in its Case array, and because verbs and other Case "assigning" elements only specify a value for the Case they "assign", and for no others. This solution of factoring out seemingly contradictory values for a single feature into values of different features allows us to get around the superficial violation of the schema in (3). However, there are other constructions which are harder to decompose in this fashion.

2. This possibility was pointed out to me by Andy Haas.

2.2 Case 2: Hungarian WH Movement and Topicalization

Let us now turn to a more complicated example, from Hungarian, described in Szamosi (1976). In Hungarian, WH words, like full NPs, are marked as either definite or indefinite. The verb in Hungarian is also marked as definite or indefinite, in agreement with its complement. When the complement is an accusative noun phrase, the definiteness marking on verb and noun phrase is the same.

(16) a. Akart egy könyvet.
        he-wanted a book
        -DEF      -DEF
     b. *Akarta egy könyvet.
         he-wanted a book
         +DEF      -DEF
        'He wanted a book.'
     c. *Akart a könyvet.
         he-wanted the book
         -DEF      +DEF
     d. Akarta a könyvet.
        he-wanted the book
        +DEF      +DEF
        'He wanted the book.'
     e. Egy könyv amit akart
        a book which he-wanted
        -DEF        -DEF
     f. *Egy könyv amit akarta
         a book which he-wanted
         -DEF        +DEF
        'A book which he wanted'
     g. *Ez az a könyv amelyiket akart
         this that the book which he-wanted
         +DEF                   -DEF
     h. Ez az a könyv amelyiket akarta
        this that the book which he-wanted
        +DEF                   +DEF
        'This book is the one which he wanted.'

When the complement is a finite clause, the verb bears definite agreement.

(17) a. János akarta, hogy elhozzak egy könyvet.
        John wanted that I-bring a book
        +DEF                     +DEF
     b. *János akart, hogy elhozzak egy könyvet.
         John wanted that I-bring a book
         -DEF                     +DEF
        'John wanted me to bring a book.'

Finally, WH phrases and topicalized constituents in Hungarian typically appear immediately preceding the verb; verb and WH word or topicalized noun phrase must agree in definiteness.3 From these constraints, it follows that WH phrases and topicalized noun phrases extracted from complement clauses must be marked definite. Since the clausal complement forces the verb to bear definite agreement, and since the WH word or topicalized NP must agree with the verb in definiteness, the WH word or topicalized NP can only be definite. This is shown in the following examples:

(18) Ez az a könyv amelyiket akarta hogy elhozzam.
     this that the book which he-wanted that I-bring
     +DEF            +DEF      +DEF +DEF
     'This is the book which he wanted me to bring.'

(19) *Egy könyv amit akarta hogy elhozzak.
      a book which he-wanted that I-bring
      -DEF        +DEF       +DEF -DEF
     'A book which he wanted me to bring.'

However, certain Hungarian verb forms4 bear an ending which is ambiguous between definite and indefinite. In sentences involving such verbs, the WH word may be indefinite.

(20) A könyv amit akarnánk, hogy elhozzon.
     the book which we-would-want that he-brings
     -DEF        ±DEF            +DEF -DEF
     'The book which we would want him to bring.'

(21) Egy könyv akartam, hogy elhozzon.
     a book I-wanted that he-brings
     -DEF   ±DEF      +DEF -DEF
     'It was a book that I wanted him to bring.'

Once again, the grammatical (20) and (21) match the prohibited schema (3) ("c" = "complementizer"):

(22) a. [NP [def:-]] → [V [def:{+,-}]] ← [C [def:+]]
        ("amit")       ("akarnánk")       ("hogy")
     b. [NP [def:-]] → [V [def:{+,-}]] ← [C [def:+]]
        ("egy könyv")  ("akartam")        ("hogy")

Let us consider the consequences of expanding out the definiteness feature into an array of separate values, analogous to the German example. First, this would require the underspecified verb forms to be represented as in (23).

(23) akarnánk, akartam:
     a. [definiteness: [definite: T] [indefinite: T]]
     b. (DEFINITENESS T T)

Next, it would require that the WH pronouns be specified as in (24) and (25):

(24) amelyiket: a. [definiteness: [definite: T]]
                b. (DEFINITENESS T ?val)

(25) amit: a. [definiteness: [indefinite: T]]
           b. (DEFINITENESS ?val T)

Note that when either of these pronouns appeared with an underspecified definite and indefinite verb, such as those in (23), it would wind up with the definiteness specification in (23). This would totally neutralize the definiteness/indefiniteness contrast in such cases; a sketch of the failure is given below.

3. The situation is actually somewhat more complex; see Szamosi (1976) for full details.
4. The first person singular past indicative and the first person plural present conditional.
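The collapse can be made concrete with a small sketch (our reconstruction, not from the paper): unifying an ambiguous verb as in (23) with either pronoun in (24)-(25) yields the same fully specified T-T array, so nothing in the result records which pronoun was involved.

    FAIL = object()

    def unify_slot(a, b):
        if a == '?val':
            return b
        if b == '?val':
            return a
        return a if a == b else FAIL

    def unify_array(x, y):
        out = tuple(unify_slot(a, b) for a, b in zip(x, y))
        return FAIL if FAIL in out else out

    akarnank  = ('T', 'T')       # (23): (DEFINITENESS T T)
    amelyiket = ('T', '?val')    # (24): definite pronoun
    amit      = ('?val', 'T')    # (25): indefinite pronoun

    print(unify_array(akarnank, amelyiket))   # ('T', 'T')
    print(unify_array(akarnank, amit))        # ('T', 'T')

Both unifications succeed and return identical structures: after agreement, the representation no longer distinguishes a definite from an indefinite WH phrase.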
But, in fact, no such ambiguity of interpretation is reported: a definite or indefinite WH phrase or topicalized noun phrase that appears in the suitable configuration with one of these ambiguously definite or indefinite verbs is interpreted as uniquely definite or indefinite, as is consistent with its overt marking, and not as ambiguous between definite and indefinite, as the proposed unification analysis would require. Thus, this unification based solution to a problem of a morphological ambiguity entails an ambiguity of interpretation that is not attested.

Moreover, aside from the empirically incorrect predictions about semantic interpretation, there is a more fundamental problem with the unification account of agreement. As was pointed out above, treating agreement as unification implies that structures meeting the schema in (3) should be superficially ungrammatical. In fact, this seems to be universally false: in every case in natural language in which an element does not morphologically distinguish between two or more values of a feature--a situation often referred to as morphological neutralization--it behaves as if this distinction is also neutralized for purposes of agreement. That is, instead of the configuration in (3) being universally ruled out, it is universally attested. This creates a paradox, since the ungrammaticality of (1e) seems to depend on the ungrammaticality of structures matching the configuration in (3). To demonstrate that this seeming paradox is supported by the data, in the rest of this section, other examples will be presented to show that the configuration ruled out in (3) recurs again and again across the languages of the world.

2.3 Case 3: Objects of Conjoined VPs

In French, as Kayne (1975) points out, it is possible to conjoin past participles following the past auxiliary and a weak ("clitic") object pronoun which is the common object of the conjoined participles, under the requirement that the verbs of the conjuncts assign the pronoun the same Case. This is shown in (26) and (27):

(26) Paul l'a insulté et mis à la porte.
     Paul him-has insulted and put to the door
     ACC          ACC           ACC
     'Paul insulted him and threw him out.'

(27) *Paul l'a frappé et donné des coups de pied.
      Paul him-has struck and given blows of foot
      ACC          ACC        DAT
     'Paul struck him and gave him some kicks.'

However, once again, if the object pronoun is marked for more than one Case, the conjunction of participles assigning those Cases is allowed.

(28) Paul nous a frappé et donné des coups de pied.
     Paul us has struck and given blows of foot
     ACC/DAT      ACC        DAT
     'Paul struck us and gave us some kicks.'

(29) On sait que la police t'a frappé et donné des coups de pied.
     one knows that the police you-has struck and given blows of foot
     ACC/DAT                   ACC        DAT
     'Everybody knows that the police struck you and gave you some kicks.'

Similar facts hold for Icelandic, as well, as Zaenen and Karttunen (1984) point out.

(30) a. *Hann stal og borðaði kökuna/kökunni.
         he stole and ate the cookie
         ACC      DAT   ACC   DAT
        'He stole and ate the cookie.'
     b. Hann stal og borðaði köku.
        he stole and ate a cookie
        ACC      DAT   ACC/DAT
        'He stole and ate a cookie.'

And German also has similar data, as Pullum and Zwicky (1986) show:

(31) a. *Sie findet und hilft Männer/Männern.
         she finds and helps men
         ACC        DAT   ACC    DAT
        'She finds and helps men.'
     b. Er findet und hilft Frauen.
        he finds and helps women
        ACC        DAT   ACC/DAT
        'He finds and helps women.'

The French, Icelandic, and German examples fall into the now familiar configuration.

(32) [V [case:A]] → [NP [case:{A,D}]] ← [V [case:D]]

2.4 Case 4: Elided Verbs in German

Eisenberg (1973) points out that in conjoined German subordinate clauses, the verb in all the non-final clauses can be elided, under identity of person and number agreement.

(33) ...weil Hans Bier und Franz Milch trinkt.
     because Hans beer and Franz milk drinks
             3rd           3rd        3rd
     '...because Hans drinks beer and Franz, milk.'

(34) *...weil ich Bier und du Milch trinkst/trinke.
      because I beer and you milk drink
              1st         2nd       2nd/1st
     '...because I drink beer and you, milk.'

However, in forms which neutralize the person marking on the verb, elision is fine:

(35) ...weil wir das Haus und die Muellers den Garten kaufen.
     because we the house and the Muellers the garden buy
             1st               3rd                    1st/3rd
     '...because we buy the house and the Muellers, the garden.'

(36) ...weil Franz das Haus und ich den Garten kaufen könnten.
     because Franz the house and I the garden buy could
             3rd                 1st              1st/3rd
     '...because Franz could buy the house and I, the garden.'

This is yet another instance of our infamous schema:

(37) [NP [per:1]] → [V [per:{1,3}]] ← [NP [per:3]]

3 Resolving the Paradox

The previous section presented a paradox. There seems to be evidence, in the form of ungrammatical utterances such as (1e), that the configuration in (3) is ungrammatical. However, the rest of the section presented evidence from different constructions and different languages which strongly indicates that (3) is the standard agreement configuration throughout the languages of the world. In this section, I will resolve this paradox by proposing that agreement is effected not by unification but rather by a test for non-distinctness of feature values.

3.1 Neutralization versus Ambiguity

First, let us return to example (1e), repeated here for convenience.

(38) *The sheep that is ready are there.

Recall that the analysis of this utterance which argued for the ungrammaticality of configuration (3) was based on the assumption that "sheep" is unspecified or underspecified for number. Note that this analysis is tenable if syntactic features alone are considered: syntactically, it seems plausible that "sheep" either has no number feature or that it has a variable, rather than a constant, as the value of this feature. However, when the ramifications of this analysis for semantics are considered, it becomes less tenable: while syntactic frameworks have been constructed in which features can take on underspecified values, most semantic frameworks require features such as singular/plural to be fully specified. That is, semantically, "sheep" can denote an individual or a set of individuals,5 but it cannot denote something indeterminate. This suggests that "sheep" is not underspecified, or vague, but rather ambiguous. That is, there is not a single representation for "sheep", which is underspecified for number, but rather two distinct entries, fully specified for number in both their syntactic and semantic aspects. If this is the case, the reason that (38) is ungrammatical is not that unification has filled in the underspecified value for number, but rather that subject-verb agreement disambiguates which of the two senses of "sheep" has been encountered, and once one of the fully specified entries is chosen, it naturally cannot agree with a constituent which bears a distinct number.

5. Nothing in the present argument hinges on this being the correct treatment of the singular/plural distinction. It does not matter which of the various proposals about the semantic interpretation of number is chosen. All that matters is that semantic theories require that singular and plural have different denotations, and do not allow indeterminate representations.
Once utterances like (38) are analyzed as not matching the agreement configuration in (3), it is possible to handle all the cases of morphological neutralization discussed in the previous section. Note that the feature involved in each example of neutralization discussed--Case in German and French, and definiteness on verbs in Hungarian--is either inherently formal, without semantic content (Case), or a feature that does not have any semantic ramification for the category in which it is neutralized: definiteness does affect the interpretation of noun phrases, but it serves purely as a formal agreement marker on verbs. If this observation is correct, then the solution to the apparent paradox runs along the following lines:

• Syntactic features which have semantic ramifications, such as number on nouns, tense on verbs, degree on adjectives, are never neutralized (underspecified). They are always fully specified, and items which seem to be underspecified with regard to them are, in fact, ambiguous items with distinct, fully specified representations. (But see the discussion in Section 4.)

• Purely formal syntactic features, on the other hand, can be neutralized, producing truly underspecified representations, either through the use of value disjunction or through the use of a variable, rather than a constant, as a feature value.6

• Agreement is effected not by unification but rather by a non-distinctness check.

That is, since we can view unification as logically composed of two parts--variable checking and variable substitution--agreement should be analyzed as involving only variable-matching, but not variable substitution. This would explain why constituents that neutralize a syntactic feature distinction are universally able to behave as if they are simultaneously marked for all the values of the feature that they neutralize: since agreement only involves variable matching, but not variable substitution, the original, underspecified representation is always available for agreement. A small sketch of the ambiguity treatment of "sheep" under this proposal is given below.

6. Pullum and Zwicky (1986, p. 766) make a similar distinction between features "freely chosen" vs. those "syntactically imposed ... by rules of agreement ... or government".
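The following toy fragment (assumed representation, not from the paper) shows the ambiguity half of the proposal: "sheep" is two fully specified entries rather than one underspecified one, and an utterance is well-formed only if some single entry satisfies every agreement demand at once.

    LEXICON = {'sheep': [{'num': 'sg'}, {'num': 'pl'}],   # two entries
               'dog':   [{'num': 'sg'}]}

    def readings(word, demands):
        # Entries for `word` consistent with every number demand
        # imposed on it (e.g. by the verbs it agrees with).
        return [e for e in LEXICON[word]
                if all(e['num'] == d for d in demands)]

    print(readings('sheep', ['sg']))        # [{'num': 'sg'}]  The sheep is ...
    print(readings('sheep', ['pl']))        # [{'num': 'pl'}]  The sheep are ...
    print(readings('sheep', ['sg', 'pl']))  # []  *The sheep that is ready are there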
To make this proposal clearer, I will present an analysis of the German and Hungarian facts,7 using a term structure type notation and adding the :OR operator, which introduces disjunctions of variable-free terms.8

German:

Item:            Case:
wer              (nom)
wem              (dat)
wen              (acc)
was              (:or (nom) (acc))
empfiehlst...    (acc)
vertraust        (dat)
ist, war...      (nom)

Hungarian:

Item:                Definite:
amit                 (-)
amelyiket            (+)
hogy                 (+)
akart                (-)
akarta               (+)
akarnánk, akartam    ?val

These representations are matched by a non-distinctness check, which performs the same tests as unification. However, the non-distinctness check differs from unification in what it returns. Unification, when applied to two expressions, typically returns either a distinguished symbol, such as Fail, if they do not unify, or a single substitution expression, which is the most general unifier of its input; see e.g. Pereira and Shieber (1987, pp. 63-64). When two expressions are identical, this substitution expression is empty, since no substitutions need to be performed. In this case, then, unification effectively leaves its input unchanged. Thus, unification can be viewed as returning a single indicator of failure and an unbounded set of substitution expressions. Non-distinctness checking, on the other hand, returns a single indicator of failure but also a single indicator of success--an empty substitution expression. Alternatively, non-distinctness checking may be viewed as determining that two expressions are unifiable, without actually unifying them. The following table contrasts the behavior of unification (U) and non-distinctness (≈):

Case:                                               U:                     ≈:
1. x, y are variable-free and non-disjunctive:
   a. x = y                                         NIL                    NIL
   b. otherwise                                     Fail                   Fail
2. x, y contain variables but are non-disjunctive:
   a. ∃ MGU(x, y)                                   MGU(x, y)              NIL
   b. otherwise                                     Fail                   Fail
3. x, y are both disjunctions:
   a. x ∩ y ≠ ∅                                     (x ← x∩y, y ← x∩y)     NIL
   b. otherwise                                     Fail                   Fail
4. x is a disjunction:
   a. y is a term in x                              (x ← y)                NIL
   b. otherwise                                     Fail                   Fail
5. y is a disjunction:
   a. x is a term in y                              (y ← x)                NIL
   b. otherwise                                     Fail                   Fail

where MGU(x, y) is the most general unifier of x and y; NIL is the empty substitution expression; and (α ← β) indicates a substitution expression in which β substitutes for α.9

In examples (4) and (5) in German, and (16)-(19) in Hungarian, clause 1 applies. Since the terms involved in the agreements in all these examples are variable-free, the results are identical under the unification and non-distinctness analyses. In the German examples (6) and (7), which involve was, clauses 4 and 5 are the relevant ones, and it is here that the difference between the unification approach to agreement and the non-distinctness approach is apparent. Under the unification approach, once the disjunctive Case feature value associated with was unifies with a fully specified Case feature, a substitution list is produced that replaces the disjunction with one of its values:

(39) (:or (nom) (acc)) U (nom) ⇒ ((:or (nom) (acc)) ← (nom))

On the other hand, the non-distinctness check returns a null substitution, so that the disjunction remains, allowing the Case feature of was to agree with distinct values of Case on different applications of non-distinctness.

(40) a. (:or (nom) (acc)) ≈ (nom) ⇒ NIL
     b. (:or (nom) (acc)) ≈ (acc) ⇒ NIL

A sketch of both operations over these disjunctive values is given below.

7. For concreteness, I have analyzed nominative Case assignment as being a property of verbs in German. It is possible that this is a property of structures instead; however, the Case specification of the appropriate structure would be the same as here. A similar consideration holds for Hungarian, where the property of a direct sentential complement triggering definite verb agreement might be either a lexical or a structural property.
8. For a full discussion of the issues involved in adding disjunction to complex-feature based formalisms, see Karttunen (1984), Kasper and Rounds (1986), Kasper (1987), and Johnson (1989).
9. Note that this is an extension of the standard conception of "substitution" in systems without disjunction, in which a term substitutes for a variable, but not for a variable-free term. However, the addition of disjunction requires such an extension.
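A minimal sketch of the contrast (our illustration; the paper itself gives only the table above). A value is a string or a frozenset of strings standing in for an :OR disjunction; substitutions are dicts, with None playing the role of Fail. Variables, clause 2 of the table, are omitted to keep the sketch short.

    def as_set(v):
        return v if isinstance(v, frozenset) else frozenset([v])

    def unify(x, y):
        # Clauses 1 and 3-5: return a substitution or None (Fail).
        meet = as_set(x) & as_set(y)
        if not meet:
            return None
        subst = {}
        if as_set(x) != meet: subst[x] = meet    # x <- x n y
        if as_set(y) != meet: subst[y] = meet    # y <- x n y
        return subst                             # {} (NIL) when identical

    def nondistinct(x, y):
        # Same compatibility test, but never substitutes.
        return {} if as_set(x) & as_set(y) else None

    was = frozenset(['nom', 'acc'])              # (:or (nom) (acc))

    print(unify(was, 'nom'))        # (39): the disjunction is replaced
    print(nondistinct(was, 'nom'))  # (40a): {} -- disjunction survives
    print(nondistinct(was, 'acc'))  # (40b): {} -- so was agrees both ways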
The treatment of the conjoined verb phrase facts in Section 2.3 is analogous to that of the cases already dis- cussed. However, one point is worth discussing here. It has not yet been made clear how it is that the object of the conjoined verb phrase is able to agree separately with each verb in the conjunct. While it might be pos- sible to handle this mechanically by postulating some special percolation rule that combines the features of the conjuncts together into some underspecified or dis- junctive form, there is a much more straightforward solution, namely, to postulate that the examples in Sec- tion 2.3 are generated by ellipsis. Certainly, given the strong lexical thrust of recent grammatical frameworks, in which syntactic structures, such as verbal comple- ment structures, are projected from lexical representa- tions, it is hard to see how such examples could not be analyzed as cases of ellipsis, at least in conligurational languages. Thus, example (28) would be analyzed as in (42) rather than (41). (41) nous a [vP [vp frappe] et [vP donn~ des coups de pied]] (42) [vP [vP nous a frapp4] et [vF, [xP e] [v e] donn4 des coups de pied]] In non-configurational languages, since comple- ments may not be localized in any fixed position, some other mechanism for associating a head with its com- plements is needed, independent of these neutralization facts. In an active-objects approach to syntax, such as that outlined in Ingria and Pustejovsky (1990), message- passing would be the logical way of associating a head with its complements and would extend to the conjunc- tion cases, as well. In any event, whatever mechanism is operative in the non-conjunctive case should also ap- ply to the conjoined case. 3.2 Related Work This paper is not the first to consider the problem that neutralization facts pose for theories of agreement. In particular, Zaenen and Karttunen (1984) and Pullum and Zwicky (1986) consider data of the type presented in Sections 2.3 and 2.4. However, the analysis of agree- ment proposed here seems more general in a number of ways. 1° loin all fairness, Zaenen and Karttunen and Pullum and Zwicky also consider aspects of conjunction and agreement that fall outside the scope of the present paper. While the earlier analyses only considered neutral- ization in the context of conjoined structures, as in Sec- tions 2.3 and 2.4, this paper has examined the problem in general. In particular, the solutions proposed by Za- enen and KartOmen and Pullum and Zwicky crucially depend on the neutralized item standing in an agreement relation with a conjunction and, hence, cannot extend to cases of neutralization that do not involve conjunction. While Zaenen and Karttunen and Pullum and Zwicky agree with the present analysis in associating the neutralized constisment with each conjunct of the conjunction directly, rather than through the conjunc- tion as a whole, both of their analyses require this asso- ciation to be stated as a separate principle. If the brief sketch presented at the end of the preceding section is correct, no such stipulation is necessary. Rather, the behavior of neutralization with respect to conjunction follows from the interaction of the general agreement procedure with the way in which heads are associated with their complements. Zaenen and Karttunen leave the bulk of the ques- tion of what features can be neutralized as a research topic. Pullum and Zwicky, on the other hand, limit neutralization to those features imposed by agreement. 
This is essentially the position argued for here, although there are subtle differences between the two proposals and some problematic data (which we will return to in Section 4). However, this proposal does seem to be fundamentally correct, and, combined with the view of agreement as non-distinctness, yields a more empirically valid theory of agreement than one which equates unification with agreement or which limits the effects of neutralization to conjoined structures.

Moreover, this view of agreement should contribute to the portability of natural language systems across languages. While it might be possible to reconcile the type of agreement behavior discussed here with a formalism in which unification is used for agreement by the use of arrays of feature values or some even more byzantine mechanism, such an approach would increase the fragility of any system embodying it. In a theory such as the one here, it should be possible to distinguish cases of ambiguity from cases of neutralization straightforwardly and to assign the appropriate representation accordingly. In a system that tried to maintain the use of unification for agreement by means of elaborated representations, the designer of a grammar for a new language would be faced with the problem of either using the elaborated representation for all cases of morphological underspecification, and, perhaps, blowing up the size and complexity of the grammar, or reserving the elaborated representation for just those forms which enter into an agreement relation. This would require a thorough study of all the morphological forms of the language and the constructions they enter into before feature structures could be designed, and might entail large scale changes later if previously unnoticed cases of neutralization were discovered.

3.3 The Place of Unification in Grammar

The proposal that agreement is not effected by unification does not, however, mean that unification plays no role in grammar. On the contrary: in most complex-feature based systems, semantic features are also full-fledged parts of syntactic representations, and unification is used to build up more complex terms out of simpler or less specified terms and to build up formulas out of terms.11 There is no argument at all in the data presented here that unification does not continue to play this role. In fact, there is a certain historical niceness in the picture of grammar that has been developed here: variable matching (non-distinctness) is used to effect agreement, and variable substitution (unification) is used to build up semantic representations. The reason why this view is historically satisfying is that it corresponds to views of agreement and semantic interpretation that were independently developed in theoretical and computational linguistics. In the earliest forms of generative grammar, it was recognized that certain constructions, such as the various types of ellipsis, depended on a notion of identity. Over the years, this notion of identity was refined into one of non-distinctness. Two linguistic representations agree if they are non-distinct from one another; they do not need to be identical (see Chomsky (1965, p. 181)). The view of agreement presented here accords with this well-established view. The use of unification for building up semantic representations, in turn, is based on Robinson's (1965) work on resolution theorem proving.
Thus, using unification to build up semantic representations, but not for agreement, returns it to something close to its original use.

There are two other places where unification may play a role in grammar, although other mechanisms are also possible in these cases. The first is feature percolation and the second is the use of empty categories, such as traces. Whereas agreement has been used here to mean matching of features between sister nodes, typically of distinct categories, feature percolation involves the matching of features between one constituent and a constituent which it dominates, where the dominating constituent is a projection of the dominated, in the sense of the X-Bar theory of phrase structure (Chomsky (1970), Jackendoff (1977)). For example, a noun phrase has the same person and number features as its head noun, a verb phrase, the tense and mood of its head verb, etc. Unification has typically been used to effect feature percolation, and nothing in the data presented here suggests that it is wrong to use unification for this purpose. And while the proposal that agreement and feature percolation are handled by different mechanisms is not usual in complex-feature based grammars, it is also not unprecedented. Ross's (1981) Local Grammar formalism is a complex-feature based grammar in which feature percolation and agreement are distinct.

Finally, unification has been used to "pass" features between a "dislocated" element and its trace. Here again, unification remains a viable mechanism. However, there are alternative mechanisms for both these functions, such as inheritance and delegation, whose use should probably be investigated.

11. See Pereira and Warren (1980), Shieber (1986), and Pereira and Shieber (1987) for more detailed discussion of semantic interpretation in complex-feature based grammars.

4 Future Research

There are a number of theoretical and practical issues that the analysis presented here raises. Their discussion will conclude this paper.

First of all, there is the question of how the non-distinctness test for agreement can be incorporated into a system in which unification is used for semantic interpretation and other purposes. Since non-distinctness returns a subset of the values returned by unification, interaction between non-distinctness and unification should be straightforward. However, a system using both these mechanisms would also need to contain some method for specifying which features of which constituents are subject to unification and which are subject to non-distinctness. This suggests the necessity of some sort of type declaration system, in which features are declared as semantically relevant or not for a particular category. The BBN ACFG formalism (Boisen 1989a,b), a form of Definite Clause Grammar, already includes a type declaration system, which has proven very useful for maintaining the consistency of large grammars. It should be possible to extend this kind of type system to the degree of delicacy required by a system incorporating both unification and non-distinctness; one way such declarations might look is sketched below.
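The following is a speculative sketch of such declarations (our invention, not the BBN ACFG system): each feature of a category is declared either UNIFY (semantically relevant, substitution allowed) or MATCH (purely formal, non-distinctness only), and the combination procedure dispatches on that declaration.

    MODES = {('N', 'num'):   'UNIFY',   # number is semantic on nouns ...
             ('V', 'num'):   'MATCH',   # ... but purely formal on verbs
             ('NP', 'case'): 'MATCH'}   # Case is formal everywhere

    def combine(cat, feat, a, b):
        # Values are sets of alternatives; a singleton is a plain value.
        x = a if isinstance(a, set) else {a}
        y = b if isinstance(b, set) else {b}
        meet = x & y
        if not meet:
            return None                     # Fail in either mode
        if MODES.get((cat, feat)) == 'UNIFY':
            return meet                     # substitution performed
        return a, b                         # non-distinctness: unchanged

    print(combine('NP', 'case', {'nom', 'acc'}, {'nom'}))
    # ({'nom', 'acc'}, {'nom'})  -- the neutralized value survives
    print(combine('N', 'num', {'sg', 'pl'}, {'sg'}))
    # {'sg'}                     -- the semantic feature is narrowed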
A more problematic issue is the exact specification of the features which can be neutralized and those which can be ambiguous, and their contexts. In Section 3.1, it was suggested that semantically relevant features enter into ambiguity relations, while all others produce neutralization. However, the notion of semantic relevance may need to be refined. Zaenen and Karttunen (1984) produce examples such as the following:

(43) der Antrag des oder der Dozenten
     the petition the or the docent(s)
                  SG     PL GEN-SG/GEN-PL
     'the petition of the docent or docents'

(44) *Ich habe den Dozenten gesehen und geholfen.
      I have the docent(s) seen and helped
             A-SG/D-PL     ACC      DAT
     'I have seen the docent and helped the docents.'

Example (44), which by the account presented here would involve the attempted neutralization of number, a semantically relevant feature, is ungrammatical, just as is predicted. However, (43), which also seems to involve the attempted neutralization of number, is unexpectedly grammatical. Zaenen and Karttunen also present an example from Finnish parallel to (43):

(45) He lukivat hänen uusimman _ ja me hänen parhaat _ kirjansa.
     They read his newest and we his best book(s)
                GEN-SG            NOM-PL  GEN-SG/NOM-PL
     'They read his newest book and we his best books.'

Here again, number, a semantically relevant feature, appears to be neutralized. Although Zaenen and Karttunen's treatment of neutralization is different from that suggested here in several respects, they suggest a crucial difference between (43) and (45) on the one hand and (44) on the other that may carry over. In (44), the constituent level at which neutralization is attempted is that of the phrase (NP), whereas in (43) and (45) it is at the level of the pre-terminal (N). Zaenen and Karttunen (1984, p. 317) suggest that the neutralization is possible at the one level but not the other because "reference is assigned to noun phrases, not to common nouns." Or, in the terms we have been using here, number is semantically relevant for noun phrases, but not nouns.12 Clearly, more research needs to be done to determine if the proposed distinction is valid or not. Moreover, if it is valid, the theory of feature percolation needs to be modified to allow number to be neutralized at the level of N, but to produce ambiguity at the level of NP.

Finally, one issue that has not yet been mentioned is that of speaker preferences. While the discussion in Section 2 treated constructions involving the neutralized forms as perfectly grammatical, variation in speaker judgement has been reported. Thus, Zaenen and Karttunen (1984) comment that some Icelandic speakers reject (30b) as well as (30a). Pullum and Zwicky (1986) present similar sorts of judgements for other constructions. Moreover, there are also judgements in the opposite direction. For example, Modern Greek, unlike German, does not require that the relative pronoun in a free relative clause have a Case compatible with both its source and superficial positions; see, for example, Mackridge (1985, pp. 259ff) for discussion. This means that the Modern Greek equivalents of (4b) and (5b) are grammatical. Nevertheless, some speakers,13 while accepting such sentences as grammatical, report that sentences containing a free relative pronoun which neutralizes the abstract Case conflict are somewhat more acceptable. These facts set us a broader research goal: that of proposing a theory of agreement which does not produce simple binary grammaticality statements but one which is capable of estimating degrees of relative grammaticality. Since the necessity of such a finer-grained theory of grammaticality is becoming more and more obvious in computational linguistics as a whole, it is no surprise to find it appearing in the study of agreement, as well.

12. In our work on the BBN ACFG system (Boisen 1989a,b), we have also found that features such as number, degree, and tense seem to have their semantic effect at the phrasal level, rather than that of the lexical head. Moreover, this distinction between the behavior of number on N and NP is reminiscent of Chomsky's (1965, pp. 171ff) claim that number is not an inherent feature of nouns.
13. Sabine Iatridou, personal communication.
5 Acknowledgments

The work reported here was supported by the Advanced Research Projects Agency and was monitored by the Office of Naval Research under Contract No. N00014-89-C-0008. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States Government.

I would like to thank Leland George, Sabine Iatridou, James Pustejovsky, Lance Ramshaw, Philip Resnik, David Stallard, and Annie Zaenen for useful comments and assistance.

6 References

1. Boisen, S., Y. Chow, A. Haas, R. Ingria, S. Roucos, R. Scha, D. Stallard and M. Vilain (1989a) Integration of Speech and Natural Language: Final Report, Report No. 6991, BBN Systems and Technologies Corporation, Cambridge, Massachusetts.

2. Boisen, Sean, Yen-Lu Chow, Andrew Haas, Robert Ingria, Salim Roukos, and David Stallard (1989b) "The BBN Spoken Language System", in Proceedings of the Speech and Natural Language Workshop, February 1989, Morgan Kaufmann Publishers, Inc., San Mateo, California, pp. 106-111.

3. Chomsky, Noam (1965) Aspects of the Theory of Syntax, The M.I.T. Press, Massachusetts Institute of Technology, Cambridge, Massachusetts.

4. Chomsky, Noam (1970) "Remarks on Nominalization", in R. A. Jacobs and P. S. Rosenbaum, eds., Readings in English Transformational Grammar, Ginn and Co., Waltham, Mass., pp. 184-221.

5. Eisenberg, Peter (1973) "A Note on 'Identity of Constituents'", Linguistic Inquiry 4, pp. 417-420.

6. Groos, Anneke and Henk van Riemsdijk (1981) "Matching Effects in Free Relatives: A Parameter of Core Grammar", in A. Belletti, L. Brandi, and L. Rizzi, eds., Theory of Markedness in Generative Grammar, Proceedings of the 1979 GLOW Conference, Scuola Normale Superiore, Pisa, pp. 171-216.

7. Ingria, Robert J. P. and James Pustejovsky (1990) "Active Objects in Syntax, Semantics, and Parsing", in Carol Tenny, ed., Papers from the Parsing Seminar, MIT Center for Cognitive Science.

8. Jackendoff, Ray S. (1977) X Syntax: A Study of Phrase Structure, Linguistic Inquiry Monograph No. 2, The MIT Press, Cambridge, Massachusetts.

9. Johnson, Mark E. (1989) Attribute-Value Logic and the Theory of Grammar, Center for the Study of Language and Information.

10. Karttunen, Lauri (1984) "Features and Values", in Proceedings of Coling84, Association for Computational Linguistics, Morristown, NJ, pp. 28-33.

11. Kasper, Robert T. (1987) "A Unification Method for Disjunctive Feature Descriptions", in 25th Annual Meeting of the Association for Computational Linguistics: Proceedings of the Conference, Association for Computational Linguistics, Morristown, NJ, pp. 235-242.

12. Kasper, Robert T. and William C. Rounds (1986) "A Logical Semantics for Feature Structures", in 24th Annual Meeting of the Association for Computational Linguistics: Proceedings of the Conference, Association for Computational Linguistics, Morristown, NJ, pp. 257-266.

13. Kayne, Richard S. (1975) French Syntax: The Transformational Cycle, The MIT Press, Cambridge, Massachusetts, and London, England.

14. Mackridge, Peter (1985) The Modern Greek Language, Oxford University Press.

15. Pereira, Fernando C. N. and Stuart M. Shieber (1987) Prolog and Natural-Language Analysis, Center for the Study of Language and Information.

16. Pereira, Fernando C. N. and David H. D. Warren (1980) "Definite Clause Grammars for Language Analysis--A Survey of the Formalism and a Comparison with Augmented Transition Networks", Artificial Intelligence 13, pp. 231-278.

17. Pullum, Geoffrey K. and Arnold M. Zwicky (1986) "Phonological Resolution of Syntactic Feature Conflict", Language 62, pp. 751-773.

18. Robinson, J. A. (1965) "A Machine-oriented Logic Based on the Resolution Principle", Communications of the ACM 12, pp. 23-41.

19. Ross, Kenneth M. (1981) Parsing English Phrase Structure, Ph.D. Dissertation, University of Massachusetts at Amherst.

20. Shieber, Stuart M. (1986) An Introduction to Unification-Based Approaches to Grammar, Center for the Study of Language and Information.

21. Szamosi, Michael (1976) "On a Surface Structure Constraint in Hungarian", in James D. McCawley, ed., Syntax and Semantics Volume 7: Notes from the Linguistic Underground, Academic Press, New York, pp. 409-425.

22. Zaenen, Annie and Lauri Karttunen (1984) "Morphological Non-Distinctness and Coordination", in ESCOL 84, pp. 309-320.
Asymmetry in Parsing and Generating with Unification Grammars: Case Studies From ELU

Graham Russell,* Susan Warwick,* and John Carroll†

* ISSCO, 54 rte. des Acacias, 1227 Geneva, Switzerland; [email protected]
† Cambridge University Computer Laboratory, New Museums Site, Pembroke Street, Cambridge CB2 3QG

Abstract

Recent developments in generation algorithms have enabled work in unification-based computational linguistics to approach more closely the ideal of grammars as declarative statements of linguistic facts, neutral between analysis and synthesis. From this perspective, however, the situation is still far from perfect; all known methods of generation impose constraints on the grammars they assume. We briefly consider a number of proposals for generation, outlining their consequences for the form of grammars, and then report on experience arising from the addition of a generator to an existing unification environment. The algorithm in question (based on that of Shieber et al. (1989)), though among the most permissive currently available, excludes certain classes of parsable analyses.

1. Introduction

Parsing and generation are both concerned with the relation between texts and representations, and in so far as a grammar defines this relation without reference to direction, it may be regarded as reversible. Yet, in practice, the program which 'applies' a grammar for the purpose of parsing is quite distinct from the one which performs generation.1

The essential difference between parsing and generating lies in the nature of the input. The text, as a string of words, traditionally establishes the starting point of parsing; whether the processing is top-down or bottom-up, the basis for selecting grammar rules is information associated with words in the lexicon. In the case of generation, there is in general no guarantee that the constituents of an input representation correspond to words; a portion of the input may be related directly to a given word, or it may be the result of combining representations associated to some sequence of rules, portions of which are ultimately related to lexical items. For example, if the sentence John kicked the bucket receives the semantic representation die(John), it is relatively easy to see how during parsing the recognition of kicked and the bucket will provide the necessary information (from the lexical entry for kick) to build that representation. The representation and the lexical items are in general related not directly, but rather via intermediate syntactic rules, any of which is able to manipulate the representation in arbitrary ways; in generation, it is not possible to identify the correct lexical item without considering the syntactic rules which may intervene.

The generation problem, then, consists in how to build a syntactic structure from an initial representation, taking it as the root, and extending the structure 'downward' to the lexicon by selecting rules from the grammar and attaching them at the appropriate points; the sketch below illustrates the search this involves.

1. Parsing and generation need not employ different algorithms or control strategies; see Shieber (1988) for discussion. However, a truly reversible grammar would be an entirely different undertaking from what is described here. One such project is currently under way at New Mexico State University (Yorick Wilks, p.c.).
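The following toy fragment (our own, with an assumed grammar format) makes the point in miniature: the idiomatic realization of die(j) is only reachable through a rule of the grammar, not by looking any word up from a piece of the semantics.

    RULES = [
        # (mother semantics, daughters); daughters are pre-lexicalized
        # here for brevity -- in a real grammar each daughter is itself
        # a feature structure to be expanded recursively.
        ('die(j)', ['john', 'kicked', 'the', 'bucket']),   # idiom rule
        ('die(j)', ['john', 'died']),
    ]

    def generate(sem):
        # Naive top-down generation: try every rule whose mother
        # matches the input semantics.
        for mother, daughters in RULES:
            if mother == sem:
                yield ' '.join(daughters)

    print(list(generate('die(j)')))
    # ['john kicked the bucket', 'john died']

With recursive, feature-rich rules, the same search must decide at every point which rules can legitimately intervene between the root semantics and a word; controlling exactly that search is the subject of the rest of the paper.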
Though unification-based systems have been in use for parsing for a number of years, generation has until recently not attracted comparable attention; Wedekind (1988), Dymetmann & Isabelle (1988) and Shieber (1988) describe three systems of note. Not surprisingly, given the relative infancy of these explorations, none of these systems is without problems. The most permissive of the current proposals appears to be Shieber et al.'s (1989) revision of the Shieber (1988) algorithm, yet several plausible grammatical analyses handled by the parser are beyond the capacity of even this approach.

This paper reports on experience arising from the addition of a generator component to the ELU2 environment; the algorithm is a variant of that proposed in Shieber et al. (1989). We first consider general aspects of adapting unification grammars initially developed for parsing to their use in generation. A brief description of the generator in ELU highlights the differences and improvements we have adopted. We then demonstrate shortcomings of this class of generation algorithms on the basis of two case studies.

2. "Environnement Linguistique d'Unification". Cf. Johnson & Rosner (1989) for a description of UD (Unification Device), which includes the parser and facilities such as procedural abstractions and extended data types (lists and trees), and Estival et al. (1989) for a description of the extended ELU system which incorporates the original UD plus a generation and translation component.
that if the semantics of the mother node is known, then the semantics of the head daughter is instantiated, and additionally that if the syntax of the semantic head is known, then the semantics of each daughter is known. These restrictions limit the class of possi- ble analyses, excluding accounts appropriate to LFG (Kaplan and Bresnan, 1982), HPSG (Pollard & Sag, 1987) and UCG (7_eevat et al., 1987). The disparate state of progress in parsing and generation raises important issues concerning the adequacy of grammatical descriptions and the com- putational tools that interpret them. A situation exists in which a grammar may be 'correct' for analysis, but 'incorrect' for generation. Significantly, this may be the case even when the restrictions and annotations mentioned above are taken into account. Grammatical analyses developed in a purely parsing environment cannot 206 always be transferred slraightforwardly into a for- mat suitable for generation. Two types of conclu- sion may be drawn from this: failures may be ascribed to inadequacies of current generator tech- nology, or the grammatical analyses in question may be re-evaluated. Practical remedies will involve two related strands of research; improving methods of generation so as to IDinimiTe restric- tions on the form of grammars that can be gen- erated fzom, and identifying problematic properties of grammars. It is the second of these which the present paper chiefly addresses, though we also remark, in the next section, on some enhancements to the Shieber et al. (1989) algorithm that have been incorporated in the ELU generator. 3. The Generator in ELU In this section we describe the generation algorithm in ELU, and discuss in what respects it differs from that described by Shieber et al. (1989). 3 Two notions central to this method of generation are that of the 'pivot', and that of partitioning the grammar intO 'chaining' and 'noD-chaining ' rules. Loosely, the 'pivot' of a structure to be generated from is the lowest node in a path down semantic heads of rules at which the semantics of the current generation root structure remain~ unchanged. A ch~inlng lille is one in which the semantics of the object associ- ated with the right-hand side category that has been declared as the head unifies with that of the left- hand side category. Other rules are non-chaining roles. Rules that apply between the root and the pivot are, by definition, chaining rules; further, any rule which can be attached below the pivot is, by definition, a non-chaining rule. Rules are parti- tioned into these two groups drain 8 grammar com- pilaton. Once the chaining rules have been identifed, the grammar compiler computes the possible sequences of such rules alon 8 a path through their mothers and semantic heads. The result is a 'teachability table', each of whose elements is a pair of restrictor value sets4 representing classes of FSs which can occur at the top and bouom of such a path; in each case, the 'bottom' restrictor set characterizes a pivot. A res- trictor set is also computed for each lexical stem, in order to retrieve words efficiently during genera- tion The generation algorithm uses the distinction between chaining and non-chaining rules as well as 3 Our discussion will therefore assume familiari- ty with this paper. 
The algorithm is:

1. Take all grammar rules declared as 'initial' (or all rules in the grammar if no such declaration has been made); for each of these rules whose mother unifies with the input FS, apply the rule top-down, building FSs for each of the daughters, and, starting with the head daughter, execute step 2 for each one. If generation from the daughters is successful, compute all possible word-forms (as constrained by the locally available syntactic information) for each lexical stem generated.

2. Create a pivot consisting of just the semantic portion of the current FS. Non-deterministically perform steps 2a and 2b:

   a. Find a lexical stem which unifies with the pivot, making sure (by checking with the reachability table) that the FS resulting from the unification can be linked through semantic heads of just chaining rules up to the current FS.

   b. Find a non-chaining rule which can have the pivot as mother, similarly making sure that the FS resulting from the unification of the pivot and the mother can be linked up to the current FS. Recursively (through 2) generate the rule's daughters, starting with the head daughter.

3. Link the pivot up to the current FS through semantic heads of just chaining rules (at each stage, before adding a new rule in the chain, checking with the reachability table that further linking will be possible) and then recursively (through 2) generate the non-head daughters of these rules.

In this algorithm non-chaining rules are used top-down, while chaining rules are used bottom-up. Linking information is used both to check the applicability of a lexical stem or a non-chaining rule when generating top-down from a pivot, and also to control search when generating bottom-up, by ensuring that the left-hand side of any rule considered still lies on a possible path through chaining rules to the current FS.

One innovation of the ELU generator is that the notion 'semantic head' is interpreted rather differently; whereas the earlier work simply defines the semantic head of a rule as the daughter whose semantics unifies with that of the left-hand side, and thus leaves the notion undefined for non-chaining rules, that described here permits the grammar writer to identify one daughter in each rule as the semantic head. A rule in which a daughter shares the semantics of the mother can thus be made into a chaining rule or a non-chaining rule, according to whether that daughter is identified as the semantic head, and a rule that would otherwise have multiple semantic heads can be assigned just one.^6 A rule in which there is no such daughter will remain a non-chaining rule, but may nevertheless be annotated with a similar specification.

6 Thus circumventing a problem noted by Shieber et al. (1989, fn. 4) in connection with such rules. Van Noord (p.c.) stipulates that any daughter which has the same semantics as the mother, but is not the semantic head, may not branch; this constraint is clearly too strong, precluding, among other things, linguistically motivated accounts of coordination.
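The effect of this declaration facility can be pictured with a small sketch. The fragment below is our own reconstruction, not ELU code: feature structures are abbreviated to fs(Category,Semantics), and rule/3 and head/2 are hypothetical stand-ins for the compiled grammar and the writer's head declarations.

% A rule is chaining iff its *declared* head daughter shares the
% semantics of the mother; by declaring a non-sharing daughter as
% head, the writer coerces a rule into the non-chaining class.

:- use_module(library(lists)).                 % nth1/3

rule(v2,  fs(sbar,S), [fs(v,_), fs(s,S)]).     % daughter 2 shares S ...
head(v2,  2).                                  % ... and is declared head
rule(top, fs(top,S),  [fs(np,_), fs(s,S)]).    % daughter 2 shares S too,
head(top, 1).                                  % but daughter 1 is declared

chaining(Name) :-
    rule(Name, fs(_,Sem), Daughters),
    head(Name, I),
    nth1(I, Daughters, fs(_,HeadSem)),
    Sem == HeadSem.          % declared head shares the mother's semantics

non_chaining(Name) :-
    rule(Name, _, _),
    \+ chaining(Name).

% ?- chaining(v2).        % succeeds
% ?- non_chaining(top).   % succeeds: the sharing daughter was not declared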
The rationale is twofold: the ability to coerce what would otherwise be a chaining rule to a non-chaining rule grants the grammar writer more control over generation, and the ability to specify one daughter as semantically more significant than the others may be exploited in order to direct the attention of the generator towards that daughter.

A second difference is the order of events in bottom-up generation. Instead of generating from the non-head daughters of each chaining rule as it is attached, the pivot is first linked to the root, so that, if backtracking is forced, effort will not have been spent on processing structure that must be discarded.

Finally, on each occasion that top-down generation is initiated, an attempt is made to add a lexical item below the current root, rather than extending the path by application of non-chaining rules until no such rule is applicable. Here, the motivation is that lexical information may be made available as soon as possible without forcing the grammar writer to adopt analyses that will produce bottom-up generation. This is important because global syntactic properties of a sentence are often determined by lexical information.

4. Grammars for Generation

4.1. Introduction

In this section we examine more closely interactions between generator and grammar. These fall under two headings: (i) the presence of non-determinism in the grammar, and (ii) the role of lexicalism.

One aspect of non-determinism in generation, that of the ordering of rule application, is partially overcome in ELU by the user specification of the head daughter. Non-determinism with respect to the order of solving constraint equations is less well understood. The use of restrictors helps to reduce the number of feature structures to be considered. However, in ELU, the use of relational abstractions as a generalization of template facilities increases the problem considerably.^7 Relational abstractions permit the grammar writer to augment the phrase structure rules with statements which may receive multiple definitions in terms of constraint equations; the 'Linear Precedence' definition in (2) below is an example. This facility is a standard ELU device for collapsing what would in an unextended PATR-like formalism be several distinct rules, thereby capturing linguistic generalizations that would otherwise go unexpressed.

7 Cf. Johnson & Rosner (1989) for a fuller description of relational abstractions.

It is particularly important to control non-determinism in generation, since, at least when processing is initiated, there is relatively little information available to direct the search. Expanding multiple definitions as they are encountered would give rise to an unacceptable number of alternatives, many of which might be identical, and often the information from the abstraction is not required until all but one of the alternatives have been excluded by other factors. This is not always the case, however, and when exceptions occur their effect may be drastic. We now describe one such exception to demonstrate how an elegant analysis for parsing is unsuitable for generation.
4.2. A grammar for French clitics

A common technique in modern lexically-oriented grammars, and one which reflects and extends the traditional notion of 'valency', is to encode information about the various phrases with which a verb combines in items on a subcategorization list. The grammar then enforces a match between a member of the list and a phrase which is to combine with some projection of the verb, and removes the item from the list. When a sentence is complete, i.e. the verb has 'found' all necessary phrases, a grammar may require that the list be empty, or perhaps that any remaining item is in some way specified as optional. See e.g. Shieber (1986) and Pollard and Sag (1987) for applications of this method.

A complete grammar of French must account for the position and ordering of clitic pronouns. These precede the verb, while other complement phrases follow. Moreover, they appear in a fixed order, as shown in (1):

(1)  me      le     lui    y    en
     te      la     leur
     se      les
     nous
     vous

Up to three clitics may occur, but for the sake of this discussion we consider only the simpler case of two clitics as complement phrases to the verb.^8 There are of course many ways of accounting for their distribution;^9 the subcategorization list device seems a natural solution, since any complement phrase may be realized as a clitic. The grammar rule in (2) introduces up to two clitics before the verb, their relative order determined by a relational abstraction which is defined by a number of clauses, each clause licensing one of the possible clitic sequences.

8 This is something of an oversimplification, as not only complement phrases, but also adverbials and parts of complement phrases are realized as clitics. See Grimshaw (1982) for a partial LFG account of these phenomena. We also ignore the issue of negation, which considerably complicates the clitic-aux-verb structure.

9 The categorial treatment proposed in Baschung et al. (1987) not only makes use of order of arguments, but also codes each clitic for all possible combinations.

(2)  vplus -> Cl1 Cl2 H_V
        !Precede(Cl1,Cl2)
        List = <H_V subcat> -- Cl1
        <vplus subcat> = List -- Cl2

     Precede(X,Y)
        <X person> = first/second
        <Y person> = third
     Precede(X,Y)
        <X case> = accusative
        <Y case> = dative

Some remarks on notation will be helpful: calls to relational abstractions are indicated by the exclamation mark, feature-value disjunction is indicated by the slash, and an equation of the form 'X = Y -- Z', where X and Y are lists, unifies X non-deterministically with the result of extracting one instance of Z from Y. The effect of this rule, then, is to associate a pair of clitics with a verb, checking that they are correctly ordered, and unifying the subcategorization list of the left-hand side category with a copy of that of the head verb from which objects unifying with each of the clitics have been removed.

The problem emerges when information assumed to be held in the subcategorization list of 'vplus' is required in order to control further generation. For example, if 'vplus' appears as sister to another complement phrase, and the same procedure of unifying the latter with an item on the list takes place, then because the generator has suspended expansion of non-deterministic abstractions, the subcategorization list itself will be uninstantiated, and therefore no information regarding the semantics of the complement phrase will be available to restrict top-down generation.
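The mechanics can be pictured in isolation. The relation below is our Prolog rendering of the '--' list-extraction device (ELU is not Prolog, so the names and encoding here are ours, for illustration only):

% extract(Y, Z, X): X is Y with one instance of Z extracted;
% a rendering of the ELU equation X = Y -- Z.
extract([Z|Rest], Z, Rest).
extract([Y|Rest], Z, [Y|Rest1]) :-
    extract(Rest, Z, Rest1).

% With an instantiated subcategorization list, the extracted item is
% constrained by the list:
%   ?- extract([np(acc),np(dat)], Clitic, Remaining).
%   Clitic = np(acc), Remaining = [np(dat)] ; ...
% But if the list is still an unbound variable (the situation that
% arises when expansion of the relational abstraction has been
% suspended), the call merely invents an open-ended list, and nothing
% about the semantics of the complement phrase is transmitted:
%   ?- extract(Subcat, Clitic, Remaining).
%   Subcat = [Clitic|Remaining].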
Modifications to the syntactic constituency assumed here do not affect the principle; as long as the instantiation of so central an element of the grammar as the subcategorization list is delayed, the problem will remain. An alternative type of analysis would remove the non-determinism from the grammar by factoring it out into a larger number of rules. This solution is not without its own disadvantages; the number of distinct rules needed by a full treatment of French clitics, integrated with the placement of the various negative particles and auxiliaries, should not be underestimated. We postpone further discussion of non-determinism and delay until the conclusion, and turn now to the problem of empty semantic heads, an important problem for head-driven generation algorithms.^10

10 This problem is alluded to in Shieber et al. (1989, fn. 4) and is discussed in a draft of an expanded version of the paper.

4.3. Empty Semantic Heads

In German and Dutch, there are two positions in a sentence where tensed verbs may appear: in second position of a main clause, and in final position of a subordinate clause. Once again, a multitude of analyses are possible within ELU grammars. One approach is to control the distribution of verbs with grammar rules specific to clause-type; this solution gives rise to what might be felt to be an unacceptable degree of duplication in the grammar. A more elegant approach, successful for parsing, exploits the possibility of associating a word or phrase appearing in one position within a sentence with a 'gap' elsewhere.

The latter analysis will be recognized as a variant of a standard Government-Binding treatment, in which a tensed verb in a main clause is 'raised' from an 'underlying' sentence-final position to a 'surface' second position (see e.g. Haider (1985) and Platzack (1985) for discussion of this class of analyses). The dependency may be implemented by the use of a feature, say 'v2', whose value in a verb-second construction is a feature structure representing the verb to be raised, and in other constructions an atomic constant such as 'none', which serves to block the dependency. At the extraction site, any value of 'v2' other than 'none' may be cashed out as an empty production. Information regarding the various syntactic properties of the raised verb is passed in the normal fashion between the verb's true position and the extraction site, where it is able to exert the same constraints upon complement phrases that a lexically-realized verb would.

The simplified rule set given in (3) will serve as a basis for discussion. Recall that the generator operates by partitioning the rules of the grammar into classes to be applied top-down (non-chaining rules: here 'S-gap' and 'V2') and bottom-up (chaining rules: here 'TOP', 'S' and 'V'). Bottom-up generation is only practical if the input structure to that phase of generation contains sufficient information, e.g. the verb with its subcategorization list.
(3)  # Rule TOP
     TOP -> XP H_S
        <* cat> = top
        <* head> = <H_S head>
        <XP cat> = np
        <H_S subcat> = [XP]
        <H_S cat> = sbar

     # Rule V2
     Sbar -> H_V2 S
        <* cat> = sbar
        <H_V2 cat> = v
        <S cat> = s
        <* subcat> = <S subcat>
        <S v2> = H_V2
        <* head> = <S head>
        <H_V2 head syn vform> = finite

     # Rule S
     S -> XP H_S
        <* cat> = s
        <XP cat> = np
        <H_S cat> = s
        <* v2> = <H_S v2>
        <* subcat> = <H_S subcat> -- XP
        <* head> = <H_S head>

     # Rule V
     S -> H_V
        <* cat> = s
        <* head> = <H_V head>
        <H_V cat> = v
        <* subcat> = <H_V subcat>

     # Rule S-gap
     S -> -
        <* cat> = s
        <* head> = <V2 head>
        <* v2> = V2
        <* subcat> = <V2 subcat>

The verb-raising analysis sketched here has the unfortunate property of supplying the generator with a semantic head (the verb gap) about which nothing is known. At the stage when top-down processing has identified the verb gap as the starting point for bottom-up generation, the input feature structure is underspecified. In particular, the subcategorization list of the missing verb is uninstantiated, and in the grammar in question, it is the length of this list which controls invocation of the recursive rule 'S'. No bindings can be found, and the generator suspends evaluation of that equation in the hope, unfounded on this occasion, that information not yet present will later allow its solution. The result is that 'S' is repeatedly added above 'S-gap', in a non-terminating attempt to ensure completeness of the search.

Van Noord (1989) describes two solutions to this problem, both of which are additions to the original program, and whose only motivation (so far) is to overcome this specific problem. The first, somewhat ad hoc, solution allows the verb to have as one of its morphological realizations the empty string. Since word forms are generated at the end of processing by a morphological front-end, the generator can posit the same word in both positions (for the purpose of retrieving its subcategorization behaviour from the lexicon, for example). The morphological component then generates one empty string and one full word according to the position of the verb (i.e. in a main or subordinate clause). This mechanism is not available in ELU. The second solution adds an additional 'connect' clause in the Prolog program, specific to gaps, in order to assure that the gap is first instantiated before further processing; this solution raises the issue of tuning programs to treat specific problems as they are encountered.

There are other constructions which raise the same kind of problem; the fronting of apparently non-constituent verbal sequences in German (Nerbonne, 1986) introduces more complex dependencies, while in English the phenomena of Gapping and Verb-Phrase Ellipsis both manifest themselves syntactically in the absence from a sentence of a verb and possibly other material. Here, the difficulty is, if anything, greater, as the dependencies in question are anaphoric in nature, rather than syntactic.
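The shape of the resulting search space can be sketched schematically. The Prolog fragment below is our drastic simplification of rules 'S' and 'S-gap' (it abstracts away from the feature structures and from the actual mixed top-down/bottom-up control regime), keeping only the way the subcategorization list drives the recursion:

% s(V2, Subcat): an S node missing the complements in Subcat; V2
% carries the raised verb, whose subcat list is recovered at the gap.
s(V2, Subcat) :-            % Rule S: one complement realized here, so
    s(V2, [xp|Subcat]).     % the daughter S misses one phrase more.
s(v2(Subcat), Subcat).      % Rule S-gap: empty production; the gap's
                            % subcat list is the raised verb's.

% ?- s(V2, [subj]).
% V2 = v2([subj]) ;
% V2 = v2([xp,subj]) ;
% V2 = v2([xp,xp,subj]) ; ...
% As long as the raised verb's subcategorization list is
% uninstantiated, nothing bounds the depth: 'S' can be stacked above
% 'S-gap' indefinitely, which is the non-termination described above.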
5. Conclusion

We have seen, in the preceding section, how in order to write grammars suitable for use with the generator, one must either modify the technical aspects of the grammar or dispense with certain classes of grammatical analysis (losing the benefits of relational abstraction on one hand, and of lexicalism on the other, for example). Both of these may be interpreted as restricting the freedom of the grammar writer.

The problematic case illustrated in section 4.2 raises the issue of non-determinism, a potential pitfall for all unification-based systems. In parsing, the result may be long processing times, but when generating with algorithms of this class, the consequence is often non-termination. As Shieber et al. (1989, fn. 4) observe, failure to choose the right daughter as the starting point for recursive generation may prevent termination.

The desire to exploit the power of unification by using the lexicon as a repository of essentially syntactic (beyond pure semantic) information is natural, and has been encouraged by the success in theoretical linguistics of grammatical formalisms which employ such techniques. Yet the use of these techniques in grammar writing, which are highly attractive from the point of view of economy and expressive power, deprives the generator of information that is, strictly speaking, syntactic. Semantic heads alone are not sufficient to drive the generation process if syntactic information cannot also be made available. Our interim conclusion is that strong versions of the lexicalist position do not appear to be compatible with our current generator, at least for a number of cases. This is not to say that it should be abandoned - the benefits in terms of clarity and economy are probably too great - but some care is needed if it is to be exploited effectively.

Given that work on this type of generation is in its early stages, it is to be hoped that continuing research will enable less restricted grammars to be written. Nevertheless, the currently available facilities have been employed successfully in general, making it possible to envisage defining the 'adequacy' of a grammar in terms of its behaviour both in parsing and in generation.

References

Baschung, K., G.G. Bes, A. Corluy, and T. Guillotin (1987) "Auxiliaries and Clitics in French UCG Grammar". Proceedings of the Third Conference of the European Chapter of the Association for Computational Linguistics, Copenhagen, Denmark, April 1st-3rd 1987: 173-178.

Bresnan, J. (ed.) (1982) The Mental Representation of Grammatical Relations. Cambridge, MA: MIT Press.

Busemann, S. (1987) "Generierung mit GPSG". KIT-Report 49, Technische Universität Berlin.

Dymetman, M. & P. Isabelle (1988) "Reversible Logic Grammars for Machine Translation". Proceedings of the 2nd International Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages, Carnegie-Mellon University, Pittsburgh, USA.

Estival, D., A. Ballim, G. Russell, and S. Warwick (1989) "A Syntax and Semantics for Feature-Structure Transfer". MS, ISSCO.

Grimshaw, J. (1982) "On the Lexical Representation of Romance Reflexive Clitics", in Bresnan (ed.): 87-148.

Haider, H. (1985) "V-Second in German", in H. Haider and M. Prinzhorn (eds.) Verb Second Phenomena in Germanic Languages: 49-75. Dordrecht: Foris.

Johnson, R. and M. Rosner (1989) "A Rich Environment for Experimentation with Unification Grammars". Proceedings of the Fourth Conference of the European Chapter of the Association for Computational Linguistics, Manchester, UK, April 10th-12th 1989: 182-189.

Kaplan, R.M. and J. Bresnan (1982) "Lexical-Functional Grammar: A Formal System for Grammatical Representation", in Bresnan (ed.): 173-281.

Kay, M. (1985) "Parsing in Functional Unification Grammar", in D. Dowty, L. Karttunen, and A. Zwicky (eds.) Natural Language Parsing. Cambridge: Cambridge University Press: 251-278.

Nerbonne, J. (1986) "'Phantoms' and German Fronting: Poltergeist Constituents?". Linguistics 24-5: 857-870.
van Noord, G. (to appear) "Bottom Up Generation in Unification-based Formalisms", in C. Mellish, R. Dale, and M. Zock (eds.) Proceedings of the Second European Workshop on Natural Language Generation.

Platzack, C. (1985) "A Survey of Generative Analyses of the Verb Second Phenomenon in Germanic". Nordic Journal of Linguistics 8: 49-73.

Pollard, C. and I.A. Sag (1987) Information-Based Syntax and Semantics, Volume 1: Fundamentals. CSLI Lecture Notes no. 13.

Saint-Dizier, P. (1989) "A Generation Method Based on Principles of Government-Binding Theory". Paper presented at the Second European Natural Language Generation Workshop, Edinburgh, April 1989.

Shieber, S.M. (1985) "Using Restriction to Extend Parsing Algorithms for Complex-Feature-Based Formalisms". Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics: 145-152.

Shieber, S.M. (1986) An Introduction to Unification-Based Approaches to Grammar. CSLI Lecture Notes no. 4.

Shieber, S.M. (1988) "A Uniform Architecture for Parsing and Generation". Proceedings of the 12th International Conference on Computational Linguistics, Budapest, Hungary: 614-619.

Shieber, S.M., G. van Noord, R.C. Moore, and F.C.N. Pereira (1989) "A Semantic-Head-Driven Algorithm for Unification-Based Formalisms". Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics: 7-17.

Wedekind, J. (1988) "Generation as Structure-Driven Derivation". Proceedings of the 12th International Conference on Computational Linguistics, Budapest, Hungary: 732-737.

Zeevat, H., E. Klein, and J. Calder (1987) "Unification Categorial Grammar". Categorial Grammar, Unification Grammar, and Parsing, Edinburgh Working Papers in Cognitive Science, Volume 1. Centre for Cognitive Science, University of Edinburgh: 195-222.
AUTOMATED INVERSION OF LOGIC GRAMMARS FOR GENERATION

Tomek Strzalkowski and Ping Peng
Courant Institute of Mathematical Sciences
New York University
251 Mercer Street
New York, NY 10012

ABSTRACT

We describe a system of reversible grammar in which, given a logic-grammar specification of a natural language, two efficient PROLOG programs are derived by an off-line compilation process: a parser and a generator for this language. The centerpiece of the system is the inversion algorithm designed to compute the generator code from the parser's PROLOG code, using the collection of minimal sets of essential arguments (MSEA) for predicates. The system has been implemented to work with Definite Clause Grammars (DCG) and is a part of an English-Japanese machine translation project currently under development at NYU's Courant Institute.

INTRODUCTION

The results reported in this paper are part of an ongoing research project to explore possibilities of an automated derivation of both an efficient parser and an efficient generator for natural language, such as English or Japanese, from a formal specification for this language. Thus, given a grammar-like description of a language, specifying both its syntax as well as "semantics" (by which we mean a correspondence of well-formed expressions of natural language to expressions of a formal representation language), we want to obtain, by a fully automatic process, two possibly different programs: a parser and a generator. The parser will translate well-formed expressions of the source language into expressions of the language of "semantic" representation, such as regularized operator-argument forms, or formulas in logic. The generator, on the other hand, will accept well-formed expressions of the semantic representation language and produce corresponding expressions in the source natural language.

Among the arguments for adopting the bidirectional design in NLP the following are perhaps the most widely shared:

• A bidirectional NLP system, or a system whose inverse can be derived by a fully automated process, greatly reduces the effort required for system development, since we need to write only one program or specification instead of two. The actual amount of savings ultimately depends upon the extent to which the NLP system is made bidirectional, for example, how much of the language analysis process can be inverted for generation. At present we reverse just a little more than a syntactic parser, but the method can be applied to more advanced analyzers as well.

• Using a single specification (a grammar) underlying both the analysis and the synthesis processes leads to more accurate capturing of the language. Although no NLP grammar is ever complete, the grammars used in parsing tend to be "too loose", or unsound, in that they would frequently accept various ill-formed strings as legitimate sentences, while the grammars used for generation are usually made "too tight" as a result of limiting their output to the "best" surface forms. A reversible system for both parsing and generation requires a finely balanced grammar which is sound and as complete as possible.

• A reversible grammar provides, by design, the match between a system's analysis and generation capabilities, which is especially important in interactive systems. A discrepancy in this capacity may mislead the user, who tends to assume that what is generated as output is also acceptable as input, and vice-versa.
• Finally, a bidirectional system can be expected to be more robust, easier to maintain and modify, and altogether more perspicuous.

In the work reported here we concentrated on unification-based formalisms, in particular Definite Clause Grammars (Pereira & Warren, 1980), which can be compiled dually into a PROLOG parser and generator, where the generator is obtained from the parser's code with the inversion procedure described below. As noted by Dymetman and Isabelle (1988), this transformation must involve rearranging the order of literals on the right-hand side of some clauses. We noted that the design of the string grammar (Sager, 1981) makes it more suitable as a basis of a reversible system than other grammar designs, although other grammars can be "normalized" (Strzalkowski, 1989). We also would like to point out that our main emphasis is on the problem of reversibility rather than generation, the latter involving many problems that we don't deal with here (see, e.g., Derr & McKeown, 1984; McKeown, 1985).

RELATED WORK

The idea that a generator for a language might be considered as an inverse of the parser for the same language has been around for some time, but it was only recently that more serious attention started to be paid to the problem. We look here only very briefly at some most recent work in unification-based grammars. Dymetman and Isabelle (1988) address the problem of inverting a definite clause parser into a generator in the context of a machine translation system, and describe a top-down interpreter with dynamic selection of AND goals^1 (and therefore more flexible than, say, a left-to-right interpreter) that can execute a given DCG grammar in either direction depending only upon the binding status of arguments in the top-level literal. This approach, although conceptually quite general, proves far too expensive in practice. The main source of overhead comes, it is pointed out, from employing the trick known as goal freezing (Colmerauer, 1982; Naish, 1986), which stops expansion of currently active AND goals until certain variables get instantiated. The cost, however, is not the only reason why the goal freezing techniques, and their variations, are not satisfactory. As Shieber et al. (1989) point out, the inherently top-down character of goal-freezing interpreters may occasionally cause serious trouble during execution of certain types of recursive goals. They propose to replace the dynamic ordering of AND goals by a mixed top-down/bottom-up interpretation. In this technique, certain goals, namely those whose expansion is defined by the so-called "chain rules",^2 are not expanded during the top-down phase of the interpreter, but instead they are passed over until a nearest non-chain rule is reached. In the bottom-up phase the missing parts of the goal-expansion tree will be filled in by applying the chain rules in a backward manner. This technique, still substantially more expensive than a fixed-order top-down interpreter, does not by itself guarantee that we can use the underlying grammar formalism bidirectionally. The reason is that in order to achieve bidirectionality, we need either to impose a proper static ordering of the "non-chain" AND goals (i.e., those which are not responsible for making a rule a "chain rule"), or resort to dynamic ordering of such goals, putting the goal freezing back into the picture.

1 Literals on the right-hand side of a clause create AND goals; literals with the same predicate names on the left-hand sides of different clauses create OR goals.

2 A chain rule is one where the main binding-carrying argument is passed unchanged from the left-hand side to the right. For example, assert(P) --> subj(P1), verb(P2), obj(P1,P2,P). is a chain rule with respect to the argument P.

In contrast with the above, the parser inversion procedure described in this paper does not require run-time overhead and can be performed by an off-line compilation process. It may, however, require that the grammar is normalized prior to its inversion. We briefly discuss the grammar normalization problem at the end of this paper.

IN AND OUT ARGUMENTS

Arguments in a PROLOG literal can be marked as either "in" or "out" depending on whether they are bound at the time the literal is submitted for execution or after the computation is completed. For example, in tovo([to,eat,fish],T4,[np,[n,john]],P3) the first and the third arguments are "in", while the remaining two are "out". When tovo is used for generation, i.e., tovo(T1,T4,P1,[eat,[np,[n,john]],[np,[n,fish]]]), then the last argument is "in", while the first and the third are "out"; T4 is neither "in" nor "out". The information about the "in" and "out" status of arguments is important in determining the "direction" in which predicates containing them can be run.^3 Below we present a simple method for computing "in" and "out" arguments in PROLOG literals.^4

3 For a discussion of directed predicates in PROLOG see (Shoham and McDermott, 1984) and (Debray, 1989).

4 This simple algorithm is all we need to complete the experiment at hand. A general method for computing "in"/"out" arguments is given in (Strzalkowski, 1989). In this and further algorithms we use the abbreviations rhs and lhs to stand for right-hand side and left-hand side (of a clause), respectively.

An argument X of literal pred(...X...) on the rhs of a clause is "in" if (A) it is a constant; or (B) it is a function and all its arguments are "in"; or (C) it is "in" or "out" in some previous literal on the rhs of the same clause, i.e., l(Y) :- r(X,Y), pred(X); or (D) it is "in" in the head literal L on the lhs of the same clause.

An argument X is "in" in the head literal L = pred(...X...) of a clause if (A), or (B), or (E) L is the top-level literal and X is "in" in it (known a priori); or (F) X occurs more than once in L and at least one of these occurrences is "in"; or (G) for every literal L1 = pred(...Y...) unifiable with L on the rhs of any clause with a head predicate pred1 different from pred, and such that Y unifies with X, Y is "in" in L1.
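As an indication of how such a test can be realized, the fragment below implements the base cases of the "in" test for a single argument (our reconstruction, covering clauses (A)-(C) only; the full method, clauses (D)-(G), also inspects clause heads and the rest of the program). Bound is the list of variables already known to be instantiated at the current point in the clause body:

in_arg(T, _Bound) :- atomic(T), !.          % (A) constants are "in"
in_arg(T, Bound)  :- var(T), !,             % (C) a variable bound by an
    occurs_in(T, Bound).                    %     earlier literal or head
in_arg(T, Bound)  :-                        % (B) f(...) with all of its
    T =.. [_F|Args],                        %     arguments "in"
    in_args(Args, Bound).

in_args([], _).
in_args([A|As], Bound) :- in_arg(A, Bound), in_args(As, Bound).

occurs_in(V, [W|_])  :- V == W, !.
occurs_in(V, [_|Ws]) :- occurs_in(V, Ws).

% ?- in_arg(f(X,a), [X]).   % succeeds
% ?- in_arg(f(X,Y), [X]).   % fails: Y is not yet bound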
2 A chain rule is one where the main binding-canying argu- ment is passed unchanged from the left-hand side to the righL For example, assert (P) --> subJ (PI), verb (P2), obJ (P1, P2, P). is a chain rule with respect to the argmnent P. goals (i.e., those which are not responsible for mak- ing a rule a "chain rule"), or resort to dynamic order- ing of such goals, putting the goal freezing back into the picture. In contrast with the above, the parser inversion procedure described in this paper does not require a run-time overhead and can be performed by an off- line compilation process. It may, however, require that the grammar is normalized prior to its inversion. We briefly discuss the grammar normalization prob- lem at the end of this paper. IN AND OUT ARGUMENTS Arguments in a PROLOG literal can be marked as either "in" or "out" depending on whether they are bound at the time the literal is submitted for execu- tion or after the computation is completed. For example, in tovo ( [to, eat, fish], T4, [np, [n, john] ] ,P3) the first and the third arguments are "in", while the remaining two are "out". When tovo is used for generation, i.e., tovo (TI, T4, PI, [eat, [rip, [n, john] ], [np, [n, fish] ] ] ) then the last argument is "in", while the first and the third are "out"; T4 is neither "in" nor "out". The information about "in" and "out" status of arguments is important in determining the "direction" in which predicates containing them can be run s . Below we present a simple method for computing "in" and "out" arguments in PROLOG literals. 4 An argument X of literal pred('" X "" ) on the rhs of a clause is "in" if (A) it is a constant; or (B) it is a function and all its arguments are "in"; or (C) it is "in" or "out" in some previous literal on the rhs of the same clause, i.e., I(Y) :-r(X,Y),pred(X); or (D) it is "in" in the head literal L on lhs of the same clause. An argument X is "in" in the head literal L = pred(... X... ) of a clause if (A), or (B), or (E) L is the top-level literal and X is "in" in it (known a priori); or ~ X occurs more than once in L and at s For a discussion on directed predicates in ~OLOO see (Sho- ham and McDermott, 1984), and (Debray, 1989). 4 This simple algorithm is all we need to complete the exper- iment at hand. A general method for computing "in"/"out" argu- ments is given in (Strzalkowski, 1989). In this and further algo- rithms we use abbreviations rhs and lhs to stand for right-hand side and left-hand side (of a clause), respectively. 213 least one of these occurrences is "in"; or (G) for every literal L 1 = pred (" • • Y" • • ) unifiable with L on the rhs of any clause with the head predicate predl different than pred, and such that Y unifies with X, Yis "in" inL1. A similar algorithm can be proposed for com- puting "out" arguments. We introduce "unknwn" as a third status marker for arguments occurring in certain recursive clauses. An argument X of literal pred (. • • X ... ) on the rhs of a clause is "out" if (A) it is "in" in pred(... X • • • ); or (B) it is a functional expression and all its arguments are either "in" or "out"; or (C) for every clause with the head literal pred( . . . Y • • • ) unifiable with pred( " • X "" ) and such that Y unifies with X, Y is either "in", "out" or "unknwn", and Y is marked "in" or "out" in at least one case. An argument X of literal pred(... X... ) on the lhs of a clause is "out" if (D) it is "in" in pred(.'.X...); or (E) it is "out" in literal predl(" • • X .." 
) on the rhs of this clause, providing that predl ~ pred; 5 if predl = pred then X is marked "unknwn". Note that this method predicts the "in" and "out" status of arguments in a literal only if the evaluation of this literal ends successfully. In case it does not (a failure or a loop) the "in"/"out" status of arguments becomes irrelevant. COMPUTING ESSENTIAL ARGUMENTS Some arguments of every literal are essential in the sense that the literal cannot be executed success- fully unless all of them are bound, at least partially, at the time of execution. For example, the predicate t ovo ( T 1, T 4, P 1, P 3 ) that recognizes "to+verb+object" object strings can be executed only if either T1 or P3 is bound. 6 7 If tovo is used to parse then T:I. must be bound; if it is used to gen- erate then P3 must be bound. In general, a literal may have several alternative (possibly overlapping) sets of essential arguments. If all arguments in any one of such sets of essential arguments are bound, s Again, we must take provisions to avoid infinite descend, c.f. (G) in "in" algorithm. 6 Assuming that tovo is defined as follows (simplified): tovo(T1,T4,P1,P3) :- to(T1,T2), v(T2,T3,P2), object (T3, T4,P1,P2,P3). 7 An argument is consideredfu/ly bound is it is a constant or it is bound by a constant; an argument is partially bound if it is, or is bound by, a functional expression (not a variable) in which at least one variable is unbound. 214 then the literal can be executed. Any set of essential arguments which has the above property is called essential. We shall call a set MSEA of essential argu- ments a minimal set of essential arguments if it is essential, and no proper subset of MSEA is essential. A collection of minimal sets of essential argu- ments (MSEA's) of a predicate depends upon the way this predicate is defined. If we alter the ordering of the rhs literals in the definition of a predicate, we may also change its set of MSEA's. We call the set of MSEA's existing for a current definition of a predi- cate the set of active MSEA's for this predicate. To run a predicate in a certain direction requires that a specific MSEA is among the currently active MSEA's for this predicate, and if this is not already the case, then we have to alter the definition of this predicate so as to make this MSEA become active. Consider the following abstract clause defining predicate Rf Ri(X1,"" ,Xk):- (D1) QI('" "), Q2('"), a,(...). Suppose that, as defined by (D1), Ri has the setMSi = {ml, "" • ,mj} of active MSEA's, and let MRi ~ MSi be the set of all MSEA for Ri that can be obtained by permuting the order of literals on the right-hand side of (D1). Let us assume further that R i occurs on rhs of some other clause, as shown below: e(xl,'" ,x.):- (C1) R 1 (X1.1, "'" ,Xl,kl), R2(X2,1, ... ,X2,kz), R,(X,, 1,"" ,X,,k,): We want to compute MS, the set of active MSEA's for P, as defined by (C1), where s _> 0, assuming that we know the sets of active MSEA for each R i on the rhs. s If s =0, that is P has no rhs in its definition, then if P (X1, "'" ,X~) is a call to P on the rhs of some clause and X* is a subset of {X1, "'" ,X~} then X* is a MSEA in P if X* is the smallest set such that all arguments in X* consistently unify (at the same time) with the corresponding arguments in at most I occurrence of P on the lhs anywhere in the program. 9 s MSEA's of basic predicates, such as concat, are assumed to be known a priori; MSEA's for reeursive predicates are first com- puted from non-n~cursive clauses. 
9 The at most 1 requirement is the strictest possible, and it can be relaxed to at most n in specific applications. The choice of n may depend upon the nature of the input language being processed (it may be n-degree ambiguous), and/or the cost of backing up from unsuccessful calls. For example, consider the words every and all: both can be translated into a single universal quantifier, but upon generation we face ambiguity. If the representation from When s ___ 1, that is, P has at least one literal on the rhs, we use the recursive procedure MSEAS to compute the set of MSEA's for P, providing that we already know the set of MSEA's for each literal occurring on the rhs. Let T be a set of terms, that is, variables and functional expressions, then VAR (T) is the set of all variables occurring in the terms of T. Thus VAR({f(X),Y,g(c,f(Z),X)}) = {X,¥,Z}. We assume that symbols Xi in definitions (C1) and (D1) above represent terms, not just variables. The follow- ing algorithm is suggested for computing sets of active MSEA's in P where i >1. MSEAS (MS,MSEA, VP,i, OUT) (1) Start with VP =VAR({X1,-'.,X,}), MSEA = Z, i=1, and OUT = ~. When the computation is completed, MS is bound to the set of active MSEA's for P. (2) Let MR 1 be the set of active MSEA's of R 1, and let MRU1 be obtained from MR 1 by replacing all variables in each member of MR1 by their corresponding actual arguments of R 1 on the rhs of (C1). (3) IfR I = P then for every ml.k e MRU1 if every argument Y, e m 1,k is always unifiable with its corresponding argument Xt in P then remove ml.k from MRUI. For every set ml.,i = ml,k u {XI.j}, where X1j is an argument in R1 such that it is not already in m ~,~ and it is not always unifiable with its corresponding argument in P, and m 1,kj is not a superset of any other m u remaining in MRUI, add m 1.kj to MRUl.10 (4) For each mlj e MRU1 (j=l'"rl) compute I.h.j := VAR(ml:) c~ VP. Let MP 1 = {IXl,j I ~(I.h,j), j=l..-r'}, where r>0, and ~(dttl,j) = [J.tl, j ~: Q~ or (LLh, j = O and VAR(mI,j) = O)]. If MP1 = O then QUIT: (C1) is ill-formed and can- not be executed. which we generate is devoid of any constraints on the lexieal number of surface words, we may have to tolerate multiple choices, at some point. Any decision made at this level as to which arguments are to be essential, may affect the reversibility of the grammar. l0 An argument Y is always unifiable with an argument X if they unify regardless of the possible bindings of any variables oc- curring in Y (variables standardized apart), while the variables oc- curring in X are unbound. Thus, any term is always unifiable with a variable; however, a variable is not always unifiable with a non- variable. For example, variable X is not always unifiable with f (Y) because if we substitute g (Z) for X then the so obtained terms do not unify. The purpose of including steps (3) and (7) is to elim- inate from consideration certain 'obviously' ill-formed reeursive clauses. A more elaborate version of this condition is needed to take care of less obvious cases. 215 (5) For each ~h,j e MP1 we do the following: (a) assume that ~tl, j is "in" in R1; (b) compute set OUT1j of "out" arguments for R1; (c) call MSEAS(MSI,j,IXl.j,VP,2,0UTIj); (d) assign MS := t,_) MS 1,j. j=l..r (6) In some i-th step, where l<i<s, and MSEA = lxi-l,,, let's suppose that MRi and MRUi are the sets of active MSEA's and their instantiations with actual arguments of R i, for the literal Ri on the rhs of (C 1). (7) If R i = P then for every mi. u E MRUi if every argument Yt e mi. 
For every set mi,uj = mi,u ∪ {Xi,j}, where Xi,j is an argument of Ri such that it is not already in mi,u and it is not always unifiable with its corresponding argument in P, and mi,uj is not a superset of any other mi,t remaining in MRUi, add mi,uj to MRUi.

(8) Again, we compute the set MPi = {μi,j | j = 1...ri}, where μi,j = VAR(mi,j) - OUTi-1,k, and OUTi-1,k is the set of all "out" arguments in literals R1 to Ri-1.

(9) For each μi,j remaining in MPi, where i ≤ s, do the following:

(a) if μi,j = ∅ then: (i) compute the set OUTj of "out" arguments of Ri; (ii) compute the union OUTi,j := OUTj ∪ OUTi-1,k; (iii) call MSEAS(MSi,j, μi-1,k, VP, i+1, OUTi,j);

(b) otherwise, if μi,j ≠ ∅, then find all distinct minimal-size sets vt ⊆ VP such that whenever the arguments in vt are "in", the arguments in μi,j are "out". If such vt's exist, then for every vt do: (i) assume vt is "in" in P; (ii) compute the set OUTi,jt of "out" arguments in all literals from R1 to Ri; (iii) call MSEAS(MSi,jt, μi-1,k ∪ vt, VP, i+1, OUTi,jt);

(c) otherwise, if no such vt exist, MSi,j := ∅.

(10) Compute MS := ∪j=1..r MSi,j.

(11) For i = s+1, set MS := {MSEA}.

The procedure presented here can be modified to compute the set of all MSEA's for P by considering all feasible orderings of literals on the rhs of (C1) and using information about all MSEA's for the Ri's. This modified procedure would regard the rhs of (C1) as an unordered set of literals, and use various heuristics to consider only selected orderings.

REORDERING LITERALS IN CLAUSES

When attempting to expand a literal on the rhs of any clause, the following basic rule should be observed: never expand a literal before at least one of its active MSEA's is "in", which means that all arguments in at least one MSEA are bound.
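The principle can be illustrated in isolation with the tovo example. In the sketch below (our reconstruction, not the system's code), msea/2 and outs/2 are toy declarations rather than sets computed as described above; reorder/3 repeatedly schedules any literal one of whose MSEA's is already covered by the bound variables:

:- use_module(library(lists)).                  % select/3

% Toy MSEA and "out" declarations for the literals of tovo/4.
msea(to(T1,_), [T1]).            msea(to(_,T2), [T2]).
msea(v(T2,_,_), [T2]).           msea(v(_,_,P2), [P2]).
msea(object(T3,_,_,_,_), [T3]).  msea(object(_,_,_,_,P3), [P3]).

outs(to(T1,T2), [T1,T2]).
outs(v(T2,T3,P2), [T2,T3,P2]).
outs(object(T3,T4,P1,P2,P3), [T3,T4,P1,P2,P3]).

% reorder(+Literals, +Bound, -Ordered): schedule a literal only when
% one of its MSEA's is covered by the variables already bound.
reorder([], _, []).
reorder(Lits, Bound, [L|Rest]) :-
    select(L, Lits, Lits1),
    msea(L, M), covered(M, Bound),
    outs(L, O), add_vars(O, Bound, Bound1),
    reorder(Lits1, Bound1, Rest).

covered([], _).
covered([V|Vs], B) :- occ(V, B), covered(Vs, B).
occ(V, [W|_])  :- V == W, !.
occ(V, [_|Ws]) :- occ(V, Ws).
add_vars([], B, B).
add_vars([V|Vs], B, B1) :-
    ( occ(V, B) -> add_vars(Vs, B, B1) ; add_vars(Vs, [V|B], B1) ).

% With T1 bound (parsing) the original order comes out; with P1 and P3
% bound (generation), object/5 is scheduled first:
%   ?- reorder([to(T1,T2),v(T2,T3,P2),object(T3,T4,P1,P2,P3)], [T1], O).
%   O = [to(T1,T2), v(T2,T3,P2), object(T3,T4,P1,P2,P3)].
%   ?- reorder([to(T1,T2),v(T2,T3,P2),object(T3,T4,P1,P2,P3)], [P1,P3], O).
%   O = [object(T3,T4,P1,P2,P3), v(T2,T3,P2), to(T1,T2)].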
The following algorithm uses this simple principle to reorder the rhs of parser clauses for reversed use in generation. The algorithm uses the information about "in" and "out" arguments for literals and the sets of MSEA's for predicates. If the "in" MSEA of a literal is not active, then the rhs of every definition of this predicate is recursively reordered so that the selected MSEA becomes active. We proceed top-down, altering definitions of the predicates of the literals to make their MSEA's active as necessary. When reversing a parser, we start with the top-level predicate pars_gen(S,P), assuming that variable P is bound to the regularized parse structure of a sentence. We explicitly identify and mark P as "in" and add the requirement that S must be marked "out" upon completion of rhs reordering. We proceed to adjust the definition of pars_gen to reflect that now {P} is an active MSEA. We continue until we reach the level of atomic or non-reversible primitives such as concat, member, or dictionary look-up routines. If this top-down process succeeds at reversing predicate definitions at each level down to the primitives, and the primitives need no re-definition, then the process is successful, and the reversed-parser generator is obtained. The algorithm can be extended in many ways, including inter-clausal reordering of literals, which may be required in some situations (Strzalkowski, 1989).

INVERSE("head :- old-rhs",ins,outs);
{ins and outs are subsets of VAR(head) which are
 "in" and are required to be "out", respectively}
begin
  compute M, the set of all MSEA's for head;
  for every MSEA m in M do
  begin
    OUT := {};
    if m is an active MSEA such that m is a subset of ins then
    begin
      compute "out" arguments in head;
      add them to OUT;
      if outs is a subset of OUT then DONE("head :- old-rhs")
    end
    else if m is a non-active MSEA and m is a subset of ins then
    begin
      new-rhs := (); QUIT := false;
      old-rhs-1 := old-rhs;
      for every literal L do M_L := {};
        {done only once during the inversion}
      repeat
        mark "in" those old-rhs-1 arguments which are either
          constants, or marked "in" in head, or marked "in" or
          "out" in new-rhs;
        select a literal L in old-rhs-1 which has an "in" MSEA m_L,
          and if m_L is not active in L then either M_L = {} or
          m_L is in M_L;
        set up a backtracking point containing all the remaining
          alternatives to select L from old-rhs-1;
        if L exists then
        begin
          if m_L is non-active in L then
          begin
            if M_L = {} then M_L := M_L + {m_L};
            for every clause "L1 :- rhs_L1" such that L1 has the
              same predicate as L do
            begin
              INVERSE("L1 :- rhs_L1", m_L, {});
              if GIVEUP returned then
                backup, undoing all changes, to the latest
                backtracking point and select another alternative
            end
          end;
          compute "in" and "out" arguments in L;
          add "out" arguments to OUT;
          new-rhs := APPEND-AT-THE-END(new-rhs,L);
          old-rhs-1 := REMOVE(old-rhs-1,L)
        end {if}
        else begin
          backup, undoing all changes, to the latest backtracking
            point and select another alternative;
          if no such backtracking point exists then QUIT := true
        end {else}
      until old-rhs-1 = () or QUIT;
      if outs is a subset of OUT and not QUIT then
        DONE("head :- new-rhs")
    end {elseif}
  end; {for}
  GIVEUP("can't invert as specified")
end;

THE IMPLEMENTATION

We have implemented an interpreter which translates a Definite Clause Grammar dually into a parser and a generator. The interpreter first transforms a DCG grammar into equivalent PROLOG code, which is subsequently inverted into a generator. For each predicate we compute the minimal sets of essential arguments that would need to be active if the program were used in the generation mode. Next, we rearrange the order of the right-hand side literals for each clause in such a way that the set of essential arguments in each literal is guaranteed to be bound whenever the literal is chosen for expansion. To implement the algorithm efficiently, we compute the minimal sets of essential arguments and reorder the literals in the right-hand sides of clauses in one pass through the parser program. As an example, we consider the following rule in our DCG grammar:^11

assertion(S) ->
    sa(S1), subject(Sb), sa(S2), verb(V),
    {Sb:np:number :: V:number},
    sa(S3),
    object(O,V,Vp,Sb,Sp),
    sa(S4),
    {S:verb:head :: Vp:head},
    {S:verb:number :: V:number},
    {S:tense :: [V:tense,O:tense]},
    {S:subject :: Sp},
    {S:object :: O:core},
    {S:sa :: [S1:sa,S2:sa,S3:sa,O:sa,S4:sa]}.

11 The grammar design is based upon string grammar (Sager, 1981). Nonterminal sa stands for a string of sentence adjuncts, such as prepositional or adverbial phrases; :: is a PROLOG-defined predicate. We show only one rule of the grammar due to lack of space.

When translated into PROLOG, it yields the following clause in the parser:

assertion(S,L1,L2) :-
    sa(S1,L1,L3),
    subject(Sb,L3,L4),
    sa(S2,L4,L5),
    verb(V,L5,L6),
    Sb:np:number :: V:number,
    sa(S3,L6,L7),
    object(O,V,Vp,Sb,Sp,L7,L8),
    sa(S4,L8,L2),
    S:verb:head :: Vp:head,
    S:verb:number :: V:number,
    S:tense :: [V:tense,O:tense],
    S:subject :: Sp,
    S:object :: O:core,
    S:sa :: [S1:sa,S2:sa,S3:sa,O:sa,S4:sa].

The parser program is now inverted using the algorithms described in previous sections.
As a result, the assertion clause above is inverted into a generator clause by rearranging the order of the literals on its right-hand side. The literals are examined from left to right: if a set of essential arguments is bound, the literal is put into the output queue; otherwise the literal is put onto the waiting stack. In the example at hand, the literal sa(S1,L1,L3) is examined first. Its MSEA is {S1}, and since it is not a subset of the set of variables appearing in the head literal, this set cannot receive a binding when the execution of assertion starts. It may, however, contain "out" arguments of some other literals on the right-hand side of the clause. We thus remove the first sa literal from the clause and place it on hold until its MSEA becomes fully instantiated. We proceed to consider the remaining literals in the clause in the same manner, until we reach S:verb:head :: Vp:head. One MSEA for this literal is {S}, which is a subset of the arguments in the head literal. We also determine that S is not an "out" argument in any other literal in the clause, and thus it must be bound in assertion whenever the clause is to be executed. This means, in turn, that S is an essential argument in assertion. As we continue this process we find that no further essential arguments are required, that is, {S} is a MSEA for assertion. The literal S:verb:head :: Vp:head is output and becomes the top element on the right-hand side of the inverted clause. After all literals in the original clause are processed, we repeat this analysis for all those remaining on the waiting stack until all the literals are output. We add the prefix g_ to each inverted predicate in the generator to distinguish it from its non-inverted version in the parser. The inverted assertion predicate as it appears in the generator is shown below.

g_assertion(S,L1,L2) :-
    S:verb:head :: Vp:head,
    S:verb:number :: V:number,
    S:tense :: [V:tense,O:tense],
    S:subject :: Sp,
    S:object :: O:core,
    S:sa :: [S1:sa,S2:sa,S3:sa,O:sa,S4:sa],
    g_sa(S4,L3,L2),
    g_object(O,V,Vp,Sb,Sp,L4,L3),
    g_sa(S3,L5,L4),
    Sb:np:number :: V:number,
    g_verb(V,L6,L5),
    g_sa(S2,L7,L6),
    g_subject(Sb,L8,L7),
    g_sa(S1,L1,L8).

A single grammar is thus used both for sentence parsing and for generation. The parser or the generator is invoked using the same top-level predicate pars_gen(S,P), depending upon the binding status of its arguments: if S is bound then the parser is invoked; if P is bound, the generator is called.
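The paper does not show the definition of pars_gen/2 itself; the dispatch can plausibly be pictured as below (our sketch, using the argument conventions of the clauses above, where the first argument of assertion/3 and g_assertion/3 is the parse structure and the remaining two form the difference list of words):

% pars_gen(?Sentence, ?Parse): hypothetical dispatching top level.
pars_gen(Sentence, Parse) :-
    nonvar(Sentence), !,
    assertion(Parse, Sentence, []).    % words known: run the parser
pars_gen(Sentence, Parse) :-
    nonvar(Parse),
    g_assertion(Parse, Sentence, []).  % structure known: run the generator

Under such a dispatch no mode flags are needed; the binding pattern of the call alone selects the program, as the following session illustrates.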
| ?- load_gram(grammar).
yes
| ?- pars_gen([jane,takes,a,course],P).

P = [[cat|assertion],
     [tense,present,[]],
     [verb|take],
     [subject,
      [np,[head|jane],
          [number|singular],
          [class|nstudent],
          [tpos],
          [apos],
          [modifier,null]]],
     [object,
      [np,[head|course],
          [number|singular],
          [class|ncourse],
          [tpos|a],
          [apos],
          [modifier,null]]],
     [sa,[],[],[],[],[]]]
yes
| ?- pars_gen(S, [[cat|assertion],
                  [tense,present,[]],
                  [verb|take],
                  [subject,
                   [np,[head|jane],
                       [number|singular],
                       [class|nstudent],
                       [tpos],
                       [apos],
                       [modifier,null]]],
                  [object,
                   [np,[head|course],
                       [number|singular],
                       [class|ncourse],
                       [tpos|a],
                       [apos],
                       [modifier,null]]],
                  [sa,[],[],[],[],[]]]).

S = [jane,takes,a,course]
yes

GRAMMAR NORMALIZATION

Thus far we have tacitly assumed that the grammar upon which our parser is based is written in such a way that it can be executed by a top-down interpreter, such as the one used by PROLOG. If this is not the case, that is, if the grammar requires a different kind of interpreter, then the question of invertibility can only be related to this particular type of interpreter. If we want to use the inversion algorithm described here to invert a parser written for an interpreter different than top-down and left-to-right, we need to convert the parser, or the grammar on which it is based, into a version which can be evaluated in a top-down fashion. One situation where such normalization may be required involves certain types of non-standard recursive goals, as depicted schematically below.

vp(A,P) -> vp(f(A,P1),P), compl(P1).
vp(A,P) -> v(A,P).
v(A,P)  -> lex.

If vp is invoked by a top-down, left-to-right interpreter, with the variable P instantiated, and if P1 is the essential argument in compl, then there is no way we can successfully execute the first clause, even if we alter the ordering of the literals on its right-hand side, unless, that is, we employ the goal-skipping technique discussed by Shieber et al. However, we can easily normalize this code by replacing the first two clauses with functionally equivalent ones that get the recursion firmly under control, and that can be evaluated in a top-down fashion. We assume that P is the essential argument in v(A,P) and that A is "out". The normalized grammar is given below.

vp(A,P)        -> v(B,P), vp1(B,A).
vp1(f(B,P1),A) -> vp1(B,A), compl(P1).
vp1(A,A).
v(A,P)         -> lex.

In this new code the recursive second clause will be used so long as its first argument has the form f(α,β), where α and β are fully instantiated terms, and it will stop otherwise (either succeed or fail depending upon the initial binding to A). In general, the fact that a recursive clause is unfit for top-down execution can be established by computing the collection of minimal sets of essential arguments for its head predicate. If this collection turns out to be empty, the predicate's definition needs to be normalized.

Other types of normalization include elimination of some of the chain rules in the grammar, especially if their presence induces undue non-determinism in the generator. We may also, if necessary, tighten the criteria for selecting the essential arguments, to further enhance the efficiency of the generator, providing, of course, that this move does not render the grammar non-reversible. For a further discussion of these and related problems the reader is referred to (Strzalkowski, 1989).

CONCLUSIONS

In this paper we presented an algorithm for automated inversion of a unification parser for natural language into an efficient unification generator. The inverted program of the generator is obtained by an off-line compilation process which directly manipulates the PROLOG code of the parser program. We distinguish two logical stages of this transformation: computing the minimal sets of essential arguments (MSEA's) for predicates, and generating the inverted program code with INVERSE.
The method described here is contrasted with the approaches that seek to define a generalized but computationally expensive evaluation strategy for running a grammar in either direction without manipulating its rules (Shieber, 1988; Shieber et al., 1989; Wedekind, 1988); see also (Naish, 1986) for some relevant techniques. We have completed a first implementation of the system and used it to derive both a parser and a generator from a single DCG grammar for English. We note that the present version of INVERSE can operate only upon the declarative specification of a logic grammar and is not prepared to deal with extra-logical control operators such as the cut.

ACKNOWLEDGMENTS

Ralph Grishman and other members of the Natural Language Discussion Group provided valuable comments on earlier versions of this paper. We also thank the anonymous reviewers for their suggestions. This paper is based upon work supported by the Defense Advanced Research Project Agency under Contract N00014-85-K-0163 from the Office of Naval Research.

REFERENCES

Colmerauer, Alain. 1982. PROLOG II: Manuel de reference et modele theorique. Groupe d'Intelligence Artificielle, Faculte de Sciences de Luminy, Marseille.

Debray, Saumya K. 1989. "Static Inference of Modes and Data Dependencies in Logic Programs." ACM Transactions on Programming Languages and Systems, 11(3), July 1989, pp. 418-450.

Derr, Marcia A. and McKeown, Kathleen R. 1984. "Using Focus to Generate Complex and Simple Sentences." Proceedings of 10th COLING, Bonn, Germany, pp. 319-326.

Dymetman, Marc and Isabelle, Pierre. 1988. "Reversible Logic Grammars for Machine Translation." Proc. of the Second Int. Conference on Machine Translation, Pittsburgh, PA.

Grishman, Ralph. 1986. Proteus Parser Reference Manual. Proteus Project Memorandum #4, Courant Institute of Mathematical Sciences, New York University.

McKeown, Kathleen R. 1985. Text Generation: Using Discourse Strategies and Focus Constraints to Generate Natural Language Text. Cambridge University Press.

Naish, Lee. 1986. Negation and Control in PROLOG. Lecture Notes in Computer Science, 238, Springer.

Pereira, Fernando C.N. and Warren, David H.D. 1980. "Definite clause grammars for language analysis." Artificial Intelligence, 13, pp. 231-278.

Sager, Naomi. 1981. Natural Language Information Processing. Addison-Wesley.

Shieber, Stuart M. 1988. "A uniform architecture for parsing and generation." Proceedings of the 12th COLING, Budapest, Hungary, pp. 614-619.

Shieber, Stuart M., van Noord, Gertjan, Moore, Robert C. and Pereira, Fernando C.N. 1989. "A Semantic-Head-Driven Generation Algorithm for Unification-Based Formalisms." Proceedings of the 27th Meeting of the ACL, Vancouver, B.C., pp. 7-17.

Shoham, Yoav and McDermott, Drew V. 1984. "Directed Relations and Inversion of PROLOG Programs." Proc. of the Int. Conference on Fifth Generation Computer Systems.

Strzalkowski, Tomek. 1989. Automated Inversion of a Unification Parser into a Unification Generator. Technical Report 465, Department of Computer Science, Courant Institute of Mathematical Sciences, New York University.

Strzalkowski, Tomek. 1990. "An algorithm for inverting a unification grammar into an efficient unification generator." Applied Mathematics Letters, vol. 3, no. 1, pp. 93-96. Pergamon Press.

Wedekind, Jurgen. 1988. "Generation as structure driven derivation." Proceedings of the 12th COLING, Budapest, Hungary, pp. 732-737.
ALGORITHMS FOR GENERATION IN LAMBEK THEOREM PROVING

Erik-Jan van der Linden*    Guido Minnen
Institute for Language Technology and Artificial Intelligence
Tilburg University
PO Box 90153, 5000 LE Tilburg, The Netherlands
E-mail: vdlinden@kub.nl

ABSTRACT

We discuss algorithms for generation within the Lambek Theorem Proving Framework. Efficient algorithms for generation in this framework take a semantics-driven strategy. This strategy can be modeled by means of rules in the calculus that are geared to generation, or by means of an algorithm for the Theorem Prover. The latter possibility enables processing of a bidirectional calculus. Therefore Lambek Theorem Proving is a natural candidate for a 'uniform' architecture for natural language parsing and generation.

Keywords: generation algorithm; natural language generation; theorem proving; bidirectionality; categorial grammar.

1 INTRODUCTION

Algorithms for tactical generation are becoming an increasingly important subject of research in computational linguistics (Shieber, 1988; Shieber et al., 1989; Calder et al., 1989). In this paper, we will discuss generation algorithms within the Lambek Theorem Proving (LTP) framework (Moortgat, 1988; Lambek, 1958; van Benthem, 1988). In section (2) we give an introduction to a categorial calculus that is extended towards bidirectionality. The naive top-down control strategy in this section does not suit the needs of efficient generation. Next, we discuss two ways to implement a semantics-driven strategy. Firstly, we add inference rules and cut rules geared to generation to the calculus (3). Secondly, since these changes in the calculus do not support bidirectionality, we introduce a second implementation: a bottom-up algorithm for the theorem prover (4).

* We would like to thank Gosse Bouma, Wietske Sijtsma and Marianne Sanders for their comments on an earlier draft of the paper.

2 EXTENDING THE CALCULUS

Natural Language Processing as deduction. The architectures in this paper resemble the uniform architecture in Shieber (1988) because language processing is viewed as logical deduction, in analysis and generation:

"The generation of strings matching some criteria can equally well be thought of as a deductive process, namely a process of constructive proof of the existence of a string that matches the criteria." (Shieber, 1988, p. 614).

In the LTP framework a categorial reduction system is viewed as a logical calculus where parsing a syntagm is an attempt to show that it follows from a set of axioms and inference rules. These inference rules describe what the processor does in assembling a semantic representation (representational non-autonomy: Crain and Steedman, 1982; Ades and Steedman, 1982). Derivation trees represent a particular parse process (Bouma, 1989). These rules thus seem to be nondeclarative, and this raises the question whether they can be used for generation. The answer to this question will emerge throughout this paper.

Lexical information. As in any categorial grammar, linguistic information in LTP is for the larger part represented with the signs in the lexicon and not with the rules of the calculus (signs are denoted by prosody:syntax:semantics). A generator using a categorial grammar needs lexical information about the syntactic form of a functor that is connected to some semantic functor in order to syntactically correctly generate the semantic arguments of this functor. For a parser, the reverse is true.
In order to fulfil both needs, lexical information is made available to the theorem prover in the form of instances of axioms.¹ Axioms then truly represent what should be axiomatic in a lexicalist description of a language: the lexical items, the connections between form and meaning.²

Rules Whenever inference rules are applied, an attempt is made to axiomatize the functor that participates in the inference by the first subsequent of the elimination rules. This way, lexical information is retrieved from the lexicon.

A prosodic operator * connects prosodic elements. A prosodic identity element, id, is necessary because introduction rules are prosodically vacuous. In order to avoid unwanted matching between axioms and id-elements, one special axiom is added for id-elements. Meta-logical checks are included in the rules in order to avoid variables occurring in the final derivation; nogenvar recursively checks whether any part of an expression is a variable.

A sequent in the calculus is denoted with P => T, where P, called the antecedent, and T, the succedent, are finite sequences of signs. The calculus is presented in (1). In what follows, X and Y are categories; T and Z are signs; R, U and V are possibly empty sequences of signs; @ denotes functional application; a caret (^) denotes λ-abstraction.³

(1)
/* axioms */
[Pros:X:Y] => [Pros:X:Y] <-
    [Pros:X:Y] =l> [Pros:X:Y] & true.
[Pros:X:Y] => [Pros:X:Y] <-
    (nogenvar(X), nonvar(Y)) & true.

/* elimination rules */
(U,[Pros_Fu:X/Y:Functor],[T|R],V) => [Z] <-
    [Pros_Fu:X/Y:Functor] => [Pros_Fu:X/Y:Functor] &
    [T|R] => [Pros_Arg:Y:Arg] &
    (U,[(Pros_Fu*Pros_Arg):X:Functor@Arg],V) => [Z].
(U,[T|R],[Pros_Fu:Y\X:Functor],V) => [Z] <-
    [Pros_Fu:Y\X:Functor] => [Pros_Fu:Y\X:Functor] &
    [T|R] => [Pros_Arg:Y:Arg] &
    (U,[(Pros_Arg*Pros_Fu):X:Functor@Arg],V) => [Z].

/* introduction rules */
[T|R] => [Pros:Y\X:Var_Y^Term_X] <-
    nogenvar(Y\X) &
    ([id:Y:Var_Y],[T|R]) => [(id*Pros):X:Term_X].
[T|R] => [Pros:X/Y:Var_Y^Term_X] <-
    nogenvar(X/Y) &
    ([T|R],[id:Y:Var_Y]) => [(Pros*id):X:Term_X].

/* axiom for prosodic id-element */
[id:X:Y] =l> [id:X:Y] <- isvar(Y).

/* lexicon, lexioms */
[john:np:john] =l> [john:np:john].
[mary:np:mary] =l> [mary:np:mary].
[loves:(np\s)/np:loves] =l> [loves:(np\s)/np:loves].

In order to initiate analysis, the theorem prover is presented with sequents like (2). Inference rules are applied recursively to the antecedent of the sequent until axioms are found. This regime can be called top-down from the point of view of problem solving and bottom-up from a "parsing" point of view. For generation, a sequent like (3) is presented to the theorem prover. Both analysis and generation result in a derivation like (4). Note that generation not only results in a sequence of lexical signs, but also in a prosodic phrasing that could be helpful for speech generation.

¹Van der Linden and Minnen (submitted) contains a more elaborate comparison of the extended calculus with the original calculus as proposed in Moortgat (1988).
²A suggestion similar to this proposal was made by König (1989), who stated that lexical items are to be seen as axioms, but did not include them as such in her description of the L-calculus.
³Throughout this paper we will use a Prolog notation because the architectures presented here depend partly on the Prolog unification mechanism.

(2)
[john:A:B, loves:C:D, mary:E:F] => [Pros:s:Sem]

(3)
U => [Pros:s:loves@mary@john]

Although both (2) and (3) result in (4), in the case of generation, (4) does not represent the exact proceedings of the theorem prover.

(4)
john:np:john loves:(np\s)/np:loves mary:np:mary
    => john*(loves*mary):s:loves@mary@john
<- loves:(np\s)/np:loves => loves:(np\s)/np:loves
       <- loves:(np\s)/np:loves =l> loves:(np\s)/np:loves <- true
   mary:np:mary => mary:np:mary
       <- mary:np:mary =l> mary:np:mary <- true
   john:np:john loves*mary:np\s:loves@mary
       => john*(loves*mary):s:loves@mary@john
   <- loves*mary:np\s:loves@mary => loves*mary:np\s:loves@mary <- true
      john:np:john => john:np:john
          <- john:np:john =l> john:np:john <- true
      john*(loves*mary):s:loves@mary@john
          => john*(loves*mary):s:loves@mary@john <- true

It starts applying rules, matching them with the antecedent, without making use of the original semantic information, and thus resulting in an inefficient and nondeterministic generation process: all possible derivations including all lexical items are generated until some derivation is found that results in the succedent.⁴ We conclude that the algorithm normally used for parsing in LTP is inefficient with respect to generation.

⁴Cf. Shieber et al. (1989) on top-down generation algorithms.

3 CALCULI DESIGNED FOR GENERATION

A solution to the efficiency problem raised in the previous section is to start from the original semantics. In this section we discuss calculi that make explicit use of the original semantics. Firstly, we present Lambek-like rules especially designed for generation. Secondly, we introduce a Cut-rule for generation with sets of categorial reduction rules. Both entail a variant of the crucial starting-point of the semantic-head-driven algorithms described in Calder et al. (1989) and Shieber et al. (1989): if the functor of a semantic representation can be identified, and can be related to a lexical representation containing syntactic information, it is possible to generate the arguments syntactically. The efficiency of this strategy stems from the fact that it is guided by the known semantic and syntactic information, and lexical information is retrieved as soon as possible. In contrast to the semantic-head-driven approach, our semantic representations do not allow for immediate recognition of semantic heads: these can only be identified after all arguments have been stripped off the functor recursively (loves@mary@john => loves@mary => loves).

Calder et al. conjecture that their algorithm "(...) extends naturally to the rules of composition, division and permutation of Combinatory Categorial Grammar (Steedman, 1987) and the Lambek Calculus (1958)" (Calder et al., 1989, p. 23). This conjecture should be handled with care. As we have stated before, inference rules in LTP describe how a processor operates. An important difference with the categorial reduction rules of Calder et al. is that inference rules in LTP implicitly initiate the recursion of the parsing and generation process. Technically speaking, Lambek rules cannot be arguments of the rule-predicate of Calder et al. (1989, p. 237). The gist of our strategy is similar to theirs, but the algorithms differ.

Lambek-like generation Rules are presented in (5) that explicitly start from the known information during generation: the syntax and semantics of the succedent.
Literally, the inference rule states that a sequent consisting of an antecedent that unifies with two sequences of signs U and V, and a succedent that unifies with a sign with semantics Sem_Fu@Sem_Arg, is a theorem of the calculus if V reduces to a syntactic functor looking for an argument on its left side with the functor-meaning of the original semantics, and U reduces to its argument. This rule is an equivalent of the second elimination rule in (1).

(5)
/* elimination rule */
(U,V) => [(Pros_Arg*Pros_Fu):X:Sem_Fu@Sem_Arg] <-
    V => [Pros_Fu:Y\X:Sem_Fu] &
    U => [Pros_Arg:Y:Sem_Arg].

/* introduction rule */
[T|R] => [Pros:Y\X:Var_Y^Term_X] <-
    nogenvar(Y\X) &
    ([id:Y:Var_Y],[T|R]) => [(id*Pros):X:Term_X].

A Cut-rule for generation A Cut-rule is a structural rule that can be used within the L-calculus to include partial proofs derived with categorial reduction rules into other proofs. In (6) a generation Cut-rule is presented together with the AB-system.

(6)
/* Cut-rule for generation */
(U,V) => [Pros_Z:Z:Sem_Z] <-
    [Pros_X:X:Sem_X, Pros_Y:Y:Sem_Y] =*> [Pros_Z:Z:Sem_Z] &
    U => [Pros_X:X:Sem_X] &
    V => [Pros_Y:Y:Sem_Y].

/* reduction rules, system AB */
[Pros_Fu:X/Y:Functor, Pros_Arg:Y:Arg] =*>
    [(Pros_Fu*Pros_Arg):X:Functor@Arg].
[Pros_Arg:Y:Arg, Pros_Fu:Y\X:Functor] =*>
    [(Pros_Arg*Pros_Fu):X:Functor@Arg].

The generator regimes presented in this section are semantics-driven: they start from a semantic representation, assume that it is part of the uppermost sequent within a derivation, and work towards the lexical items, axioms, with the recursive application of inference rules. From the point of view of theorem proving, this process should be described as a top-down problem solving strategy. The rules in this section are, however, geared towards generation. Use of these rules for parsing would result in massive non-determinism. Efficient parsing and generation require different rules: the calculus is not bidirectional.

4 A COMBINED BOTTOM-UP/TOP-DOWN REGIME

In this section, we describe an algorithm for the theorem prover that proceeds in a combined bottom-up/top-down fashion from the problem solving point of view. It maintains the same semantics-driven strategy, and enables efficient generation with the bidirectional calculus in (1). The algorithm results in derivations like (4), in the same theorem prover architecture, be it along another path.

Bidirectionality There are two reasons to avoid duplication of grammars for generation and interpretation. Firstly, it is theoretically more elegant and simple to make use of one grammar. Secondly, for any language processing system, human or machine, it is more economic (Bunt, 1987, p. 333). Scholars in the area of language generation have therefore pleaded in favour of the bidirectionality of linguistic descriptions (Appelt, 1987).

Bidirectionality might in the first place be implemented by using one grammar and two separate algorithms for analysis and generation (Jacobs, 1985; Calder et al., 1989). However, apart from the desirability to make use of one and the same grammar for generation and analysis, it would be attractive to have one and the same processing architecture for both analysis and generation. Although attempts to find such architectures (Shieber, 1988) have been termed "looking for the fountain of youth",⁵ it is a stimulating question to what extent it is possible to use the same architecture for both tasks.

⁵Ron Kaplan, during discussion of the Shieber presentation at COLING 1988.

Example An example will illustrate how our algorithm proceeds.
In order to generate from a sign, the theorem prover assumes that it is the succedent of one of the subsequents of one of the inference rules (7-1/2). (In case of an introduction rule the sign is matched with the succedent of the head sequent; this implies a top-down step.) If unification with one of these subsequents can be established, the other subsequents and the head sequent can be partly instantiated. These sequents can then serve as starting points for further bottom-up processing. Firstly, the head sequent is subjected to bottom-up processing (7-3), in order to axiomatize the head functor as soon as possible. Bottom-up processing stops when no more application operators can be eliminated from the head sequent (7-4). Secondly, working top-down, the other subsequents (7-4/5) are made subject to bottom-up processing, and at last the last subsequent (7-6). (7) presents generation of a noun phrase, the table.

(7) Generation of the noun phrase the table.

Start with sequent:
    P => [Pros:np:the@table]
1- Assume succedent is part of an axiom:
    [Pros:np:the@table] => [Pros:np:the@table]
2- Match axiom with last subsequent of an inference rule:
    (U,[Pros_Fu:X/Y:Functor],[T|R],V) => [Z] <-
        [Pros_Fu:X/Y:Functor] => [Pros_Fu:X/Y:Functor] &
        [T|R] => [Pros_Arg:Y:Arg] &
        (U,[(Pros_Fu*Pros_Arg):X:Functor@Arg],V) => [Z].
    Z = Pros:np:the@table; Functor = the; Arg = table;
    X = np; U = [ ]; V = [ ].
3- Derive instantiated head sequent:
    [Pros_Fu:np/Y:the],[T|R] => [Pros:np:the@table]
4- No more applications in head sequent:
   prove (bottom-up) first instantiated subsequent:
    [Pros_Fu:np/Y:the] => [Pros_Fu:np/Y:the]
   Unifies with the axiom for "the": Pros_Fu = the; Y = n.
5- Prove (bottom-up) second instantiated subsequent:
    [T|R] => [Pros_Arg:n:table]
   Unifies with the axiom for "table":
    Pros_Arg = table; T = table:n:table; R = [ ].
6- Prove (bottom-up) last subsequent: it is a nonlexical axiom.
    [(the*table):np:the@table] => [(the*table):np:the@table].
7- Final derivation:
    the:np/n:the table:n:table => the*table:np:the@table
    <- the:np/n:the => the:np/n:the
           <- the:np/n:the =l> the:np/n:the <- true
       table:n:table => table:n:table
           <- table:n:table =l> table:n:table <- true
       the*table:np:the@table => the*table:np:the@table <- true

Non-determinism A source for non-determinism in the semantics-driven strategy is the fact that the theorem prover forms hypotheses about the direction in which a functor seeks its arguments, and then checks these against the lexicon. A possibility here would be to use a calculus where dominance and precedence are taken apart. We will pursue this suggestion in future research.

5 CONCLUDING REMARKS

Implementation The algorithms and calculi presented here have been implemented with the use of modified versions of the categorial calculi interpreter described in Moortgat (1988).

Conclusion Efficient, bidirectional use of categorial calculi is possible if extensions are made with respect to the calculus, and if a combined bottom-up/top-down algorithm is used for generation. Analysis and generation take place within the same processing architecture, with the same linguistic descriptions, be it with the use of different algorithms. LTP thus serves as a natural candidate for a uniform architecture of parsing and generation.
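The recursive argument-stripping mentioned earlier for semantic heads (loves@mary@john => loves@mary => loves) can also be sketched outside the theorem prover. The following Python fragment is a minimal illustration of the semantics-driven strategy under simplifying assumptions (a hand-built lexicon, tuple-encoded application terms, no introduction rules); it is not the sequent-calculus algorithm itself.

    # A toy semantics-driven generator: identify the lexical functor by
    # stripping arguments, then generate each argument in the direction
    # the functor's category prescribes.
    LEXICON = {
        "john": ("john", "np"),
        "mary": ("mary", "np"),
        "loves": ("loves", ("/", ("\\", "np", "s"), "np")),  # (np\s)/np
    }
    # application terms: ("@", functor_term, argument_term)

    def functor_and_args(term):
        """loves@mary@john -> ('loves', [mary, john]) by recursive stripping."""
        args = []
        while isinstance(term, tuple) and term[0] == "@":
            args.insert(0, term[2])   # innermost argument first
            term = term[1]
        return term, args

    def generate(term):
        """Return a prosodic string for a semantic term, or None."""
        head, args = functor_and_args(term)
        if head not in LEXICON:
            return None
        pros, cat = LEXICON[head]
        for arg in args:              # consume the functor's arguments in order
            arg_pros = generate(arg)
            if isinstance(cat, tuple) and cat[0] == "/":     # X/Y: arg on the right
                pros, cat = f"{pros}*{arg_pros}", cat[1]
            elif isinstance(cat, tuple) and cat[0] == "\\":  # Y\X: arg on the left
                pros, cat = f"{arg_pros}*{pros}", cat[2]
            else:
                return None
        return pros

    print(generate(("@", ("@", "loves", "mary"), "john")))   # john*loves*mary

The point of the sketch is the control flow: the functor is identified first, its lexical category is retrieved, and only then are the arguments generated, which is what makes the semantics-driven strategy efficient.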
Semantic non-monotonicity A constraint on grammar formalisms that can be dealt with in current generation systems is semantic monotonicity (Shieber, 1988; but cf. Shieber et al., 1989). The algorithm in Calder et al. (1989) requires an even stricter constraint. Firstly, in van der Linden and Minnen (submitted) we describe how the addition of a unification-based semantics to the calculus described here enables processing of non-monotonic phenomena such as non-compositional verb particles and idioms. Identity semantics (cf. Calder et al., p. 235) should be no problem in this respect. Secondly, unary rules and type-raising (ibid.) are part of the L-calculus, and are not fundamental problems either.

Inverse β-reduction A problem that exists for all generation systems that include some form of λ-semantics is that generation necessitates the inverse operation of β-reduction. Although we have implemented algorithms for inverse β-reduction, these are not computationally tractable.⁶ A way out could be the inclusion of a unification-based semantics.⁷

⁶Bunt (1987) states that an expression with n constants results in 2ⁿ − 1 possible inverse β-reductions.
⁷As proposed in van der Linden and Minnen (submitted) for the calculus in (2).

6 REFERENCES

Ades, A., and Steedman, M., 1982. On the order of words. Linguistics and Philosophy, 4, pp. 517-558.

Appelt, D.E., 1987. Bidirectional Grammars and the Design of Natural Language Systems. In Wilks, Y. (Ed.), Theoretical Issues in Natural Language Processing. Las Cruces, New Mexico: New Mexico State University, January 7-9, pp. 185-191.

Van Benthem, J., 1988. Categorial Grammar. Chapter 7 in Van Benthem, J., Essays in Logical Semantics. Reidel, Dordrecht.

Bouma, G., 1989. Efficient Processing of Flexible Categorial Grammar. In Proceedings of the EACL 1989, Manchester, pp. 19-26.

Bunt, H., 1987. Utterance generation from semantic representations augmented with pragmatic information. In Kempen 1987.

Calder, J., Reape, M., and Zeevat, H., 1989. An algorithm for generation in Unification Categorial Grammar. In Proceedings of the EACL 1989, Manchester, pp. 233-240.

Crain, S., and Steedman, M., 1982. On not being led up the garden path. In Dowty, Karttunen and Zwicky (Eds.), Natural language parsing. Cambridge: Cambridge University Press.

Jacobs, P., 1985. PHRED, A generator for Natural Language Interfaces. Computational Linguistics, 11, 4, pp. 219-242.

Kempen, G. (Ed.), 1987. Natural language generation: new results in artificial intelligence, psychology and linguistics. Dordrecht: Nijhoff.

König, E., 1989. Parsing as natural deduction. In Proceedings of the ACL 1989, Vancouver.

Lambek, J., 1958. The mathematics of sentence structure. Am. Math. Monthly, 65, pp. 154-169.

Linden, E. van der, and Minnen, G. (submitted). An account of non-monotonous phenomena in bidirectional Lambek Theorem Proving.

Moortgat, M., 1988. Categorial Investigations. Logical and linguistic aspects of the Lambek calculus. Dissertation, University of Amsterdam.

Shieber, S., 1988. A uniform architecture for Parsing and Generation. In Proceedings of COLING 1988, Budapest, pp. 614-619.

Shieber, S., van Noord, G., Moore, R., and Pereira, F., 1989. A Semantic-Head-Driven Generation Algorithm for Unification-Based Formalisms. In Proceedings of ACL 1989, Vancouver.

Steedman, M., 1987. Combinatory Grammars and Parasitic Gaps. Natural Language and Linguistic Theory, 5, pp. 403-439.
Multiple Underlying Systems: Translating User Requests into Programs to Produce Answers

Robert J. Bobrow, Philip Resnik, Ralph M. Weischedel
BBN Systems and Technologies Corporation
10 Moulton Street
Cambridge, MA 02138

ABSTRACT

A user may typically need to combine the strengths of more than one system in order to perform a task. In this paper, we describe a component of the Janus natural language interface that translates intensional logic expressions representing the meaning of a request into executable code for each application program, chooses which combination of application systems to use, and designs the transfer of data among them in order to provide an answer. The complete Janus natural language system has been ported to two large command and control decision support aids.

1. Introduction

The norm in the next generation of user environments will be distributed, networked applications. Many problems will be solvable only by use of a combination of applications. If natural language technology is to be applicable in such environments, we must continue to enable the user to talk to computers about his/her problem, not about which application(s) to use.

Most current natural language (NL) systems, whether accepting spoken or typed input, are designed to interface to a single homogeneous underlying system; they have a component geared to producing code for that single class of application systems, such as a single relational database [12]. Providing an English interface to the user's data base, a separate English interface to the same user's planning system, and a third interface to a simulation package, for instance, will be neither attractive nor cost-effective. By contrast, a seamless, multi-modal, natural language interface will make use of a heterogeneous environment feasible and, if done well, transparent; this can be accomplished by enabling the user to state information needs without specifying how to decompose those needs into a program calling the various underlying systems required to meet those needs. We believe users who see that NL technology does insulate them from the underlying implementation idiosyncrasies of one application will expect that our models of language and understanding will extend to simultaneous access of several applications.

Consider an example. In DARPA's Fleet Command Center Battle Management Program (FCCBMP), several applications (call them underlying systems) are involved, including a relational data base (IDB), two expert systems (CASES and FRESH), and a decision support system (OSGP). The hardware platforms include workstations, conventional time-sharing machines, and parallel mainframes. Suppose the user asks Which of those submarines has the greatest probability of locating A within 10 hours? Answering that question involves subproblems from several underlying applications: the display facility, to determine what "those submarines" refers to; FRESH, to calculate how long each submarine would take to get to A's vicinity; CASES, for an intensive, parallelizable numerical calculation estimating the probabilities; and the display facility again, to present the response.

While acoustic and linguistic processing can determine what the user wants, the problem of translating that into an effective program to do what the user wants is a challenging, but solvable, problem.
In order to deal with multiple underlying systems, not only must our NL interface be able to represent the meaning of the user's request, but it must also be capable of organizing the various application programs at its disposal, choosing which combination of resources to use, and supervising the transfer of data among them. We call this the multiple underlying systems (MUS) problem. This paper provides an overview of our approach and results on the MUS problem. The implementation is part of the back end of the Janus natural language interface and is documented in [7].

2. Scope of the Problem

Our view of access to multiple underlying systems is given in Figure 2. As implied in the graphical representation, the user's request, whatever its modality, is translated into an internal representation of the meaning of what the user needs. We initially explored a first-order logic for this purpose; however, in Janus [13] we have adopted an intensional logic [3, 14] to investigate whether intensional logic offers more appropriate representations for applications more complex than databases, e.g., simulations and other calculations in hypothetical situations. From the statement of what the user needs, we next derive a statement of how to fulfill that need, an execution plan composed of abstract commands. The execution plan takes the form of a limited class of data flow graphs for a virtual machine that includes the capabilities of all of the application systems. At the level of that virtual machine, specific commands to specific underlying systems are dispatched, results from those application systems are composed, and decisions are made regarding the appropriate presentation of information to the user. Thus, the multiple underlying systems (MUS) problem is a mapping,

    MUS: Semantic representation --> Program

that is, a mapping from what the user wants to a program to fulfill those needs, using the heterogeneous application programs' functionality.

Though the statement of the problem as phrased above may at first suggest an extremely difficult and long-range program of research in automatic programming (e.g., see [8]), there are several ways one can narrow the scope of the problem to make utility achievable. Restricting the input language, as others have done [4, 6], is certainly one way to narrow the problem to one that is tractable. In contrast, we allow a richer input language (an intensional logic), but assume that the output is a restricted class of programs: acyclic data flow graphs. The implication of this restriction is that the programs generatable by the MUS component may include only:

• Functions available in the underlying applications systems,
• Routines preprogrammed by the application system staff, and
• Operators on those elements, such as functional composition, if-then-else, operators from the relational algebra, and mapping over lists (for instance, for universal quantification and cardinality of sets).

If all the quantifiers are assumed to be restricted to finite sets with a generator function, then the quantifiers can be converted to simple loops over the elements of sets, such as the MAPCAR of Lisp, rather than having to undertake synthesis of arbitrary program loops. We assume that all primitives of the logic have at least one transformation which will rewrite it, potentially in conjunction with other primitives, from the level of the statement of the user's needs to the level of the executable plan. These transformations will have been elicited from the application system experts, e.g., expert system builders, database administrators, and systems programming staff of other application systems. (Some work has been done on automating this process.)

3. Approach

The problem of multiple systems may be decomposed into the following issues, as others have done [4, 9]:

• Representation. It is necessary to represent underlying system capabilities in a uniform way, and to represent the user request in a form independent of any particular underlying system. The input/output constraints for each function of each underlying system must be specified, thus defining the services available.
• Formulation. One must choose a combination of underlying system services that satisfies the user request. Where more than one alternative exists, it is preferable to select a solution with low execution costs and low passing of information between systems.
• Execution. Actual calls to the underlying systems must be accomplished, information must be passed among the systems as required, and an appropriate response must be generated.

3.1. Representation

3.1.1. Representing the semantics of utterances

Since the meaning of an utterance in Janus is represented as an expression in WML (World Model Language [3]), an intensional logic, the input to the MUS component is in WML. For a sentence such as Display the destroyers within 500 miles of Vinson, the WML is as follows:

(bring-about
  ((intension
     (exists ?a display
       (object-of ?a
         (iota ?b (power destroyer)
           (exists ?c
             (lambda (?d) interval
               (& (starts-interval ?d VINSON)
                  (less-than
                    (iota ?e length-measure (interval-length ?d ?e))
                    (iota ?f length-measure
                      (& (measure-unit ?f miles)
                         (measure-quantity ?f 500))))))
             (ends-interval ?c ?b))))))
   TIME WORLD))
These transformations will have been elicited from the ap- plication system experts, e.g., expert system builders, database administrators, and systems programming staff of other application systems. (Some work has been done on automating this process.) 3. Approach The problem of multiple systems may be decomposed into the following issues, as others have done [4, 9]: • Representation. It is necessary to represent un- derlying system capabilities in a uniform way, and to represent the user request in a form independ- ent of any particular underlying system. The input/output constraints for each function of each underlying system must be specified, thus defining the services available. • Formulation. One must choose a combination of underlying system services that satisfies the user request. Where more than one alternative exists, it is preferable to select a solution with low execu- tion costs and low passing of information between systems • Execution. Actual calls to the underlying systems must be accomplished, information must be passed among the systems as required, and an appropriate response must be generated. 3,1. Representation 3.1.1. Representing the semantics of utterances Since the meaning of an utterance in Janus is represented as an expression in WML (World Model Language [3]), an intensional logic., the input to the MUS component is in WML. For a sentence such as Display the destroyers within 500 miles of Vinson, the WML is as follows: (bring-about ((intension (exists ?a display (object-of ?a (iota ?b (power destroyer) (exists ?c (lambda (?d) interval (& (starts-interval ?d VINSON) (less-than (iota ?e length-measure (interval-length ?d ?e)) (iota ?f length-measure (& (measure-unit ?f miles) (measure-quantity ?f 500)))))) (ends-interval ?c ?b)))))) TIME WORLD)) 228 3.1.2. Representing Application Capabilities To represent the functional capabilities of un- derlying systems, we define services and servers. A server is a functional module typically corresponding to an underlying system or a major part of an under- lying system. Each server offers a number of services: objects describing a particular piece of functionality provided by a server. Specifying a ser- vice in MUS provides the mapping from fragments of logical form to fragments of underlying system code. Each service has associated with it the server it is part of, the input variables, the output variables, the con- juncta computed, and an estimate of the relative cost in applying it. SAMPLE SERVICES: Land-avoidance-distance: owner: Expert System 1 inputs: (x y) locals: (z w) pattern: ((in-class x vessel) (in-class y vessel) (in-class z interval) (In-class w length-measure) (starts-interval z x) (ends-interval z y) (interval-length z w)) outputs: (w) method: ((route-distanca (location-of x) (location-of y)))) cost: 5 Great-circle-distance: owner: Expert System 1 inputs: (x y) locals: (z w) pattern: ((in-class x vessel) (in-class y vessel) (in-class z Interval) (in-class w length-measure) (starts-interval z x) (ends-interval z y) (interval-length z w)) outputs: (w) method: ((gc-distance (location.of x) (location-of y)))) cost: 1 In the example above, there are two competing services for computing distance between two ships: Great-circle-distance, which simply computes a great circle route between two points, and Land-avoidance- distance, which computes the distance of an actual path avoiding land and sticking to shipping lanes. 
The second is far more accurate when near land, both for calculating delays and for estimating fuel costs; however, the computation time is greater.

3.1.3. Clause Lists

Typically, the applicability of a service is contingent on several facts, and therefore several propositions must all be true for the service to apply. To facilitate matching the requirements of a given service against the needs expressed in an utterance, we convert expressions in WML to an extended disjunctive normal form (DNF), i.e., a disjunction of conjunctions. We chose DNF because:

• In the simplest case, an expression in disjunctive normal form is simply a conjunction of clauses, a particularly easy logical form to cope with,
• Even when there are disjuncts, each can be individually handled as a conjunction of clauses, and the results then combined together via union, and
• In a disjunctive normal form, the information necessary for a distinct subquery is effectively isolated in one disjunct.

For details of the algorithm for converting an intensional expression to DNF, see [7]; a model-theoretic semantics has been defined for the DNF. For the sentence Display the destroyers within 500 miles of Vinson, whose WML representation was presented earlier, the clause list is as follows:

((in-class ?a display)
 (object-of ?a ?b)
 (in-class ?b destroyer)
 (in-class ?c interval)
 (in-class ?d interval)
 (equal ?c ?d)
 (starts-interval ?d VINSON)
 (in-class ?e length-measure)
 (interval-length ?d ?e)
 (in-class ?f length-measure)
 (measure-unit ?f miles)
 (measure-quantity ?f 500)
 (less-than ?e ?f)
 (ends-interval ?c ?b))

The normal form in this case is the same as the standard disjunctive normal form: a simple conjunction of clauses. However, there are cases where extensions to disjunctive normal form are used: in particular, certain expressions containing embedded subexpressions (such as universal quantifications, cardinality, and some other set-related operators) are left in place. In such cases, the embedded subexpressions are themselves normalized; the result is a context object that compactly represents a necessary logical constraint but has been normalized as far as possible.

#S(CONTEXT :OPERATOR FORALL
           :OPERATOR-VAR var
           :CLASS-EXP expression
           :CONSTRAINT expression)

states that var is universally quantified over the CLASS-EXP expression as var appears in the CONSTRAINT expression. As an example, consider the query Are all the displayed carriers C1? Its WML expression is given below, followed by its normalized representation. Note that contexts are defined recursively; thus, arbitrary embeddings of operators are allowed. The component that analyzes the DNF to find underlying application services to carry out the user request calls itself recursively to correctly process DNF expressions involving embedded expressions.

(QUERY ((INTENSION
          (PRESENT
            (INTENSION
              (FORALL ?JX699
                (u (POWER (SET-TO-PRED
                            (IOTA ?JX702
                              (LAMBDA (?JX701) (POWER AIRCRAFT-CARRIER)
                                (EXISTS ?JX700 DISPLAY
                                  (OBJECT-OF ?JX700 ?JX701)))
                              T))))
                (OSGP-ENTITY-OVERALL-READINESS-OF ?JX699 C1)))))
        TIME WORLD))

(#S(CONTEXT :OPERATOR FORALL
            :OPERATOR-VAR ?JX699
            :CLASS-EXP ((IN-CLASS ?JX699 AIRCRAFT-CARRIER)
                        (IN-CLASS ?JX700 DISPLAY)
                        (OBJECT-OF ?JX700 ?JX699))
            :CONSTRAINTS ((OSGP-ENTITY-OVERALL-READINESS-OF ?JX699 C1))))

3.2. Formulation

For a request consisting only of a conjunction of literals, finding a set of appropriate services may be viewed as a kind of set-covering problem. A beam search is used to find a low-cost cover.
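A minimal sketch of this formulation step is given below, assuming services are records with a clause pattern and a cost, and ignoring variable unification between patterns for brevity; the service names and costs are illustrative, not the Janus service inventory.

    # Beam search for a low-cost set of services covering a clause list.
    def covers(service, clauses):
        """Clauses of the request that this service can compute."""
        return frozenset(c for c in clauses if c in service["pattern"])

    def beam_cover(clauses, services, beam_width=3):
        """Find a low-cost set of services covering all clauses."""
        goal = frozenset(clauses)
        beam = [(0, frozenset(), [])]            # (cost, covered, chosen)
        while beam:
            candidates = []
            for cost, covered, chosen in beam:
                if covered == goal:
                    return chosen                # cheapest full cover in beam
                for s in services:
                    gain = covers(s, goal - covered)
                    if gain:
                        candidates.append((cost + s["cost"],
                                           covered | gain,
                                           chosen + [s["name"]]))
            # keep only the beam_width cheapest partial covers
            beam = sorted(candidates, key=lambda x: x[0])[:beam_width]
        return None

    services = [
        {"name": "great-circle-distance", "cost": 1,
         "pattern": {"interval-length", "starts-interval", "ends-interval"}},
        {"name": "land-avoidance-distance", "cost": 5,
         "pattern": {"interval-length", "starts-interval", "ends-interval"}},
        {"name": "idb-destroyers", "cost": 1,
         "pattern": {"in-class-destroyer"}},
    ]
    print(beam_cover({"in-class-destroyer", "starts-interval",
                      "ends-interval", "interval-length"}, services))

With these costs the cheaper great-circle service wins the cover, mirroring the competition between the two distance services described above.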
Queries containing embedded subqueries (e.g., the quantifier context in the example above) require recursive calls to this search procedure. Inherent in the collection of services covering a DNF expression is the data flow that combines the services into a program to fulfill the DNF request. The next step in the formulation process is data flow analysis, to extract the data flow graph corresponding to an abstract program fulfilling the request.

In Figure 1, the data flow graph for Display the destroyers within 500 miles of Vinson is pictured. Note that the data base (IDB) is called to identify the set of all destroyers, their locations, and the location of Vinson. An expert system is called to calculate the distance between pairs of locations¹ using land-avoidance routes. A Lisp utility for comparing measures is called, followed by the display command in an expert system.

3.3. Execution

In executing the data flow graph, evaluation at a node corresponds to executing the code in the server specified. Function composition corresponds to passing data between systems. Where more than one data flow path enters a node, the natural join over the input lists is computed. Aggregating operations (e.g., computing the cardinality of a set) correspond to a mapping over lists.

4. Challenging Cases

Here we present several well-known challenging classes of problems in translating from logical form to programs.

4.1. Deriving procedures from descriptions.

The challenge is to find a compromise between arbitrary program synthesis and a useful class of program derivation problems. Suppose the user asks for the square root of a value, when the system does not know the meaning of square root, as in Find the square root of the sum of the squares of the residuals. Various knowledge acquisition techniques, such as KNACQ [15], would allow a user to provide syntactic and semantic information for the unknown phrase to be defined. Square root could be defined as a function that computes the number that, when multiplied times itself, is the same as the input. However, that is a descriptive definition of square root without any indication of how to compute it. One still must synthesize a program that computes square root; in fact, in early literature on automatic programming and rigorous approaches to developing programs, deriving a program to compute square root was often used as an example problem.

Rather than expecting the system to perform such complex examples of automatic programming, we assume the system need not derive programs for terms that it does not already know. For the example above, the system should be expected to respond I don't know how to compute square root.

¹The distance function takes any physical objects as its arguments and looks up their location.
To our knowledge, no NL interface allows ar- bitrary program synthesis. Most assume equivalence at the abstract program level to synthesis of composi- tions of the select, project, and join operations of rela- tional algebra. Our component goes beyond previous work in that the programs it generates include more than just the relational algebra. 4.2. Side-effects. It is well-known that generating a program with side-effects is substantially harder than generating a program that is side-effect free. If there are no side effects, transformations of program expressions can be freely applied, preserving the value(s) computed. Nevertheless, side-effects are critical to many inter- face tasks, for example, changing a display, updating a data base, and setting a value of a variable. Our component produces acyclic data flow graphs. The only node that can have side-effects is the final node in the graph. This keeps the MUS processing simple, while still allowing for side-effects at the final stage, such as producing output, updating data in the underlying systems, or running an applica- tion program having side-effects. All three of those cases have been handled in demonstrations of Janus. Though this issue has not been discussed in other NL publications to our knowledge, we believe this restriction to be typical in NL systems. 4.3. Collapse of information. It has long been noted [5] that a complex rela- tion may be represented in a boolean field in a data base, such as the boolean field of the Navy Blue file which for a given vessel was T/F depending on whether there was a doctor onboard the vessel. There was no information about doctors in the data base, except for that field. In a medical data base, a similar phenomenon was noticed [11]; patient records contained a T/F field depending on whether the patient's mother had had melanoma, though there was no other information on the patient's mother or her case of melanoma. The challenge for such fields is mapping from the many ways that may occur linguistically to the appropriate field without having to write arbitrarily many patterns mapping from logical form to the data base. Just a few examples of the way the melanoma field might be referenced follow: Did Smith's mother ever have melanoma ? How many patients had a mother suffering from melanoma ? Was me/anoma diagnosed for any of the patients' mothers? Our approach to this problem has been to adopt disjunctive normal form (clause form) as the basis for matching services against requirements in the user request. No matter what the form of user request, transforming it to disjunctive normal form means that the information necessary for a disjunct is effectively isolated in one disjunct. The service represented by the field corresponding to "patient's mother had melanoma" covers two conjoined forms: (MOTHER x y) (HAD-MELANOMA y). All of the examples above, given appropriate definitions of suffer and diagnose, will have the two relations as conjuncts in the disjunc- tive normal form for the input, and therefore, will map to the required data base service. 4.4. Hidden joins. In data bases, a relation in English may require a join to be inferred, given the model in the underlying system. Suppose that a university data base as- sociates an office with every faculty member and a phone number with every office. Additionally, some faculty members may be associated with a lab facility; labs have telphones as well. Then to answer the query, What is Dr. 
Ramshaw's phone number?, the relation between faculty members and phone numbers must be determined. There are two possibilities: the office phone number or the lab phone number. Most approaches treat this as an inference problem. It can be visualized as finding a relation between two nominal notions, faculty member and phone number [1, 2]. One such path uses the relation OFFICE(PERSON, ROOM) followed by the relation PHONE(ROOM, PHONE-NUMBER). A general heuristic is to use the shortest path. Computing hidden joins complicates the search space in searching for a solution among the underlying services, as can be seen in the architectures proposed, e.g., [1, 4, 9].

In contrast to the typical approach, where one infers the hidden join as needed, we believe such joins are normally anticipatable, and provide support in our lexical definition tools (KNACQ) for specifying them. In KNACQ [15], a knowledge engineer, data base administrator, or other person familiar with the domain and with frame representation specifies, for each frame (concept in KL-ONE terminology) and each slot (role in KL-ONE terminology), one or more words denoting that concept or role. In addition, the KNACQ user identifies role chains (sequences of role relations), such as R1(A, B) and R2(B, C), having special linguistic representation. In the example above, KNACQ would prompt the user to select from six possibilities for nominal compounds, possessives, and prepositional connectives relating PERSON to PHONE-NUMBER. In this way, the search space is substantially simplified, since hidden joins have been elicited ahead of time as part of the knowledge acquisition and installation process.

4.5. Data coercion.

At times, the type required by the underlying functions is not directly stated in the input (English) expression but must be derived. One procedure may produce the measure of an angle in degrees, whereas another may require the measure of an angle in radians. Differing application systems may assume a person is referred to by differing attributes, e.g., by social security number in one, but by employee number in another. In How far is Vinson from Pearl Harbor?, one must not only infer that the positions of Vinson and Pearl Harbor must be looked up, but also make sure that the coordinates are of the type required by the particular distance function chosen.

In our approach, we assume that there are services available for translating between each mismatch in data type. For the examples above, we assume that there is a translation from degrees to radians and vice versa; that there is a translation from person identified by social security number to person with employee number, and vice versa; and that there is a translation function from ships and ports to their location in latitude and longitude. Such translations may already exist in the applications or may be added as a new application. If there are n different ways to identify the same entity (the measure of an angle, a person, the position of a vessel or port, etc.), there need not be n²/2 translation functions, of course; a canonical representation may be chosen if as few as 2n translation functions are available to provide intertranslatability to the canonical form.

In constructing the data flow graph, we assume that the canonical representation is used throughout. Then translation functions are inserted on arcs of the data flow graph wherever the output/input assumptions are not met by the canonical form.
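A minimal sketch of this arc-rewriting step follows; the arc encoding, type names, and translator names are hypothetical, and only the splicing logic is intended to be illustrative.

    # Splice translator nodes onto arcs whose representation is not canonical.
    CANONICAL = {"angle": "radians", "position": "lat-long"}

    # translators to the canonical representation (assumed to exist)
    TO_CANONICAL = {
        ("angle", "degrees"): "degrees->radians",
        ("position", "port-name"): "port->lat-long",
        ("position", "ship-name"): "ship->lat-long",
    }

    def coerce_arcs(arcs):
        """arcs: (producer, (kind, representation), consumer) triples.
        Returns the arc list with translator nodes spliced in as needed."""
        result = []
        for producer, (kind, rep), consumer in arcs:
            if rep == CANONICAL[kind]:
                result.append((producer, consumer))
            else:
                translator = TO_CANONICAL[(kind, rep)]
                result.append((producer, translator))
                result.append((translator, consumer))
        return result

    arcs = [("IDB-port-lookup", ("position", "port-name"), "distance-fn"),
            ("IDB-ship-lookup", ("position", "lat-long"), "distance-fn")]
    print(coerce_arcs(arcs))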
Of the five challenging problems, this is the only one we have not yet implemented.

5. Related Work

Most previous work applying natural language interfaces provided access to a single system: e.g., a relational data base. Two earlier efforts (at Honeywell [4, 9] and at USC/Information Sciences Institute [6]) dealt with multiple systems. We will focus on comparison with their work.

A limitation common to those two approaches is the minimal expressiveness of the input language: user requests must be expressed as a conjunction of simple relations (literals), equivalent to the select/project/join operations of a relational algebra. This restriction is relaxed in Janus, allowing requests to contain negation of elementary predicates, existential and universal quantification, cardinality and other aggregates, a limited form of disjunction (sufficient for the most common cases), and of course simple conjunction. Wh-questions (who, what, etc.), commands, and yes/no queries are handled, and some classes of helpful responses are produced.

All three efforts employ a search procedure. In the Honeywell effort, graph matching is at the heart of the search; in the USC/ISI effort, the NIKL classifier [10] is at the heart of the search; in our effort, a beam search with a cost function is used. Only our effort has been tested on applications with a potentially large search space (800 services); the other efforts have thus far been tested on applications with relatively few services.

6. Experience in Applying the System

The MUS component has been applied in the domain of the Fleet Command Center Battle Management Program (FCCBMP), using an internal version of the Integrated Database (IDB) -- a relational database -- as one underlying resource, and a set of LISP functions providing mathematical modeling of a Navy problem as another. The system includes more than 800 services.

An earlier version of the system described here was also applied to provide natural language access to data in Intellicorp's KEE knowledge-base system, to objects representing hypothetical world-states in an object-oriented simulation system, and to LISP functions capable of manipulating this data.

We have begun integrating the MUS component with BBN's Spoken Language System HARC.

7. Conclusions

The work offers highly desirable utility along the following two dimensions:

• It frees the user from having to identify for each term (word) pieces of program that would carry out their meaning.
• It improves the modularity of the interface, insulating the presentation of information, such as table i/o, from details of the underlying application(s).

We have found the general approach depicted in Figure 2 quite flexible. The approach was developed in work on natural language processing; however, it seems to be valuable for other types of I/O modalities. Some preliminary work has suggested its utility for table input and output in managing data base update, data base retrieval, and a directly manipulable image of tabular data. Our prototype module generates code from forms in the intensional logic; then the components originally developed for the natural language processor provide the translation mechanism to and from intensional logic and underlying systems that actually store the data.

Acknowledgments

This research was supported by the Advanced Research Projects Agency of the Department of Defense and was monitored by ONR under Contracts N00014-85-C-0079 and N00014-85-C-0016.
The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government.

The current address for Philip Resnik is Computer & Information Sciences Department, University of Pennsylvania, Philadelphia, PA 19104.

We gratefully acknowledge the comments and assistance of Lance Ramshaw on drafts of this paper.

REFERENCES

1. Carberry, M.S. Using Inferred Knowledge to Understand Pragmatically Ill-Formed Queries. In R. Reilly, Ed., Communication Failure in Dialogue, North-Holland, 1987.

2. Chang, C.L. Finding missing joins for incomplete queries in Relational Data Bases. Research Report RJ2145, IBM Research Laboratory, San Jose, CA, 1978.

3. Hinrichs, E.W., Ayuso, D.M., and Scha, R. The Syntax and Semantics of the JANUS Semantic Interpretation Language. In Research and Development in Natural Language Understanding as Part of the Strategic Computing Program, Annual Technical Report December 1985 - December 1986, BBN Laboratories, Report No. 6522, 1987, pp. 27-31.

4. Kaemmerer, W. and Larson, J. A graph-oriented knowledge representation and unification technique for automatically selecting and invoking software functions. Proceedings AAAI-86, Fifth National Conference on Artificial Intelligence, American Association for Artificial Intelligence, 1986, pp. 825-830.

5. Moore, R.C. Natural Language Access to Databases - Theoretical/Technical Issues. Proceedings of the 20th Annual Meeting of the Association for Computational Linguistics, June 1982, pp. 44-45.

6. Pavlin, J. and Bates, R. SIMS: single interface to multiple systems. Tech. Rept. ISI/RR-88-200, University of Southern California Information Sciences Institute, February 1988.

7. Resnik, P. Access to Multiple Underlying Systems in Janus. BBN Report 7142, Bolt Beranek and Newman Inc., September 1989.

8. Rich, C. and Waters, R.C. Automatic Programming: Myths and Prospects.

9. Ryan, K.R. and Larson, J.A. The use of E-R Data Models in Capability Schemas. In Spaccapietra, S., Ed., Entity-Relationship Approach, Elsevier Science Publishers, 1987.

10. Schmolze, J.G., Lipkis, T.A. Classification in the KL-ONE Knowledge Representation System. Proceedings of the Eighth International Joint Conference on Artificial Intelligence, 1983.

11. Stallard, D.G. A Terminological Simplification Transformation for Natural Language Question-Answering Systems. Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics, New York, June 1986, pp. 241-246.

12. Stallard, David. Answering Questions Posed in an Intensional Logic: A Multilevel Semantics Approach. In Research and Development in Natural Language Understanding as Part of the Strategic Computing Program, R. Weischedel, D. Ayuso, A. Haas, E. Hinrichs, R. Scha, V. Shaked, D. Stallard, Eds., BBN Laboratories, Cambridge, Mass., 1987, ch. 4, pp. 35-47. Report No. 6522.

13. Weischedel, R., Ayuso, D., Haas, A., Hinrichs, E., Scha, R., Shaked, V., Stallard, D. Research and Development in Natural Language Understanding as Part of the Strategic Computing Program. BBN Laboratories, Cambridge, Mass., 1987. Report No. 6522.

14. Weischedel, R.M. A Hybrid Approach to Representation in the Janus Natural Language Processor. Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, 1989, pp. 193-202.

15.
Weischedel, R.M., Bobrow, R., Ayuso, D.M., and Ramshaw, L. Portability in the Janus Natural Language Interface. Speech and Natural Language, San Mateo, CA, 1989, pp. 112-117.

Figure 1: Data Flow Graph for "Display the destroyers within 500 miles of Vinson" (nodes: IDB, Expert System, Lisp, Expert System).

Figure 2: BBN's Approach to Simultaneous Access to Multiple Systems (multi-modal input -- text, menu, graphics, speech -- feeding the multiple underlying systems).
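To make the execution model of Section 3.3 and Figure 1 concrete, the following is a minimal sketch of the natural join computed when more than one data flow path enters a node; the binding-list representation and the sample data are illustrative assumptions, not the Janus implementation.

    # Natural join of two incoming binding lists on their shared variables.
    def natural_join(left, right):
        joined = []
        for l in left:
            for r in right:
                shared = set(l) & set(r)
                if all(l[v] == r[v] for v in shared):
                    joined.append({**l, **r})
        return joined

    # destroyers with their locations, joined with computed distances:
    locations = [{"?b": "Sterett", "?loc": (21.3, -157.9)},
                 {"?b": "Fox", "?loc": (16.7, -169.5)}]
    distances = [{"?b": "Sterett", "?e": 420},
                 {"?b": "Fox", "?e": 980}]
    print(natural_join(locations, distances))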
PROSODY, SYNTAX AND PARSING

John Bear and Patti Price
SRI International
333 Ravenswood Avenue
Menlo Park, California 94025

Abstract

We describe the modification of a grammar to take advantage of prosodic information provided by a speech recognition system. This initial study is limited to the use of relative duration of phonetic segments in the assignment of syntactic structure, specifically in ruling out alternative parses in otherwise ambiguous sentences. Taking advantage of prosodic information in parsing can make a spoken language system more accurate and more efficient, if prosodic-syntactic mismatches, or unlikely matches, can be pruned. We know of no other work that has succeeded in automatically extracting speech information and using it in a parser to rule out extraneous parses.

1 Introduction

Prosodic information can mark lexical stress, identify phrasing breaks, and provide information useful for semantic interpretation. Each of these aspects of prosody can benefit a spoken language system (SLS). In this paper we describe the modification of a grammar to take advantage of prosodic information provided by a speech component. Though prosody includes a variety of acoustic phenomena used for a variety of linguistic effects, we limit this initial study to the use of relative duration of phonetic segments in the assignment of syntactic structure, specifically in ruling out alternative parses in otherwise ambiguous sentences.

It is rare that prosody alone disambiguates otherwise identical phrases. However, it is also rare that any one source of information is the sole feature that separates one phrase from all competitors. Taking advantage of prosodic information in parsing can make a spoken language system more accurate and more efficient, if prosodic-syntactic mismatches, or unlikely matches, can be pruned out. Prosodic structure and syntactic structures are not, of course, completely identical. Rhythmic structures and the necessity of breathing influence the prosodic structure, but not the syntactic structure (Gee and Grosjean 1983, Cooper and Paccia-Cooper 1980). Further, there are aspects of syntactic structure that are not typically marked prosodically. Our goal is to show that at least some prosodic information can be automatically extracted and used to improve syntactic analysis. Other studies have pointed to possibilities for deriving syntax from prosody (see e.g., Gee and Grosjean 1983, Briscoe and Boguraev 1984, and Komatsu, Oohira, and Ichikawa 1989) but none to our knowledge have communicated speech information directly to a parser in a spoken language system.

2 Corpus

For our corpus of sentences we selected a subset of a corpus developed previously (see Price et al. 1989) for investigating the perceptual role of prosodic information in disambiguating sentences. A set of 35 phonetically ambiguous sentence pairs of differing syntactic structure was recorded by professional FM radio news announcers. By phonetically ambiguous sentences, we mean sentences that consist of the same string of phones, i.e., that suprasegmental rather than segmental information is the basis for the distinction between members of the pairs. Members of the pairs were read in disambiguating contexts on days separated by a period of several weeks to avoid exaggeration of the contrast. In the earlier study listeners viewed the two contexts while hearing one member of the pair, and were asked to select the appropriate context for the sentence.
The results showed that listeners can, in general, reliably separate phonetically and syntactically ambiguous sentences on the basis of prosody. The original study investigated seven types of structural ambiguity. The present study used a subset of the sentence pairs which contained 17 prepositional phrase attachment ambiguities, or particle/preposition ambiguities (see Appendix).

If naive listeners can reliably separate phonetically and structurally ambiguous pairs, what is the basis for this separation? In related work on the perception of prosodic information, trained phoneticians labeled the same sentences with an integer between zero and five inclusive between every two words. These numbers, 'prosodic break indices,' encode the degree of prosodic decoupling of neighboring words: the larger the number, the greater the gap or break between the words. We found that we could label such break indices with good agreement within and across labelers. In addition, we found that these indices quite often disambiguated the sentence pairs, as illustrated below.

• Marge 0 would 1 never 2 deal 0 in 2 any 0 guys
• Marge 1 would 0 never 0 deal 3 in 0 any 0 guise

The break indices between 'deal' and 'in' provide a clear indication in this case whether the verb is 'deal-in' or just 'deal.' The larger of the two indices, 3, indicates that in that sentence, 'in' is not tightly coupled with 'deal' and hence is not likely to be a particle.

So far we had established that naive listeners and trained listeners appear to be able to separate such ambiguous sentence pairs on the basis of prosodic information. If we could extract such information automatically, perhaps we could make it available to a parser. We found a clue in an effort to assess the phonetic ambiguity of the sentence pairs. We used SRI's DECIPHER speech recognition system, constrained to recognize the correct string of words, to automatically label and time-align the sentences used in the earlier referenced study. The DECIPHER system is particularly well suited to this task because it can model and use very bushy pronunciation networks, accounting for much more detail in pronunciation than other systems. This extra detail makes it better able to time-align the sentences and is a stricter test of phonetic ambiguity. We used the DECIPHER system (Weintraub et al. 1989) to label and time-align the speech, and verified that the sentences were, by this measure as well as by the earlier perceptual verification, truly ambiguous phonetically. This meant that the information separating the members of the pairs was not in the segmental information, but in the suprasegmental information: duration, pitch and pausing.

As a byproduct of the labeling and time alignment, we noticed that the durations of the phones could be used to separate members of the pairs. This was easy to see in phonetically ambiguous sentence pairs: normally the structure of duration patterns is obscured by intrinsic duration of phones and the contextual effects of neighboring phones. In the phonetically ambiguous pairs, there was no need to account for these effects in order to see the striking pattern in duration differences. If a human looking at the duration patterns could reliably separate the members of the pairs, there was hope for creating an algorithm to perform the task automatically. This task could not take advantage of such pairs, but would have to face the problem of intrinsic phone duration.
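One way to face that problem is to normalize each phone's duration against statistics for that phone, so that "long for an /iy/" and "long for a /t/" become comparable; the next section formalizes this. The following Python fragment is a minimal sketch, under the assumption that per-phone duration statistics can be estimated from a corpus; the sample numbers are invented.

    # Factor out intrinsic phone duration with per-phone z-scores.
    from statistics import mean, stdev

    def phone_stats(corpus):
        """corpus: {phone: [durations in ms]} -> {phone: (mu, sigma)}"""
        return {p: (mean(ds), stdev(ds)) for p, ds in corpus.items()}

    def normalize(phone, duration, stats):
        mu, sigma = stats[phone]
        return (duration - mu) / sigma   # in standard deviations

    corpus = {"iy": [90, 110, 130, 100], "t": [40, 60, 50, 55]}
    stats = phone_stats(corpus)
    print(normalize("iy", 150, stats))   # an unusually long /iy/ -> large value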
Word break indices were generated automatically by normalizing phone duration according to estimated mean and variance, and combining the average normalized duration factors of the final syllable coda consonants with a pause factor. Let d̂_i = (d_i − μ_j)/σ_j be the normalized duration of the ith phoneme in the coda, where μ_j and σ_j are the mean and standard deviation of duration for phone j, and let d_p be the duration (in ms) of the pause following the word, if any. A set of word break indices are computed for all the words in a sentence as follows:

    n = (1/|A|) Σ_{i ∈ A} d̂_i + d_p/70

The term d_p/70 was actually hard-limited at 4, so as not to give pauses too much weight. The set A includes all coda consonants, but not the vowel nucleus unless the syllable ends in a vowel. Although the vowel nucleus provides some boundary cues, the lengthening associated with prominence can be confounded with boundary lengthening, and the algorithm was slightly more reliable without using vowel nucleus information. These indices n are normalized over the sentence, assuming known sentence boundaries, to range from zero to five (the scale used for the initial perceptual labeling). The correlation coefficient between the hand-labeled break indices and the automatically generated break indices was very good: 0.85.

3 Incorporating Prosody Into A Grammar

Thus far, we have shown that naive and trained listeners can rely on suprasegmental information to separate ambiguous sentences, and we have shown that we can automatically extract information that correlates well with the perceptual labels. It remains to be shown how such information can be used by a parser. In order to do so we modified an already existing, and in fact reasonably large, grammar. The parser we use is the Core Language Engine developed at SRI in Cambridge (Alshawi et al. 1988).

Much of the modification of the grammar is done automatically. The first thing is to systematically change all the rules of the form A → B C to be of the form A → B Link C, where Link is a new grammatical category, that of the prosodic break indices. Similarly, all rules with more than two right-hand-side elements need to have link nodes interleaved at every juncture: e.g., a rule A → B C D is changed into A → B Link1 C Link2 D.

Next, allowance must be made for empty nodes. It is common practice to have rules of the form NP → e and PP → e in order to handle wh-movement and relative clauses. These rules necessitate the incorporation into the modified grammar of a rule Link → e. Otherwise, a sentence such as a wh-question will not parse, because an empty node introduced by the grammar will either not be preceded by a link, or not be followed by one.

The introduction of empty links needs to be constrained so as not to introduce spurious parses. If the only place the empty NP or PP etc. could fit into the sentence is at the end, then the only place the empty Link can go is right before it, so there is no extra ambiguity introduced. However, if an empty wh-phrase could be posited at a place somewhere other than the end of the sentence, then there is ambiguity as to whether it is preceded or followed by the empty link. For instance, for the sentence "What did you see _ on Saturday?" the parser would find both of the following possibilities:

• What L did L you L see L empty-NP empty-L on L Saturday?
• What L did L you L see empty-L empty-NP L on L Saturday?

Hence the grammar must be made to automatically rule out half of these possibilities.
This can be done by constraining every empty link to be followed immediately by an empty wh-phrase, or a constituent containing an empty wh-phrase on its left branch. It is fairly straightforward to incorporate this into the routine that automatically modifies the grammar. The rule that introduces empty links gives them a feature-value pair: empty_link=y. The rules that introduce other empty constituents are modified to add to the constituent the feature-value pair trace_on_left_branch=y. The links zero through five are given the feature-value pair empty_link=n. The default value for trace_on_left_branch is set to n so that all words in the lexicon have that value. Rules of the form A0 → A1 Link1 ... An are modified to ensure that A0 and A1 have the same value for the feature trace_on_left_branch. Additionally, if Linki has empty_link=y then Ai+1 must have trace_on_left_branch=y. These modifications, incorporated into the grammar-modifying routine, suffice to eliminate the spurious ambiguity.

    sent    # parses      # parses       parse time    parse time
    i.d.    no prosody    with prosody   no prosody    with prosody
    1a          10             4             5.3           5.3
    1b          10            10             5.3           7.7
    2a          10             7             3.6           4.3
    2b          10            10             3.6           4.0
    3a           2             1             2.3           2.7
    3b           2             2             2.3           3.7
    4a           2             1             3.2           4.7
    4b           2             2             3.2           5.5
    5a           2             1             1.7           2.5
    5b           2             2             1.6           2.9
    6a           2             1             2.5           2.8
    6b           2             2             2.5           4.1
    7a           2             1             0.8           1.3
    7b           2             2             0.8           1.5
    TOT.        60            46            38.7          53.0

Table 1: The number of parses and parse times (in seconds) with and without the use of prosodic information.

4 Setting Grammar Parameters

Running the grammar through our procedure, to make the changes mentioned above, results in a grammar that gets the same number of parses for a sentence with links as the old grammar would have produced for the corresponding sentence without links. In order to make use of the prosodic information we still need to make an additional important change to the grammar: how does the grammar use this information? This is a vast area of research; the present study shows the feasibility of one particular approach. In this initial endeavor, we made the most conservative changes imaginable after examining the break indices on a set of sentences. We changed the rule N → N Link PP so that the value of the link must be between 0 and 2 inclusive (on a scale of 0-5) for the rule to apply. We made essentially the same change to the rule for the construction verb plus particle, VP → V Link PP, except that the value of the link must, in this case, be either 0 or 1.

After setting these two parameters we parsed each of the sentences in our corpus of 14 sentences, and compared the number of parses to the number of parses obtained without benefit of prosodic information. For half of the sentences, i.e., for one member of each of the sentence pairs, the number of parses remained the same. For the other members of the pairs, the number of parses was reduced, in many cases from two parses to one. The actual sentences and labels are in the appendix. The incorporation of prosody resulted in a reduction of about 25% in the number of parses found, as shown in Table 1. Parse times increase about 37%. In the study by Price et al., the sentences with more major breaks were more reliably identified by the listeners. This is exactly what happens when we put these sentences through our parser too. The large prosodic gap between a noun and a following preposition, or between a verb and a following preposition, provides exactly the type of information that our grammar can easily make use of to rule out some readings.
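The automatic rule transformation and the two parameter settings just described can be made concrete with a small sketch (our rendering; the rule representation and the rule-name encoding are hypothetical):

    def interleave_links(rule):
        # A rule (lhs, [B, C, D]) becomes (lhs, [B, Link, C, Link, D]).
        lhs, rhs = rule
        new_rhs = []
        for i, sym in enumerate(rhs):
            if i > 0:
                new_rhs.append('Link')
            new_rhs.append(sym)
        return (lhs, new_rhs)

    def link_value_allows(rule_kind, break_index):
        # The conservative constraints of section 4: N -> N Link PP
        # applies only for indices 0-2, VP -> V Link PP only for 0-1.
        limits = {'N -> N Link PP': 2, 'VP -> V Link PP': 1}
        return break_index <= limits[rule_kind]

For example, interleave_links(('A', ['B', 'C', 'D'])) yields ('A', ['B', 'Link', 'C', 'Link', 'D']), and link_value_allows('N -> N Link PP', 3) is False, which is what rules out the noun attachment in 'the man 4 with a gun.'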
A small prosodic gap, conversely, does not provide a reliable way to tell which two constituents combine. This coincides with Steedman's (1989) observation that syntactic units do not tend to bridge major prosodic breaks. We can construe the large break between two words, for example a verb and a preposition/particle, as indicating that the two do not combine to form a new slightly larger constituent in which they are sisters of each other. We cannot say that no two constituents may combine when they are separated by a large gap, only that the two smallest possible constituents, i.e., the two words, may not combine. To do the converse with small gaps and larger phrases simply does not work. There are cases where there is a small gap between two phrases that are joined together. For example, there can be a small gap between the subject NP of a sentence and the main VP, yet we do not want to say that the two words on either side of the juncture must form a constituent, e.g., the head noun and auxiliary verb.

The fact that parse times increase is due to the way in which prosodic information is incorporated into the text. The parser does a certain amount of work for each word, and the effect of adding break indices to the sentence is essentially to double the number of words that the parser must process. We expect that this overhead will constitute a less significant percentage of the parse time as the input sentences become more complex. We also hope to be able to reduce this overhead with a better understanding of the use of prosodic information and how it interacts with the parsing of spoken language.

5 Corroboration From Other Data

After devising our strategy, changing the grammar and lexicon, running our corpus through the parser, and tabulating our results, we looked at some new data that we had not considered before, to get an idea of how well our methods would carry over. The new corpus we considered is from a recording of a short radio news broadcast. This time the break indices were put into the transcript by hand. There were twenty-two places in the text where our attachment strategy would apply. In eighteen of those, our strategy, or a very slight modification of it, would work properly in ruling out some incorrect parses and in not preventing the correct parse from being found. In the remaining four sentences, there seem to be other factors at work that we hope to be able to incorporate into our system in the future. For instance, it has been mentioned in other work that the length of a prosodic phrase, as measured by the number of words or syllables it contains, may affect the location of prosodic boundaries. We are encouraged by the fact that our strategy seems to work well in eighteen out of twenty-two cases on the news broadcast corpus.

6 Conclusion

The sample of sentences used for this study is extremely small, and the principal test set used, the phonetically ambiguous sentences, is not independent of the set used to develop our system. We therefore do not want to make any exaggerated claims in interpreting our results. We believe, though, that we have found a promising and novel approach for incorporating prosodic information into a natural language processing system. We have shown that some extremely common cases of syntactic ambiguity can be resolved with prosodic information, and that grammars can be modified to take advantage of prosodic information for improved parsing.
We plan to test the algorithm for generating prosodic break indices on a larger set of sentences by more talkers. Changing from speech read by professional speakers to spontaneous speech from a variety of speakers will no doubt require modification of our system along several dimensions. The next steps in this research will include:

• Investigating further the relationship between prosody and syntax, including the different roles of phrase breaks and prominences in marking syntactic structure,
• Improving the prosodic labeling algorithm by incorporating intonation and syntactic/semantic information,
• Incorporating the automatically labeled information in the parser of the SRI Spoken Language System (Moore, Pereira and Murveit 1989),
• Modeling the break indices statistically as a function of syntactic structure,
• Speeding up the parser when using the prosodic information; the expectation is that pruning out syntactic hypotheses that are incompatible with the prosodic pattern observed can both improve accuracy and speed up the parser overall.

7 Acknowledgements

This work was supported in part by the National Science Foundation under NSF grant number IRI-8905249. The authors are indebted to the co-Principal Investigators on this project, Mari Ostendorf (Boston University) and Stefanie Shattuck-Hufnagel (MIT), for their roles in defining the prosodic infrastructure on the speech side of the speech and natural language integration. We thank Hy Murveit (SRI) and Colin Wightman (Boston University) for help in generating the phone alignments and duration normalizations, and Bob Moore for helpful comments on a draft. We thank Andrea Levitt and Leah Larkey for their help, many years ago, in developing fully voiced structurally ambiguous sentences without knowing what uses we would put them to. This work was also supported by the Defense Advanced Research Projects Agency under the Office of Naval Research contract N00014-85-C-0013.

References

[1] H. Alshawi, D. M. Carter, J. van Eijck, R. C. Moore, D. B. Moran, F. C. N. Pereira, S. G. Pulman, and A. G. Smith (1988) Research Programme in Natural Language Processing: July 1988 Annual Report, SRI International Tech Note, Cambridge, England.

[2] E. J. Briscoe and B. K. Boguraev (1984) "Control Structures and Theories of Interaction in Speech Understanding Systems," COLING 1984, pp. 259-266, Association for Computational Linguistics, Morristown, New Jersey.

[3] W. Cooper and J. Paccia-Cooper (1980) Syntax and Speech, Harvard University Press, Cambridge, Massachusetts.

[4] J. P. Gee and F. Grosjean (1983) "Performance Structures: A Psycholinguistic and Linguistic Appraisal," Cognitive Psychology, Vol. 15, pp. 411-458.

[5] J. Harrington and A. Johnstone (1987) "The Effects of Word Boundary Ambiguity in Continuous Speech Recognition," Proc. of XI Int. Cong. Phonetic Sciences, Tallinn, Estonia, Se 45.5.1-4.

[6] A. Komatsu, E. Oohira and A. Ichikawa (1989) "Prosodical Sentence Structure Inference for Natural Conversational Speech Understanding," ICOT Technical Memorandum: TM-0733.

[7] R. Moore, F. Pereira and H. Murveit (1989) "Integrating Speech and Natural-Language Processing," in Proceedings of the DARPA Speech and Natural Language Workshop, pages 243-247, February 1989.

[8] P. J. Price, M. Ostendorf and C. W. Wightman (1989) "Prosody and Parsing," Proceedings of the DARPA Workshop on Speech and Natural Language, Cape Cod, October 1989.

[9] M. Steedman (1989) "Intonation and Syntax in Spoken Language Systems," Proceedings of the DARPA Workshop on Speech and Natural Language, Cape Cod, October 1989.

[10] M. Weintraub, H. Murveit, M. Cohen, P. Price, J. Bernstein, G. Baldwin and D. Bell (1989) "Linguistic Constraints in Hidden Markov Model Based Speech Recognition," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, pages 699-702, Glasgow, Scotland, May 1989.
8 Appendix

1a. I 1 read 0 a 0 review 2 of 1 nasality 4 in 0 German.
1b. I 0 read 2 a 1 review 1 of 0 nasality 1 in 0 German.
2a. Why 0 are 0 you 2 grinding 0 in 3 the 0 mud.
2b. Why 1 are 0 you 2 grinding 3 in 0 the 1 mud.
3a. Raoul 2 murdered 1 the 0 man 4 with 0 a 1 gun.
3b. Raoul 1 murdered 3 the 0 man 1 with 0 a 0 gun.
4a. The 0 men 1 won 3 over 0 their 0 enemies.
4b. The 0 men 2 won 0 over 1 their 0 enemies.
5a. Marge 1 would 0 never 0 deal 3 in 0 any 0 guise.
5b. Marge 0 would 1 never 2 deal 0 in 2 any 0 guys.
6a. Andrea 1 moved 1 the 0 bottle 3 under 0 the 0 bridge.
6b. Andrea 1 moved 3 the 0 bottle 1 under 0 the 0 bridge.
7a. They 0 may 0 wear 4 down 0 the 0 road.
7b. They 0 may 1 wear 0 down 2 the 0 road.
Computational structure of generative phonology and its relation to language comprehension

Eric Sven Ristad*
MIT Artificial Intelligence Lab
545 Technology Square
Cambridge, MA 02139

Abstract

We analyse the computational complexity of phonological models as they have developed over the past twenty years. The major results are that generation and recognition are undecidable for segmental models, and that recognition is NP-hard for that portion of segmental phonology subsumed by modern autosegmental models. Formal restrictions are evaluated.

1 Introduction

Generative linguistic theory and human language comprehension may both be thought of as computations. The goal of language comprehension is to construct structural descriptions of linguistic sensations, while the goal of generative theory is to enumerate all and only the possible (grammatical) structural descriptions. These computations are only indirectly related. For one, the input to the two computations is not the same. As we shall see below, the most we might say is that generative theory provides an extensional characterization of language comprehension, which is a function from surface forms to complete representations, including underlying forms. The goal of this article is to reveal exactly what generative linguistic theory says about language comprehension in the domain of phonology.

The article is organized as follows. In the next section, we provide a brief overview of the computational structure of generative phonology. In section 3, we introduce the segmental model of phonology, discuss its computational complexity, and prove that even restricted segmental models are extremely powerful (undecidable). Subsequently, we consider various proposed and plausible restrictions on the model, and conclude that even the maximally restricted segmental model is likely to be intractable. The fourth section introduces the modern autosegmental (nonlinear) model and discusses its computational complexity. We prove that the natural problem of constructing an autosegmental representation of an underspecified surface form is NP-hard. The article concludes by arguing that the complexity proofs are unnatural despite being true of the phonological models, because the formalism of generative phonology is itself unnatural.

*The author is supported by an IBM graduate fellowship and eternally indebted to Morris Halle and Michael Kenstowicz for teaching him phonology. Thanks to Noam Chomsky, Sandiway Fong, and Michael Kashket for their comments and assistance.

The central contributions of this article are: (i) to explicate the relation between generative theory and language processing, and argue that generative theories are not models of language users primarily because they do not consider the inputs naturally available to language users; and (ii) to analyze the computational complexity of generative phonological theory, as it has developed over the past twenty years, including segmental and autosegmental models.

2 Computational structure of generative phonology

The structure of a computation may be described at many levels of abstraction, principally including: (i) the goal of the computation; (ii) its input/output specification (the problem statement); (iii) the algorithm and representation for achieving that specification; and (iv) the primitive operations in which terms the algorithm is implemented (the machine architecture).
Using this framework, the computational structure of generative phonology may be described as follows:

• The computational goal of generative phonology (as distinct from its research goals) is to enumerate the phonological dictionaries of all and only the possible human languages.

• The problem statement is to enumerate the observed phonological dictionary of a particular language from some underlying dictionary of morphemes (roots and affixes) and phonological processes that apply to combinations of underlying morphemes.

• The algorithm by which this is accomplished is a derivational process g ('the grammar') from underlying forms x to surface forms y = g(x). Underlying forms are constructed by combining (typically, with concatenation or substitution) the forms stored in the underlying dictionary of morphemes. Linguistic relations are represented both in the structural descriptions and the derivational process. The structural descriptions of phonology are representations of perceivable distinctions between linguistic sounds, such as stress levels, syllable structure, tone, and articulatory gestures. The underlying and surface forms are both drawn from the same class of structural descriptions, which consist of both segmental strings and autosegmental relations. A segmental string is a string of segments with some representation of constituent structure. In the SPE theory of Chomsky and Halle (1968) concrete boundary symbols are used; in Lexical Phonology, abstract brackets are used. Each segment is a set of phonological features, which are abstract as compared with phonetic representations, although both are given in terms of phonetic features. Suprasegmental relations are relations among segments, rather than properties of individual segments. For example, a syllable is a hierarchical relation between a sequence of segments (the nucleus of the syllable) and the less sonorous segments that immediately precede and follow it (the onset and coda, respectively). Syllables must satisfy certain universal constraints, such as the sonority sequencing constraint, as well as language-particular ones.

• The derivational process is implemented by an ordered sequence of unrestricted rewriting rules that are applied to the current derivation string to obtain surface forms.

According to generative phonology, comprehension consists of finding a structural description for a given surface form. In effect, the logical problem of language comprehension is reduced to the problem of searching for the underlying form that generates a given surface form. When the surface form does not transparently identify its corresponding underlying form, when the space of possible underlying forms is large, or when the grammar g is computationally complex, the logical problem of language comprehension can quickly become very difficult.

In fact, the language comprehension problem is intractable for all segmental theories. For example, in the formal system of The Sound Pattern of English (SPE) the comprehension problem is undecidable. Even if we replace the segmental representation of cyclic boundaries with the abstract constituents of Lexical Phonology, and prohibit derivational rules from readjusting constituent boundaries, comprehension remains PSPACE-complete. Let us now turn to the technical details.

3 Segmental Phonology

The essential components of the segmental model may be briefly described as follows.
The set of features includes both phonological features and diacritics, and the distinguished feature segment that marks boundaries. (An example diacritic is ablaut, a feature that marks stems that must undergo a change in vowel quality, such as tense-conditioned ablaut in the English sing, sang, sung alternation.) As noted in SPE, "technically speaking, the number of diacritic features should be at least as large as the number of rules in the phonology. Hence, unless there is a bound on the length of a phonology, the set [of features] should be unlimited." (fn. 1, p. 390) Features may be specified + or - or by an integral value 1, 2, ..., N, where N is the maximal degree of differentiation permitted for any linguistic feature. Note that N may vary from language to language, because languages admit different degrees of differentiation in such features as vowel height, stress, and tone. A set of feature specifications is called a unit or sometimes a segment. A string of units is called a matrix or a segmental string.

An elementary rule is of the form ZXAYW → ZXBYW, where A and B may be φ or any unit, A ≠ B; X and Y may be matrices (strings of units); and Z and W may be thought of as brackets labelled with syntactic categories such as 'S' or 'N' and so forth. A complex rule is a finite schema for generating a (potentially infinite) set of elementary rules.¹

¹Following Johnson (1972), we may define schema as follows. The empty string and each unit is a schema; schema may be combined by the operations of union, intersection, negation, Kleene star, and exponentiation over the set of units. Johnson also introduces variables and Boolean conditions into the schema. This "schema language" is an extremely powerful characterization of the class of regular languages over the alphabet of units; it is not used by practicing phonologists. Because a given complex rule can represent an infinite set of elementary rules, Johnson shows how the iterated, exhaustive application of one complex rule to a given segmental string can "effect virtually any computable mapping" (p. 10), i.e., can simulate any TM computation. Next, he proposes a more restricted "simultaneous" mode of application for a complex rule, which is only capable of performing a finite-state mapping in any application. This article considers the independent question of what computations can be performed by a set of elementary rules, and hence provides loose lower bounds for Johnson's model. We note in passing, however, that the problem of simply determining whether a given rule is subsumed by one of Johnson's schema is itself intractable, requiring at least exponential space.

The rules are organized into a linear sequence R1, R2, ..., Rn, and they are applied in order to an underlying matrix to obtain a surface matrix. Ignoring a great many issues that are important for linguistic reasons but irrelevant for our purposes, we may think of the derivational process as follows. The input to the derivation, or "underlying form," is a bracketed string of morphemes, the output of the syntax. The output of the derivation is the "surface form," a string of phonetic units. The derivation consists of a series of cycles. On each cycle, the ordered sequence of rules is applied to every maximal string of units containing no internal brackets, where each Ri+1 applies (or doesn't apply) to the result of applying the immediately preceding rule Ri, and so forth. Each rule applies simultaneously to all units in the current derivation string.
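A schematic rendering of such a simultaneous elementary-rule application (our simplification, not SPE notation: units are plain symbols, contexts are lists of units, and insertion is omitted):

    def apply_simultaneously(units, a, b, left, right):
        # Rewrite every unit equal to `a` whose flanking contexts match
        # `left` and `right`, in one simultaneous pass over the string.
        # b = None models a deletion rule.
        sites = [i for i in range(len(units))
                 if units[i] == a
                 and units[max(0, i - len(left)):i] == left
                 and units[i + 1:i + 1 + len(right)] == right]
        out = []
        for i, u in enumerate(units):
            if i in sites:
                if b is not None:
                    out.append(b)
            else:
                out.append(u)
        return out

    # apply_simultaneously(['A', 'A'], 'A', 'B', [], []) == ['B', 'B']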
For example, if we apply the rule A → B to the string AA, the result is the string BB. At the end of the cycle, the last rule Rn erases the innermost brackets, and then the next cycle begins with the rule R1. The derivation terminates when all the brackets are erased.

Some phonological processes, such as the assimilation of voicing across morpheme boundaries, are very common across the world's languages. Other processes, such as the arbitrary insertion of consonants or the substitution of one unit for another entirely distinct unit, are extremely rare or entirely unattested. For this reason, all adequate phonological theories must include an explicit measure of the naturalness of a phonological process. A phonological theory must also define a criterion to decide what constitutes two independent phonological processes and what constitutes a legitimate phonological generalization. Two central hypotheses of segmental phonology are (i) that the most natural grammars contain the fewest symbols and (ii) that a set of rules represent independent phonological processes when they cannot be combined into a single rule schema according to the intricate notational system first described in SPE. (Chapter 9 of Kenstowicz and Kisseberth (1979) contains a less technical summary of the SPE system and a discussion of subsequent modifications and emendations to it.)

3.1 Complexity of segmental recognition and generation.

Let us say a dictionary D is a finite set of the underlying phonological forms (matrices) of morphemes. These morphemes may be combined by concatenation and simple substitution (a syntactic category is replaced by a morpheme of that category) to form a possibly infinite set of underlying forms. Then we may characterize the two central computations of phonology as follows.

The phonological generation problem (PGP) is: Given a completely specified phonological matrix x and a segmental grammar g, compute the surface form y = g(x) of x.

The phonological recognition problem (PRP) is: Given a (partially specified) surface form y, a dictionary D of underlying forms, and a segmental grammar g, decide if the surface form y = g(x) can be derived from some underlying form x according to the grammar g, where x is constructed from the forms in D.

Lemma 3.1 The segmental model can directly simulate the computation of any deterministic Turing machine M on any input w, using only elementary rules.

Proof. We sketch the simulation. The underlying form x will represent the TM input w, while the surface form y will represent the halted state of M on w. The instantaneous description of the machine (tape contents, head position, state symbol) is represented in the string of units. Each unit represents the contents of a tape square. The unit representing the currently scanned tape square will also be specified for two additional features, to represent the state symbol of the machine and the direction in which the head will move. Therefore, three features are needed, with a number of specifications determined by the finite control of the machine M. Each transition of M is simulated by a phonological rule. A few rules are also needed to move the head position around, and to erase the entire derivation string when the simulated machine halts. There are only two key observations, which do not appear to have been noticed before. The first is that, contrary to popular misstatement, phonological rules are not context-sensitive.
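The first observation can be checked directly with the rule-application sketch above: a context-sensitive production can never shorten its string, but an elementary deletion rule can (a usage example of apply_simultaneously, under the same assumed encoding):

    # Deleting every B flanked by A's contracts the string, which no
    # context-sensitive production can do:
    apply_simultaneously(['A', 'B', 'A'], 'B', None, ['A'], ['A'])
    # == ['A', 'A']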
They are instead unrestricted rewriting rules, because they can perform deletions as well as insertions. (This is essential to the reduction, because it allows the derivation string to become arbitrarily long.) The second observation is that segmental rules can freely manipulate (insert and delete) boundary symbols, and thus it is possible to prolong the derivation indefinitely: we need only employ a rule Rn-1 at the end of the cycle that adds an extra boundary symbol to each end of the derivation string, unless the simulated machine has halted. The remaining details are omitted, but may be found in Ristad (1990). □

The immediate consequences are:

Theorem 1 PGP is undecidable.

Proof. By reduction to the undecidable problem w ∈ L(M)? of deciding whether a given TM M accepts an input w. The input to the generation problem consists of an underlying form x that represents w and a segmental grammar g that simulates the computations of M according to lemma 3.1. The output is a surface form y = g(x) that represents the halted configuration of the TM, with all but the accepting unit erased. □

Theorem 2 PRP is undecidable.

Proof. By reduction to the undecidable problem L(M) = ∅? of deciding whether a given TM M accepts any inputs. The input to the recognition problem consists of a surface form y that represents the halted accepting state of the TM, a trivial dictionary capable of generating Σ*, and a segmental grammar g that simulates the computations of the TM according to lemma 3.1. The output is an underlying form x that represents the input that M accepts. The only trick is to construct a (trivial) dictionary capable of generating all possible underlying forms Σ*. □

An important corollary to lemma 3.1 is that we can encode a universal Turing machine in a segmental grammar. If we use the four-symbol seven-state "smallest UTM" of Minsky (1969), then the resulting segmental model contains no more than three features, eight specifications, and 36 very simple rules (exact details in Ristad, 1990). As mentioned above, a central component of the segmental theory is an evaluation metric that favors simpler (i.e., shorter) grammars. This segmental grammar of universal computation appears to contain significantly fewer symbols than a segmental grammar for any natural language. Therefore, this corollary presents severe conceptual and empirical problems for the segmental theory.

Let us now turn to consider the range of plausible restrictions on the segmental model. At first glance, it may seem that the single most important computational restriction is to prevent rules from inserting boundaries. Rules that manipulate boundaries are called readjustment rules. They are needed for two reasons. The first is to reduce the number of cycles in a given derivation by deleting boundaries and flattening syntactic structure, for example to prevent the phonology from assigning too many degrees of stress to a highly-embedded sentence. The second is to rearrange the boundaries given by the syntax when the intonational phrasing of an utterance does not correspond to its syntactic phrasing (so-called "bracketing paradoxes"). In this case, boundaries are merely moved around, while preserving the total number of boundaries in the string. The only way to accomplish this kind of bracket readjustment in the segmental model is with rules that delete brackets and rules that insert brackets.
Therefore, if we wish to exclude rules that insert boundaries, we must provide an alternate mechanism for boundary readjustment. For the sake of argument--and because it is not too hard to construct such a boundary readjustment mechanism--let us henceforth adopt this restriction. Now how powerful is the segmental model?

Although the generation problem is now certainly decidable, the recognition problem remains undecidable, because the dictionary and syntax are both potentially infinite sources of boundaries: the underlying form x needed to generate any given surface form according to the grammar g could be arbitrarily long and contain an arbitrary number of boundaries. Therefore, the complexity of the recognition problem is unaffected by the proposed restriction on boundary readjustments. The obvious restriction then is to additionally limit the depth of embeddings by some fixed constant. (Chomsky and Halle flirt with this restriction for the linguistic reasons mentioned above, but view it as a performance limitation, and hence choose not to adopt it in their theory of linguistic competence.)

Lemma 3.2 Each derivational cycle can directly simulate any polynomial time alternating Turing machine (ATM) computation.

Proof. By reduction from a polynomial-depth ATM computation. The input to the reduction is an ATM M on input w. The output is a segmental grammar g and underlying form x s.t. the surface form y = g(x) represents a halted accepting computation iff M accepts w in polynomial time. The major change from lemma 3.1 is to encode the entire instantaneous description of the ATM state (i.e., tape contents, machine state, head position) in the features of a single unit. To do this requires a polynomial number of features, one for each possible tape square, plus one feature for the machine state and another for the head position. Now each derivation string represents a level of the ATM computation tree. The transitions of the ATM computation are encoded in a block B as follows. An AND-transition is simulated by a triple of rules, one to insert a copy of the current state, and two to implement the two transitions. An OR-transition is simulated by a pair of disjunctively-ordered rules, one for each of the possible successor states. The complete rule sequence consists of a polynomial number of copies of the block B. The last rules in the cycle delete halting states, so that the surface form is the empty string (or a reasonably-sized string of 'accepting' units) when the ATM computation halts and accepts. If, on the other hand, the surface form contains any non-halting or nonaccepting units, then the ATM does not accept its input w in polynomial time. The reduction may clearly be performed in time polynomial in the size of the ATM and its input. □

Because we have restricted the number of embeddings in an underlying form to be no more than a fixed language-universal constant, no derivation can consist of more than a constant number of cycles. Therefore, lemma 3.2 establishes the following theorems:

Theorem 3 PGP with bounded embeddings is PSPACE-hard.

Proof. The proof is an immediate consequence of lemma 3.2 and a corollary to the Chandra-Kozen-Stockmeyer theorem (1981) that equates polynomial time ATM computations and PSPACE DTM computations. □

Theorem 4 PRP with bounded embeddings is PSPACE-hard.

Proof. The proof follows from lemma 3.2 and the Chandra-Kozen-Stockmeyer result.
The dictionary consists of the lone unit that encodes the ATM starting configuration (i.e., input w, start state, head on leftmost square). The surface string is either the empty string or a unit that represents the halted accepting ATM configuration. □

There is some evidence that this is the most we can do, at least for the PGP. The requirement that the reduction be polynomial time limits us to specifying a polynomial number of features and a polynomial number of rules. Since each feature corresponds to a tape square, i.e., the ATM space resource, we are limited to PSPACE ATM computations. Since each phonological rule corresponds to a next-move relation, i.e., one time step of the ATM, we are thereby limited to specifying PTIME ATM computations. For the PRP, the dictionary (or syntax-interface) provides the additional ability to nondeterministically guess an arbitrarily long, boundary-free underlying form x with which to generate a given surface form g(x). This ability remains unused in the preceding proof, and it is not too hard to see how it might lead to undecidability.

We conclude this section by summarizing the range of linguistically plausible formal restrictions on the derivational process:

Feature system. As Chomsky and Halle noted, the SPE formal system is most naturally seen as having a variable (unbounded) set of features and specifications. This is because languages differ in the diacritics they employ, as well as differing in the degrees of vowel height, tone, and stress they allow. Therefore, the set of features must be allowed to vary from language to language, and in principle is limited only by the number of rules in the phonology; the set of specifications must likewise be allowed to vary from language to language. It is possible, however, to postulate the existence of a large, fixed, language-universal set of phonological features and a fixed upper limit to the number N of perceivable distinctions any one feature is capable of supporting. If we take these upper limits seriously, then the class of reductions described in lemma 3.2 would no longer be allowed. (It will be possible to simulate any ~ computation in a single cycle, however.)

Rule format. Rules that delete, change, exchange, or insert segments--as well as rules that manipulate boundaries--are crucial to phonological theorizing, and therefore cannot be crudely constrained. More subtle and indirect restrictions are needed. One approach is to formulate language-universal constraints on phonological representations, and to allow a segment to be altered only when it violates some constraint. McCarthy (1981:405) proposes a morpheme rule constraint (MRC) that requires all morphological rules to be of the form A → B / X, where A is a unit or φ, and B and X are (possibly null) strings of units. (X is the immediate context of A, to the right or left.) It should be obvious that the MRC does not constrain the computational complexity of segmental phonology.

4 Autosegmental Phonology

In the past decade, generative phonology has seen a revolution in the linguistic treatment of suprasegmental phenomena such as tone, harmony, infixation, and stress assignment. Although these autosegmental models have yet to be formalised, they may be briefly described as follows.
Rather than one-dimensional strings of segments, representations may be thought of as "a three-dimensional object that for concreteness one might picture as a spiral-bound notebook," whose spine is the segmental string and whose pages contain simple constituent structures that are independent of the spine (Halle 1985). One page represents the sequence of tones associated with a given articulation. By decoupling the representation of tonal sequences from the articulation sequence, it is possible for segmental sequences of different lengths to nonetheless be associated to the same tone sequence. For example, the tonal sequence Low-High-High, which is used by English speakers to express surprise when answering a question, might be associated to a word containing any number of syllables, from two (Brazil) to twelve (floccinaucinihilipilification) and beyond. Other pages (called "planes") represent morphemes, syllable structure, vowels and consonants, and the tree of articulatory (i.e., phonetic) features.

4.1 Complexity of autosegmental recognition.

In this section, we prove that the PRP for autosegmental models is NP-hard, a significant reduction in complexity from the undecidable and PSPACE-hard computations of segmental theories. (Note however that autosegmental representations have augmented--but not replaced--portions of the segmental model, and therefore, unless something can be done to simplify segmental derivations, modern phonology inherits the intractability of purely segmental approaches.)

Let us begin by thinking of the NP-complete 3-Satisfiability problem (3SAT) as a set of interacting constraints. In particular, every satisfiable Boolean formula in 3-CNF is a string of clauses C1, C2, ..., Cp in the variables x1, x2, ..., xn that satisfies the following three constraints: (i) negation: a variable xj and its negation x̄j have opposite truth values; (ii) clausal satisfaction: every clause Ci = (ai ∨ bi ∨ ci) contains a true literal (a literal is a variable or its negation); (iii) consistency of truth assignments: every unnegated literal of a given variable is assigned the same truth value, either 1 or 0.

Lemma 4.1 Autosegmental representations can enforce the 3SAT constraints.

Proof. The idea of the proof is to encode negation and the truth values of variables in features; to enforce clausal satisfaction with a local autosegmental process, such as syllable structure; and to ensure consistency of truth assignments with a nonlocal autosegmental process, such as a nonconcatenative morphology or long-distance assimilation (harmony). To implement these ideas we must examine morphology, harmony, and syllable structure.

Morphology. In the more familiar languages of the world, such as Romance languages, morphemes are concatenated to form words. In other languages, such as Semitic languages, a morpheme may appear more than once inside another morpheme (this is called infixation). For example, the Arabic word katab, meaning 'he wrote', is formed from the active perfective morpheme a doubly infixed to the ktb morpheme. In the autosegmental model, each morpheme is assigned its own plane. We can use this system of representation to ensure consistency of truth assignments. Each Boolean variable xi is represented by a separate morpheme μi, and every literal of xi in the string of formula literals is associated to the one underlying morpheme μi.

Harmony.
Assimilation is the common phonological process whereby some segment comes to share properties of an adjacent segment. In English, consonant nasality assimilates to immediately preceding vowels; assimilation also occurs across morpheme boundaries, as the varied surface forms of the prefix in- demonstrate: in+logical → illogical and in+probable → improbable. In other languages, assimilation is unbounded and can affect nonadjacent segments: these assimilation processes are called harmony systems. In the Turkic languages all suffix vowels assimilate the backness feature of the last stem vowel; in Capanahua, vowels and glides that precede a word-final deleted nasal (an underlying nasal segment absent from the surface form) are all nasalized. In the autosegmental model, each harmonic feature is assigned its own plane. As with morpheme-infixation, we can represent each Boolean variable by a harmonic feature, and thereby ensure consistency of truth assignments.

Syllable structure. Words are partitioned into syllables. Each syllable contains one or more vowels V (its nucleus) that may be preceded or followed by consonants C. For example, the Arabic word ka.tab consists of two syllables, the two-segment syllable CV and the three-segment closed syllable CVC. Every segment is assigned a sonority value, which (intuitively) is proportional to the openness of the vocal cavity. For example, vowels are the most sonorous segments, while stops such as p or b are the least sonorous. Syllables obey a language-universal sonority sequencing constraint (SSC), which states that the nucleus is the sonority peak of a syllable, and that the sonority of adjacent segments swiftly and monotonically decreases.

We can use the SSC to ensure that every clause Ci contains a true literal as follows. The central idea is to make literal truth correspond to the stricture feature, so that a true literal (represented as a vowel) is more sonorous than a false literal (represented as a consonant). Each clause Ci = (ai ∨ bi ∨ ci) is encoded as a segmental string C-xa-xb-xc, where C is a consonant of sonority 1. Segment xa has sonority 10 when literal ai is true, 2 otherwise; segment xb has sonority 9 when literal bi is true, 5 otherwise; and segment xc has sonority 8 when literal ci is true, 2 otherwise. Of the eight possible truth values of the three literals and the corresponding syllabifications, only the syllabification corresponding to three false literals is excluded by the SSC. In that case, the corresponding string of four consonants C-C-C-C has the sonority sequence 1-2-5-2. No immediately preceding or following segment of any sonority can result in a syllabification that obeys the SSC. Therefore, all Boolean clauses must contain a true literal. (Complete proof in Ristad, 1990.) □

The direct consequence of lemma 4.1 is:

Theorem 5 PRP for the autosegmental model is NP-hard.

Proof. By reduction to 3SAT. The idea is to construct a surface form that completely identifies the variables and their negation or lack of it, but does not specify the truth values of those variables. The dictionary will generate all possible underlying forms (infixed morphemes or harmonic strings), one for each possible truth assignment, and the autosegmental representation of lemma 4.1 will ensure that generated formulas are in fact satisfiable. □
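The clause encoding can be checked mechanically. The toy sketch below is ours, not Ristad's: a real syllabifier would enforce the full SSC, so we test only the necessary condition that a syllable requires a vowel-grade nucleus (sonority 8 or above). Exactly the all-false assignment is excluded:

    from itertools import product

    SONORITY = [(10, 2), (9, 5), (8, 2)]  # (true, false) for literals a, b, c

    def clause_sonorities(assignment):
        # The clause consonant C has sonority 1; each literal position
        # gets its 'true' sonority (a vowel) or 'false' one (a consonant).
        return [1] + [t if lit else f
                      for lit, (t, f) in zip(assignment, SONORITY)]

    for assignment in product([True, False], repeat=3):
        s = clause_sonorities(assignment)
        ok = max(s) >= 8   # some vowel-grade segment can serve as nucleus
        print(s, 'syllabifiable' if ok else 'excluded')
    # Only the all-false clause, with sonority sequence 1-2-5-2, lacks
    # a possible nucleus, so every clause must contain a true literal.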
5 Conclusion

In my opinion, the preceding proofs are unnatural, despite being true of the phonological models, because the phonological models themselves are unnatural. Regarding segmental models, the undecidability results tell us that the empirical content of the SPE theory is primarily in the particular rules postulated for English, and not in the extremely powerful and opaque formal system. We have also seen that symbol-minimization is a poor metric for naturalness, and that the complex notational system of SPE (not discussed here) is an inadequate characterization of the notion of "appropriate phonological generalization."²

Because not every segmental grammar g generates a natural set of sound patterns, why should we have any faith or interest in the formal system? The only justification for these formal systems then is that they are good programming languages for phonological processes, that clearly capture our intuitions about human phonology. But segmental theories are not such good programming languages. They are notationally-constrained and highly-articulated, which limits their expressive power; they obscurely represent phonological relations in rules and in the derivation process itself, and hide the dependency relations and interactions among phonological processes in rule ordering, disjunctive ordering, blocks, and cyclicity.³ Yet, despite all these opaque notational constraints, it is possible to write a segmental grammar for any decidable set.

A third unnatural feature is that the goal of enumerating structural descriptions has an indirect and computationally costly connection to the goal of language comprehension, which is to construct a structural description of a given utterance. When information is missing from the surface form, the generative model obligates itself to enumerate all possible underlying forms that might generate the surface form. When the generative process is lengthy, capable of deletions, or capable of enforcing complex interactions between nonlocal and local relations, then the logical problem of language comprehension will be intractable.

Natural phonological processes seem to avoid complexity and simplify interactions. It is hard to find a phonological constraint that is absolute and inviolable. There are always exceptions, exceptions to the exceptions, and so forth. Deletion processes like apocope, syncope, cluster simplification and stray erasure, as well as insertions, seem to be motivated by the necessity of modifying a representation to satisfy a phonological constraint, not to exclude representations or to generate complex sets, as we have used them here.

Finally, the goal of enumerating structural descriptions might not be appropriate for phonology and morphology, because the set of phonological words is only finite and phrase-level phonology is computationally simple. There is no need or rationale for employing such a powerful derivational system when all we are trying to do is capture the relatively little systematicity in a finite set of representations.

²The explication of what constitutes a "natural rule" is significantly more elusive than the symbol-minimization metric suggests. Explicit symbol-counting is rarely performed by practicing phonologists, and when it is, it results in unnatural rules.
Moreover, the goal of constructing the smallest grammar for a given (infinite) set is not attainable in principle, because it requires us to solve the undecidable TM equivalence problem. Nor does the symbol-counting metric constrain the generative or computational power of the formalism. Worst of all, the UTM simulation suggested above shows that symbol count does not correspond to "naturalness." In fact, two of the simplest grammars generate ∅ and Σ*, both of which are extremely unnatural.

³A further difficulty for autosegmental models (not brought out by the proof) is that the interactions among planes are obscured by the current practice of imposing an absolute order on the construction of planes in the derivation process. For example, in English phonology, syllable structure is constructed before stress is assigned, and then recomputed on the basis of the resulting stress assignment. A more natural approach would be to let stress and syllable structure computations intermingle in a nondirectional process.

6 References

Chandra, A., D. Kozen, and L. Stockmeyer. 1981. Alternation. J. ACM 28(1):114-133.

Chomsky, Noam and Morris Halle. 1968. The Sound Pattern of English. New York: Harper & Row.

Halle, Morris. 1985. "Speculations about the representation of words in memory." In Phonetic Linguistics, Essays in Honor of Peter Ladefoged, V. Fromkin, ed. Academic Press.

Johnson, C. Douglas. 1972. Formal Aspects of Phonological Description. The Hague: Mouton.

Kenstowicz, Michael and Charles Kisseberth. 1979. Generative Phonology. New York: Academic Press.

McCarthy, John. 1981. "A prosodic theory of nonconcatenative morphology." Linguistic Inquiry 12, 373-418.

Minsky, Marvin. 1969. Computation: Finite and Infinite Machines. Englewood Cliffs: Prentice Hall.

Ristad, Eric S. 1990. Computational structure of human language. Ph.D. dissertation, MIT Department of Electrical Engineering and Computer Science.
PARSING THE LOB CORPUS

Carl G. de Marcken
MIT AI Laboratory
Room 838
545 Technology Square
Cambridge, MA 02142
Internet: [email protected]

ABSTRACT

This paper¹ presents a rapid and robust parsing system currently used to learn from large bodies of unedited text. The system contains a multivalued part-of-speech disambiguator and a novel parser employing bottom-up recognition to find the constituent phrases of larger structures that might be too difficult to analyze. The results of applying the disambiguator and parser to large sections of the Lancaster/Oslo-Bergen corpus are presented.

INTRODUCTION

We have implemented and tested a parsing system which is rapid and robust enough to apply to large bodies of unedited text. We have used our system to gather data from the Lancaster/Oslo-Bergen (LOB) corpus, generating parses which conform to a version of current Government-Binding theory, and aim to use the system to parse 25 million words of text. The system consists of an interface to the LOB corpus, a part of speech disambiguator, and a novel parser. The disambiguator uses multivaluedness to perform, in conjunction with the parser, substantially more accurately than current algorithms. The parser employs bottom-up recognition to create rules which fire top-down, enabling it to rapidly parse the constituent phrases of a larger structure that might itself be difficult to analyze. The complexity of some of the free text in the LOB demands this, and we have not sought to parse sentences completely, but rather to ensure that our parses are accurate. The parser output can be modified to conform to any of a number of linguistic theories. This paper is divided into sections discussing the LOB corpus, statistical disambiguation, the parser, and our results.

¹This paper reports work done at the MIT Artificial Intelligence Laboratory. Support for this research was provided in part by grants from the National Science Foundation (under a Presidential Young Investigator award to Prof. Robert C. Berwick); the Kapor Family Foundation; and the Siemens Corporation.

THE LOB CORPUS

The Lancaster/Oslo-Bergen Corpus is an on-line collection of more than 1,000,000 words of English text taken from a variety of sources, broken up into sentences which are often 50 or more words long. Approximately 40,000 different words and 50,000 sentences appear in the corpus. We have used the LOB corpus in a standard way to build several statistical tables of part of speech usage. Foremost is a dictionary keying every word found in the corpus to the number of times it is used as a certain part of speech, which allows us to compute the probability that a word takes on a given part of speech. In addition, we recorded the number of times each part of speech occurred in the corpus, and built a digram array, listing the number of times one part of speech was followed by another. These numbers can be used to compute the probability of one category preceding another. Some disambiguation schemes require knowing the number of trigram occurrences (three specific categories in a row). Unfortunately, with a 132 category system and only one million words of tagged text, the statistical accuracy of LOB trigrams would be minimal. Indeed, even in the digram table we have built, fewer than 3,100 of the 17,500 digrams occur more than 10 times. When using the digram table in statistical schemes, we treat each of the 10,500 digrams which never occur as if they occur once.
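A sketch of how such tables can be gathered (our rendering; the input format of per-sentence (word, tag) pairs is an assumption):

    from collections import defaultdict

    def gather_statistics(tagged_sentences):
        lexicon = defaultdict(lambda: defaultdict(int))  # word -> tag -> count
        unigrams = defaultdict(int)                      # tag -> count
        digrams = defaultdict(int)                       # (tag1, tag2) -> count
        for sentence in tagged_sentences:
            for i, (word, tag) in enumerate(sentence):
                lexicon[word][tag] += 1
                unigrams[tag] += 1
                if i + 1 < len(sentence):
                    digrams[(tag, sentence[i + 1][1])] += 1
        return lexicon, unigrams, digrams

    # P(word is tag)  = lexicon[word][tag] / sum(lexicon[word].values())
    # P(tag1 flwd-by tag2) = max(digrams[(tag1, tag2)], 1) / unigrams[tag1]
    # (never-occurring digrams treated as occurring once, as described)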
STATISTICAL DISAMBIGUATION

Many different schemes have been proposed to disambiguate word categories before or during parsing. One common style of disambiguator, detailed in this paper, relies on statistical cooccurrence information such as that discussed in the section above. Specific statistical disambiguators are described in both DeRose 1988 and Church 1988. They can be thought of as algorithms which maximize a function over the possible selections of categories. For instance, for each word A^z in a sentence, the DeRose algorithm takes a set of categories as input. It outputs a particular category a^z such that the product of the probability that A^z is the category a^z and the probability that the category a^z occurs before the category a^{z+1} is maximized. Although such an algorithm might seem to be exponential in sentence length, since there are an exponential number of combinations of categories, its limited leftward and rightward dependencies permit a linear time dynamic programming method. Applying his algorithm to the Brown Corpus², DeRose claims an accuracy rate of 96%. Throughout this paper we will present accuracy figures in terms of how often words are incorrectly disambiguated. Thus, we write 96% correctness as an accuracy of 25 (words per error).

We have applied the DeRose scheme and several variations to the LOB corpus in order to find an optimal disambiguation method, and display our findings below in Figure 1. First, we describe the four functions we maximize (writing a^z for the category selected for word A^z in a sentence of n words):

Method A: Method A is also described in the DeRose paper. It maximizes the product of the probabilities of each category occurring before the next, or

    Π_{z=1..n-1} P(a^z is-flwd-by a^{z+1})

Method B: Method B is the other half of the DeRose scheme, maximizing the product of the probabilities of each category occurring for its word. Method B simply selects each word's most probable category, regardless of context.

    Π_{z=1..n} P(A^z is-cat a^z)

Method C: The DeRose scheme, or the maximum of

    Π_{z=1..n} P(A^z is-cat a^z) · Π_{z=1..n-1} P(a^z is-flwd-by a^{z+1})

Method D: No statistical disambiguator can perform perfectly if it only returns one part of speech per word, because there are words and sequences of words which can be truly ambiguous in certain contexts. Method D addresses this problem by on occasion returning more than one category per word. The DeRose algorithm moves from left to right, assigning to each category a^z an optimal path of categories leading from the start of the sentence to a^z, and a corresponding probability.

²The Brown Corpus is a large, tagged text database quite similar to the LOB.
A more obvious variation of DeRose, in which alternate categories are substituted into the DeRose disambiguation and accepted if they do not reduce the overall disambigua- tion probability significantly, would approach DeRose as F went to 1, but turns out not to perform as well as Method D. 3 Disambiguator Results: Each method was applied to the same 64,000 words of the LOB corpus. The results were compared to the LOB part of speech pre-tags, and are listed in Figure 1. 4 If a word was pre-tagged as being a proper noun, the proper noun category was included in the dictionary, but no special infor- mation such as capitalization was used to dis- tinguish that category from others during dis- ambiguation. For that reason, when judging accuracy, we provide two metrics: one simply comparing disambiguator output with the pre- tags, and another that gives the disambiguator the benefit of the doubt on proper nouns, under the assumption that an "oracle" pre-processor could distinguish proper nouns from contextual or capitalization information. Since Method D can return several categories for each word, we provide the average number of categories per word returned, and we also note the setting of the parameter F, which determines how many categories, on average, are returned. The numbers in Figure 1 show that sim- ple statistical schemes can accurately disam- biguate parts of speech in normal text, con- firming DeRose and others. The extraordinary 3 To be more precise, for a given average number of parts of speech returned V, the "sub- stitution" method is about 10% less accurate when 1 < V < 1.1 and is almost 50% less ac- curate for 1.1 < V < 1.2. 4 In all figures quoted, punctuation marks have been counted as words, and are treated as parts of speech by the statistical disambiguators. 244 Method: A B C D(1)D(.3) Accuracy: 7.9 17 23 25 41 with oracle: 8.8 18 30 31 54 of Cats: 1 1 1 1 1.04 Method: D(.1) D(.03) D(.01) D(.003) Accuracy: 70 126 265 1340 with oracle: 105 230 575 1840 No. of Cats: 1.09 1.14 1.20 1.27 Figure 1: Accuracy of various disambiguation strategies, in number of words per error. On average, the dictionary had 2.2 parts of speech listed per word. accuracy one can achieve by accepting an ad- ditional category every several words indicates that disambiguators can predict when their an- swers are unreliable. Readers may worry about correlation result- ing from using the same corpus to both learn from and disambiguate. We have run tests by first learning from half of the LOB (600,000 words) and then disambiguating 80,000 words of random text from the other half. The ac- curacy figures varied by less than 5% from the ones we present, which, given the size of the LOB, is to be expected. We have also applied each disambiguation method to several smaller (13,000 word) sets of sentences which were se- lected at complete random from throughout the LOB. Accuracy varied both up and down from the figures we present, by up to 20% in terms of words per error, but relative accuracy between methods remained constant. The fact the Method D with F = 1 (with F = 1 Method D returns only one category per word) performs as well or even better on the LOB than DeKose's algorithm indicates that, with exceptions, disambiguation has very lim- ited rightward dependence: Method D employs a one category lookahead, whereas DeRose's looks to the end of the sentence. This sug- gests that Church's strategy of using trigrams instead of digrams may be wasteful. 
Church manages to achieve results similar to or slightly better than DeRose's by defining the probability that a category A appears in a sequence ABC to be the number of times the sequence ABC appears divided by the number of times the sequence BC appears. In a 100 category system, this scheme requires an enormous table of data, which must be culled from tagged text. If the rightward dependence of disambiguation is small, as the data suggests, then the extra effort may be for naught. Based on our results, it is more efficient to use digrams in general and only mark special cases for trigrams, which would reduce space and learning requirements substantially.

Integrating Disambiguator and Parser: As the LOB corpus is pretagged, we could ignore disambiguation problems altogether, but to guarantee that our system can be applied to arbitrary texts, we have integrated a variation of disambiguation Method D with our parser. When a sentence is parsed, the parser is initially passed all categories returned by Method D with F = .01. The disambiguator substantially reduces the time and space the parser needs for a given parse, and increases the parser's accuracy. The parser introduces syntactic constraints that perform the remaining disambiguation well.

THE PARSER

Introduction: The LOB corpus contains unedited English, some of which is quite complex and some of which is ungrammatical. No known parser could produce full parses of all the material, and even one powerful enough to do so would undoubtedly take an impractical length of time. To facilitate the analysis of the LOB, we have implemented a simple parser which is capable of rapidly parsing simple constructs and of "failing gracefully" in more complicated situations. By trading completeness for accuracy, and by utilizing the statistical disambiguator, the parser can perform rapidly and correctly enough to usefully parse the entire LOB in a few hours. Figure 2 presents a sample parse from the LOB.

The parser employs three methods to build phrases. CFG-like rules are used to recognize lengthy, less structured constructions such as NPs, names, dates, and verb systems. Neighboring phrases can connect to build the higher level binary-branching structure found in English, and single phrases can be projected into new ones. The ability of neighboring phrase pairs to initiate the CFG-like rules permits context-sensitive parsing. And, to increase the efficiency of the parser, an innovative system of deterministically discarding certain phrases is used, called "lowering".

Some Parser Details: Each word in an input sentence is tagged as starting and ending at a specific numerical location. In the sentence "I saw Mary." the parser would insert the locations 0-4: 0 I 1 SAW 2 MARY 3 . 4

MR MICHAEL FOOT HAS PUT DOWN A RESOLUTION ON THE SUBJECT AND HE IS TO BE BACKED BY MR WILL GRIFFITHS , MP FOR MANCHESTER EXCHANGE .

> (IP (NP (PROP (N MR) (NAME MICHAEL) (NAME FOOT))) (I-BAR (I (HAVE HAS) (RP DOWN)) (VP (V PUT) (NP (DET A) (N RESOLUTION)))))
> (PP (P ON) (NP (DET THE) (N SUBJECT)))
> (CC AND)
> (IP (NP HE) (I-BAR (I) (VP (IS IS) (I-BAR (I (PP (P BY) (NP (PROP (N MR) (NAME WILL) (NAME GRIFFITHS)))) (TO TO) (IS BE)) (VP (V BACKED))))))
> (*CMA ",")
> (NP (N MP))
> (PP (P FOR) (NP (PROP (NAME MANCHESTER) (NAME EXCHANGE))))
> (*PER ".")

Figure 2: The parse of a sentence taken verbatim from the LOB corpus, printed without features. Notice that the grammar does not attach PP adjuncts.
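Returning to the digram/trigram comparison discussed at the start of this section: both kinds of estimate can be computed from a tagged corpus as ratios of counts. The following is a minimal sketch (our illustration, not Church's program), where tags is a hypothetical list of category strings drawn from such a corpus.

    from collections import Counter

    def sequence_estimates(tags):
        unigrams = Counter(tags)
        digrams = Counter(zip(tags, tags[1:]))
        trigrams = Counter(zip(tags, tags[1:], tags[2:]))
        # Digram estimate: P(b follows a) = count(ab) / count(a).
        p_digram = {(a, b): n / unigrams[a]
                    for (a, b), n in digrams.items()}
        # Church-style trigram estimate:
        # P(a appears in abc) = count(abc) / count(bc).
        p_trigram = {(a, b, c): n / digrams[(b, c)]
                     for (a, b, c), n in trigrams.items()}
        return p_digram, p_trigram

Note that with 100 categories the trigram table can approach a million entries, which is the space and learning burden the text argues against.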
A phrase consists of a category, starting and ending locations, and a collection of feature and tree information. A verb phrase extending from 1 to 3 would print as [VP 1 3]. Rules consist of a state name and a location. If a verb phrase recognition rule was firing in location 1, it would get printed as (VP0 at 1) where VP0 is the name of the rule state.

Phrases and rules which have yet to be processed are placed on a queue. At parse initialization, phrases are created from each word and its category(ies), and placed on the queue along with an end-of-sentence marker. The parse proceeds by popping the top rule or phrase off the queue and performing actions on it. Figure 3 contains a detailed specification of the parser algorithm, along with parts of a grammar. It should be comprehensible after the following overview and parse example.

When a phrase is popped off the queue, rules are checked to see if they fire on it, a table is examined to see if the phrase automatically projects to another phrase or creates a rule, and neighboring phrases are examined in case they can pair with the popped phrase to either connect into a new phrase or create a rule. Thus the grammar consists of three tables: the "rule-action-table", which specifies what action a rule in a certain state should take if it encounters a phrase with a given category and features; a "single-phrase-action-table", which specifies whether a phrase with a given category and features should project or start a rule; and a "paired-phrase-action-table", which specifies possible actions to take if two certain phrases abut each other.

For a rule to fire on a phrase, the rule must be at the starting position of the phrase. Possible actions that can be taken by the rule are: accepting the phrase (shift the dot in the rule); closing, or creating a phrase from all phrases accepted so far; or both, creating a phrase and continuing the rule to recognize a larger phrase should it exist. Interestingly, when an enqueued phrase is accepted, it is "lowered" to the bottom of the queue, and when a rule closes to create a phrase, all other phrases it may have already created are lowered also.

As phrases are created, a call is made to a set of transducer functions which generate more principled interpretations of the phrases, with appropriate features and tree relations. The representations they build are only for output, and do not affect the parse. An exception is made to allow the functions to project and modify features, which eases handling of subcategorization and agreement. The transducers can be used to generate a constant output syntax as the internal grammar varies, and vice versa.

New phrases and rules are placed on the queue only after all actions resulting from a given pop of the queue have been taken. The ordering of their placement has a dramatic effect on how the parse proceeds. By varying the queuing placement and the definition of when a parse is finished, the efficiency and accuracy of the parser can be radically altered. The parser orders these new rules and phrases by placing rules first, and then pushes all of them onto the stack. This means that new rules will always have precedence over newly created phrases, and hence will fire in a successive "rule chain". If all items were eventually popped off the stack, the ordering would be irrelevant. However, since the parse is stopped at the end-of-sentence marker, all phrases which have been "lowered" past the marker are never examined.
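A minimal sketch of these basic objects and of the queue with its "lowering" operation follows. The class and function names are our own assumptions, not the paper's code; only the printed forms [VP 1 3] and (VP0 at 1) come from the text.

    from collections import deque

    class Phrase:
        def __init__(self, cat, start, end, daughters=(), features=None):
            self.cat, self.start, self.end = cat, start, end
            self.daughters = list(daughters)
            self.features = features or {}
        def __repr__(self):
            return "[%s %d %d]" % (self.cat, self.start, self.end)

    class Rule:
        def __init__(self, state, loc):
            self.state, self.loc = state, loc
        def __repr__(self):
            return "(%s at %d)" % (self.state, self.loc)

    def initialize_queue(categorized_words):
        # categorized_words: one set of categories per word location,
        # as returned by the Method D disambiguator.
        q = deque()
        for i, cats in enumerate(categorized_words):
            for cat in cats:
                q.append(Phrase(cat, i, i + 1))
        q.append("END-OF-SENTENCE")
        return q

    def lower(q, item):
        # Move an enqueued item to the bottom of the queue; items lowered
        # past the end-of-sentence marker are never examined.
        if item in q:
            q.remove(item)
            q.append(item)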
The part of speech disambiguator can pass in several categories for any one word, which are ordered on the stack by likelihood, most probable first. When any lexical phrase is lowered to the back of the queue (presumably because it was accepted by some rule) all other lexical phrases associated with the same word are also lowered. We have found that this both speeds up parsing and increases accuracy. That this speeds up parsing should be obvious. That it increases accuracy is much less so. Remember that disambiguation Method D is substantially more accurate than DeRose's algorithm only because it can return more than one category per word. One might guess that if the parser were to lower all extra categories on the queue, nothing would have been gained. But the top-down nature of the parser is sufficient in most cases to "pick out" the correct category from the several available (see Milne 1988 for a detailed exposition of this).

The Parser Algorithm

To parse a sentence S of length n:
    Perform multivalued disambiguation of S.
    Create empty queue Q.
    Place End-of-Sentence marker on Q.
    Create new phrases from disambiguator output categories, and place them on Q.
    Until Q is empty, or top(Q) = End-of-Sentence marker,
        Let I = pop(Q).
        Let new-items = nil.
        If I is phrase [cat i j]:
            Let rules = all rules at location i.
            Let lefts = all phrases ending at location i.
            Let rights = all phrases starting at location j.
            Perform rule-actions(rules, {I}).
            Perform paired-phrase-actions(lefts, {I}).
            Perform paired-phrase-actions({I}, rights).
            Perform single-phrase-actions(I).
        If I is rule (state at i):
            Let phrases = all phrases starting at location i.
            Perform rule-actions({I}, phrases).
        Place each item in new-items on Q, rules first.
    Let i = 0.
    Until i = n,
        Output longest phrase [cat i j].
        Let i = j.

To perform rule-actions(rules, phrases):
    For all rules R = (state at i) in rules,
    And all phrases P = [cat+features i j] in phrases,
        If there is an action A in the rule-action-table with key (state, cat+features),
            If A = (accept new-state) or (accept-and-close new-state new-cat),
                Create new rule (new-state at j).
            If A = (close new-cat) or (accept-and-close new-state new-cat),
                Let daughters = the set of all phrases which have been accepted in the rule chain which led to R, including the phrase P.
                Let l = the leftmost starting location of any phrase in daughters.
                Create new phrase [new-cat l j] with daughters daughters.
                For all phrases p in daughters, perform lower(p).
                For all phrases p created (via accept-and-close) by the rule chain which led to R, perform lower(p).

To perform paired-phrase-actions(lefts, rights):
    For all phrases Pl = [left-cat+features l i] in lefts,
    And all phrases Pr = [right-cat+features i r] in rights,
        If there is an action A in the paired-phrase-action-table with key (left-cat+features, right-cat+features),
            If A = (connect new-cat),
                Create new phrase [new-cat l r] with daughters Pl and Pr.
            If A = (project new-cat),
                Create new phrase [new-cat i r] with daughter Pr.
            If A = (start-new-rule state),
                Create new rule (state at i).
            Perform lower(Pl) and lower(Pr).

To perform single-phrase-actions([cat+features i j]):
    If there is an action A in the single-phrase-action-table with key cat+features,
        If A = (project new-cat),
            Create new phrase [new-cat i j].
        If A = (start-rule new-state),
            Create new rule (new-state at i).

To perform lower(I):
    If I is in Q, remove it from Q and reinsert it at the end of Q.
    If I is a lexical level phrase [cat i i+1] created from the disambiguator output categories,
        For all other lexical level phrases p starting at i, perform lower(p).

When creating a new rule R:
    Add R to list of new-items.

When creating a new phrase P = [cat+features i j] with daughters D:
    Add P to list of new-items.
    If there is a hook function F in the hook-function-table with key cat+features, perform F(P,D).
    Hook functions can add features to P.
A section of a rule-action-table:

    Key (State, Cat)    Action
    DET0, DET           (accept DET1)
    DET1, JJ            (accept DET1)
    DET1, N +pl         (close NP)
    DET1, N             (accept-and-close DET2 NP)
    JJ0, JJ             (accept-and-close JJ0 AP)
    VP1, ADV            (accept VP1)

A section of a paired-phrase-action-table:

    Key (Cat, Cat)                      Action
    COMP, S                             (connect CP)
    NP +poss, NP                        (connect NP)
    NP, S                               (project CP)
    NP, VP ext-np +tense expect-nil     (connect S)
    NP, *CMA                            (start-rule CMA0)
    VP expect-pp, PP                    (connect VP)

A section of a single-phrase-action-table:

    Key (Cat)   Action(s)
    DET +pro    (start-rule DET0), (project NP)
    PRO         (project NP)
    N           (start-rule DET1)
    V           (start-rule VP3)
    IS          (start-rule VP1), (start-rule ISQ1)
    NAME        (start-rule NM1)

A section of a hook-function-table:

    Key (Cat)   Hook Function
    VP          Get-Subcategorization-Info
    S           Check-Agreement
    CP          Check-Comp-Structure

Figure 3: A pseudo-code representation of the parser algorithm, omitting implementation details. Included in table form are representative sections from a grammar.
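One plausible machine encoding of these tables, shown purely for illustration (a hypothetical sketch; features such as +pl are omitted for brevity), is a set of dictionaries keyed exactly as in Figure 3:

    RULE_ACTIONS = {
        ("DET0", "DET"): ("accept", "DET1"),
        ("DET1", "JJ"): ("accept", "DET1"),
        ("DET1", "N"): ("accept-and-close", "DET2", "NP"),
        ("VP1", "ADV"): ("accept", "VP1"),
    }

    SINGLE_PHRASE_ACTIONS = {
        "DET": [("start-rule", "DET0")],
        "PRO": [("project", "NP")],
        "V": [("start-rule", "VP3")],
    }

    PAIRED_PHRASE_ACTIONS = {
        ("COMP", "S"): ("connect", "CP"),
        ("NP", "S"): ("project", "CP"),
    }

    def rule_action(state, cat):
        # Returns e.g. ("accept-and-close", "DET2", "NP"), or None if the
        # rule state has no action for this category.
        return RULE_ACTIONS.get((state, cat))

With the tables in this form, each pop of the queue performs a handful of constant-time lookups, which is what makes the overall control loop cheap.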
A Parse in Detail: Figure 4 shows a parse of the sentence "The pastry chef placed the pie in the oven." In the figure, items to the left of the vertical line are the phrases and rules popped off the stack. To the right of each item is a list of all new items created as a result of it being popped.

    0 The 1 pastry 2 chef 3 placed 4 the 5 pie 6 in 7 the 8 oven 9 . 10

    1.  Phrase [DET 0 1]   | (DET0 at 0)
    2.  Rule (DET0 at 0)   | (DET1 at 1)
    3.  Rule (DET1 at 1)   | [NP 0 2] (DET2 at 2)   Lowering: [N 1 2]
    4.  Rule (DET2 at 2)   | [NP 0 3] (DET2 at 3)   Lowering: [NP 0 2] [N 2 3]
    5.  Rule (DET2 at 3)   |
    6.  Phrase [NP 0 3]    |
    7.  Phrase [V 3 4]     | (VP3 at 3)
    8.  Rule (VP3 at 3)    | [VP 3 4] (VP4 at 4)
    9.  Rule (VP4 at 4)    |
    10. Phrase [VP 3 4]    |
    11. Phrase [DET 4 5]   | (DET0 at 4)
    12. Rule (DET0 at 4)   | (DET1 at 5)
    13. Rule (DET1 at 5)   | [NP 4 6] (DET2 at 6)   Lowering: [N 5 6]
    14. Rule (DET2 at 6)   |
    15. Phrase [NP 4 6]    | [VP 3 6]
    16. Phrase [VP 3 6]    | [S 0 6]
    17. Phrase [S 0 6]     |
    18. Phrase [P 6 7]     |
    19. Phrase [DET 7 8]   | (DET0 at 7)
    20. Rule (DET0 at 7)   | (DET1 at 8)
    21. Rule (DET1 at 8)   | [NP 7 9] (DET2 at 9)   Lowering: [N 8 9]
    22. Rule (DET2 at 9)   |
    23. Phrase [NP 7 9]    | [PP 6 9]
    24. Phrase [PP 6 9]    |
    25. Phrase [*PER 9 10] |

    > (IP (NP (DET "The") (N "pastry") (N "chef")) (I-BAR (I) (VP (V "placed") (NP (DET "the") (N "pie")))))
    > (PP (P "in") (NP (DET "the") (N "oven")))
    > (*PER ".")

    Phrases left on Queue: [N 1 2] [N 2 3] [NP 0 2] [N 5 6] [N 8 9]

Figure 4: A detailed parse of the sentence "The pastry chef placed the pie in the oven". Dictionary look-up and disambiguation were performed prior to the parse.

At the start of the parse, phrases were created from each word and their corresponding categories, which were correctly (and uniquely) determined by the disambiguator. The first item is popped off the queue, this being the [DET 0 1] phrase corresponding to the word "the". The single-phrase action table indicates that a DET0 rule should be started at location 0; it immediately fires on "the", which is accepted, and the rule (DET1 at 1) is accordingly created and placed on the queue. This rule is then popped off the queue, and accepts the [N 1 2] corresponding to "pastry", also closing and creating the phrase [NP 0 2]. When this phrase is created, all queued phrases which contributed to it are lowered in priority, i.e., "pastry". The rule (DET2 at 2) is created to recognize a possibly longer NP, and is popped off the queue in line 4. Here much the same thing happens as in line 3, except that the [NP 0 2] previously created is lowered as the phrase [NP 0 3] is created. In line 5, the rule chain keeps firing, but there are no phrases starting at location 3 which can be used by the rule state DET2. The next item on the queue is the newly created [NP 0 3], but it neither fires a rule (which would have to be in location 0), nor finds any action in the single-phrase table, nor pairs with any neighboring phrase to fire an action in the paired-phrase table, so no new phrases or rules are created. Hence, the verb "placed" is popped, and the single-phrase table indicates that it should create a rule, which then immediately accepts "placed", creating a VP and placing the rule (VP4 at 4) in location 4.

The VP is popped off the stack, but not attached to [NP 0 3] to form a sentence, because the paired-phrase table specifies that for those two phrases to connect to become an S, the verb phrase must have the feature (expect nil), indicating that all of its argument positions have been filled. However, when the VP was created, the VP transducer call gave it the feature (expect . NP), indicating that it is lacking an NP argument.

In line 15, such an argument is popped from the stack and pairs with the VP as specified in the paired-phrase table, creating a new phrase, [VP 3 6]. This new VP then pairs with the subject, forming [S 0 6]. In line 18, the preposition "in" is popped, but it does not create any rules or phrases. Only when the NP "the oven" is popped does it pair to create [PP 6 9]. Although it should be attached as an argument to the verb, the subcategorization frames (contained in the expect feature of the VP) do not allow for a prepositional phrase argument. After the period is popped in line 25, the end-of-sentence marker is popped and the parse stops. At this time, 5 phrases have been lowered and remain on the queue. To choose which phrases to output, the parser picks the longest phrase starting at location 0, and then the longest phrase starting where the first ended, etc.
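The output-selection step just described is a simple greedy cover. A minimal sketch follows, using an assumed (category, start, end) representation for phrases:

    def select_output(phrases, n):
        # Pick the longest phrase starting at location 0, then the longest
        # phrase starting where that one ended, and so on up to location n.
        out, i = [], 0
        while i < n:
            candidates = [p for p in phrases if p[1] == i]
            if not candidates:
                break  # cannot happen if every word yields a lexical phrase
            best = max(candidates, key=lambda p: p[2])
            out.append(best)
            i = best[2]
        return out

Applied to the phrases of Figure 4, this returns [S 0 6], [PP 6 9] and [*PER 9 10], the three items printed at the bottom of the figure.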
Although this can stifle some correct parses 5 we have not found it to do so often. Keaders may notice that the use of special mechanisms to project single phrases and to connect neighboring phrases is unnecessary, since rules could perform the same task. However, since projection and binary attachment are so common, the parser's efficiency is greatly im- proved by the additional methods. The choice of transducer functions to create tree structure has roots in our previous expe- riences with principle-based structures. Mod- ern linguistic theories have shown themselves to be valuable constraint systems when applied to sentence tree-structure, but do not necessar- ily provide efficient means of initially generat- ing the structure. By using transducers to map For instance, the parser always generates the longest possible phrase it can from a se- quence of words, a heuristic which can in some cases fail. We have found that the only situ- ation in which this heuristic fails regularly is in verb argument attachment; with a more re- strictive subcategorization system, it would not be much of a problem. between surface structure and more principled trees, we have eliminated much of the compu- tational cost involved in principled representa- tions. The mechanism of lowering phrases off the stack is also intended to reduce computational cost, by introducing determinism into the parser. The effectiveness of the method can be seen in the tables of Figure 5, which compare the parser's speed with and without lowering. RESULTS We have used the parser, both with and without the lexical disambiguator, to analyze large portions of the LOB corpus. Our gram- mar is small; the three primary tables have a total of 134 actions, and the transducer func- tions are restricted to (outside of building tree structure) projecting categories from daughter phrases upward, checking agreement and case, and dealing with verb subcategorization fea- tures. Verb subcategorization information is obtained from the Oxford Advanced Learner's Dictionary of Contemporary English (Hornby et al 1973), which often includes unusual verb aspects, and consequently the parser tends to accept too many verb arguments. The parser identifies phrase boundaries sur- prisingly well, and usually builds structures up to the point of major sentence breaks such as commas or conjunctions. Disambiguation fail- ure is almost nonexistent. At the end of this pa- per is a sequence of parses of sentences from the corpus. The parses illustrate the need for a bet- ter subcategorization system and some method for dealing with conjunctions and parentheti- cals, which tend to break up sentences. Figure 5 presents some plots of parser speed on a random 624 sentence subset of the LOB, and compares parser performance with and with- out lowering, and with and without disambigua- tion. Graphs 1 and 2 (2 is a zoom of 1) illustrate the speed of the parser, and Graph 3 plots the number of phrases the parser returns for a sen- tence of a given length, which is a measure of how much coverage the grammar has and how much the parser accomplishes. Graph 4 plots the number of phrases the parser builds during an entire parse, a good measure of the work it performs. Not surprisingly, there is a very smooth curve relating the number of phrases built and parse time. Graphs 5 and 6 are in- cluded to show the necessity of disambiguation and lowering, and indicate a substantial reduc- tion in speed if either is absent. There is also a substantial reduction in accuracy. 
In the no dis- ambiguation case, the parser is passed all cate- 249 (seconds) 20 18 16 14 ° 12 o °° m 10 m ° m 8 ° [] m -6 ° °°° o [] ° ~D ° o m --A oa[]l~ °0 00 I~ g 0 ° u [] °0 o -2 "f i I Graph 1: # of words in sentence t (seconds) -4 o [] ° m -3.5 " ° ° o [] o ° "3 [] o go o [] °o o 0° o ° B ° [] ° .= ° o o ° ° •m m a aag "2.5 ~° ° = ° =° ° ° DO IO -OB °0 2 0. o oOoo%=. [] [] [] Dm [~ as ° = S • ° O0~°O °HI [] ° I• ° 1.5 00 _ e - °o ° a ° 2 R° OaOH= oB 0° 4 o °°u°|° B=HBB •age= =° / Bm° a = a ° °',.•B°m'!inU',|o ° o°%° ° ""°B B" =° .°,,hUll,,,•• ° ) 3,o 3; ,,o 45 I Graph 2: # of words in sentence of phrases returned - 30 o ° ° m ° o ° 7O I 0 O 5O I -25 o ° -20 ° ° [] ° ° ° o [] 15 " ~ o °• "= o o ~ =. ?. .=?. . .=. .= ° ° [] ° o ° ....... °moo ° mm ° = =,==== =.= =-o ~,,~ [] 10 . . . . . . o ¢D .......... °o~m0 m m ® ~ ==%===~°=~=. % -5 ~ ............... m I = o ~= $0 40 50 60 70 80 ......... ~ ..... ° I a i o I I I I I Graph 3: # of words in sentence Figure 4: Performance graphs of parser on subset of LOB. See text for explanations. of phrases built - 200 18O 160 140 - 120 = o °[] a ° ° a ° [] === = =° D o ° o ° [] m ° -100 /~-- °=a aa~ [] aaa R a ~u o ,,,B =° _E~6. ~ ~, ,0 oo Graph 4: ~ o/words in sentence (seconds) 60 = o 50 = 70 I 40 [] [] [] m ° 30 ° ° m o o ° u ° | o o 20 °° °° ° [] ,0 °°,: :;:oi ° :°'.== o ° °°°HI;IgBg=§°oo~°° ,oN,|8111B' I" 2C~ 30 40 50 f 1 I Graph 5[No Dis.]: # of words in sentence (seconds) -60 ° ° ° - 50 ° 60 I -40 ° m a O o o -30 ° ° m m 0 a [] O D -20 = []= = °B = ° ~ ~ °° °°o ° D ° ° o o o a D Oa Q ° ooo ° B ° "10 a . = °%°= =° = °= e °" = ° o g 01~ -= § " a ° " B 0 ° oo .° ; _l===°°gliil ailBgaal ,leO=aS e 50 60 I I Graph 6[No Lowering]: # of words in sentence Figure 5: Performance graphs of parser on subset of LOB. See text for explanations. gories every word can take, in random order. Parser accuracy is a difficult statistic to mea- sure. We have carefully analyzed the parses • assigned to many hundreds of LOB sentences, and are quite pleased with the results. A1- though there are many sentences where the parser is unable to build substantial structure, it rarely builds incorrect phrases. A pointed exception is the propensity for verbs to take too many arguments. To get a feel for the parser's ac- 250 curacy, examine the Appendix, which contains unedited parses from the LOB. BIBLIOGRAPHY Church, K. W. 1988 A Stochastic Parts Pro- gram and Noun Phrase Parser for Unrestricted Text. Proceedings of the Second Conference on Applied Natural Language Processing, 136-143 DeRose, S. J. 1988 Grammatical Category Disambiguation by Statistical Optimization. Com- putational Linguistics 14:31-39 Oxford Advanced Learner's Dictionary of Con- temporary English, eds. Hornby, A.S., and Covie, A. P. (Oxford University Press, 1973) Milne, 1%. Lexical Ambiguity Resolution in a Deterministic Parser, in Le~.icaI Ambiguity Res- olution, ed. by S. Small et al (Morgan Kauf- mann, 1988) APPENDIX: Sample Parses The following are several sentences from the beginning of the LOB, parsed with our system. Because of space considerations, indenting does not necessarily reflect tree structure. A MOVE TO STOP MR GAITSKELL FROM NOMINATING ANY MORE LABOUR LIFE PEERS IS TO BE MADE AT A MEETING OF LABOURMPS TOMORROW . 
> (NP (DET A) (N MOVE))
> (I-BAR (I (TO TO)) (VP (V STOP) (NP (PROP (N MR) (NAME GAITSKELL))) (P FROM)))
> (I-BAR (I) (VP (V NOMINATING) (NP (DET ANY) (AP MORE) (N LABOUR) (N LIFE) (N PEERS))))
> (I-BAR (I) (VP (IS IS) (I-BAR (I (NP (N TOMORROW)) (TO TO) (IS BE)) (V MADE) (P AT) (NP (NP (DET A) (N MEETING)) (PP (P OF) (NP (N LABOUR) (N MPS)))))))
> (*PER .)

THOUGH THEY MAY GATHER SOME LEFT-WING SUPPORT , A LARGE MAJORITY OF LABOUR MPS ARE LIKELY TO TURN DOWN THE FOOT-GRIFFITHS RESOLUTION .

> (CP (C-BAR (COMP THOUGH)) (IP (NP THEY) (I-BAR (I (MD MAY)) (VP (V GATHER) (NP (DET SOME) (JJ LEFT-WING) (N SUPPORT))))))
> (*CMA ,)
> (IP (NP (NP (DET A) (JJ LARGE) (N MAJORITY)) (PP (P OF) (NP (N LABOUR) (N MPS)))) (I-BAR (I) (VP (IS ARE) (AP (JJ LIKELY)))))
> (I-BAR (I (TO TO) (RP DOWN)) (VP (V TURN) (NP (DET THE) (PROP (NAME FOOT-GRIFFITHS)) (N RESOLUTION))))
> (*PER .)

MR FOOT'S LINE WILL BE THAT AS LABOUR MPS OPPOSED THE GOVERNMENT BILL WHICH BROUGHT LIFE PEERS INTO EXISTENCE , THEY SHOULD NOT NOW PUT FORWARD NOMINEES .

> (IP (NP (NP (PROP (N MR) (NAME FOOT))) (NP (N LINE))) (I-BAR (I (MD WILL)) (VP (IS BE) (NP THAT))))
> (CP (C-BAR (COMP AS)) (IP (NP (N LABOUR) (N MPS)) (I-BAR (I) (VP (V OPPOSED) (NP (NP (DET THE) (N GOVERNMENT) (N BILL)) (CP (C-BAR (COMP WHICH)) (IP (NP) (I-BAR (I) (VP (V BROUGHT) (NP (N LIFE) (N PEERS))))))) (P INTO) (NP (N EXISTENCE))))))
> (*CMA ,)
> (IP (NP THEY) (I-BAR (I (ADV FORWARD) (MD SHOULD) (XNOT NOT) (ADV NOW)) (VP (V PUT) (NP (N NOMINEES)))))
> (*PER .)

THE TWO RIVAL AFRICAN NATIONALIST PARTIES OF NORTHERN RHODESIA HAVE AGREED TO GET TOGETHER TO FACE THE CHALLENGE FROM SIR ROY WELENSKY , THE FEDERAL PREMIER .

> (IP (NP (NP (DET THE) (NUM (CD TWO)) (JJ RIVAL) (JJ AFRICAN) (JJ NATIONALIST) (N PARTIES)) (PP (P OF) (NP (PROP (NAME NORTHERN) (NAME RHODESIA))))) (I-BAR (I (HAVE HAVE)) (VP (V AGREED) (I-BAR (I (ADV TOGETHER) (TO TO)) (VP (V GET) (I-BAR (I (TO TO)) (VP (V FACE) (NP (DET THE) (N CHALLENGE)) (P FROM) (NP (NP (PROP (N SIR) (NAME ROY) (NAME WELENSKY))) (*CMA ,) (NP (DET THE) (JJ FEDERAL) (N+++ PREMIER))))))))))
> (*PER .)
AUTOMATICALLY EXTRACTING AND REPRESENTING COLLOCATIONS FOR LANGUAGE GENERATION*

Frank A. Smadja† and Kathleen R. McKeown
Department of Computer Science
Columbia University
New York, NY 10027

ABSTRACT

Collocational knowledge is necessary for language generation. The problem is that collocations come in a large variety of forms. They can involve two, three or more words, these words can be of different syntactic categories and they can be involved in more or less rigid ways. This leads to two main difficulties: collocational knowledge has to be acquired and it must be represented flexibly so that it can be used for language generation. We address both problems in this paper, focusing on the acquisition problem. We describe a program, Xtract, that automatically acquires a range of collocations from large textual corpora and we describe how they can be represented in a flexible lexicon using a unification based formalism.

1 INTRODUCTION

Language generation research on lexical choice has focused on syntactic and semantic constraints on word choice and word ordering. Collocational constraints, however, also play a role in how words can co-occur in the same sentence. Often, the use of one word in a particular context of meaning will require the use of one or more other words in the same sentence. While phrasal lexicons, in which lexical associations are pre-encoded (e.g., [Kukich 83], [Jacobs 85], [Danlos 87]), allow for the treatment of certain types of collocations, they also have problems. Phrasal entries must be compiled by hand which is both expensive and incomplete. Furthermore, phrasal entries tend to capture rather rigid, idiomatic expressions. In contrast, collocations vary tremendously in the number of words involved, in the syntactic categories of the words, in the syntactic relations between the words, and in how rigidly the individual words are used together. For example, in some cases, the words of a collocation must be adjacent, while in others they can be separated by a varying number of other words.

* The research reported in this paper was partially supported by DARPA grant N00039-84-C-0165, by NSF grant IRT-84-51438 and by ONR grant N00014-79-C-0529.
† Most of this work is also done in collaboration with Bell Communication Research, 445 South Street, Morristown, NJ 07960-1910.

In this paper, we identify a range of collocations that are necessary for language generation, including open compounds of two or more words, predicative relations (e.g., subject-verb), and phrasal templates representing more idiomatic expressions. We then describe how Xtract automatically acquires the full range of collocations using a two stage statistical analysis of large domain specific corpora. Finally, we show how collocations can be efficiently represented in a flexible lexicon using a unification based formalism. This is a word based lexicon that has been macrocoded with collocational knowledge. Unlike a purely phrasal lexicon, we thus retain the flexibility of word based lexicons which allows for collocations to be combined and merged in syntactically acceptable ways with other words or phrases of the sentence. Unlike pure word based lexicons, we gain the ability to deal with a variety of phrasal entries.
Furthermore, while there has been work on the automatic retrieval of lexical information from text [Garside 87], [Choueka 88], [Klavans 88], [Amsler 89], [Boguraev & Briscoe 89], [Church 89], none of these systems retrieves the entire range of collocations that we identify and no real effort has been made to use this information for language generation [Boguraev & Briscoe 89].

In the following sections, we describe the range of collocations that we can handle, the fully implemented acquisition method, results obtained, and the representation of collocations in Functional Unification Grammars (FUGs) [Kay 79]. Our application domain is the domain of stock market reports and the corpus on which our expertise is based consists of more than 10 million words taken from the Associated Press news wire.

2 SINGLE WORDS TO WHOLE PHRASES: WHAT KIND OF LEXICAL UNITS ARE NEEDED?

Collocational knowledge indicates which members of a set of roughly synonymous words co-occur with other words and how they combine syntactically. These affinities can not be predicted on the basis of semantic or syntactic rules, but can be observed with some regularity in text [Cruse 86]. We have found a range of collocations from word pairs to whole phrases, and as we shall show, this range will require a flexible method of representation.

Open Compounds. Open compounds involve uninterrupted sequences of words such as "stock market," "foreign exchange," "New York Stock Exchange," "The Dow Jones average of 30 industrials." They can include nouns, adjectives, and closed class words and are similar to the type of collocations retrieved by [Choueka 88] or [Amsler 89]. An open compound generally functions as a single constituent of a sentence. More open compound examples are given in Figure 1.¹

Predicative Relations consist of two (or several) words repeatedly used together in a similar syntactic relation. These lexical relations are harder to identify since they often correspond to interrupted word sequences in the corpus. They are also the most flexible in their use. This class of collocations is related to Mel'čuk's Lexical Functions [Mel'čuk 81], and Benson's L-type relations [Benson 86]. Within this class, Xtract retrieves subject-verb, verb-object, noun-adjective, verb-adverb, verb-verb and verb-particle predicative relations. Church [Church 89] also retrieves verb-particle associations. Such collocations require a representation that allows for a lexical function relating two or more words. Examples of such collocations are given in Figure 2.²

Phrasal templates consist of idiomatic phrases containing one, several or no empty slots. They are extremely rigid and long collocations. These almost complete phrases are quite representative of a given domain. Due to their slightly idiosyncratic structure, we propose representing and generating them by simple template filling. Although some of these could be generated using a word based lexicon, in general, their usage gives an impression of fluency that cannot be equaled with compositional generation alone.
Xtract has retrieved several dozens of such templates from our stock market corpus, including:

"The NYSE's composite index of all its listed common stocks rose *NUMBER* to *NUMBER*"
"On the American Stock Exchange the market value index was up *NUMBER* at *NUMBER*"
"The Dow Jones average of 30 industrials fell *NUMBER* points to *NUMBER*"
"The closely watched index had been down about *NUMBER* points in the first hour of trading"
"The average finished the week with a net loss of *NUMBER*"

¹ All the examples related to the stock market domain have been actually retrieved by Xtract.
² In the examples, the "⇒" sign represents a gap of zero, one or several words. The "⇔" sign means that the two words can be in any order.

3 THE ACQUISITION METHOD: Xtract

In order to produce sentences containing collocations, a language generation system must have knowledge about the possible collocations that occur in a given domain. In previous language generation work [Danlos 87], [Iordanskaja 88], [Nirenburg 88], collocations are identified and encoded by hand, sometimes using the help of lexicographers (e.g., Danlos' [Danlos 87] use of Gross' [Gross 75] work). This is an expensive and time-consuming process, and often incomplete. In this section, we describe how Xtract can automatically produce the full range of collocations described above.

Xtract has two main components, a concordancing component, Xconcord, and a statistical component, Xstat. Given one or several words, Xconcord locates all sentences in the corpus containing them. Xstat is the co-occurrence compiler. Given Xconcord's output, it makes statistical observations about these words and other words with which they appear. Only statistically significant word pairs are retained. In [Smadja 89a] and [Smadja 88] we detail an earlier version of Xtract and its output, and in [Smadja 89b] we compare our results both qualitatively and quantitatively to the lexicon used in [Kukich 83]. Xtract has also been used for information retrieval in [Maarek & Smadja 89]. In the updated version of Xtract we describe here, statistical significance is based on four parameters, instead of just one, and a second stage of processing has been added that looks for combinations of word pairs produced in the first stage, resulting in multiple word collocations.

Stage one: In the first phase, Xconcord is called for a single open class word and its output is pipelined to Xstat which then analyses the distribution of words in this sample. The output of this first stage is a list of tuples (w1, w2, distance, strength, spread, height, type), where (w1, w2) is a lexical relation between two open-class words (w1 and w2). Some results are given in Table 1. "Type" represents the syntactic categories of w1 and w2.³ "Distance" is the relative distance between the two words, w1 and w2 (e.g., a distance of 1 means w2 occurs immediately after w1 and a distance of -1 means it occurs immediately before it). A different tuple is produced for each statistically significant word pair and distance. Thus, if the same two words occur equally often separated by two different distances, they will appear twice in the list. "Strength" (also computed in the earlier version of Xtract) indicates how strongly the two words are related (see [Smadja 89a]). "Spread" is the distribution of the relative distance between the two words; thus, the larger the "spread" the more rigidly they are used in combination to one another.
"Height" combines the factors of "spread" 3In order to get part of speech information we use a stochastic word tagger developed at AT&T Bell Laborato- ries by Ken Church [Church 88] 253 wordl stock president trade Table 1: Some binary lexical relations. word2 market vice deficit distance -I strength 47.018 40.6496 30.3384 spread 28.5 29.7 28.4361 11457.1 10757 7358.87 vre r avmcm'am ;,,,Lo¢,~,c-- i~fft~,,,,~l , illll(;t£1 I~.'lgl~:l~i Ig~llI,~lt:.. composite blue totaled closing -1 12.3874 29.0682 3139.89 index chip -1 -4 -1 -2 -1 -1 10.078 shares price stocks volume 20.7815 23.0465 27.354 16.8724 19.3312 13.5184 5.43739 listed takeover takeovers takeover takeovers 30 29.3682 25.9415 23.8696 29.7 28.1071 29.3682 25.7917 totaled bid hostile o~er 2721.06 5376.87 4615.48 4583.57 4464.89 4580.39 3497.67 1084.05 I ll"i~.~ l ' _ll-~,'l I~,[lll Jill '[ Ib']l~$'l [ Type NN NN NN NN NN NN NJ NJ NJ NV NV NV NV NN NJ iNN I NV Table 2: Concordances for "average indus~rial" On Tuesday the Dow Jones industrial average rose 26.28 points to 2 304.69. The Dow ... a selling spurt that sent the Dow On Wednesday the Dow The Dow The Dow ... Thursday with the Dow ... swelling the Dow The rise in the Dow Jones industrial average Jones industrial average Jones industrial average Jones industrial average Jones industrial average Jones industrial average Jones industrial average Jones industrial average went up 11.36 points today. down sharply in the first hour of trading. showed some strength as ... was down 17.33 points to 2,287.36 ... had the biggest one day gain of its history ... soaring a record 69.89 points to ... by more than 475 points in the process ... was the biggest since a 54.14 point jump on ... Table The NYSE s composite index The NYSE s composite index The NYSE s composite index The NYSE s composite index The NYSE s composite index The NYSE s composite index The NYSE s composite index The NYSE s composite index The NYSE s composite index 3: Concordances for "composite indez" of all its listed common stocks fell 1.76 to 164.13. of all its listed common stocks fell 0.98 to 164.91. of all its listed common stocks fell 0.96 to 164.93. of all its listed common stocks fell 0.91 to 164.98. of all its listed common stocks rose 1.04 to 167.08. of all its listed common stocks rose 0.76 of all its listed common stocks rose 0.50 to 166.54. of all its listed common stocks rose 0.69 to 166.73. of all its listed common stocks fell 0.33 to 170.63. 
    "leading industrialized countries"
    "the Dow Jones average of 30 industrials"
    "bear/bull market"
    "the Dow Jones industrial average"
    "The NYSE's composite index of all its listed common stocks"
    "Advancing/winning/losing/declining issues"
    "The NASDAQ composite index for the over the counter market"
    "stock market"
    "central bank"
    "leveraged buyout"
    "the gross national product"
    "blue chip stocks"
    "White House spokesman Marlin Fitzwater"
    "takeover speculation/strategist/target/threat/attempt"
    "takeover bid/battle/defense/efforts/fight/law/proposal/rumor"

Figure 1: Some examples of open compounds.

    noun adjective:  "heavy/light ⇒ trading/smoker/traffic"
    noun adjective:  "high/low ⇒ fertility/pressure/bounce"
    noun adjective:  "large/small ⇒ crowd/retailer/client"
    subject verb:    "index ⇒ rose"
    subject verb:    "stock ⇒ [rose, fell, closed, jumped, continued, declined, crashed, ...]"
    subject verb:    "advancers ⇒ [outnumbered, outpaced, overwhelmed, outstripped]"
    verb adverb:     "trade ⇔ actively," "mix ⇔ narrowly," "use ⇔ widely," "watch ⇔ closely"
    verb object:     "posted ⇒ gain"
    verb object:     "momentum ⇒ [pick up, build, carry over, gather, lose, gain]"
    verb particle:   "take ⇒ from," "raise ⇒ by," "mix ⇒ with"
    verb verb:       "offer to [acquire, buy]"
    verb verb:       "agree to [acquire, buy]"

Figure 2: Some examples of predicative collocations.

Church [Church 89] produces results similar to those presented in the table using a different statistical method. However, Church's method is mainly based on the computation of the "strength" attribute, and it does not take into account "spread" and "height". As we shall see, these additional parameters are crucial for producing multiple word collocations and distinguishing between open compounds (words are adjacent) and predicative relations (words can be separated by varying distance).

Stage two: In the second phase, Xtract first uses the same components but in a different way. It starts with the pairwise lexical relations produced in Stage one to produce multiple word collocations, then classifies the collocations as one of the three classes identified above, and finally attempts to determine the syntactic relations between the words of the collocation. To do this, Xtract studies the lexical relations in context, which is exactly what lexicographers do. For each entry of Table 1, Xtract calls Xconcord on the two words w1 and w2 to produce the concordances. Tables 2 and 3 show the concordances (output of Xconcord) for the input pairs "average-industrial" and "composite-index". Xstat then compiles information on the words surrounding both w1 and w2 in the corpus. This stage allows us to filter out incorrect associations such as "blue-stocks" or "advancing-market" and replace them with the appropriate ones, "blue chip stocks," "the broader market in the NYSE advancing issues." This stage also produces phrasal templates such as those given in the previous section. In short, stage two filters inappropriate results and combines word pairs to produce multiple word combinations.
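The following is a minimal sketch of the stage-two idea, under our own interpretation rather than the published algorithm: examine the concordance lines for a word pair and keep the positions where the same word recurs in a large fraction of the lines, leaving a slot elsewhere. On lines like those of Table 3, the changing numbers would fall below threshold and become slots, yielding a phrasal-template-like pattern.

    from collections import Counter

    def extract_template(concordance, threshold=0.9):
        # concordance: lists of tokens, each trimmed to start at w1, so
        # that position 0 is aligned across lines (an alignment that
        # Xconcord-style retrieval could provide).
        width = min(len(s) for s in concordance)
        template = []
        for pos in range(width):
            counts = Counter(s[pos] for s in concordance)
            word, n = counts.most_common(1)[0]
            # Keep rigidly recurring words; leave a slot otherwise.
            template.append(word if n / len(concordance) >= threshold
                            else "*SLOT*")
        return template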
To make the results directly usable for language generation we are currently investigating the use of a bottom-up parser in combination with stage two in order to classify the collocations according to syntactic criteria. For example, if the lexical relation involves a noun and a verb, it determines if it is a subject-verb or a verb-object collocation. We plan to do this using a deterministic bottom-up parser developed at Bell Communication Research [Abney 89] to parse the concordances. The parser would analyse each sentence of the concordances and the parse trees would then be passed to Xstat.

Sample results of Stage two are shown in Figures 1, 2 and 3. Figure 3 shows phrasal templates and open compounds. Xstat notices that the words "composite" and "index" are used very rigidly throughout the corpus. They almost always appear in one of the two sentences. The lexical relation composite-index thus produces two phrasal templates. For the lexical relation average-industrial Xtract produces an open compound collocation as illustrated in Figure 3. Stage two also confirms pairwise relations. Some examples are given in Figure 2. By examining the parsed concordances and extracting recurring patterns, Xstat produces all three types of collocations.

    lexical relation     collocation
    composite-index      "The NYSE's composite index of all its listed common stocks fell *NUMBER* to *NUMBER*"
    composite-index      "The NYSE's composite index of all its listed common stocks rose *NUMBER* to *NUMBER*."
    close-industrial     "Five minutes before the close the Dow Jones average of 30 industrials was up/down *NUMBER* to/from *NUMBER*"
    average-industrial   "the Dow Jones industrial average."
    advancing-market     "the broader market in the NYSE advancing issues"
    block-trading        "Jack Baker head of block trading in Shearson Lehman Brothers Inc."
    cable-television     "cable television"

Figure 3: Example collocations output of stage two.

4 HOW TO REPRESENT THEM FOR LANGUAGE GENERATION?

Such a wide variety of lexical associations would be difficult to use with any of the existing lexicon formalisms. We need a flexible lexicon capable of using single word entries, multiple word entries as well as phrasal templates and a mechanism that would be able to gracefully merge and combine them with other types of constraints. The idea of a flexible lexicon is not novel in itself. The lexical representation used in [Jacobs 85] and later refined in [Besemer & Jacobs 87] could also represent a wide range of expressions. However, in this language, collocational, syntactic and selectional constraints are mixed together into phrasal entries. This makes the lexicon both difficult to use and difficult to compile. In the following we briefly show how FUGs can be successfully used as they offer a flexible declarative language as well as a powerful mechanism for sentence generation.

We have implemented a first version of Cook, a surface generator that uses a flexible lexicon for expressing co-occurrence constraints. Cook uses FUF [Elhadad 90], an extended implementation of FUGs, to uniformly represent the lexicon and the syntax as originally suggested by Halliday [Halliday 66]. Generating a sentence is equivalent to unifying a semantic structure (Logical Form) with the grammar. The grammar we use is divided into three zones, the "sentential," the "lexical" and the "syntactic" zone. Each zone contains constraints pertaining to a given domain and the input logical form is unified in turn with the three zones. As it is, full backtracking across the three zones is allowed.

• The sentential zone contains the phrasal templates against which the logical form is unified first. A sentential entry is a whole sentence that should be used in a given context. This context is specified by subparts of the logical form given as input. When there is a match at this point, unification succeeds and generation is reduced to simple template filling.

• The lexical zone contains the information used to lexicalize the input. It contains collocational information along with the semantic context in which to use it. This zone contains predicative and open compound collocations. Its role is to trigger phrases or words in the presence of other words or phrases. Figure 5 is a portion of the lexical grammar used in Cook. It illustrates the choice of the verb to be used when "advancers" is the subject. (See below for more detail.)

• The syntactic zone contains the syntactic grammar. It is used last as it is the part of the grammar ensuring the correctness of the produced sentences.

An example input logical form is given in Figure 4. In this example, the logical form represents the fact that on the New York stock exchange, the advancing issues (semantic representation or sem-R: c:winners) were ahead (predicate c:lead) of the losing ones (sem-R: c:losers) and that there were 3 times more winning issues than losing ones (ratio). In addition, it also says that this ratio is of degree 2. A degree of 1 is considered as a slim lead whereas a degree of 5 is a commanding margin. When unified with the grammar, this logical form produces the sentences given in Figure 6.

As an example of how Cook uses and merges co-occurrence information with other kinds of knowledge, consider Figure 5. The figure is an edited portion of the lexical zone. It only includes the parts that are relevant to the choice of the verb when "advancers" is the subject. The lex and sem-R attributes specify the lexeme we are considering ("advancers") and its semantic representation (c:winners). The semantic context (sem-context), which points to the logical form and its features, will then be used in order to select among the alternative classes of verbs.
As it is, full backtracking across the three zones is allowed. • The sentential zone contains the phrasal templates against which the logical form is unified first. A sententiai entry is a whole sentence that should be used in a given context. This context is specified by subparts of the logical form given as input. When there is a match at this point, unification succeeds and generation is reduced to simple template filling. • The lezical zone contains the information used to lexicalize the input. It contains collocational infor- mation along with the semantic context in which to use it. This zone contains predicative and open compound collocations. Its role is to trigger phrases or words in the presence of other words or phrases. Figure 5 is a portion of the lexical grammar used in Cook. It illustrates the choice of the verb to be used when "advancers" is the subject. (See below for more detail). • The syniacgic zone contains the syntactic grammar. It is used last as it is the part of the grammar en- suring the correctness of the produced sentences. An example input logical form is given in Figure 4. In this example, the logical form represents the fact that on the New York stock exchange, the advancing issues (se- mantic representation or sere-R: c:winners) were ahead (predicate c:lead)of the losing ones (sem-R: c:losers)and that there were 3 times more winning issues than losing ones ratio). In addition, it also says that this ratio is of degree 2. A degree of 1 is considered as a slim lead whereas a degree of 5 is a commanding margin. When unified with the grammar, this logical form produces the sentences given in Figure 6. As an example of how Cook uses and merges co- occurrence information with other kind of knowledge consider Figure 5. The figure is an edited portion of the lexical zone. It only includes the parts that are rel- evant to the choice of the verb when "advancers" is the subject. The lex and sem-R attributes specify the lex- eme we are considering ("advancers") and its semantic representation (c:winners). The semantic context (sere-context) which points to the logical form and its features will then be used in order 256 logical-form predicate-name = p : lead leaders = [ sem-R L ratio trailers : c : winners ] J : 3 sem-R : c : losers ] : ratio ---- I degree = 2 Figure 4: LF: An example logical form used by Cook o,, °°° ooo lex = "advancer" sam-R = c:~oinners sem-context = <logical-form> OO0 10e o,o sem-context SV-collocates = predicate-name = p: lead ] degree = 2 lex ---- "o.u~nurn, ber" / lex = "lead" lex = "finish" lex = "hold" lex = "~eept' lex = "have" , , ° sem-context SV-collocates = predicate-name : p:lead = degree : 4 lex : U°verp°~er" 1 lex = "outstrip" lex : "hold" lex : "keel' • Figure 5: A portion of the lexical grammar showing the verbal collocates of "advancers". "Advancers outnumbered declining issues by a margin of 3 4o 1." "Advancers had a slim lead over losing issues wi~h a margin of 3 4o 1." "Advancers kep~ a slim lead over decliners wi~h a margin of 3 ~o 1" Figure 6: Example sentences that can be generated with the logical form LF 257 to select among the alternatives classes of verbs. In the figure we only included two alternatives. Both are rela- tive to the predicate p:lead but they axe used with dif- ferent values of the degree attribute. When the degree is 2 then the first alternative containing the verbs listed un- der SV-colloca~es (e.g. "outnumber") will be selected. 
When the degree is 4 the second alternative contain- ing the verbs listed under SV-collocal;es (e.g. "over- power") will be selected. All the verbal collocates shown in this figure have actually been retrieved by Xtract at a preceding stage. The unification of the logical form of Figure 4 with the lexical grammar and then with the syntactic gram- mar will ultimately produce the sentences shown in Fig- ure 6 among others. In this example, the sentencial zone was not used since no phrasal template expresses its semantics. The verbs selected are all listed under the SV-collocates of the first alternative in Figure 5. We have been able to use Cook to generate several sentences in the domain of stock maxket reports using this method. However, this is still on-going reseaxch and the scope of the system is currently limited. We are working on extending Cook's lexicon as well as on de- veloping extensions that will allow flexible interaction among collocations. 5 CONCLUSION In summary, we have shown in this paper that there axe many different types of collocations needed for lan- guage generation. Collocations axe flexible and they can involve two, three or more words in vaxious ways. We have described a fully implemented program, Xtract, that automatically acquires such collocations from large textual corpora and we have shown how they can be represented in a flexible lexicon using FUF. In FUF, co- occurrence constraints axe expressed uniformly with syn- tactic and semantic constraints. The grammax's function is to satisfy these multiple constraints. We are currently working on extending Cook as well as developing a full sized from Xtract's output. ACKNOWLEDGMENTS We would like to thank Kaxen Kukich and the Computer Systems Research Division at Bell Communication Re- search for their help on the acquisition part of this work. References [Abney 89] S. Abney, "Parsing by Chunks" in C. Tenny~ ed., The MIT Parsing Volume, 1989, to appeax. [Amsler 89] R. Amsler, "Research Towards the Devel- opment of a Lezical Knowledge Base for Natural Language Processing" Proceedings of the 1989 SI- GIR Conference, Association for Computing Ma- [Benson 86] M. Benson, E. Benson and R. Ilson, Lezi- cographic Description of English. John Benjamins Publishing Company, Philadelphia, 1986. [Boguraev & Briscoe 89] B. Boguraev & T. Briscoe, in Computational Lezicography for natural language processing. B. Boguraev and T. Briscoe editors. Longmans, NY 1989. [Choueka 88] Y. Choueka, Looking for Needles in a Haystack. In Proceedings of the RIAO, p:609-623, 1988. [Church 88] K. Church, A Stochastic Par~s Program and Noun Phrase Parser for Unrestricted Tezt In Pro- ceedings of the Second Conference on Applied Nat- ural Language Processing, Austin, Texas, 1988. [Church 89] K. Church & K. Hanks, Word Association Norms, Mutual Information, and Lezicography. In Proceedings of the 27th meeting of the Associ- ation for Computational Linguistics, Vancouver, B.C, 1989. [Cruse 86] D.A. Cruse, Lezical Semantics. Cambridge University Press, 1986. [Danlos 87] L. Danlos, The linguistic Basis of Tezt Generation. Cambridge University Press, 1987. [Desemer & Jabobs 87] D. Desemer & P. Jacobs, FLUSH: A Flezible Lezicon Design. In proceedings of the 25th Annual Meeting of the ACL, Stanford University, CA, 1987. [Elhadad 90] M. Elhadad, Types in Functional Unifica- tion Grammars, Proceedings of the 28th meeting of the Association for Computational Linguistics, Pittsburgh, PA, 1990. [Gaxside 87] R. Gaxside, G. Leech & G. 
Sampson, editors, The Computational Analysis of English, a Corpus-Based Approach. Longmans, NY 1987.

[Gross 75] M. Gross, Méthodes en Syntaxe. Hermann, Paris, France, 1975.

[Halliday 66] M. A. K. Halliday, Lexis as a Linguistic Level. In C. E. Bazell, J. C. Catford, M. A. K. Halliday and R. H. Robins (eds.), In Memory of J. R. Firth. London: Longmans Linguistics Library, 1966, pp. 148-162.

[Iordanskaja 88] L. Iordanskaja, R. Kittredge, A. Polguere, Lexical Selection and Paraphrase in a Meaning-Text Generation Model. Presented at the Fourth International Workshop on Language Generation, Catalina Island, CA, 1988.

[Jacobs 85] P. Jacobs, PHRED: A Generator for Natural Language Interfaces. Computational Linguistics, volume 11-4, 1985.

[Kay 79] M. Kay, Functional Grammar. In Proceedings of the 5th Meeting of the Berkeley Linguistic Society, Berkeley Linguistic Society, 1979.

[Klavans 88] J. Klavans, "COMPLEX: A Computational Lexicon for Natural Language Systems." In Proceedings of the 12th International Conference on Computational Linguistics, Budapest, Hungary, 1988.

[Kukich 83] K. Kukich, Knowledge-Based Report Generation: A Technique for Automatically Generating Natural Language Reports from Databases. Proceedings of the 6th International ACM SIGIR Conference, Washington, DC, 1983.

[Maarek & Smadja 89] Y. S. Maarek & F. A. Smadja, Full Text Indexing Based on Lexical Relations, An Application: Software Libraries. Proceedings of the 12th International ACM SIGIR Conference, Cambridge, Ma, June 1989.

[Mel'čuk 81] I. A. Mel'čuk, Meaning-Text Models: A Recent Trend in Soviet Linguistics. The Annual Review of Anthropology, 1981.

[Nirenburg 88] S. Nirenburg et al., Lexicon Building in Natural Language Processing. In Program and Abstracts of the 15th International ALLC Conference of the Association for Literary and Linguistic Computing, Jerusalem, Israel, 1988.

[Smadja 88] F. A. Smadja, Lexical Co-occurrence: The Missing Link. In Program and Abstracts of the 15th International ALLC Conference of the Association for Literary and Linguistic Computing, Jerusalem, Israel, 1988. Also in the Journal for Literary and Linguistic Computing, Vol. 4, No. 3, 1989, Oxford University Press.

[Smadja 89a] F. A. Smadja, Microcoding the Lexicon for Language Generation. First International Workshop on Lexical Acquisition, IJCAI'89, Detroit, Mi, August 89. Also in "Lexical Acquisition: Using On-line Resources to Build a Lexicon", MIT Press, Uri Zernik editor, to appear.

[Smadja 89b] F. A. Smadja, On the Use of Flexible Collocations for Language Generation. Columbia University, technical report, TR# CUCS-507-89.
DISAMBIGUATING AND INTERPRETING VERB DEFINITIONS

Yael Ravin
IBM T.J. Watson Research Center
Yorktown Heights, New York 10598
e-mail: [email protected]

ABSTRACT

To achieve our goal of building a comprehensive lexical database out of various on-line resources, it is necessary to interpret and disambiguate the information found in these resources. In this paper we describe a Disambiguation Module which analyzes the content of dictionary definitions, in particular, definitions of the form "to VERB with NP". We discuss the semantic relations holding between the head and the prepositional phrase in such structures, as well as our heuristics for identifying these relations and for disambiguating the senses of the words involved. We present some results obtained by the Disambiguation Module and evaluate its rate of success as compared with results obtained from human judgements.

INTRODUCTION

The goal of the Lexical Systems Group at IBM's Watson Research Center is to create COMPLEX, "a lexical knowledge base in which word senses are identified, endowed with appropriate lexical information and properly related to one another" (Byrd 1989). Information for COMPLEX is derived from multiple lexical sources so senses in one source need to be related to appropriate senses in the other sources. Similarly, the senses of defining words need to be disambiguated relative to the senses supplied for them by the various sources. (See Klavans et al., 1990.)

Sense-disambiguation of the words found in dictionary entries can be viewed as a subproblem of sense-disambiguation of text corpora in general, since dictionaries are large corpora of phrases and sentences exhibiting a variety of ambiguities, such as unresolved pronominal references, attachment ambiguities, and ellipsis. The resolution of these ambiguity problems in the context of dictionary definitions would directly benefit their resolution in other types of text. In order to solve the problem of lexical ambiguity in dictionary definitions, we are investigating how to automatically analyze the semantics of these definitions and identify the relations holding between genus and differentia. This paper concentrates on one aspect of the task - the semantics of one class of verb definitions.

1. DISAMBIGUATING DEFINITIONS

We have chosen to concentrate initially on definitions of the form "to VERB with NP" in Webster's 7th New Collegiate Dictionary (Merriam 1963; henceforth W7). Disambiguating these definitions consists of identifying the appropriate sense of with (that is, the type of semantic relation linking the VERB to the NP) and choosing, if possible, the appropriate senses of the VERB and the NP-head from among all their W7 senses. For example, the disambiguation of the definition of angle(3,vi,1), "to fish with a hook", determines that the relation between fish and hook is use of instrument.¹ It also determines that the intended sense of fish is (vi,1)-"to attempt to catch fish" and that the intended sense of hook is (n,1)-"a curved or bent implement for catching, holding or pulling". W7 lists 4 senses for intransitive fish and 4 for the noun hook. Together with the five senses of with (described in the next section), these yield 80 possible sense combinations for "to fish with a hook".

¹ Thus we differ from other attempts at disambiguating definitions (such as Alshawi 1987), which leave these "with" cases unresolved.

In addition to contributing to the creation of COMPLEX, disambiguating strings of the form "to VERB with NP" also contributes to the task of disambiguating prepositional phrases in free text, an important problem in NL processing. As is well known, parsing prepositional phrases (PPs) in free text is problematic because of the syntactic ambiguity of their attachment. It is usually impossible to determine on purely syntactic grounds which head a given PP attaches to from among all those that precede it in the sentence. Thus, sentences like "the player hit the ball with the bat" are usually parsed as syntactically ambiguous between with the bat as modifying the verb and its modifying the noun.

One way to resolve the syntactic ambiguity is to first resolve the semantic ambiguity that underlies it. To resolve it, we follow the approach proposed by Jensen & Binot (1987) and consult the dictionary definitions of the words involved. This approach differs from others that have been proposed for the
As is well known, parsing prepositional phrases (PPs) in free text is problematic because of the syntactic ambiguity of their attachment. It is usually impossible to determine on purely syntactic grounds which head a given PP attaches to from among all those that precede it in the sentence. Thus, sentences like "the player hit the ball with the bat" are usually parsed as syntactically ambiguous between with the bat as modifying the verb and its modifying the noun.

One way to resolve the syntactic ambiguity is to first resolve the semantic ambiguity that underlies it. To resolve it, we follow the approach proposed by Jensen & Binot (1987) and consult the dictionary definitions of the words involved. This approach differs from others that have been proposed for the disambiguation of polysemous words in context in that it accesses large published dictionaries rather than hand-built knowledge bases (as in Dahlgren & McDowell 1989). Moreover, it parses the information retrieved from the dictionary. Other approaches apply simple string matches (Lesk 1987) or statistical measures (Amsler & Walker 1985). Consulting the dictionary for "the player hit the ball with the bat", we identify "with the bat" as meaning, among other things, the use of an implement, and "hit" as a verb that can take a use modifier. These potential meanings favor an attachment of the PP to the verb. Furthermore, since no semantic connection can be established between "ball" and "with the bat" based on the dictionary, the likelihood of the verb attachment increases.

Within this approach, we can view the disambiguation of the text of dictionary definitions as a subgoal of the general PP-attachment problem in free text. The structure of sentences like "he hit the ball with the bat" is "to VERB NP with NP", where syntactic ambiguity arises between attachment to the verb and attachment to the syntactic object. These sentences differ from definition strings, which have the form of "to VERB with NP", lacking a syntactic object. Even definitions of transitive verbs, which are headed by transitive verbs, typically lack an object, as in bat(vt,1)-"to strike or hit with or as if with a bat". In the absence of an object, there is no attachment ambiguity, since there is only one head available ("strike or hit"). However, semantic ambiguity still remains: "hit" means both to strike and to score; "bat" refers both to a club and to an animal. We can view such strings as cases where attachment has already been resolved, and view their disambiguation as an attempt to supply the semantic basis for that attachment. Thus, obtaining the correct semantic representation for cases where attachment is known directly benefits cases where attachment is ambiguous.

Our Disambiguation Module (henceforth DM) selects the most appropriate sense combination(s) in two parts: first, it tries to identify the semantic categories or types denoted by each sense of the VERB and the NP-head. It checks if the VERB denotes change, affliction, an act of covering, marking or providing. It tests whether the NP-head refers to an implement, a part of some other entity, a human being or group, an animal, a body part, a feeling, state, movement, sound, etc.[2] Then it tries to identify the semantic relation holding between the VERB and NP-head.

[1] Thus we differ from other attempts at disambiguating definitions (such as Alshawi 1987), which leave these "with" cases unresolved.
In the constructions we are interested in, the semantic relation between the two terms depends not only on their semantic categories but also on the semantics of with, which we discuss in the following section.[3]

2. THE MEANING OF WITH

To investigate the semantics of with, we turn to the linguistic literature on one hand and to lexicographical sources on the other. In the theoretical literature about prepositions and PPs, a syntactic distinction is made between PPs as complements of predicates and PPs as adjuncts. In traditional terms, a complement-PP is more closely related to the predicate, which determines its choice, than to the prepositional complement (Quirk et al. 1972). In current terms, complement-PPs are determined by the predicate and listed in its lexical (or thematic) entry, from which syntactic structures are projected. To assure correct projection, the occurrence of complements in syntactic structures is subject to various conditions of uniqueness and completeness (Chomsky 1981; Bresnan 1982). Adjuncts, by contrast, do not depend on the predicate. They freely attach to syntactic structures as modifiers and are not subject to these conditions. Although the syntactic distinction between complements and adjuncts is assumed by many theories, few provide criteria for deciding whether a given PP is a complement or adjunct. (Exceptions are Larson (1988) and Jackendoff (in preparation).) The theoretical status of with is particularly interesting in this context: it is generally agreed that some with-PPs (such as those expressing manner) are adjuncts and that others (like those occurring with "spray/load" predicates) are complements; but there is disagreement about the status of other classes, such as with-PPs expressing instruments. See Ravin (in press) for a discussion of this issue. The distinction between complements and adjuncts bears directly on our disambiguation problem, as we try to match it to our distinction between NP-based heuristics and VERB-based ones (see Section 3). In turn, the results provided by our DM put the various theoretical hypotheses to test, by applying them to a large amount of real data.

Dictionaries and other lexicographical works typically explain the meaning of prepositions in a collection of senses, some involving semantic descriptions and others expressing usage comments. W7, for example, defines with(1) semantically: "in opposition to; against ('had a fight with his brother')"; it defines sense 2 by a usage comment: "used as a function word to indicate one to whom a usu. reciprocal communication is made ('talking with a friend')". W7 lists a total of 12 senses for with and various sub-senses. The Longman Dictionary of Contemporary English (Longman 1978; henceforth LDOCE) lists 20. Quirk et al. (1972) attempt to group the variety of meanings under a few general categories, such as means/instrument, accompaniment, and having. Others (Boguraev & Sparck Jones 1987, Collins 1987) offer somewhat different divisions into main categories.

[2] We have defined 16 semantic categories for nouns, so far. A most relevant question is how many such categories need to be stipulated. For the purpose of the work reported here, these 16 categories suffice. Others, however, will be needed for the disambiguation of other prepositions and other forms of ambiguity.

[3] We concentrate here on with; however, preliminary work indicates that the treatment of other prepositions is quite similar.
After reviewing the different characterizations of the meanings of with against a small corpus of verb definitions containing with, we have arrived at a set of five senses for it, corresponding to five semantic relations that can hold between the VERB and the NP-head in "to VERB with NP". Since we are concerned with verbs only, senses mentioned by our sources for "NOUN with NP" were not included (e.g., the "having" sense of Quirk et al., as in "a man with a red nose" or "a woman with a large family"). Moreover, we have observed that certain common meanings of "VERB with NP" fail to occur in dictionary definitions. The accompaniment sense, for example, as in "walk with Peter" or "drink with friends", was not found in our corpus of 300 definitions.[4]

The five senses which we have identified are USE, MANNER, ALTERATION, CO-AGENCY/PARTICIPATION, and PROVISION, each including several smaller sub-classes. Each sense is characterized by a description of the states of affairs it refers to and by some criteria which test it. As can be expected, however, the criteria are not always conclusive. There exist both unclear and overlapping cases.

USE - examples are "to fish with a hook"; "to obscure with a cloud"; and "to surround with an army". With in this sense can usually be paraphrased as "by means of" or "using". The states of affairs in this category involve three participants: an agent (usually the missing subject of the definition), a patient (the missing object) and the thing used (the referent of "with NP"). The agent usually manipulates, controls or uses the NP-referent, and the NP-referent remains distinct and apart from the patient at the end of the action. The sub-classes of USE are USE -OF-INSTRUMENT, -OF-SUBSTANCE, -OF-BODYPART, -OF-ANIMATE_BEING, -OF-OBJECT.

MANNER - some examples are "to examine with intent to verify"; "to anticipate with anxiety"; or "to attack with blows or words". "With NP" in this sense can be paraphrased with an adverb (e.g., "anxiously", "violently", "verbally") and it describes the way in which the agent acts. The MANNER sub-classes are INTENTION-, SOUND-, MOTION-, FEELING- or ATTITUDE-AS-MANNER. The distinction between USE and MANNER is usually quite straightforward, but one class of overlapping cases we have identified has to do with verbal entities, such as retort in "to check or stop with a cutting retort". Since verbal entities are abstract, they can be viewed as both being used by the agent as a type of instrument and describing how the action is performed.

ALTERATION - examples are "to mark with bars"; "to impregnate with alcohol"; "to fill with air"; and "to strike with fear". In some cases, this sense can be paraphrased with "make" and an adjective (e.g., "make full", "make afraid"); in others, with "put into/onto" (e.g., "put air into"; "put marks onto"). The states of affairs are ones in which change occurs in the patient and the NP-referent remains close to the patient or even becomes part of it. The sub-classes are ALTERATION -BY-MARKING, -BY-COVERING, -BY-AFFLICTION, and CAUSAL ALTERATION. Cases of overlap between ALTERATION and USE are abundant. "To spatter with some discoloring substance" is an example of creating a change in the patient while using a substance. The definition of spatter itself indicates this overlap: "to splash with or as if with a liquid; also to spoil in this way".

CO-AGENCY or PARTICIPATION - as in "to combine with other parts". Such strings can be paraphrased with "and" ("one part and other parts combine").
The state of affairs is one in which there are two agents or participants sharing relatively equally in the event.

PROVISION - as in "to fit with clothes"; and "to furnish with an alphabet". This sense can be paraphrased with "give" (and sometimes with "to" - "to furnish an alphabet to"), and it applies to states of affairs where the NP-referent is given to somebody by the agent.

In addition to the five semantic meanings discussed above, there is also one purely syntactic function, PHRASAL, which with fulfills in verb-preposition combinations, such as "invest with authority". It can be argued that with in such cases simply serves to link the NP to the VERB. The DM disambiguates a given string by classifying it as an instance of one of these six categories, and thus selecting the appropriate sense combination of the words in the string. A major contribution to the establishment of the senses of with has been comments and judgements of human subjects, who were asked to categorize samples of verb-definition strings into the various with senses we stipulated. The process of disambiguation is a function of interdependencies among the senses of the VERB, the NP-head and with, as we show in the next section.

3. THE DISAMBIGUATION PROCESS

The DM is an extended and modified version of an earlier prototype developed by Jensen and Binot for the resolution of prepositional-phrase attachment ambiguities (Jensen & Binot 1987). It uses a syntactic parser, PEG (Jensen 1986), and a body of semantic heuristics which operate on the parsed dictionary definitions of the terms to be disambiguated. The first step in the disambiguation process is parsing the ambiguous string (e.g., "to fish with a hook") by PEG and identifying the two relevant terms, the VERB and NP-head (fish and hook). Next, each of these terms is looked up in W7, and its definitions are retrieved and also parsed by PEG. Heuristics then apply to the parsed definitions of the terms to determine their semantic categories. The heuristics contain a set of lexical and syntactic conditions to identify each semantic category. For example, the INSTRUMENT heuristic for nouns checks if the head of the parsed definition is "instrument", "implement", "device", "tool" or "weapon"; if the head is "part", post-modified by an of-PP whose object is "instrument", "implement", etc.; if the head is post-modified by the participle "used as a weapon"; etc. If any of these conditions apply, that sense of the noun is marked +INSTRUMENT.[5]

Next, each of the possible with-relations is tried. Let us take USE as a first example. To determine whether a USE relation holds in a particular string, the DM considers the semantic category of the NP-head. The most typical case is when the NP-head is +INSTRUMENT, as in "to fish with a hook". In this case, the relationship of USE is further supported by a link established between the NP-head definition and the VERB definition through catch: a hook is an "... implement for catching, holding, or pulling" and to fish is "to attempt to catch fish". (See Jensen & Binot 1987 for similar examples and discussion.) Such a link, however, is rarely found. In many other USE instances, it is the meaning of the NP-head alone that determines the relation. Thus, DM determines that USE applies to "to attack with bombs" based on bomb(n,1)-"an explosive device fused to detonate under specified conditions", although no link is established between attack and detonate.
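The noun-category heuristics lend themselves to a simple procedural statement. The sketch below is ours, in Python; the flat ParsedDefinition record is an invented stand-in for a PEG parse, but the conditions are the +INSTRUMENT conditions quoted above:

    from dataclasses import dataclass, field

    # Hypothetical flat stand-in for a PEG parse of one W7 definition;
    # the real DM walks full parse trees.
    @dataclass
    class ParsedDefinition:
        head: str                                        # head noun of the definition
        of_pp_objects: list = field(default_factory=list)  # objects of of-PPs on the head
        participles: list = field(default_factory=list)    # post-modifying participles

    INSTRUMENT_HEADS = {"instrument", "implement", "device", "tool", "weapon"}

    def is_instrument(d: ParsedDefinition) -> bool:
        # Condition 1: the head itself names an instrument class.
        if d.head in INSTRUMENT_HEADS:
            return True
        # Condition 2: head is "part" post-modified by an of-PP whose
        # object names an instrument class ("part of an implement ...").
        if d.head == "part" and any(o in INSTRUMENT_HEADS for o in d.of_pp_objects):
            return True
        # Condition 3: head is post-modified by "used as a weapon".
        if any(p.startswith("used as a weapon") for p in d.participles):
            return True
        return False

    # hook(n,1): "a curved or bent implement for catching, holding, or pulling"
    print(is_instrument(ParsedDefinition(head="implement")))   # True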
USE is also applied regardless of the VERB when the NP-head is +BODYPART and certain syntactic conditions (a definite article or a 3rd-person possessive pronoun) hold of the string, as in "to strike or push with or as if with the head" and "to write with one's own hand". USE is similarly assigned if the NP-head is +SUBSTANCE: "to rub with oil or an oily substance" or "to kill especially with poison". MANNER, like USE, is also determined largely on the basis of the NP-head. It is assigned if the semantic category of the NP-head is a state ("to progress with much tacking or difficulty"); a feeling ("to dispute with zeal, anger or heat"); a movement ("to move with a swaying or swinging motion"); an intention ("to examine with intent to verify"); etc.

Since USE and MANNER are largely determined on the basis of the semantic category of the NP, they correspond to adjuncts, in the theoretical distinction made between adjuncts and complements. By contrast, ALTERATION, CO-AGENCY and PROVISION are determined mostly on the basis of the VERB and could be said to correspond to complements. (There are, however, many complications with this simple division, which we are currently studying.) To assign an ALTERATION relation to a string, the DM checks whether the VERB subcategorizes for an (optional) with-complement, based on information found in the online version of LDOCE, and whether the VERB denotes change. The first LDOCE sense of fill, "to make or become full", for example, fulfills both conditions. Therefore, ALTERATION is assigned in "to become filled with or as if with air", "to fill with detrital material" and "to become filled with painful yearning". ALTERATION also applies to other verb classes that are not marked for with-subcategorization in LDOCE, such as verbs denoting affliction ("to overcome with fear or dread") or actions of marking ("to mark with an asterisk"). Finally, PHRASAL is assigned if a separate LDOCE entry exists for "VERB with", as in "to charge with a crime" and "to ply with drink". PHRASAL indicates that the semantic relation between the VERB and the NP is not restricted by the meaning of with but is more like the relation between a verb and its direct object.

Since the heuristics for each semantic relation are independent of each other, conflicting interpretations may arise. There are cases of unresolved ambiguity, when different senses of one of the terms give rise to different interpretations. For example, "to write with one's own hand" receives a USE(-OF-BODYPART) interpretation but also a USE(-OF-ANIMATE_BEING), which is incorrect but due to several W7 senses of hand which are marked +HUMAN ("one who performs or executes a particular work"; "one employed at manual labor or general tasks"; "worker, employee", etc.). A general heuristic can be added to prefer a +BODYPART interpretation over a +HUMAN one, since this ambiguity occurs with other body parts too. Other instances of ambiguity, however, are more idiosyncratic. "To utter with accent", for example, receives a MANNER interpretation (correct), based on accent(n,1)-"a distinctive manner of usually oral expression"; but it also receives USE(-OF-SUBSTANCE) (incorrect), based on accent(n,7,c)-"a substance or object used for emphasis".

[5] The heuristics apply to each definition in isolation, retrieving information that is static and unchanging. In the future, we intend to apply the heuristics to the whole dictionary and store the information in COMPLEX.
General heuristics cannot eliminate all cases of ambiguities of this kind. Another type of conflict arises when one semantic relation is assigned on the basis of the VERB while another is assigned on the basis of the NP-head. This is the case with "to overcome with fear or dread", for which the DM returns two interpretations: ALTERATION (correct) because the verb denotes affliction, and MANNER (incorrect) because the NP denotes a mental attitude. For "to combine or impregnate with ammonia or an ammonium compound" DM similarly returns ALTERATION (correct) because the verb is a causative verb of change and USE(-OF-SUBSTANCE) (incorrect) because the NP refers to a chemical substance. To handle this type of conflict, we have implemented a "final preference" heuristic which chooses the VERB-based interpretation over the NP-based one. Note, however, that this heuristic has implications for cases of overlap, such as "spatter with a discoloring substance", discussed above. When DM generates both the VP-based ALTERATION link and the NP-based link of USE for this string, the former would be preferred over the latter. Thus the fact that both links truly apply in this case will be lost.

A third possible conflict arises between a PHRASAL interpretation and a semantic one. The DM returns PHRASAL-VERB (correct) and ALTERATION (incorrect) for "to charge with a crime", based on charge with-"(especially of an official or an official group) to bring a charge against (someone) for (something wrong); accuse of"; and charge(with)-"to (cause to) take in the correct amount of electricity". Since the existence of a PHRASAL interpretation is an idiosyncratic property of verbs, there is no general heuristic for solving conflicts of this kind.
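The final preference heuristic can be stated as a small procedure. The following Python fragment is our illustration of the behavior described above, not code from the DM; the relation names are the paper's, the data shapes are invented:

    # VERB-based interpretations win over NP-based ones; PHRASAL conflicts
    # are left alone, since no general heuristic resolves them.
    BASIS = {"ALTERATION": "verb", "CO-AGENCY": "verb", "PROVISION": "verb",
             "USE": "np", "MANNER": "np"}

    def prefer_verb_based(relations):
        verb_based = [r for r in relations if BASIS.get(r) == "verb"]
        return verb_based or relations

    # "to overcome with fear or dread": ALTERATION (VERB-based) is kept,
    # the NP-based MANNER reading is discarded.
    print(prefer_verb_based(["ALTERATION", "MANNER"]))  # ['ALTERATION']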
4. RESULTS

We have developed our DM heuristics based on a training corpus of 170 strings - 148 transitive and 22 intransitive verb definitions extracted randomly from the letters a and b of W7 using a pattern extracting program developed by M. Chodorow (Chodorow & Klavans in preparation). The syntactic forms of the strings vary, as can be seen from the following examples: "to suffer from or become affected with blight"; "to contend with full strength, vigor, craft, or resources"; "to prevent from interfering with each other (as by a baffle)". However, since we submit the strings to the PEG parser and retrieve the VERB and NP-head from the parsed structures, we are able to abstract over most of the variations. Currently, the DM ignores multiple conjuncts in coordinate structures and considers only one VERB and one NP-head. In the future, all possible pairings should be considered (e.g. "contend with strength", "contend with vigor", "contend with craft", and so on, for the example mentioned above) and the results should be combined. As mentioned in Section 1, definition strings lack a syntactic object. The few strings that contain an object include it in parentheses ("to treat (flour) with nitrogen trichloride"). This, again, is tolerated by the PEG parser, and allows us to assume that in all the strings the with-phrase attaches to the VERB rather than to the object.

The DM results can be summarized as follows: The correct[6] semantic relation, based on the appropriate semantic category (of the NP-head or VERB), is assigned to 113 out of the 170 strings. Here are a few examples:

    sever with an ax            USE(-OF-INSTRUMENT)
    wet with blood              USE(-OF-SUBSTANCE)
    inter with full ceremonies  (ACTION-AS-)MANNER
    dispute with zeal           (ATTITUDE-AS-)MANNER
    ornament with ribbon        ALTERATION(BY-COVERING)
    clothe with rich garments   ALTERATION(BY-COVERING)
    equip with weapons          PROVISION

We consider these 113 results to be completely satisfactory.

In a second group of cases, the correct semantic relation, based on the appropriate semantic category, is one of 2 (and rarely of 3) semantic relations assigned to the string. There are 15 such cases. Here are two examples:

    harass with dogs            USE(-OF-ANIMATE_BEING)  correct
                                USE(-OF-INSTRUMENT)     incorrect

The second interpretation is due to dog(n,3,a)-"any of various usually simple mechanical devices for holding, gripping, or fastening consisting of a spike, rod, or bar". Lacking information about the frequency of different senses of words, we have at present no principled way to distinguish a primary sense (like the animal sense of dog) from more obscure senses (like the device sense).

    make dirty with grime       USE(-OF-SUBSTANCE)      correct
                                (STATE-AS-)MANNER       incorrect

The incorrect interpretation of grime as manner is due to the definition of its hypernym dirtiness as "the quality or state of being dirty". We consider this second group of cases, which are assigned two interpretations, to be partial successes, since they represent an improvement over the initial number of possible sense combinations even if they do not fully disambiguate them.

In 37 cases, DM is unable to assign any interpretation. One reason is failure to identify the semantic category of the VERB or NP-head. For example, "to pronounce with a burr" should be assigned MANNER (SOUND), but the relevant definitions of burr read: "a trilled uvular r as used by some speakers of English especially in northern England and in Scotland" and "a tongue-point trill that is the usual Scottish r", making it impossible for DM to identify it as a sound. (See discussion below.) There are other reasons for failure: occasionally the NP-head is not listed as an entry in W7, as barking in "to pursue with barking" or drunkenness in "to muddle with drunkenness or infatuation". Even if we introduced morphological rules, identified the base of the derivational word and looked up the meaning of the base, the derived meaning in these cases would still not be obvious. Finally, a negligible number of failures is due to incorrect parsing by PEG, which in turn provides incorrect input for the heuristics. Failure to assign any interpretation does not, of course, count as success; but it does not produce much harm either. Far more dangerous than no assignment is the assignment of one incorrect interpretation, since incorrect interpretations cannot be differentiated from correct ones in any general or automatic way. Out of the set of 170 strings, only 5 are assigned a single incorrect interpretation. These are:

    press with requests         (STATE-AS-)MANNER
based on the fourth definition of request: "the state of being sought after; demand".

    seize with teeth            ALTERATION(BY-AFFLICTION)
based on seize(vt,5,a)-"to attack or overwhelm physically; afflict".

    speak with a burr           USE(-OF-INSTRUMENT)
based on burr(n,2,b,1)-"a small rotary cutting tool".

    suffuse with light          USE
where the semantic relation may seem correct, but the sense of light on which it is based ("a flame for lighting something") is inappropriate.

[6] See discussion of correctness at the end of this section.
    possess with a devil        USE(-OF-ANIMATE_BEING)
where the intended semantic relation is unclear (ALTERATION?) as is the semantic category of devil. However, the USE interpretation is clearly based on the several inappropriate +HUMAN senses of devil ("an extremely and malignantly wicked person: fiend"; "a person of notable energy, recklessness, and dashing spirit"; and others).

As incorrect interpretations cannot be automatically identified as such, it is most important to design the heuristics so that they generate as few incorrect interpretations as possible. One way of restricting the heuristics is by not considering the meaning of hypernyms, except in special cases. To return to "pronounce with a burr": we prefer to miss the fact that a burr, which is a trill, is a sound, by ignoring the meaning of the hypernym trill, than to have to take into account the meaning of all the hypernyms of burr. Considering the meaning of all the hypernyms would yield too many incorrect semantic interpretations for "pronounce with a burr". One hypernym of burr, weed, has a +HUMAN sense and a +ANIMAL sense; ridge, another hypernym, has a +BODYPART sense.

Since results obtained with the training corpus were promising, we ran DM on a testing corpus: 132 definitions of the form "to VERB with NP" not processed by the program before. The results obtained with the testing corpus are compared below with those of the training corpus. The first column lists the total number of strings; the second, the number of strings assigned a single, correct interpretation; the third, the number of strings assigned two interpretations, one of which is correct; the fourth column shows the number of strings for which no interpretation was found; and the last column lists the number of strings assigned one or more incorrect interpretations (but no correct ones).

              TOT   COR   1/2   0    INC
    TRAINING  170   113   15    37   5
    TESTING   132   75    13    22   22

To measure the coverage of DM, we calculate the ratio of strings interpreted (correctly and incorrectly) to the total number of strings:

              COVERAGE RATIO
    TRAINING  133/170 (or 78.2%)
    TESTING   110/132 (or 83.3%)

To measure the reliability of DM, we calculate the ratio of correct interpretations to incorrect ones:

              COR-TO-INC RATIO
    TRAINING  113/133 (or 85%)
    TESTING   75/110 (or 68%)

If we include in the correct category those strings for which two interpretations were found (only one of which is correct), the reliability measure increases:

              COR+1/2-TO-INC RATIO
    TRAINING  128/133 (or 96.2%)
    TESTING   88/110 (or 80%)

As expected, reliability for the testing material is lower than for the training set. This is due to the several iterations of fine-tuning to which the training corpus has been subjected. The examination of the testing results suggests some further fine-tuning, which is currently being implemented, and which will reduce the number of incorrect interpretations.
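As a check on the arithmetic, the coverage and reliability measures can be recomputed from the summary counts. The following sketch is our restatement of that arithmetic in Python, not part of the DM:

    # Reproducing the measures from the summary counts (TOT, COR, 1/2, 0, INC).
    def measures(tot, cor, half, none, inc):
        interpreted = tot - none           # strings given some interpretation
        coverage = interpreted / tot
        reliability = cor / interpreted
        reliability_with_half = (cor + half) / interpreted
        return coverage, reliability, reliability_with_half

    for name, row in [("TRAINING", (170, 113, 15, 37, 5)),
                      ("TESTING", (132, 75, 13, 22, 22))]:
        cov, rel, rel2 = measures(*row)
        print(name, f"{cov:.1%} {rel:.0%} {rel2:.1%}")
    # TRAINING 78.2% 85% 96.2%
    # TESTING 83.3% 68% 80.0%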
Finally, we developed a criterion by which to measure the accuracy of our judgements of correctness. To ensure that our personal judgements of the correctness of the DM interpretations as reported above were neither idiosyncratic nor favorably biased, we compared them with the judgements of other human subjects, both linguists and non-linguists. We randomly selected 58 definition strings whose interpretation we judged to be correct and assigned each of them to 3-4 different participants for their judgements. Participants were asked to perform the same task as the module's, namely, for each definition string, select the relevant with-link from among the six we have stipulated and choose the relevant senses of the VERB and the NP-head from among all their W7 senses. We provided short explanations of the different with-links (based on the descriptions found here in Section 2) with a few examples. We allowed participants to choose more than one link if necessary, so that we can detect cases of overlap; we also allowed the choice of OTHER, if no link seemed suitable, or a question mark, if the string seemed confusing.

In 3 cases there was no consensus among the human judgements. Either 4 different choices of with-links or two question marks were given, as shown below:

    affect with a blighting influence   USE, PHRASAL, ALTERATION/PHRASAL, ?
    fill with bewildered wonder         PROVISION, PHRASAL, ALTERATION, MANNER
    fit to or with a stock              PROVISION, USE, ?, ?

Even though the DM choice for these strings (deemed correct by us) coincided with one of the human choices, the variation is too large to validate the correctness of this choice. These 3 cases were therefore ignored. In 44 cases out of the remaining 55, there was (almost) unanimous agreement (3 or 4) among the human judgements on a single with-link. The DM choice was identical to 41 of those 44. That is, in 41 out of 44 cases, our own judgement of correctness coincides with that of others. The cases where we differ are:

    flavor, blend, or preserve with brandy
        4 subjects out of 4: ALTERATION; DM: USE
    face or endure with courage
        2 subjects out of 3: MANNER; third subject: MANNER/USE; DM: USE
    strengthen with or as if with buckram
        4 subjects out of 4: ALTERATION; DM: USE

In the remaining 11 strings, there was an even split in the human judgements between two with-links, indicative to some extent of genuine overlap. For example, "treat with a bromate" was interpreted as USE by two participants and as ALTERATION by two others. One participant explained that his choice depended on the implied object: he would categorize treating a patient with medicine as USE but treating a metal with a chemical substance as ALTERATION. The DM choice was identical to one of the two alternative human choices in 10 out of these 11 strings. That is, in 10 out of 11 cases, our judgement of correctness fits one of the two choices made by others.

To summarize, our judgements of correctness were validated by others in 51 cases out of 56 (or 91%). Our practical conclusion from this experiment is simply that our semantic judgements concerning the meaning of with in context coincide with those of others often enough to allow us to rely on our intuitions when informally evaluating the results of our program. More generally, this experiment seems to indicate that people reach consensus on the meaning of prepositions once they are given a set of alternatives to choose from, even though they may find it very difficult to define the meaning of prepositions themselves. The significance of the unclear cases and the overlap cases in the experiment requires further study.

CONCLUSION

As our evaluations indicate, the DM which we are developing is quite successful in identifying the correct semantic relation that holds between the terms of a definition string. In identifying this relation, the DM also partially disambiguates the senses of the definition terms.
In assigning MANNER, for example, to "utter with accent", DM selects two senses of accent as relevant, from among the nine listed in its W7 entry. In assigning ALTERATION to "mark with a written or printed accent", it selects 3 completely different senses of accent as relevant. Thus, the same noun (accent), occurring in identical syntactic structures ("VERB with NP"), is assigned different sense(s), based on its semantic link to its head. Interpreting the semantic relations between genus and differentia and disambiguating the senses of defining terms are both crucial for our general goal - the creation of a comprehensive, yet disambiguated, lexical database. There are other important applications: the heuristics that have been developed for the analysis of dictionary definitions should be helpful in the disambiguation of PPs occurring in free text. In cases of syntactic ambiguity, the need to determine proper attachment is evident. In addition, we should point out that there is a need to identify the semantic relation between a head and a PP, even when attachment is clear. In translation, for example, resolving the semantic ambiguity of a source preposition is needed when ambiguity cannot be preserved in the target preposition. Finally, we hope that the computational disambiguation of the meanings of prepositions will contribute interesting insights to the linguistic issues concerning the distinction between adjuncts and complements.

ACKNOWLEDGMENTS

I thank John Justeson (Watson Research Ctr., IBM), Martin Chodorow (Hunter College, CUNY), Michael Gunther (ASD, IBM) and Howard Sachar (ESD, IBM) for many critical comments and insights.

REFERENCES

Alshawi, Hiyan. 1987. "Processing Dictionary Definitions with Phrasal Pattern Hierarchies", Computational Linguistics, 13, 3-4, 195-202.

Amsler, Robert & Donald Walker. 1985. "The Use of Machine-Readable Dictionaries in Sublanguage Analysis", in Sublanguage: Description and Processing, eds. R. Grishman and R. Kittredge, Lawrence Erlbaum.

Boguraev, Branimir & Karen Sparck Jones. 1987. Material Concerning a Study of Cases, Technical Report no. 118, Cambridge: University of Cambridge, Computer Laboratory.

Bresnan, Joan, ed. 1982. The Mental Representation of Grammatical Relations, Cambridge, Mass.: MIT Press.

Byrd, Roy. 1989. "Discovering Relationships among Word Senses", to be published in Dictionaries in the Electronic Age: Proceedings of the Fifth Annual Conference of the University of Waterloo Centre for the New Oxford English Dictionary.

Chodorow, Martin & Judith Klavans. In preparation. "Locating Syntactic Patterns in Text Corpora".

Chomsky, Noam. 1981. Lectures on Government and Binding, Dordrecht: Foris.

Collins. 1987. Cobuild, English Language Dictionary, London: Collins.

Dahlgren, Kathleen & Joyce McDowell. 1989. "Knowledge Representation for Commonsense Reasoning with Text", Computational Linguistics, 15, 3, 149-170.

Jackendoff, Ray. In preparation. Semantic Structures.

Jensen, Karen. 1986. "PEG 1986: A Broad-coverage Computational Syntax of English", unpublished paper.

Jensen, Karen & Jean-Louis Binot. 1987. "Disambiguating Prepositional Phrase Attachments by Using On-Line Definitions", Computational Linguistics, 13, 3-4, 251-260.

Klavans, Judith, Martin Chodorow, Roy Byrd & Nina Wacholder. 1990. "Taxonomy and Polysemy", Research Report, IBM.

Larson, Richard. 1988. "Implicit Arguments in Situation Semantics", Linguistics and Philosophy, 11, 169-201.

Lesk, Michael. 1987.
"Automatic Sense Disambiguation Using Machine Readable Dictionaries: [tow to Tell a Pine Cone from an Ice Cream Cone", Proceedings of the 1986 A CM SIGDOC Conference, Canada. Longman. 1978. Longman Dictionary of Con- temporary English, London: Longman Group. Merriam. 1963. Webster's Seventh New Collegiate Dictionary, Springfield, Mass.: G.&C. Merriam. Quirk Randolph, Sidney Greenbaum, Geoffrey Leech & Jan Svartvik. 1972. A Grammar of Contemporary English, London: Longman House. Ravin Yael. In print. Lexical Semantics with- out Thematic Roles, Oxford: Oxford University Press. 267
NOUN CLASSIFICATION FROM PREDICATE-ARGUMENT STRUCTURES

Donald Hindle
AT&T Bell Laboratories
600 Mountain Avenue
Murray Hill, NJ 07974

ABSTRACT

A method of determining the similarity of nouns on the basis of a metric derived from the distribution of subject, verb and object in a large text corpus is described. The resulting quasi-semantic classification of nouns demonstrates the plausibility of the distributional hypothesis, and has potential application to a variety of tasks, including automatic indexing, resolving nominal compounds, and determining the scope of modification.

1. INTRODUCTION

A variety of linguistic relations apply to sets of semantically similar words. For example, modifiers select semantically similar nouns, selectional restrictions are expressed in terms of the semantic class of objects, and semantic type restricts the possibilities for noun compounding. Therefore, it is useful to have a classification of words into semantically similar sets. Standard approaches to classifying nouns, in terms of an "is-a" hierarchy, have proven hard to apply to unrestricted language. Is-a hierarchies are expensive to acquire by hand for anything but highly restricted domains, while attempts to automatically derive these hierarchies from existing dictionaries have been only partially successful (Chodorow, Byrd, and Heidorn 1985).

This paper describes an approach to classifying English words according to the predicate-argument structures they show in a corpus of text. The general idea is straightforward: in any natural language there are restrictions on what words can appear together in the same construction, and in particular, on what can be arguments of what predicates. For each noun, there is a restricted set of verbs that it appears as subject of or object of. For example, wine may be drunk, produced, and sold but not pruned. Each noun may therefore be characterized according to the verbs that it occurs with. Nouns may then be grouped according to the extent to which they appear in similar environments.

This basic idea of the distributional foundation of meaning is not new. Harris (1968) makes this "distributional hypothesis" central to his linguistic theory. His claim is that "the meaning of entities, and the meaning of grammatical relations among them, is related to the restriction of combinations of these entities relative to other entities." (Harris 1968:12). Sparck Jones (1986) takes a similar view. It is however by no means obvious that the distribution of words will directly provide a useful semantic classification, at least in the absence of considerable human intervention. The work that has been done based on Harris' distributional hypothesis (most notably, the work of the associates of the Linguistic String Project; see for example Hirschman, Grishman, and Sager 1975) unfortunately does not provide a direct answer, since the corpora used have been small (tens of thousands of words rather than millions) and the analysis has typically involved considerable intervention by the researchers. The stumbling block to any automatic use of distributional patterns has been that no sufficiently robust syntactic analyzer has been available.

This paper reports an investigation of automatic distributional classification of words in English, using a parser developed for extracting grammatical structures from unrestricted text (Hindle 1983). We propose a particular measure of similarity that is a function of mutual information estimated from text. On the basis of a six million word sample of Associated Press news stories, a classification of nouns was developed according to the predicates they occur with. This purely syntax-based similarity measure shows remarkably plausible semantic relations.

2. ANALYZING THE CORPUS

A 6 million word sample of Associated Press news stories was analyzed, one sentence at a time,
On the basis of a six million word sample of Associated Press news stories, a classification of nouns was developed according to the predicates they occur with. This purely syntax-based similarity measure shows remarkably plausible semantic relations. 268 2. ANALYZING THE CORPUS A 6 million word sample of Associated Press news stories was analyzed, one sentence at a time, SBAR I/I D N C PROTNS VS PRO I I I I I I I the land that t * sustains us CONJ NP i?)' • CN Q p D NPL I I I I I I and many of the products we S °A xvs 7AYx i PROTNS V PRO ThiS VS D N I I I I I I I I * use ? * are the result Figure 1. Parser output for a fragment of sentence (1). by a deterministic parser (Fidditch) of the sort originated by Marcus (1980). Fidditch provides a single syntactic analysis -- a tree or sequence of trees -- for each sentence; Figure 1 shows part of the output for sentence (1). (1) The clothes we wear, the food we eat, the air we breathe, the water we drink, the land that sustains us, and many of the products we use are the result of agricultural research. (March 22 1987) The parser aims to be non-committal when it is unsure of an analysis. For example, it is perfectly willing to parse an embedded clause and then leave it unattached. If the object or subject of a clause is not found, Fidditch leaves it empty, as in the last two clauses in Figure 1. This non-committal approach simply reduces the effective size of the sample. The aim of the parser is to produce an annotated surface structure, building constituents as large as it can, and reconstructing the underlying clause structure when it can. In sentence (1), six clauses are found. Their predicate-argument information may be coded as a table of 5-tuples, consisting of verb, surface subject, surface object, underlying subject, underlying object, as shown in Table 1. In the subject-verb-object table, the root form of the head of phrases is recorded, and the deep subject and object are used when available. (Noun phrases of the form a nl of n2 are coded as nl n2; an example is the first entry in Table 2). 269 Table 1. Predicate-argument relations found in an AP news sentence (1). verb subject object surface deep surface deep wear we eat we breathe we drink we sustain Otrace use we be land land Otrace food Otrace air Otrace water us result The parser's analysis of sentence (1) is far from perfect: the object of wear is not found, the object of use is not found, and the single element land rather than the conjunction of clothes, food, air, water, land, products is taken to be the subject of be. Despite these errors, the analysis is succeeds in discovering a number of the correct predicate-argument relations. The parsing errors that do occur seem to result, for the current purposes, in the omission of predicate-argument relations, rather than their misidentification. This makes the sample less effective than it might be, but it is not in general misleading. (It may also skew the sample to the extent that the parsing errors are consistent.) The analysis of the 6 million word 1987 AP sample yields 4789 verbs in 274613 clausal structures, and 267zt2 head nouns. This table of predicate-argument relations is the basis of our similarity metric. 3. TYPICAL ARGUMENTS For any of verb in the sample, we can ask what nouns it has as subjects or objects. Table 2 shows the objects of the verb drink that occur (more than once) in the sample, in effect giving the answer to the question "what can you drink?" Table 2. Objects of the verb drink. 
    OBJECT          COUNT   WEIGHT
    bunch beer      2       12.34
    tea             4       11.75
    Pepsi           2       11.75
    champagne       4       11.75
    liquid          2       10.53
    beer            5       10.20
    wine            2       9.34
    water           7       7.65
    anything        3       5.15
    much            3       2.54
    it              3       1.25
    <SOME AMOUNT>   2       1.22

This list of drinkable things is intuitively quite good. The objects in Table 2 are ranked not by raw frequency, but by a cooccurrence score listed in the last column. The idea is that, in ranking the importance of noun-verb associations, we are interested not in the raw frequency of cooccurrence of a predicate and argument, but in their frequency normalized by what we would expect. More is to be learned from the fact that you can drink wine than from the fact that you can drink it, even though there are more clauses in our sample with it as an object of drink than with wine.

To capture this intuition, we turn, following Church and Hanks (1989), to "mutual information" (see Fano 1961). The mutual information of two events, I(x y), is defined as follows:

    I(x y) = log2 [ P(x y) / (P(x) P(y)) ]

where P(x y) is the joint probability of events x and y, and P(x) and P(y) are the respective independent probabilities. When the joint probability P(x y) is high relative to the product of the independent probabilities, I is positive; when the joint probability is relatively low, I is negative. We use the observed frequencies to derive a cooccurrence score Cobj (an estimate of mutual information) defined as follows:

    Cobj(n v) = log2 [ (f(n v) / N) / ((f(n) / N) (f(v) / N)) ]

where f(n v) is the frequency of noun n occurring as object of verb v, f(n) is the frequency of the noun n occurring as argument of any verb, f(v) is the frequency of the verb v, and N is the count of clauses in the sample. (Csubj(n v) is defined analogously.) Calculating the cooccurrence weight for drink, shown in the third column of Table 2, gives us a reasonable ranking of terms, with it near the bottom.

Multiple Relationships

For any two nouns in the sample, we can ask what verb contexts they share. The distributional hypothesis is that nouns are similar to the extent that they share contexts. For example, Table 3 shows all the verbs which wine and beer can be objects of, highlighting the three verbs they have in common. The verb drink is the key common factor. There are of course many other objects that can be sold, but most of them are less alike than wine or beer because they can't also be drunk. So for example, a car is an object that you can have and sell, like wine and beer, but you do not -- in this sample (confirming what we know from the meanings of the words) -- typically drink a car.

4. NOUN SIMILARITY

We propose the following metric of similarity, based on the mutual information of verbs and arguments. Each noun has a set of verbs that it occurs with (either as subject or object), and for each such relationship, there is a mutual information value. For each noun and verb pair, we get two mutual information values, for subject and object, Csubj(vi nj) and Cobj(vi nj). We define the object similarity of two nouns with respect to a verb in terms of the minimum shared cooccurrence weights, as in (2). The subject similarity of two nouns, SIMsubj, is defined analogously. Now define the overall similarity of two nouns as the sum across all verbs of the object similarity and the subject similarity, as in (3).

(2) Object similarity.

    SIMobj(vi nj nk) =
        min(Cobj(vi nj), Cobj(vi nk)),       if Cobj(vi nj) > 0 and Cobj(vi nk) > 0
        abs(max(Cobj(vi nj), Cobj(vi nk))),  if Cobj(vi nj) < 0 and Cobj(vi nk) < 0
        0,                                   otherwise

(3) Noun similarity.

    SIM(n1 n2) = SUM over verbs vi of [ SIMsubj(vi n1 n2) + SIMobj(vi n1 n2) ]
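To make the definitions concrete, the cooccurrence score and the object similarity can be written out directly. The Python sketch below is ours, not Hindle's code, and the counts are invented toy values:

    import math

    # Toy counts standing in for the AP sample: f_nv[(n, v)] is how often
    # noun n occurs as object of verb v; f_n and f_v are marginals; N is
    # the number of clauses. All values here are invented for illustration.
    N = 1000
    f_nv = {("wine", "drink"): 2, ("beer", "drink"): 5}
    f_n = {"wine": 50, "beer": 80}
    f_v = {"drink": 20}

    def c_obj(n, v):
        """Cobj(n v) = log2( (f(n,v)/N) / ((f(n)/N) * (f(v)/N)) )."""
        joint = f_nv.get((n, v), 0) / N
        if joint == 0:
            return None  # unseen pair: no score
        return math.log2(joint / ((f_n[n] / N) * (f_v[v] / N)))

    def sim_obj(v, n1, n2):
        """Equation (2): min of shared positive weights, etc."""
        c1, c2 = c_obj(n1, v), c_obj(n2, v)
        if c1 is None or c2 is None:
            return 0.0
        if c1 > 0 and c2 > 0:
            return min(c1, c2)
        if c1 < 0 and c2 < 0:
            return abs(max(c1, c2))
        return 0.0

    # Equation (3) sums sim_subj + sim_obj over all verbs; with only
    # object data for one verb here, the sum reduces to a single term.
    print(round(sim_obj("drink", "wine", "beer"), 2))  # 1.0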
The metric of similarity in (2) and (3) is but one of many that might be explored, but it has some useful properties. Unlike an inner product measure, it is guaranteed that a noun will be most similar to itself. And unlike cosine distance, this metric is roughly proportional to the number of different verb contexts that are shared by two nouns.

Table 3. Verbs taking wine and beer as objects.

    VERB          wine            beer
                  count  weight   count  weight
    drug          2      12.26
    sit around    1      10.29
    smell         1      10.07
    contaminate   1      9.75
    rest          2      9.56
    drink         2      9.34     5      10.20
    rescue        1      7.07
    purchase      1      6.79
    lift          1      6.72
    prohibit      1      6.69
    love          1      6.33
    deliver       1      5.82
    buy           3      5.44
    name          1      5.42
    keep          2      4.86
    offer         1      4.13
    begin         1      4.09
    allow         1      3.90
    be on         1      3.79
    sell          1      4.21     1      3.75
    's            2      2.84
    make          1      1.27
    have          1      0.84     2      1.38

Using the definition of similarity in (3), we can begin to explore nouns that show the greatest similarity. Table 4 shows the ten nouns most similar to boat, according to our similarity metric. The first column lists the noun which is similar to boat. The second column in each table shows the number of instances that the noun appears in a predicate-argument pair (including verb environments not in the list in the fifth column). The third column is the number of distinct verb environments (either subject or object) that the noun occurs in which are shared with the target noun of the table. Thus, boat is found in 79 verb environments. Of these, ship shares 25 common environments (ship also occurs in many other unshared environments). The fourth column is the measure of similarity of the noun with the target noun of the table, SIM(n1 n2), as defined above. The fifth column shows the common verb environments, ordered by cooccurrence score, C(vi nj), as defined above. An underscore before the verb indicates that it is a subject environment; a following underscore indicates an object environment. In Table 4, we see that boat is a subject of cruise, and object of sink. In the list for boat, in column five, cruise appears earlier in the list than carry because cruise has a higher cooccurrence score. A - before a verb means that the cooccurrence score is negative -- i.e. the noun is less likely to occur in that argument context than expected.

For many nouns, encouragingly appropriate sets of semantically similar nouns are found. Thus, of the ten nouns most similar to boat (Table 4), nine are words for vehicles; the most similar noun is the near-synonym ship. The ten nouns most similar to treaty (agreement, plan, constitution, contract, proposal, accord, amendment, rule, law, legislation) seem to make up a cluster involving the notions of agreement and rule. Table 5 shows the ten nouns most similar to legislator, again a fairly coherent set. Of course, not all nouns fall into such neat clusters: Table 6 shows a quite heterogeneous group of nouns similar to table, though even here the most similar word (floor) is plausible. We need, in further work, to explore both automatic and supervised means of discriminating the semantically relevant associations from the spurious.

Table 4. Nouns similar to boat.
    Noun        f(n)   verbs  SIM
    boat        153    79     370.16
    ship        353    25     79.02
    plane       445    26     68.85
    bus         104    20     64.49
    jet         153    17     62.77
    vessel      172    18     57.14
    truck       146    21     56.71
    car         414    24     52.22
    helicopter  151    14     50.66
    ferry       37     10     39.76
    man         1396   30     38.31

Verbs (shared environments, column five), listed by noun:

boat: _cruise, keel_, _plow, sink_, drift_, step off_, step from_, dock_, right_, submerge_, _near, hoist_, _intercept, charter_, stay on_, buzz_, stabilize_, _sit on, intercept_, hijack_, park_, _be from, rock_, get off_, board_, miss_, stay with_, _catch, yield_, bring in_, seize_, pull_, grab_, _hit, exclude_, weigh_, _issue, demonstrate_, _force, _cover, supply_, _name, _attack, damage_, launch_, _provide, appear_, _carry, _go to, look at_, attack_, _reach, _be on, watch_, use_, return_, _ask, destroy_, fire_, be on_, describe_, charge_, include_, be in_, report_, identify_, expect_, cause_, _'s, 's_, _take, _make, -be_, -say_, -give_, see_, -_be, -have_, -get_

ship: _near, charter_, hijack_, get off_, buzz_, intercept_, board_, damage_, sink_, seize_, _carry, attack_, -have_, _be on, _hit, destroy_, watch_, _go to, -give_, _ask, -be_, be on_, -say_, identify_, see_

plane: hijack_, intercept_, charter_, board_, get off_, _near, _attack, _carry, seize_, -have_, _be on, _catch, destroy_, _hit, be on_, damage_, use_, -be_, _go to, _reach, -say_, identify_, _provide, expect_, cause_, see_

bus: step off_, hijack_, park_, get off_, board_, _catch, seize_, _carry, attack_, _be on, be on_, charge_, expect_, -have_, _take, -say_, _make, include_, be in_, -_be

jet: charter_, intercept_, hijack_, park_, board_, _hit, seize_, _attack, _force, _carry, use_, describe_, include_, _be on, -_be, _make, -say_

vessel: right_, dock_, intercept_, sink_, seize_, _catch, _attack, _carry, attack_, -have_, describe_, identify_, use_, report_, -be_, -say_, expect_, -give_

truck: park_, intercept_, stay with_, _be from, _hit, seize_, damage_, _carry, _reach, use_, return_, destroy_, attack_, -_be, be in_, _take, -have_, -say_, _make, include_, see_

car: step from_, park_, board_, _hit, _catch, pull_, _carry, damage_, destroy_, watch_, miss_, return_, -give_, -be_, -_be, be in_, -have_, -say_, charge_, _'s, identify_, see_, _take, -get_

helicopter: hijack_, park_, board_, bring in_, _catch, _attack, watch_, use_, return_, fire_, _be on, include_, _make, -_be

ferry: dock_, sink_, board_, pull_, _carry, use_, be on_, cause_, _take, -say_

man: hoist_, bring in_, stay with_, _attack, grab_, exclude_, _catch, charge_, -have_, identify_, describe_, -give_, _be from, appear_, _go to, _carry, _reach, _take, pull_, _hit, -get_, _'s, attack_, cause_, _make, -_be, see_, _cover, _name, _ask

Table 5. Nouns similar to legislator.

    Noun          f(n)  verbs  SIM
    legislator    45    35     165.85
    Senate        366   11     40.19
    committee     697   20     39.97
    organization  351   16     34.29
    commission    389   17     34.28
    legislature   86    12     34.12
    delegate      132   13     33.65
    lawmaker      176   14     32.78
    panel         253   12     31.23
    Congress      827   15     31.20
    side          327   15     30.00

Verbs (shared environments, column five), listed by noun:

legislator: cajole_, thump_, _grasp, convince_, inform_, address_, _vote, _predict, _address, _withdraw, _adopt, _approve, criticize_, _criticize, represent_, _reach, write_, _reject, _accuse, support_, go to_, _consider, _win, pay_, allow_, tell_, _hold, call_, _kill, _call, give_, _get, say_, _take, -_be

Senate: _vote, address_, _approve, inform_, _reject, go to_, _consider, _adopt, tell_, -_be, give_

committee: _vote, _approve, go to_, inform_, _reject, tell_, -_be, convince_, _hold, address_, _consider, _address, _adopt, call_, criticize_, allow_, support_, _accuse, give_, _call

organization: _adopt, inform_, address_, go to_, _predict, support_, _reject, represent_, _call, _approve, -_be, allow_, _take, say_, _hold, tell_

commission: _reject, _vote, criticize_, convince_, inform_, allow_, _accuse, _address, _adopt, -_be, _hold, _approve, give_, go to_, tell_, _consider, pay_

legislature: convince_, _approve, criticize_, _vote, _address, _hold, _consider, -_be, call_, give_, say_, _take

delegate: _vote, inform_, _approve, _adopt, allow_, _reject, _consider, _reach, tell_, give_, -_be, call_, say_

lawmaker: _criticize, _approve, _vote, _predict, tell_, _reject, _accuse, -_be, call_, give_, _consider, _win, _get, _take

panel: _vote, _approve, convince_, tell_, _reject, _adopt, _criticize, _consider, -_be, _hold, give_, _reach

Congress: inform_, _approve, _vote, tell_, _consider, convince_, go to_, -_be, address_, give_, criticize_, _address, _reach, _adopt, _hold

side: _reach, _predict, criticize_, _withdraw, _consider, go to_, _hold, -_be, _accuse, support_, represent_, tell_, give_, allow_, _take

Table 6. Nouns similar to table.
    Noun        f(n)  verbs  SIM
    table       66    30     181.43
    floor       94    6      30.01
    farm        80    8      22.94
    scene       135   10     20.85
    America     156   7      19.68
    experience  129   5      19.04
    river       95    4      18.73
    town        195   6      18.68
    side        327   8      18.57
    hospital    190   7      18.10
    House       453   6      17.84

Verbs (shared environments, column five), listed by noun:

table: hide beneath_, convolute_, memorize_, sit at_, sit across_, redo_, structure_, sit around_, litter_, _carry, lie on_, go from_, _hold, wait_, come to_, return to_, turn_, approach_, cover_, be on_, share_, publish_, claim_, mean_, go to_, raise_, leave_, -have_, do_, be_

floor: litter_, lie on_, cover_, be on_, come to_, go to_

farm: _carry, be on_, cover_, return to_, turn_, go to_, leave_, -have_

scene: approach_, return to_, mean_, go to_, be on_, turn_, come to_, leave_, do_, be_

America: go from_, come to_, return to_, claim_, go to_, -have_, do_

experience: structure_, share_, claim_, publish_, be_

river: sit across_, mean_, be on_, leave_

town: litter_, approach_, go to_, return to_, come to_, leave_

side: lie on_, be on_, go to_, _hold, -have_, cover_, leave_, come to_

hospital: go from_, come to_, cover_, return to_, go to_, leave_, -have_

House: return to_, claim_, come to_, go to_, cover_, leave_

Reciprocally most similar nouns

We can define "reciprocally most similar" nouns or "reciprocal nearest neighbors" (RNN) as two nouns which are each other's most similar noun. This is a rather stringent definition; under this definition, boat and ship do not qualify because, while ship is the most similar to boat, the word most similar to ship is not boat but plane (boat is second). For a sample of all the 319 nouns of frequency greater than 100 and less than 200, we asked whether each has a reciprocally most similar noun in the sample. For this sample, 36 had a reciprocal nearest neighbor. These are shown in Table 7 (duplicates are shown only once).
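The RNN test itself is simple to state procedurally. A small Python illustration (ours; the similarity values other than SIM(boat, ship) are invented):

    # Reciprocal-nearest-neighbor check over a toy similarity table;
    # sim[a][b] stands in for SIM(a, b) as computed by equation (3).
    sim = {
        "boat":  {"ship": 79.02, "plane": 68.85},
        "ship":  {"plane": 85.0, "boat": 79.02},   # plane value invented
        "plane": {"ship": 85.0, "boat": 68.85},
    }

    def nearest(noun):
        return max(sim[noun], key=sim[noun].get)

    def reciprocal_nearest_neighbors(nouns):
        pairs = set()
        for n in nouns:
            m = nearest(n)
            if nearest(m) == n:
                pairs.add(tuple(sorted((n, m))))
        return pairs

    # boat/ship fail the test (ship's nearest is plane); ship/plane pass.
    print(reciprocal_nearest_neighbors(sim))  # {('plane', 'ship')}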
Table 7. A sample of reciprocal nearest neighbors.

    RNN                        word counts
    bomb - device              (192 101)
    ruling - decision          (192 761)
    street - road              (188 145)
    protest - strike           (187 254)
    list - field               (184 104)
    debt - deficit             (183 351)
    guerrilla - rebel          (180 314)
    fear - concern             (176 355)
    higher - lower             (175 78)
    freedom - right            (164 609)
    battle - fight             (163 131)
    jet - plane                (153 445)
    shot - bullet              (152 35)
    truck - car                (146 414)
    researcher - scientist     (142 112)
    peace - stability          (133 64)
    property - land            (132 119)
    star - editor              (131 85)
    trend - pattern            (126 58)
    quake - earthquake         (126 120)
    economist - analyst        (120 318)
    remark - comment           (115 385)
    data - information         (115 505)
    explosion - blast          (115 52)
    tie - relation             (114 251)
    protester - demonstrator   (110 99)
    college - school           (109 380)
    radio - IRNA               (107 18)
    2 - 3                      (105 90)

The list in Table 7 shows quite a good set of substitutable words, many of which are near synonyms. Some are not synonyms but are nevertheless closely related: economist - analyst, 2 - 3. Some we recognize as synonyms in news reporting style: explosion - blast, bomb - device, tie - relation. And some are hard to interpret. Is the close relation between star and editor some reflection of news reporters' world view? Is list most like field because neither one has much meaning by itself?

5. DISCUSSION

Using a similarity metric derived from the distribution of subjects, verbs and objects in a corpus of English text, we have shown the plausibility of deriving semantic relatedness from the distribution of syntactic forms. This demonstration has depended on: 1) the availability of relatively large text corpora; 2) the existence of parsing technology that, despite a large error rate, allows us to find the relevant syntactic relations in unrestricted text; and 3) (most important) the fact that the lexical relations involved in the distribution of words in syntactic structures are an extremely strong linguistic constraint.

A number of issues will have to be confronted to further exploit these structurally-mediated lexical constraints, including:

Polysemy. The analysis presented here does not distinguish among related senses of the (orthographically) same word. Thus, in the table of words similar to table, we find at least two distinct senses of table conflated; the table one can hide beneath is not the table that can be commuted or memorized. Means of separating senses need to be developed.

Empty words. Not all nouns are equally contentful. For example, section is a general word that can refer to sections of all sorts of things. As a result, the ten words most similar to section (school, building, exchange, book, house, ship, some, headquarter, industry, office) are a semantically diverse list of words. The reason is clear: section is semantically a rather empty word, and the selectional restrictions on its cooccurrence depend primarily on its complement. You might read a section of a book but not, typically, a section of a house. It would be possible to predetermine a set of empty words in advance of analysis, and thus avoid some of the problem presented by empty words. But it is unlikely that the class is well-defined. Rather, we expect that nouns could be ranked, on the basis of their distribution, according to how empty they are; this is a matter for further exploration.

Sample size. The current sample is too small; many words occur too infrequently to be adequately sampled, and it is easy to think of usages that are not represented in the sample. For example, it is quite expected to talk about brewing beer, but the pair of brew and beer does not appear in this sample.
Part of the reason for missing selectional pairs is surely the restricted nature of the AP news sublanguage.

Further analysis. The similarity metric proposed here, based on subject-verb-object relations, represents a considerable reduction in the information available in the subject-verb-object table. This reduction is useful in that it permits, for example, a clustering analysis of the nouns in the sample, and for some purposes (such as demonstrating the plausibility of the distribution-based metric) such clustering is useful. However, it is worth noting that the particular information about, for example, which nouns may be objects of a given verb, should not be discarded, and is in itself useful for analysis of text.

In this study, we have looked only at the lexical relationship between a verb and the head nouns of its subject and object. Obviously, there are many other relationships among words -- for example, adjectival modification or the possibility of particular prepositional adjuncts -- that can be extracted from a corpus and that contribute to our lexical knowledge. It will be useful to extend the analysis presented here to other kinds of relationships, including more complex kinds of verb complementation, noun complementation, and modification both preceding and following the head noun. But in expanding the number of different structural relations noted, it may become less useful to compute a single-dimensional similarity score of the sort proposed in Section 4. Rather, the various lexical relations revealed by parsing a corpus will be available to be combined in many different ways yet to be explored.
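As a concrete (and purely illustrative) rendering of this point, the sketch below tabulates noun contexts from parsed triples so that new relation types can be added without changing the data structure; the triple format and all names are our assumptions, not the paper's implementation:

```python
from collections import Counter, defaultdict

def cooccurrence_table(triples):
    """Tabulate noun contexts from parsed (relation, verb, noun) triples,
    e.g. ("subj", "brew", "company") or ("obj", "brew", "beer").  Extending
    the analysis to other relations only means emitting more triple kinds,
    such as ("adj", ...) or ("prep_on", ...)."""
    contexts = defaultdict(Counter)
    for relation, verb, noun in triples:
        contexts[noun][(relation, verb)] += 1
    return contexts

def shared_contexts(contexts, noun1, noun2):
    # The contexts two nouns have in common -- the raw material from which
    # any similarity score over the table would be computed.
    return set(contexts[noun1]) & set(contexts[noun2])
```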
DETERMINISTIC LEFT TO RIGHT PARSING OF TREE ADJOINING LANGUAGES*

Yves Schabes
Dept. of Computer & Information Science
University of Pennsylvania
Philadelphia, PA 19104-6389, USA
[email protected]

K. Vijay-Shanker
Dept. of Computer & Information Science
University of Delaware
Newark, DE 19716, USA
[email protected]

Abstract

We define a set of deterministic bottom-up left to right parsers which analyze a subset of Tree Adjoining Languages. The LR parsing strategy for Context Free Grammars is extended to Tree Adjoining Grammars (TAGs). We use a machine, called Bottom-up Embedded Push Down Automaton (BEPDA), that recognizes in a bottom-up fashion the set of Tree Adjoining Languages (and exactly this set). Each parser consists of a finite state control that drives the moves of a Bottom-up Embedded Pushdown Automaton. The parsers handle deterministically some context-sensitive Tree Adjoining Languages. In this paper, we informally describe the BEPDA; then, given a parsing table, we explain the LR parsing algorithm. We then show how to construct an LR(0) parsing table (no lookahead). An example of a context-sensitive language recognized deterministically is given. Then, we explain informally the construction of SLR(1) parsing tables for BEPDA. We conclude with a discussion of our parsing method and current work.

1 Introduction

LR(k) parsers for Context Free Grammars (Knuth, 1965) consist of a finite state control (constructed given a CFG) that deterministically drives a push down stack with k lookahead symbols, while scanning the input from left to right. It has been shown that they recognize exactly the set of languages recognized by deterministic push down automata. LR(k) parsers for CFGs have been proven useful for compilers as well as, recently, for natural language processing. For natural language processing, although LR(k) parsers are not powerful enough, conflicts between multiple choices are solved by pseudo-parallelism (Lang, 1974, Tomita, 1987). This gives rise to a class of powerful yet efficient parsers for natural languages. It is in this context that we study deterministic (LR(k)-style) parsing of TAGs.

The set of Tree Adjoining Languages is a strict superset of the set of Context Free Languages (CFLs). For example, the cross serial dependency construction in Dutch can be generated by a TAG.¹ Walters (1970), Révész (1971), and Turnbull and Lee (1979) investigated deterministic parsing of the class of context-sensitive languages. However, they used Turing machines, which recognize languages much more powerful than Tree Adjoining Languages. So far no deterministic bottom-up parser has been proposed for any member of the class of the so-called "mildly context sensitive" formalisms (Joshi, 1985) in which Tree Adjoining Grammars fall.² Since the set of Tree Adjoining Languages (TALs) is a strict superset of the set of Context Free Languages, in order to define LR-type parsers for TAGs we need to use a more powerful configuration than a finite state automaton driving a push down stack. We investigate the design of deterministic left to right bottom up parsers for TAGs in which a finite state control drives the moves of a Bottom-up Embedded Push Down Stack. The class of corresponding non-deterministic automata recognizes exactly the set of TALs.

*The first author is partially supported by DARPA grant N00014-85-K0018, ARO grant DAAL03-89-C-0031PRI and NSF grant IRI84-10413 A02. We are extremely grateful to Bernard Lang and David Weir for their valuable suggestions.
We focus our attention on showing how a bottom-up embedded pushdown automaton is deterministically driven given a parsing table. To illustrate the building of a parsing table, we consider the simplest case, i.e. the building of LR(0) items and the corresponding LR(0) parsing table for a given TAG. An example for a TAG generating a context-sensitive language is given in Figure 5. Finally, we consider the construction of SLR(1) parsing tables. We assume that the reader is familiar with TAGs. We refer the reader to Joshi (1987) for an introduction to TAGs. We will assume that the trees can be combined by adjunction only.

¹The parsers that we develop in this paper can parse these constructions deterministically (see Figure 5).
²Tree Adjoining Grammars, Modified Head Grammars, Linear Indexed Grammars and Categorial Grammars (all of which generate the same subclass of context-sensitive languages) fall in the class of the so-called "mildly context sensitive" formalisms. The Embedded Push Down Automaton recognizes exactly this set of languages (Vijay-Shanker 1987).

2 Automata Models of Tags

Before we discuss the Bottom-up Embedded Pushdown Automaton (BEPDA) which we use in our parser, we will introduce the Embedded Pushdown Automaton (EPDA). An EPDA is similar to a pushdown automaton (PDA) except that the storage of an EPDA is a sequence of pushdown stores. A move of an EPDA (see Figure 1) allows for the introduction of bounded pushdowns above and below the current top pushdown. Informally, this move can be thought of as corresponding to the adjoining operation in TAGs, with the pushdowns introduced above and below the current pushdown reflecting the tree structure to the left and right of the foot node of an auxiliary tree being adjoined. The spine (path from root to foot node) is left on the previous stack.

The generalization of a PDA to an EPDA whose storage is a sequence of pushdowns captures the generalization of the nature of the derived trees of a CFG to the nature of derived trees of a TAG. From Thatcher (1971), we can observe that the path set of a CFG (i.e. the set of all paths from root to leaves in trees derived by a CFG) is a regular set. On the other hand, the path set of a TAG is a CFL. This follows from the nature of the adjoining operation of TAGs, which suggests stacking along the path from root to a leaf. For example, as we traverse down a path in a tree γ (in Figure 1), if adjunction, say by β, occurs, then the spine of β has to be traversed before we can resume the path in γ.

Figure 1: Embedded Pushdown Automaton

3 Bottom-up Embedded Pushdown Automaton³

For any TAG G, an EPDA can be designed such that its moves correspond to a top-down parse of a string generated by G (the EPDA characterizes exactly the set of Tree Adjoining Languages, Vijay-Shanker, 1987). If we wish to design a bottom-up parser, say by adopting a shift reduce parsing strategy, we have to consider the nature of a reduce move of such a parser (i.e. using EPDA storage). This reduce move, for example applied after completely considering an auxiliary tree, must be allowed to 'remove' some bounded pushdowns above and below some (not necessarily bounded) pushdown. Thus (see Figure 2), the reduce move is like the dual of the wrapping move performed by an EPDA. Therefore, we introduce the Bottom-up Embedded Pushdown Automaton (BEPDA), whose moves are the duals of an EPDA's.
The two moves of a BEPDA are the unwrap move depicted in Figure 2 -- which is an inverse of the wrap move of an EPDA -- and the introduction of new pushdowns on top of the previous pushdown (push move). In an EPDA, when the top pushdown is emptied, the next pushdown automatically becomes the new top pushdown. The inverse of this step is to allow for the introduction of new pushdowns above the previous top pushdown. These are the two moves allowed in a BEPDA; the various steps in our parsers are sequences of one or more such moves. Due to space constraints, we do not show the equivalence between BEPDA and EPDA apart from noting that the moves of the two machines are duals of each other.

³The need to use a bottom-up version of an EPDA in LR style parsing of TAGs was suggested to us by Bernard Lang and David Weir. Their suggestions also played an instrumental role in the definition of the BEPDA, for example the restriction on the moves allowed.

Figure 2: Bottom-up Embedded Pushdown Automaton (the unwrap and push moves)

4 LR Parsing Algorithm

An LR parser consists of an input, an output, a sequence of stacks, a driver program, and a parsing table that has three parts (ACTION, GOTOright and GOTOfoot). The parsing program is the same for all LR parsers; only the parsing tables change from one grammar to another. The parsing program reads characters from the input one character at a time. The program uses the sequence of stacks to store states.

The parsing table consists of three parts, a parsing action function ACTION and two goto functions GOTOright and GOTOfoot. The program driving the LR parser first determines the state i currently on top of the top stack and the current input token a_r. Then it consults the ACTION table entry for state i and token a_r. The entry in the action table can have one of the following five values:

• Shift j (s j), where j is a state;
• Resume Right of δ at address dot (rsδ@dot), where δ is an elementary tree and dot is the address of a node in δ;
• Reduce Root of the auxiliary tree β in which the last adjunction on the spine was performed at address star (rdβ@star);
• Accept (acc);
• Error: no action applies and the parser rejects the input string (errors are associated with empty table entries).

The functions GOTOright and GOTOfoot take a state i and an auxiliary tree β and produce a state j. An example of a parsing table for a grammar generating L = {a^n b^n e c^n d^n | n ≥ 0} is given in Figure 5.

We denote an instantaneous description of the BEPDA by a pair whose first component is the sequence of pushdowns and whose second component is the unexpanded input:

(||t_m ... t_1|| ... ||s_l ... s_1, a_r a_{r+1} ... a_n $)

In the above sequence of pushdowns, the stacks are piled up from left to right. || stands for the bottom of a stack, s_1 is the top element of the top stack, s_l is the bottom element of the top stack, t_1 is the top element of the bottom stack and t_m is the bottom element of the bottom stack. The initial configuration of the parser is set to:

(||0, a_1 ... a_n $)

where 0 is the start state and a_1 ... a_n $ is the input string to be read with an end marker ($).
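To fix intuitions about this storage, the following Python sketch (ours, not the authors') models the sequence of pushdowns and the driver loop; the Configuration layout, apply_move and the table format are assumptions for illustration only:

```python
# The ACTION table is assumed to be a dict mapping (state, token) to an
# entry tuple such as ("shift", j), ("resume_right", tree, dot),
# ("reduce_root", tree, star), ("accept",) or ("error",).
from dataclasses import dataclass, field

@dataclass
class Configuration:
    stacks: list = field(default_factory=lambda: [[0]])  # sequence of pushdowns, state 0 on top
    pos: int = 0                                          # index of the current input token

def drive(action_table, tokens, apply_move):
    """Driver loop: look up ACTION[state, token] and delegate every
    non-terminating entry to `apply_move`, which is assumed to implement
    the shift / resume right / reduce root moves described below."""
    config = Configuration()
    while True:
        state = config.stacks[-1][-1]      # top element of the top stack
        token = tokens[config.pos]         # current input token; tokens ends with '$'
        entry = action_table.get((state, token), ("error",))
        if entry[0] == "accept":
            return True
        if entry[0] == "error":
            return False
        apply_move(config, entry)          # mutates stacks and pos per the move
```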
Suppose the parser reaches the configuration:

(||t_m ... t_1|| ... ||s_l ... s_1, a_r a_{r+1} ... a_n $)

The next move of the parser is determined by reading a_r, the current input token, and the state i on top of the sequence of stacks, and then consulting the parsing table entry for ACTION[i, a_r]. The parser keeps applying the move associated with ACTION[i, a_r] until acceptance or error occurs. The following moves are possible:

(i) ACTION[i, a_r] = shift state j (s j). The parser executes a push move, entering the configuration:

(||t_m ... t_1|| ... ||s_l ... s_1||j, a_{r+1} ... a_n $)

(ii) ACTION[i, a_r] = resume right of δ at address dot (rsδ@dot). The parser is coming to the right and below of the node at address dot in δ, say η, on which an auxiliary tree has been adjoined. The information identifying the auxiliary tree is in the sequence of stacks and must be recovered. There are two cases:

Case 1: η does not subsume a foot node. Let k be the number of terminal symbols subsumed by η. Before applying this move, the current configuration looks like:

(|| ... ||i_k|| ... ||i_1||i, a_r ... a_n $)

The k top stacks are merged into one stack and the stack ||m is pushed on top of it, where m = GOTOfoot[i_k, β] for some auxiliary tree β that can be adjoined in δ at η, and the parser enters the configuration:

(|| ... ||i_k||i_{k-1} ... i_1 i||m, a_r ... a_n $)

Case 2: η subsumes the foot node of δ. Let k (resp. k') be the number of terminal symbols to the right (resp. to the left) of the foot node subsumed by η. Before applying this move, the configuration looks like:

(|| ... ||n_{k'+1}|| ... ||n_1||s_1 ... s_z||i_k|| ... ||i_1||i, a_r ... a_n $)

The k' stacks below the (k+2)th stack from the top as well as the k+1 top stacks are rewritten onto the (k+2)th stack, and the stack ||m is pushed on top of it, where m = GOTOfoot[n_{k'+1}, β] for some auxiliary tree β that can be adjoined in δ at η, and the parser enters the configuration:

(|| ... ||n_{k'+1}||s_1 ... s_z n_k ... n_1 i_k ... i_1 i||m, a_r ... a_n $)

(iii) ACTION[i, a_r] = reduce root of an auxiliary tree β in which the last adjunction on the spine was performed at address star (rdβ@star). The parser has finished the recognition of the auxiliary tree β. It must remove all information about β and continue the recognition of the tree in which β was adjoined. The parser executes an unwrap move. Let k (resp. k') be the number of terminal symbols to the left (resp. to the right) of the foot node of β. Let η be the node at address star in β (η = nil if star is not set). Let p be the number of terminal symbols to the left of the foot node subsumed by η (p = 0 if η = nil). p + k' + 1 symbols are popped from the top of the sequence of stacks. Then k - p single-element stacks below the new top stack are unwrapped. Let j be the new top element of the top stack. Let m = GOTOright[j, β]. j is popped and the single-element stack ||m is pushed on top of the top stack.

By keeping track of the auxiliary trees being reduced, it is possible to output a parse instead of just acceptance or an error. The parser recognizes the derived tree inside out: it extracts recursively the innermost auxiliary tree that has no adjunction performed in it.

5 LR(0) Parsing Tables

This section explains how to construct an LR(0) parsing table given a TAG. The construction is an extension of the one used for CFGs. Similarly to Schabes and Joshi (1988), we extend the notion of dotted rules to trees. We define the closure operations that correspond to adjunction. Then we explain how transitions between states are defined.
We give in Figure 5 an example of a finite state automaton used to build the parsing table for a TAG (see Figure 5) generating a context-sensitive language. We first explain preliminary concepts (originally defined to construct an Earley-type parser for TAGs) that will be used by the algorithm. Dotted rules are extended to trees. Then we recall a tree traversal that the algorithm will mimic in order to scan the input from left to right.

A dotted symbol is defined as a symbol associated with a dot above or below and either to the left or to the right of it. The four positions of the dot are annotated by la, lb, ra, rb (resp. left above, left below, right above, right below). In practice, only two dot positions can be used (to the left and to the right of a node). However, for the sake of simplicity, we will use four different dot positions. A dotted tree is defined as a tree with exactly one dotted symbol. Furthermore, some nodes in the dotted tree can be marked with a star. A star on a node expresses the fact that an adjunction has been performed on the corresponding node. A dotted tree is referred to as [α, dot, pos, stars], where α is a tree, dot is the address of the dot, pos is the position of the dot (la, lb, ra or rb) and stars is a list of nodes in α annotated by a star.

Given a dotted tree with the dot above and to the left of the root, we define a tree traversal of a dotted tree (as shown in Figure 3) that will enable us to scan the frontier of an elementary tree from left to right while trying to recognize possible adjunctions between the above and below positions of the dot of interior nodes.

Figure 3: Left to Right Tree Traversal

A state in the finite state automaton is defined to be a set of dotted trees closed under the following operations: Adjunction Prediction, Left Completion, Move Dot Down, Move Dot Up and Skip Node (see Figure 4).⁴ Adjunction Prediction predicts all possible auxiliary trees that can be adjoined at a given node. Left Completion occurs when an auxiliary tree is recognized up to its foot node. All trees in which that tree can be adjoined are pulled back, with the node on which adjunction has been performed added to the list of stars. Move Dot Down moves the dot down the links. Move Dot Up moves the dot up the links. Skip Node moves the dot up on the right hand side of a node on which no adjunction has been performed.

⁴These operations correspond to processes in the Earley-type parser for TAGs.

Figure 4: Closure Operations

All the states in the finite state automaton (FSA) must be closed under the closure operations (a generic sketch of such a closure computation is given below, after the transition rules and table actions). The FSA is built as follows. In state set 0, we put all initial trees with a dot to the left and above the root. The state is then closed. Then recursively we build new states with the following transitions (we refer to Figure 5 for an example of such a construction).

• A transition on a (where a is a terminal symbol) from S_i to S_j occurs if and only if in S_i there is a dotted tree [δ, dot, la, stars] in which the dot is to the left and above a terminal symbol a; S_j consists of the closure of the set of dotted trees of the form [δ, dot, ra, stars].
• A transition on β_right from S_i to S_j occurs iff in S_i there is a dotted tree [δ, dot, rb, stars] such that the dot is to the right and below a node on which β can be adjoined; S_j consists of the closure of the set of dotted trees of the form [δ, dot, ra, stars']. If the dotted node of [δ, dot, rb, stars] is not on the spine⁵ of δ, stars' consists of all the nodes in stars that strictly dominate the dotted node. When the dotted node is on the spine, stars' consists of all the nodes in stars that strictly dominate the dotted node, if there are some; otherwise stars' = {dot}.

• A Skip foot transition on [β, dot, lb, stars] from S_i to S_j occurs iff in S_i there is a dotted tree [β, dot, lb, stars] such that the dot is to the left and below the foot node of the auxiliary tree β; S_j consists of the closure of the set of dotted trees of the form [β, dot, rb, stars].

The parsing table is constructed from the FSA built as above. In the following, we write trans(i, x) for the set of states in the FSA reached from state i on the transition labeled by x. The actions for ACTION(i, a) are:

• Shift j (s j). It applies iff j ∈ trans(i, a).

• Resume Right of [δ, dot, rb, stars] (rsδ@dot). It applies iff in state i there is a dotted tree [δ, dot, rb, stars], where dot ∈ stars.

• Reduce Root of β (rdβ@star). It applies iff in state i there is a dotted tree [β, 0, ra, {star}], where β is an auxiliary tree.⁶

• Accept occurs iff a is the end marker (a = $) and there is a dotted tree [α, 0, ra, {star}], where α is an initial tree and the dot is to the right and above the root node.

• Error, if none of the above applies.

The GOTO table encodes the transitions in the FSA on non-terminal symbols. It is indexed by a state and by β_right or β_foot, for all auxiliary trees β: j ∈ GOTO(i, label) iff there is a transition from i to j on the given label (label ∈ {β_right, β_foot | β is an auxiliary tree}).

If more than one action is possible in an entry of the action table, the grammar is not LR(0): there is a conflict of actions, and the grammar cannot be parsed deterministically without lookahead.

An example of a finite state automaton used for the construction of the LR(0) table for a TAG (trees α₁ and β₁ in Figure 5) generating⁷ L = {a^n b^n e c^n d^n | n ≥ 0}, its corresponding parsing table, and an example sequence of moves are given in Figure 5.

⁵Nodes on the path from the root node to the foot node.
⁶0 is the address of the root node.
⁷In the given TAG (trees α₁ and β₁), if we omit a and c, we obtain a TAG that is similar to the one for the Dutch cross-serial construction. This grammar can still be handled by an LR(0) parser. In the trees α₁ and β₁, na stands for the null adjunction constraint (i.e. no auxiliary tree can be adjoined on a node with a null adjunction constraint).
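The state construction above repeatedly closes a kernel set of dotted trees under the five operations. As a generic illustration (not tied to the paper's data structures), such a closure can be computed with a standard worklist loop; `expand` stands in for Adjunction Prediction, Left Completion, Move Dot Down, Move Dot Up and Skip Node:

```python
def closure(kernel, expand):
    """Close a kernel set of items under `expand`.  Items (dotted trees)
    are assumed hashable, e.g. encoded as tuples (tree, dot, pos, stars)."""
    state = set(kernel)
    work = list(kernel)
    while work:
        item = work.pop()
        for new_item in expand(item):
            if new_item not in state:
                state.add(new_item)
                work.append(new_item)
    return frozenset(state)
```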
"bS~ b S c b Sine 8 1~ '~*C~ ~ 12( Jl~u ~ 3 ( ~ ° ~ v b ~*~ :~t I~ ]a S d a S*~l[~ dl a S*d " S ¢ b F---I Z n,¢', cT a S*d /'I',,, bS¢ b Snac b S~a~) [ PARSING ACTION II GOTO I II fcot [[ right Finite State Aatomaton for a BEPDA Recognizing L = { a " b " ecn d" } a b c d e $ /5' /3 Parser configuration Next move (llo, aabbeccdd$) (lloll2, abbeccdd$) <110112112, bbeccdd$) (110112112113, b~ccdd$) (110112112113119, eccdd$) (110112112ll3ll9ll4, ccdd$) (I]0112112[[3[[9[[4[[10, ccdd$) (110112112[[3[[9114[[101111, cdd$) (110112112113114 9 10 11116, cdd$) (110112112113114 9 10 11116117, dd$) (110H2H2H3H4 9 10 11[[6117[[8, d$) (110[[2ll4 9 101112, d$) (lloll2114 9 lO1[121113, $) <110[15, *) s2 s2 s3 s9 s4 rsa@O sll rs~@2 s7 s8 rd~@ - s13 rd/3~2 ace Example of LR(O) Parsing Table Example of sequences of moves sj _---- Shift j; rs6~dot -- Resume Right of 6 at dot; rd~star ---- Reduce Root of/~ with star at address star; $ -- end of input. Figure 5: Example of the construction of an LR(0) parser for a TAG recognizing L = {a'~bnec"d" } 281 6 SLR(1) Parsing Tables The tables that we have constructed are LR(0) tables. The Resume Right and Reduce Root moves are per- formed regardless of the next input token. The accu- racy of the parsing table can be improved by comput- ing lookaheads. FIRST and FOLLOW can be extended to dotted trees, s FIRST of a dotted tree corresponds to the set of left most symbols appearing below the subtree dominated by the dotted node. FOLLOW of a dotted tree defines the set of tokens that can appear in a derivation immediately following the dotted node. Once FIRST and FOLLOW computed, the LR(0) parsing table can be improved to an SLR(1) table: Resume Right and Re- duce Root are applicable only on the input tokens in the follow set of the dotted tree. For example, the SLR(1) table for the TAG built with trees oq and ~1 is given in Figure 6. I PARSING AC'TION II GOTO[ I I1 foot II right I I I'lbl 'c I a lel S I1~11 ~1 6 Figure 6: Example of SLR(1) Parsing Table By associating dotted trees with lookaheads, one can also compute LR(k) items in the finite state automaton in order to build LR(k) parsing tables. 7 Current Research The deterministic parsers we have developed do not sat- isfy an important property satisfied by LR parsers for CFG. This property is often described as the viable pre- fix property which states that as long as the portion of the input considered so far leads to some stack configu- ration (i.e. does not lead to error), it is always possible to find a suffix to obtain a string in the language. Our parsers do not satisfy this property because the left completion move is not a 'reduce" move. This move aDue to the lack of space, we do not define FIRST and FOLLOW. How¢ver, we explain the basic principles used for the computafi~m of FIRST and FOLI£)W. 282 applies when we have reached a bottom-left end (to the left of the foot node) of an auxiliary tree, say/3. If we had considered this move to be a reduce move, then by popping appropriate amount of elements off the storage would allow us to figure out which tree (into which/3 was adjoined), say a, to proceed with. Rather than us- ing this information (that is available in the storage of the BEPDA), by putting left completion in the closure operations, we apply a move that is akin to the predict move of Earley parser. That is we continue by consider- ing every possible nodes/3 could have been adjoined at, which could include nodes in trees that were not used so far. 
However, we do not accept incorrect strings; we only lose the prefix property (for an example see Figure 7). As a consequence, errors are always detected, but not as soon as possible.

Parser configuration                    Next move
(||0, aabeccdd$)                        s2
(||0||2, abeccdd$)                      s2
(||0||2||2, beccdd$)                    s3
(||0||2||2||3, eccdd$)                  s4
(||0||2||2||3||4, ccdd$)                rsα@0
(||0||2||2||3||4||6, ccdd$)             s7
(||0||2||2||3||4||6||7, cdd$)           error

Figure 7: Example of error detection

The reason why we did not consider the left completion move to be a reduce move is related to the restrictions on the moves of the BEPDA, which is weakly equivalent to TAGs (perhaps also due to the fact that left to right parsing may not be most natural for parsing TAGs, which produce trees with context-free path sets). In CFGs, where there is only horizontal stacking, a single reduction step is used to account for the application of a rule in left to right parsing. On the other hand, with TAGs, if a tree is used successfully, it appears that a prediction move and more than one reduction move are necessary for an auxiliary tree. In left to right parsing, a prediction is made to start an auxiliary tree β at its top left end; a reduction is appropriate to recover the node β was adjoined at, at the left completion stage; a reduction is needed again at the resume right stage to resume the right end of β; finally a reduction is needed at the right completion stage. In our algorithm, reductions are used at the resume right stage and the reduce root stage. Even if a reduction step is applied at the left completion stage, an encoding of the fact that the left part of β (as well as the left part of trees adjoined on the spine of β) has been completed has to be restored in the storage (note that in a reduction move of any shift reduce parser for CFGs, any information about the rule used is discarded once the reduction step is applied). So far we have not been able to apply a reduction step at the left completion stage, reinsert the left part of β, and yet maintain the correct sequence in the storage so that the right part of β can be recovered at the resume right stage. We are considering alternative strategies for shift reduce parsing with a BEPDA, as well as considering whether there are other automata models equivalent to TAGs better suited for deterministic left to right parsing of tree adjoining languages.

Conclusion

We have introduced a bottom-up machine (Bottom-up Embedded Push Down Automaton) that enabled us to define LR-like parsers for TAGs. The machine recognizes in a bottom-up fashion exactly the set of Tree Adjoining Languages.

We described the LR parsing algorithm and a method for computing LR(0) parsing tables. We also mentioned the possibility of building SLR(k) parsing tables by defining the notions of FIRST and FOLLOW sets for TAGs.

As shown for the example, no lookaheads are necessary to parse deterministically the language L = {a^n b^n e c^n d^n | n ≥ 0}. If instead of the marker e we had the empty string ε in the initial tree, an LR(0)-like parser would not be enough. On the other hand, an SLR(1)-like parser would suffice.

We have noted that our parsers do not satisfy the valid prefix property. As a consequence, errors are always detected but not as soon as possible.

Similar to the work of Lang (1974) and Tomita (1987) extending LR parsers for arbitrary CFGs, the LR parsers for TAGs can be extended to solve the conflicts of moves by pseudo-parallelism.
References

Joshi, Aravind K., 1985. How Much Context-Sensitivity is Necessary for Characterizing Structural Descriptions -- Tree Adjoining Grammars. In Dowty, D., Karttunen, L., and Zwicky, A. (editors), Natural Language Processing -- Theoretical, Computational and Psychological Perspectives. Cambridge University Press, New York. Originally presented in a Workshop on Natural Language Parsing at Ohio State University, Columbus, Ohio, May 1983.

Joshi, Aravind K., 1987. An Introduction to Tree Adjoining Grammars. In Manaster-Ramer, A. (editor), Mathematics of Language. John Benjamins, Amsterdam.

Knuth, D. E., 1965. On the translation of languages from left to right. Inf. Control 8:607-639.

Lang, Bernard, 1974. Deterministic Techniques for Efficient Non-Deterministic Parsers. In Loeckx, Jacques (editor), Automata, Languages and Programming, 2nd Colloquium, University of Saarbrücken. Lecture Notes in Computer Science, Springer Verlag.

Révész, G., 1971. Unilateral context sensitive grammars and left to right parsing. J. Comput. System Sci. 5:337-352.

Schabes, Yves and Joshi, Aravind K., June 1988. An Earley-Type Parsing Algorithm for Tree Adjoining Grammars. In 26th Meeting of the Association for Computational Linguistics (ACL'88). Buffalo.

Thatcher, J. W., 1971. Characterizing Derivation Trees of Context Free Grammars through a Generalization of Finite Automata Theory. J. Comput. Syst. Sci. 5:365-396.

Tomita, Masaru, 1987. An Efficient Augmented-Context-Free Parsing Algorithm. Computational Linguistics 13:31-46.

Turnbull, C. J. M. and Lee, E. S., 1979. Generalized Deterministic Left to Right Parsing. Acta Informatica 12:187-207.

Vijay-Shanker, K., 1987. A Study of Tree Adjoining Grammars. PhD thesis, Department of Computer and Information Science, University of Pennsylvania.

Walters, D. A., 1970. Deterministic Context-Sensitive Languages. Inf. Control 17:14-40.
AN EFFICIENT PARSING ALGORITHM FOR TREE ADJOINING GRAMMARS

Karin Harbusch
DFKI - Deutsches Forschungszentrum für Künstliche Intelligenz
Stuhlsatzenhausweg 3, D-6600 Saarbrücken 11, F.R.G.
[email protected]

ABSTRACT

In the literature, Tree Adjoining Grammars (TAGs) are advocated as adequate for natural language description -- analysis as well as generation. In this paper we concentrate on the direction of analysis. Especially important for an implementation of that task is how efficiently it can be done, i.e., how readily the word problem can be solved for TAGs. Up to now, a parser with O(n^6) steps in the worst case was known, where n is the length of the input string. In this paper, this result is improved to O(n^4 log n) as a new lowest upper bound. The paper demonstrates how local interpretation of TAG trees allows this reduction.

1 INTRODUCTION

Compared with the formalism of context-free grammars (CFGs), the rules of Tree Adjoining Grammars (TAGs) can be imagined intuitively as parts of context-free derivation trees. Without paying attention to the fact that there are some more restrictions for these rules, the recursion operation (adjoining) is represented as replacing a node in a TAG rule by another TAG rule so that larger derivation trees are built. This close relation between CFGs and TAGs might suggest that they are equivalent. But TAGs are more powerful than context-free grammars. This additional power -- characterized as mildly context-sensitive -- leads to the question of whether there are efficient algorithms to solve the word problem for TAGs.

Up to now, the algorithm of Vijay-Shanker and Joshi with a time complexity of O(n^6) for the worst case was known, in addition to several unsuccessful attempts to improve this result. This paper's main emphasis is on the improvement of this result. An efficient parser for Tree Adjoining Grammars with a worst case time complexity of O(n^4 log n) is discussed.

All known parsing algorithms for TAGs use the close structural similarity between TAGs and CFGs, which can be expressed by writing all inner nodes and all their sons in a TAG as the rule set of a context-free grammar (the context-free kernel of a TAG). Additionally, the constraint has to be tested that all further context-free rules corresponding to the same TAG tree must appear in the derivation tree iff one rule of that TAG tree is in use. Therefore, it is clear that a context-free parser can be the basis for extensions representing the test of the additional constraint.

On the basis of the two fundamental context-free analysers, the different approaches for TAGs can be divided into two classes. One class extends an Earley parser and the second class extends a Cocke-Kasami-Younger (CKY) parser for CFGs. Here, we focus on the approaches with a CKY basis, because the relation between the resulting triangle matrix and the encoded derivation trees is closer than for the item lists of an Earley parser.

In particular, the paper is divided into the following sections. First, a short overview of the TAG formalism is given in order to have a common terminological basis with the reader. In the second section, the approach of Vijay-Shanker and Joshi is presented as the natural way of extending the CKY algorithm for context-free grammars to TAGs. As a precondition for that analysis, it has to be proven that each TAG can be transformed into two form, a normal form restricting the outdegree of a node to be less than three.
In section 4, the main section of this paper, a normal form is defined as a precondition for a new and more efficient parsing algorithm. This form is more restricted than the two form, and is closely related to the Chomsky normal form for CFGs. The main emphasis lies on the description of the new parsing approach. The general idea is to separate the context-free parsing and the additional testing so that the test can run locally. On the triangle matrix which is the result of the CKY analysis with the context-free kernel, all complete TAG trees encoded in the triangle matrix are computed recursively. It is intuitively motivated that this approach needs fewer steps than the strategy of Vijay-Shanker and Joshi, which stores all intermediate states of TAG derivations, because the locally represented elementary trees can be interpreted as TAG derivations where equal parts are computed exactly once instead of being represented individually in each derivation.

In the summary, our experience with an implementation in CommonLISP on a Hewlett Packard machine is mentioned to illustrate the response time in an average case. Finally, different approaches for TAG parsing are characterized and compared with the approach presented here.

2 TAGS BRIEFLY REVISITED

First of all, the basic definitions for TAGs are revisited in order to have a common terminology with the reader (even though not defined explicitly here, CFGs are used as described, e.g., in [Hopcroft, Ullman 79]).

In 1975, the formalism of Tree Adjoining Grammars (TAGs) was introduced by Aravind K. Joshi, Leon S. Levy and Masako Takahashi ([Joshi et al. 75]). Since then, a wide variety of properties -- formal properties as well as linguistically relevant ones -- have been studied (see, e.g., [Joshi 85] for a good overview).

The following example describing the crossed dependencies in Dutch should illustrate the formalism (see Figure 1, where the node numbers written in slanted font should be ignored here; they make sense in combination with the description of the new algorithm, especially step (tag2)).

A TAG is a tree generation system. It consists, in addition to the set of nonterminals N, the set of terminals T and the start symbol S (a distinguished symbol in N), of two different sets of trees, which specify the rules of a TAG. Intuitively, the set I of initial trees can be seen as context-free derivation trees. This means the start symbol is the root node, all inner nodes are nonterminals and all leaves are terminals (e.g., tree α in Figure 1). The second set A, the auxiliary trees, which can replace a node in an initial tree (possibly modified by further adjoinings) during the recursion process, must have a form such that again a derivation tree results. The trees β₁ and β₂ demonstrate that restriction. A special leaf (the foot node) must exist, labelled with the same nonterminal as the root node. Further, it is obligatory that an auxiliary tree derives at least one terminal. The union of the initial and the auxiliary trees, so to speak the rule set of a TAG, is called the set of elementary trees.

Tree γ in Figure 1 shows a TAG derivation tree, which means an initial tree with an arbitrary number of adjoinings (here β₁ is adjoined at the node S* in α and β₂ at the node S* in the adjoined tree β₁). During the recursion process (adjoining), a node X in an initial tree α, which can be modified by further adjoinings, is replaced by an auxiliary tree β whose root and foot node carry the same nonterminal label as X. The incoming edge in X (if it exists; this is true if X is not the root node of α) now ends in the root node of β, and all outgoing edges of X in α now start at the foot node of β.

The set of all initial trees modified by an arbitrary number of adjoinings (at least zero) is called T(G), the tree set of a TAG G. The elements in this set can also be specified by building a series of triples (α_i, β_i, X_i) (0 ≤ i ≤ n) -- the derivation -- where α_0 ∈ I, α_i (1 ≤ i ≤ n) is the result of the adjoining of β_{i-1} at node X_{i-1} in α_{i-1}, β_i (0 ≤ i ≤ n-1) is the auxiliary tree which is adjoined at node X_i in tree α_i, and X_i (0 ≤ i ≤ n-1) is a unique node number in α_i. This description has the advantage that structurally equal trees in T(G) which result from different adjoinings can be uniquely represented.

L(G), the language of a TAG, is defined as the set containing all leaf strings of trees in T(G), respectively of all trees which can be constructed by adjoining as described in the corresponding derivation. Here, a leaf string means all labels of leaves in a tree are concatenated in order from left to right. For the tree γ in Figure 1, 'Jan Piet Marie e e zag laten zwemmen' is in L(G).

Figure 1: A small sample TAG demonstrating the process of adjoining

The relation between TAGs and CFGs can be characterized by defining the context-free kernel K of a TAG G. K is a CFG and consists of the same sets N, T and S as G, but P(K) is the set of all inner nodes of all elementary trees in G interpreted as the lefthand side of a rule, where all sons in their order from left to right build the righthand side of that rule. E.g., in Figure 1 β₂ has the corresponding context-free rules: (S, NP VP), (NP, N), (N, Jan), (VP, S V1), (V1, zag). It is clear that having a context-free derivation tree (on the basis of the context-free kernel K of a TAG G) is a necessary, but not sufficient property for an input string which is tested to be an element of L(G). In the following, this property motivates the extension of context-free parsing algorithms to accept TAGs as well.

The following parsing algorithms are able to accept some extensions of the pure TAG definition without changing the upper time bound. Here, only TAGs with Constraints are mentioned (for more information about other extensions, e.g., TAGs with Links, with Unification or Multi Component TAGs -- some extending the generative capacity -- see, e.g., [Joshi 85]).

The motivation for TAGs with Constraints (TAGCs) is to restrict the recursion operation of TAGs. Each node X in an elementary tree labelled with a nonterminal has an associated constraint set C, which has one of the following forms (a sketch of one possible node representation follows this list):

• NA stands for null adjoining and means that at node X no adjoining can take place,
• SA(B) stands for selective adjoining and means that at X the adjoining of an auxiliary tree (∈ B) can take place (where each tree in B has the same root and foot node label as X), or
• OA(B) stands for obligatory adjoining and means that at X the adjoining of an auxiliary tree (∈ B) must take place (where each tree in B has the same root and foot node label as X).
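As an illustration only (the paper fixes no particular representation), elementary-tree nodes with adjoining constraints and the adjoining operation itself could be encoded as follows; the class name, field layout and constraint encoding are our assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                        # nonterminal or terminal symbol
    children: list = field(default_factory=list)
    constraint: tuple = ("SA", None)  # ("NA", None), ("SA", trees) or ("OA", trees)
    is_foot: bool = False

def adjoin(target: Node, aux_root: Node, aux_foot: Node) -> Node:
    """Adjoining as defined above: the foot node takes over target's
    outgoing edges, and the returned root is spliced in where target
    hung (constraint checking is omitted in this sketch)."""
    assert target.label == aux_root.label == aux_foot.label
    aux_foot.children = target.children
    aux_foot.is_foot = False
    return aux_root
```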
During the recursion process (adjoining), a node X in an initial tree a, which can be modified by further adjoinings, is replaced by an auxiliary tree /3 with the same nontermi- nal label at root and foot node, that X is labelled with. The incoming edge in X (if it exists; this is true if X is not the root node of a) now ends in the root node of/3, and all outgoing edges of X in a now start at the foot node of/3. The set of all initial trees modified by an arbi- trary number of adjoinings (at least zero) is called T(G), the tree set of a TAG G. The elements in this set can also be specified by building a series of triples (ai,/3i, Xi) (0 < i < n) -- the deriva. tion -- where s0 E I, al (1 < i < n) is the result of the adjoining of/31-x in node ~:i-x in ai-x,/3i (0 < i < n-l) is the auxiliary tree, which is ad- joined in node Xl in tree ai and Xi (0 < i < n-l) is a unique node number in ai. This description has the advantage that structurally equal trees in T(G) which result from different adjoinings can be uniquely represented. L(G), the language of a TAG, is defined as the set containing all leaf strings of trees in T(G), respectively all trees which can be constructec[ by adjoining as described in the corresponding derivation. Here, a leaf string means all labels of leaves in a tree are concatenated in order from left to right. In the tree 3' in Figure 1 'Jan Pier Marie e e zag laten zwemmen' is in L(G). The relation between TAGs and CFGs can be characterized by defining the context.free kernel 285 a: AS°° /3F ~SI0 /32: /~ Marie e Plet e 7: s~ l'2 men" N~P2' ~aten Jan N.P 111 V~ zag Marie c Figure l: A small sample TAG demonstrating the process of ADJOINING K of a TAG G. K is a CFG and consists of the same sets N, T and S of G, but P(K) is the set of all inner nodes of all elementary trees in G interpreted as the lefthand side of a rule, where all sons in their order from left to right build the righthand side of that rule. E.g., in Figure 1 /32 has the corresponding context-free rules: (S, NP VP), (NP, N), (N, Jan), (VP, S V1), (V1, zag). It is clear that having a context-free derivation tree (on the basis of the context-free kernel K of a TAG G) is a necessary, but not sufficient prop- erty for an input string, which is tested to be an element in L(G). In the following, this property motivates the extension of context-free parsing al- gorithms to accept TAGs as well. The following parsing algorithms are able to ac- cept some extensions of the pure TAG definition without changing the upper time bound. Here, only TAGs with Constraints are mentioned (for more information about other extensions, e.g., TAGs with Links, with Unification or Multi Com- ponent TAGs -- some extending the generative capacity -- see, e.g., [Joshi 85]). The motivation for TAGs with Constraints TAGCs) is to restrict the recursion operation of Gs. Each node X in an elementary tree la- belled with a nonterminal has an associated con- straint set C, which has one 0fthe following forms: • NA stands for null adjoining and means that at node X no adjoining can take place, • SA(B) stands for selective adjoining and means that at X the adjoining of an auxil- iary tree (6 B). can take place (where each tree in B has the same root and foot node label as X) or • OA(B) stands for obligalory adjoining and means that at X the adjoining of an auxiliary tree (E B) must take place (where each tree in B has the same root and foot node label as X). 
When TAGs are mentioned in the following, the same result can be shown for TAGs with Con- straints, which it is not explicitly outlined. Only the property of generative power is illustrated, to make clear that finding a parsing algorithm is not a trivial task. For more information about the lin- guistic relevance of TAGs, the reader is referred, e.g., to [Kroch, Joshi 85]. A first impression comparing the generative power of TAGs and CFGs can be that they are equivalent, but TAGs are more powerful, e.g., the famous language a n b" e c" can be produced by a TAG with Constraints (the main idea in con- structing this grammar is to represent the produc- tion of an a, a b and a c in one auxiliary tree). Thinking of the application domain of natural language processing, the discussion in the linguis- tic community becomes relevant as to how pow- erful a linguistic formalism should be (see, e.g., [Pullum 84] or [Shieber 85]). TAGs are mildly conlezt-sensitive, which means that they can de- scribe some context-sensitive languages, but not all (e.g., www with w 6 {a,b)*, but ww is accept- able for a TAG). One thesis holds that natural language can be described very well by a mildly context-sensitive formalism. But this can only be empirically confirmed by describing difficult lin- guistic phenomena (here, the example in Figure 1 can only give an idea of the appropriateness of TAGs for natural language description). This property leads to the question of whether the word problem is solvable and if so, how ef- ficiently. In the following section, two differ- ent polynomial approaches are presented in de- tail. The property of efficiency becomes impor- tant when a TAG should be used in the applica- tion domain mentioned above, e.g., one can think of a syntax description encoded in TAG rules which is part of a natural language dialogue sys- tem. The execution time is responsible for the acceptance of the whole system. Later on, our experience with the response time of an imple- mentation of the new algorithm is described. 3 THE VIJAY-SHANKER AND JOSHI APPROACH First, the approach of Vijay-Shanker and Joshi (see [Vijay-Shanker, Joshi 85])is discussed as the natural way of extending the context-free CKY algorithm (see, e.g., [Hopcroft, Ullman 79]) to an- alyze TAGs as well. As for the context-free anal- ysis with CKY, the grammar is required in nor- mal form as a precondition for the TAG parser. Therefore, first the two form is defined and the idea of the constructive proof for transforming a TAG into two form is given. The TAG parser is then presented in more detail. 286 3.1 TWO FORM TRANSFOR- MATION The parsing algorithm of Vijay.Shanker and Joshi uses a special CKY algorithm for CFGs which requires fewer restrictive constraints than the Chomsky normal form for the ordinary CKY al- gorithm does. Here, the righthand side of all rules of the grammar should have at most two elements. This definition has to he adapted for TAG rules to extend this CKY parser to analyze TAGs as well. A TAG G is in two form, iff each node in each elementary tree has at most two sons. It can be proven that each TAG G can be transformed into a TAG G' in two form with L(G) - L(G'). The proof of that theorem uses the same tech- niques as in the context-free case which allow the reduction of the number of elements on the right- hand side to build the Chomsky normal form. 
If there are more than two sons, the second and all additional sons are replaced by a new nontermi- hal which becomes the lefthand side of a new rule with all replaced symbols on the righthand side (for more details see [Vijay-Shanker, Joshi 85]). We always refer to a TAG in two form, even when it is not explicitly confirmed. 3.2 THE STEPS OF THE ALGO- RITHM Now the idea of extending each context-free anal- ysis step by additional tests to ensure that whole TAG trees are in use (sufficient property) is moti- vated. This approach was proposed to be natural because it tries to build TAG derivation trees at once. In contrast, a two level approach is pre- sented which constructs all context-free deriva- tion trees before the TAG derivations are com- puted in a second step. In the CKY analysis used here, a cell [row /,column j] in the triangle matrix (1 < i, j _< n, the length of the input string w --- tl ...t,, where without loss of generality n >__ 1, because the test for e, the empty string, E L(G) sim- ply consists of searching for initial trees with all leaves labelled with e) contains an element X (6 N) iff there are rules to produce the derivation for ti+l...tj_l. This invariant is extended to repre- sent a TAG derivation for ti+l...tj_l iff X 6 [i,j]. Therefore additional information of each nonter- minal in a cell has to be stored as to which el- ementary trees are under completion and what subtrees have been analyzed up to now. Impor- tant to note is that the list of trees which are under completion, can be longer than one. E.g., think of adjoinings which have taken place in ad- joined trees as described in Figure 1. For realization of that information, a slack can be imagined. Here, the different stack elements are stored separately to use intermediate states in common. A stack element contains the infor- mation of exactly one auxiliary tree which is un- der construction, and a pointer to the next stack element. This pointer is realized by two addi- tional positions for each cell in the triangle matrix ([i,j,k,l]), where k and I in the third and fourth position characterize the fact that from tk+l to t1_1 no information about the structure of the TAG derivation is known in this element and has to be reconstructed by examination of all cells [k,l,v,w] (k <_ v < w < 1). The stack cells which the elements point at must also be recursively in- terpreted until the whole subtree is examined (left and right stack pointer are equal). It is clear that in interpreting these chains of pointers the stack at each node X in the triangle matrix represents all intermediate states of TAG derivations with X as root node in an individual cell of the triangle matrix. The algorithm starts initializing cells for all ter- minal leaves (X E [i-l,i,i,i] for ti with father X, 1 < i < n) and all foot nodes which can be seen as nonterminal leaves (X E [i, j, i, j] where X is a foot node in an auxiliary tree, 0 < i < j < n-l). Just as the CKY algorithm tests all combina- tions of neighboring strings, here new elements of cells are computed together with the context-free invariant computation, e.g., iff (Z,X Y) is a rule in the context-free kernel of the input TAG and X E [i, j, k, I], Y E [j - 1, m, p, p] and X and Y are root nodes of neighboring parts in the same ele- mentary tree, then Z is added to [i, m, k, l]). 
With the additional test to determine whether the rule (in the example (Z,X Y)) is in the same TAG tree as the two sons (X and Y) and whether the same holds for the subtrees below X and Y, it is clear that a whole TAG tree can be detected. If this is the case, i.e., that two neighboring stack ele- ments should be combined, all elements of cells [k, l, m, p] are added to [i, j, m, p] iff X E[i, j, k,/] is the root of an identified auxiliary tree. The time complexity becomes obvious when the range of the loops for all four dimensions of the array is described explicitly (see [Vijay- Shanker, Joshi 85]). From a more abstract point of view, the main difference between the CKY analysis for a CFG and a TAG is that, the sub- trees below the foot nodes are stored. This fact extends the input of length n to n 2 to describe the two additional dimensions. On the basis of that input, the ordinary CKY analysis can be done, and so the expected time complexity is O((n2) 3) = O(nS). With the explicitly defined ranges of the four dimensions for the positions in the array, it is clear that the worst case and the best case for this algorithm are equal. 4 A NEW AND MORE EFFI- CIENT APPROACH A time bound of O(n e) in the best and worst case must be seen as a more theoretical result, be- cause an implementation of the algorithm shows that the execution time is unacceptable. In order to use the formalism for any application domain, this result should be improved. In this section, a TAG parser with an upper bound of O(n 4 log n) in the worst case is presented. The best case is O(n3), because a CKY analysis has to at least be done. 287 4.1 NORMAL FORM TRANS- FORMATION As precondition of the new parsing algorithm, the TAG has to be transformed into a normal form which contains only trees with nodes and their sons, following the Chomsky normal form defini- tion. This means that the following three condi- tions hold for a TAG G: 1. e E L(G) iff a tree with root node S (NA), the start symbol, which allows no further ad- joinings (null adjoining), and a single termi- nal son e is element in the set of initial trees I (this tree is called the e tree), 2. except the e tree, no leaf in another elemen- tary tree is labelled with e, and 3. for each node in each elementary tree, the condition holds that either the node has two sons both labelled with a nonterminal or that the node has one son labelled with a termi- nM. In a first step, each TAG is transformed in two form so that condition 3 can be satisfied easier. This transformation is accomplished by the con- structive proof for the theorem that for each TAG (or TAG with Constraints for which the definition holds as well) there exists an equivalent TAG with Constraints in normal. Important to note is that the idea of the trans- formation into Chomsky normal form for CFGs cannot be adopted further on because this con- struction allows the erasure of nonterminal sym- bols if their derived structure is added to the grammar. In TAGs, a nonterminal not only rep- resents the derivation of its subtree in an elemen- tary tree, but can be replaced by an adjoining. Therefore, the general idea of the proof is to erase parts of elementary trees which are not in normal form, and represent those parts as new auxiliary trees. After this step, the original grammar is in normal form and therefore the encoded auxiliary trees can be used for explicit adjoinings, always producing structures in normal form. 
Expiicil adjoinings mean adjoinings in the new auxiliary trees which were built out of the erased parts of the original grammar. These adjoinings replace the nodes which are not in normal form. Since the details of the different steps are of no further interest here, the reader is referred to [Harbusch 89] for the complete proof. 4.2 THE STEPS OF THE NEW PARSING ALGORITHM The input of the new parser consists of a TAG G in normal form, and a string w = tl ...t,. With condition one in the normal form definition, the test for e E L(G) is trivial again. From now on this case is ignored, i.e., n > 1. The algorithm is divided into two steps. First a CKY analysis is done with the context-free ker- nel K of the input TAG G. Here, the standard CKY algorithm as described in [Hopcroft, Uliman 79] is taken, which requires a CFG in Chomsky normal form. K satisfies the requirement that the TAG G is in normal form. One can think that it would be sufficient to simply transform the context-free kernel into Chomsky normal form in- stead of transforming the input TAG. But with this strategy one would loose the one-to-one map- ping of context-free rules in the CFK and father- son-relations in a TAG rule which becomes impor- tant for finding complete TAG rules in the second step of the new parser. Here the invariant of the CKY analysis is X E [i, j] iff there are rules to produce a derivation for ti ... tj+i-1. This information is slightly ex- tended to recognize complete subtrees of elemen- tary trees in the triangle matrix. In the terminol- ogy of Vijay-Shanker and Joshi, a stack element is constructed. But it's important to note that the pointers are not interpreted, so that here local in- formation is computed relative to an elementary tree. Actually, the correspondence between an ele- ment in the triangle matrix and a TAG tree is represented as a pointer from the node in a tri- angle cell to a node in an elementary tree as de- scribed in Figure 2 (ignore the dotted lines at the moment). An equivalent description is presented in Figure 3 by storing the unique node number at which the pointer ends in the elementary tree and additionally a flag indicating whether the TAG tree is initial (I) or auxiliary (A) and whether the node is root node (T) of the tree or not (L). E.g., in Figure 2 the NP-son of the root node S in tree T carries the flag TA. In this terminology, the special case that the subtree contains the foot node has to be repre- sented explicitly, because the foot node is a leaf in the sense of elementary trees, but not in the sense of a derivation tree. To know where this leaf is positioned in the triangle matrix, a foot node pointer (FP) is defined from the root of the subtree to the foot node if one exists in that tree (in Figure 2 the dashed arc). initial tree ~ auxiliary tree,B: 3' where,6' is adjoined in o~: N~'~I/ VP'~I DETH"~II/I ~ NP ~,~,.~ rVP . 
, , J I°ET-- Figure 2: Example illustrating the inductive basis of the new invariant So, the invariant in the first step of the new parsing algorithm is computed during the CKY analysis -- in our second terminology -- by re- cursively defining extended node numbers ( ENNs) by triples (NN,TK,FP) as follows: Initialization Each element X in level 1 (father of a terminal t) is initialized with an ENN, where NN is the node number of X in a father-son-relation in an elementary tree a ( x ---* t), the tree kind TK := LU (U=I,A) iff a E U and NN doesn't .end with zero (X is not the root of a) else TK := TU, and the foot node pointer FP := nil, because the fa- 288 ther of a terminal is never a foot node in the same auxiliary tree. For each node X in level 1 the ENN := iNN=node number of a foot node in an auxiliary tree, LA, pointer to that ENN) is added iff X is the label of the foot node with node number NN -- to de- scribe foot node leaves. Recursion along the CKY analysis For each new context-free element Z (Z ; X Y), the following tests are done in addition: If X has an ENN (NNi,TK1,FPi) and Y has an ENN (NNz,TK2,FP2) and NNI-(1 in the last positition) = NN2-(2 in the last position) and TKi = TK2 and at least FPi or FP2 = nil then for Z an ENN (NNI-I,TK,FP) is added where TK = TK1 if Z is not the root node of the whole tree (in this case TK = TKI-(L+T in the first position)); FP = FPi (i=1,2), which is not equal nil, else it is nil. If an auxiliary tree with Z the label of the foot node exists, the ENN (NN=node number of the foot node in that tree, LA, pointer to that element) is added to Z in the currently manipu- lated triangle cell -- to represent the possibility of an adjoining in that node. It is obvious that this invariant consisting of the nonterminal in a cell of the triangle matrix to represent the context-free invariant, the pointers to elementary trees, and the foot node pointers to represent which part of an elementary tree is analyzed computes less information than an array cell in the approach of Vijay-Shanker and Joshi does, where whole subtrees of the derivation tree are stored not stopping at a foot node as we do. Also, it is clear that this invariant can be com- puted recursively during the ordinary CKY steps within the upper time bound of O(n3). The num- ber of pointers to elementary trees at each node can be restricted by the number ofoccurences of a nonterminal as the left-hand side symbol of a rule in the context-free kernel (which is a constant). The number of foot node pointers is restricted by the outdegree of each cell in the triangle matrix b<e n), because only for such an edge can an FP recursively defined. In the second step, whole TAG derivations are computed by combining the subtrees of elemen- tary trees (represented by the invariant after step 1), according to the adjoining definition inter- preted inversely. Inversely means that the equiva- lence in the adjoining definition is not interpreted in the direction that a node is replaced by a tree, but in the opposite direction, where trees have to be detected and are eliminated. Since all TAG derivation trees of a string w and a TAG G are encoded in the triangle matrix bnecessary condition w E CFK(G)) and have to e found in the triangle matrix, the derivation definition has to be modified as well to support the 'inverse' adjoining definition. 
It means that a string w ∈ L(G) iff there exists a tree where recursively all complete auxiliary trees can be detected and replaced by the label of the root node of the auxiliary tree, until this process terminates in an initial tree. The second step formulates the algorithm for exactly this definition. An auxiliary tree in the derivation tree which contains no further adjoinings is called an innermost tree. As long as the termination condition isn't satisfied, at least one innermost tree must exist in the derivation tree. Returning to the invariant in the first step, innermost trees are characterized as pointers to the root node of an auxiliary tree or, in the representation of ENNs, as the node number of the root node (in our numbering algorithm visible by the end number zero) and the tree kind flag TA (total auxiliary). These trees are eliminated by identifying the root and the foot nodes of innermost trees, so to speak, as an interpretation of the foot node pointers as ε-edges. This can be represented simply as propagation of the pointers from the foot node to the root node. This information is sufficient because the strategy of the algorithm checks whether an incoming edge in a node and the information of an outgoing edge (without loss of generality represented at the start node of the edge) belong to the same elementary tree. Note that this bottom-up interpretation of the derivation trees (propagation) realizes that the finding of larger subtrees is computed only once (the father-son relation is only interpreted in the upward direction). In Figure 2 the dotted line from the NP node in γ describes the elimination of β by propagation of the information from the foot node to the root node. Since it doesn't matter in the algorithm what history an information in a node has (especially how many and exactly what trees are eliminated), all possibilities of producing new extended node numbers -- representing the new invariant -- are simply called elimination. The information in a node represents what further parts of the same elementary tree are expected to be found in the triangle matrix above that node. A subclassification differentiates what kinds of incoming edges should be compared to find these parts. One class describes whether such a further piece is detected -- by interpreting incoming and outgoing edges of the same node (simple elimination). E.g., this is the case in the inductive basis of the invariant definition. The second class realizes the elimination of a detected innermost tree, where its foot node pointer ends in that node. Then the neighborhood of the incoming edges in the root node of the innermost tree and the outgoing edges in the foot node (the currently examined node where the invariant contains the information of the outgoing edges from this node) has to be tested (complex elimination). By this classification, each neighborhood -- the explicitly represented ones in the triangle matrix as well as the neighborhoods via ε- respectively foot-node-pointer edges -- is examined exactly once during the algorithm. The fact that a derivation tree again results after an elimination, which is encoded in the triangle matrix as well, becomes clear by looking at the invariant after an elimination. In the first step the invariant describes complete subtrees of elementary trees.
If a complete innermost tree is eliminated by propagating the complete subtrees of elementary trees derived by the foot node to the root node, this represents the fact that the root node can derive both trees, but the subtrees below the foot node have to be completed. This can be done again by elimination (in Figure 2 the dotted line from node S represents the computation of a TAG tree after an elimination). Since this is not the place to present the algorithm in detail, it is described in informal terms:

(tag1) Treatment of the Empty String
ACCEPT := false; if w = ε then if ε-tree ∈ I then ACCEPT := true; fi; goto (tag7); fi; From now on, G is interpreted without the ε-tree.

(tag2) Definition of Unique Node Numbers
∀ nodes X in α ∈ (I ∪ A) a unique node number NN is defined recursively as follows:
- α has a unique number k all over the grammar (starting with zero),
- if X is the root node NN := k0, for X the left or only son of the root NN := k1, for X the right son of the root (if existing) NN := k2, and
- for the left or only son of a node with node number kx (x ∈ {1,2}+) NN := kx1, for the right son of kx NN := kx2.

(tag3) Computation of the Context-Free Kernel of the TAG (CFK)
Each inner node of an elementary tree in G and its sons are interpreted as a context-free rule, where the node number and the constraints are represented as well.

(tag4) Cocke-Kasami-Younger Analysis with CFK and w
The slightly extended CKY algorithm is applied to w and CFK. The result is a triangle matrix. If the following holds: if w ∉ L(CFK) then goto (tag7) else goto (tag5); fi;

(tag5) Computation of the Initial State
All possible extended node numbers are computed, which means that all auxiliary trees, or respectively all subtrees of elementary trees, are computed on the triangle matrix and gathered in SAT, the set of active trees.

(tag6) Iteration on the Elimination and the Initial State
NEWSAT1 and NEWSAT2 are empty sets and COUNT := 1.
(it0) if an extended node number with tree kind TK = TI exists in cell [1,n] then ACCEPT := true and COUNT := n; fi;
(it1) if COUNT = n then goto (tag7); fi;
(it2) ∀ nodes k with extended node number ENN ∈ SAT and tree kind of ENN = TA: propagate the extended node number of the node which the foot node pointer points at to the root node and add this information to NEWSAT1;
(it3) ∀ nodes k ∈ NEWSAT1: do all simple and complex eliminations and add the new extended node numbers to NEWSAT2;
(it4) SAT := NEWSAT2; NEWSAT1 and NEWSAT2 := ∅, COUNT := COUNT+1 and goto (it0).

(tag7) Output of the Result
If ACCEPT = true then w ∈ L(G) else w ∉ L(G); fi.

Figure 3 illustrates the recursion step (tag6) for a single, but arbitrary innermost tree representing an auxiliary tree with the root node number num1.

[Figure 3: Illustration of the step of recursion -- for all auxiliary trees in SAT, the extended node numbers are propagated from the node the foot node pointer points at to the root node (it2); case a) shows a simple elimination (it3), case b) a complex elimination; (it4): results are added to SAT, all other sets are redefined with ∅; end of recursion (it0) after at most n-1 iterations (it1). Diagrams omitted.]
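As a compact illustration of step (tag6), here is a schematic sketch of the iteration, assuming SAT is a set of ENN triples and that propagate and eliminate implement (it2) and (it3) respectively; a TI entry is taken to stand for a total initial tree spanning the whole input. All names are illustrative, not taken from the paper's implementation.

```python
def recursion_step(sat, n, propagate, eliminate):
    """Iterate (it0)-(it4): accept, or propagate and eliminate, n-1 times."""
    for count in range(1, n):
        if any(tk == "TI" for (_, tk, _) in sat):   # (it0)/(it1): accept
            return True
        new_sat1 = set()
        for enn in sat:                             # (it2): propagate from
            if enn[1] == "TA":                      # total auxiliary trees
                new_sat1 |= propagate(enn)
        new_sat2 = set()
        for enn in new_sat1:                        # (it3): simple and
            new_sat2 |= eliminate(enn)              # complex eliminations
        sat = new_sat2                              # (it4): next iteration
    return any(tk == "TI" for (_, tk, _) in sat)    # final (it0) check
```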
Here, the question of correctness is not discussed in more detail (see [Harbusch 89]). It should be intuitively clear from the correspondence between the derivation definition and its interpretation in the recursion step. Actually, the main emphasis lies on the explanation of the time complexity (for the formal proof see [Harbusch 89]). A good intuition can be gained by concentrating at first glance on a single, but arbitrary TAG derivation tree δ for w = t1 ... tn in the triangle matrix after step one. It is clear that δ contains at most n-1 adjoinings, because each TAG tree must produce at least one terminal. Therefore the recursion, which finds independent (unnested) adjoinings simultaneously (after elimination of nested adjoinings identified in the last recursion step), terminates definitively after n-1 loops. At the beginning, at most O(n²) innermost trees can exist in the triangle matrix. Each terminal can be a leaf in a constant number of elementary trees, and with an indegree of O(n-1) in row 1 of the triangle matrix, the number of occurrences of elementary trees containing the input symbol ti (1 ≤ i ≤ n) encoded in the invariant after step one is restricted. Since an elimination is defined along the path between root and foot node of an auxiliary tree, which has at least length 1 (i.e., root and foot node are father and son), the foot node information is always propagated to a higher row in the triangle matrix. The triangle matrix has depth n, so that the information of a node in δ -- our explicitly chosen derivation tree -- can only be passed to O(n-1) nodes, because each node has indegree 1 in a derivation tree. The passing of information (propagation) stands for the elimination of O(n-1) innermost trees along the path to the root node. So, the invariant of that node (a constant number of ENNs) can be propagated to O(n) nodes. As a result, the number of invariants at a node increases to O(n). This must be done for all nodes (O(n²)), so that the overall number of steps to find a special, but arbitrary TAG derivation tree is O(n³). These suggestions can be used as a basis for finding all derivation trees in parallel instead of a single, but arbitrary one, because all intermediate states in the triangle matrix are shared. The only difference is that the indegree of a node cannot be restricted to 1, but to O(n), so that the exponent 3 increases to 4. The extension "log n" results from storing the foot node pointers, where addresses have to be represented instead of numbers of other cells as in the Vijay-Shanker-Joshi approach. In other words, an intuition for an upper time bound of the algorithm is that the recursion step can be seen as a CKY analysis, because particularly neighboring subtrees are combined to build a larger structure, where the constant number of nonterminals in a cell has to be replaced by O(n) candidates (O(n³) × n). Another intuition is given by a comparison with the Vijay-Shanker and Joshi approach. It is obvious that our new approach has a different time bound for the best and the worst case, because all possibilities violating the necessary condition to have a context-free derivation are filtered out before step two is started. In the Vijay-Shanker and Joshi approach, for all context-free subtrees of the triangle matrix the invariant is computed.
But this fact doesn't modify the upper time bound. The main difference lies in the execution time for the two different invariants. In the Vijay-Shanker-Joshi approach, all different TAG derivations for a subtree are gathered in the stack of a node in a cell. For all these possibilities, the building process of larger structures is done separately, although the differences in the derivation tree don't concern the auxiliary tree actually mentioned. Our local invariant always handles an auxiliary tree with no further information about the derivation. Therefore each elimination of an auxiliary tree is done only once for all derivation trees. From this point of view, the different exponent results from the existence of O(n²) stack pointers at each node in the triangle matrix. For both approaches, the integration of TAGs with Constraints should be mentioned in common. For the new approach, this extension is obligatory because the normal form transformation produces a TAGC. Anyway, this additional computation doesn't change the upper time bound, because constraints are local and their satisfaction only has to be tested iff an innermost tree should be eliminated (i.e., a stack pointer has to be extended). In this case it has to be checked whether all obligatory constraints in the eliminated tree are satisfied and whether the adjoining was allowed (by analyzing to which tree the rule of the incoming edge in the root node belongs and what constraint the end point of that edge has).

5 SUMMARY

In the application domain of natural language processing, the execution time in an average case is of great interest as well. For the new parsing algorithm, a result is not yet known, but in basic considerations the main idea is to take the depth of analyzed parts of derivation trees as a constant term to come up with a result of O(n³). Actually, an implementation of the presented formalism exists, written in Common LISP on a Hewlett Packard machine of the 9000 series (for more details about the implementation see [Buschauer et al. 89]). To give an idea of the response time, the analysis of a sentence of about 10 to 15 words and a grammar of about 20 to 30 elementary trees takes at most 6 milliseconds. Currently, this implementation is being extended to build a workbench supporting a linguist in writing and testing large TAG grammars (respectively TAGs with Unification). Finally, other approaches for TAG parsing should be mentioned and compared with the presented result. In the literature, the two Earley-based approaches of Schabes and Joshi (see [Schabes, Joshi 89]) and of Lang ([Lang 86]) are proposed. The lowest upper time bound for the Schabes-Joshi approach is O(n⁹) and for the approach of Lang O(n⁶). But both algorithms come up with better results in the best and in the average case. In the framework of parallel parsing, results for TAGs are also proposed. In [Palis et al. 87] a linear-time approach on O(n⁵) processors and in [Palis, Shende 88] a sublinear (O(log² n)) algorithm is described. One future perspective is to parallelize the new approach by the same method, so that the expected result should be a linear time bound on O(n²) processors. More concretely, an optimal layout for two processors is looked for, where independent subtrees have to be specified (candidates are not always total innermost trees, e.g., if only one TAG derivation exists where all innermost trees are nested).
Further on, we concentrate on appropriate extensions of the TAG formalism for analysis as well as generation of natural language, with the ambitious aim to verify that TAGs (in some extension) are appropriate for a bidirectional and integrated description of syntax, semantics and pragmatics.

ACKNOWLEDGEMENTS

This paper is based on thesis work done under the supervision of Wolfgang Wahlster and Günther Hotz. I would like to gratefully acknowledge Hans Arz, Bela Buschauer, Günther Hotz, Paul Molitor, Peter Poller, Anne Schauder and Wolfgang Wahlster for their valuable interactions. I would like to thank Aravind Joshi for his helpful comments in earlier discussions and especially on this paper.

REFERENCES

B. Buschauer, P. Poller, A. Schauder, K. Harbusch. 1989. Parsing von TAGs mit Unifikation. Saarbrücken, F.R.G.: "AI-Laboratory" Memo, Dept. of Computer Science, Univ. of Saarland.
K. Harbusch. 1989. Effiziente Strukturanalyse natürlicher Sprache mit Tree Adjoining Grammars. PhD Thesis, Saarbrücken, F.R.G.: Dept. of Computer Science, Univ. of Saarland.
J. E. Hopcroft, J. D. Ullman. 1979. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley, Reading, Massachusetts.
A. K. Joshi. 1985. An Introduction to Tree Adjoining Grammars. Philadelphia, Pennsylvania: Technical Report MS-CIS-86-64, Dept. of Computer and Information Science, Moore School, Univ. of Pennsylvania.
A. K. Joshi, L. S. Levy, M. Takahashi. 1975. Tree Adjoining Grammars. Journal of Computer and Systems Science 10:1, pages 136-163.
T. Kroch, A. K. Joshi. 1985. Linguistic Relevance of Tree Adjoining Grammars. Philadelphia, Pennsylvania: Technical Report MS-CIS-85-16, Dept. of Computer and Information Science, Moore School, Univ. of Pennsylvania.
B. Lang. 1989 forthcoming. A Uniform Framework for Parsing. Proceedings of the International Workshop on Parsing Technologies, Pittsburgh, 28th-31st of August.
G. Pullum. 1984. On Two Recent Attempts to Show That English Is Not a CFL. Computational Linguistics 10(4): 182-186.
M. A. Palis, S. Shende, D. S. L. Wei. 1987. An Optimal Linear-Time Parallel Parser for Tree Adjoining Languages. Philadelphia, Pennsylvania: Technical Report MS-CIS-87-36, Dept. of Computer and Information Science, Moore School, Univ. of Pennsylvania.
Y. Schabes, A. Joshi. 1988. An Earley-Type Parsing Algorithm for Tree Adjoining Grammars. Philadelphia, Pennsylvania: Technical Report MS-CIS-88-36, Dept. of Computer and Information Science, Moore School, Univ. of Pennsylvania.
S. M. Shieber. 1985. Evidence against the Context-Freeness of Natural Language. Linguistics and Philosophy 8: 333-343.
K. Vijay-Shanker. 1987. A Study of Tree Adjoining Grammars. Philadelphia, Pennsylvania: PhD Thesis, Dept. of Computer and Information Science, Moore School, Univ. of Pennsylvania.
K. Vijay-Shanker, A. K. Joshi. 1985. Some Computational Properties of Tree Adjoining Grammars. Chicago, Illinois: Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics: 82-93.
LEXICAL AND SYNTACTIC RULES IN A TREE ADJOINING GRAMMAR

Anne Abeillé*
LADL and UFRL
University of Paris 7-Jussieu
[email protected]

ABSTRACT

Taking examples from English and French idioms, this paper shows that not only constituent structure rules but also most syntactic rules (such as topicalization, wh-question, pronominalization ...) are subject to lexical constraints (on top of syntactic, and possibly semantic, ones). We show that such puzzling phenomena are naturally handled in a 'lexicalized' formalism such as Tree Adjoining Grammar. The extended domain of locality of TAGs also allows one to 'lexicalize' syntactic rules while defining them at the level of constituent structures.

* The author wants to thank Yves Schabes, Aravind Joshi, Maurice Gross, Sharon Cote and Tilman Becker for fruitful discussions, and Robert Giannasi and Beatrice Santorini for their help.

1 INTRODUCTION TO 'LEXICALIZED' GRAMMARS

1.1 Lexicalizing Phrase Structure rules

In most current linguistic theories the information put in the lexicon has been increased in both amount and complexity. Viewing constituent structures as projected from the lexicon, for example, avoids the often noted redundancy between Phrase Structure rules and subcategorization frames. Lexical constraints on the well-formedness of linguistic outputs have also simplified the previous transformational machinery. Collapsing phrase-structure rules into the lexicon is the overt purpose of 'lexicalized' grammars as defined by Schabes, Abeillé, Joshi 1988: a 'lexicalized' grammar consists of a finite set of elementary structures, each of which is systematically associated with one (or more) lexical item serving as 'head'. These structures are combined with one another with one or more combining operation(s). These structures specify extended domains of locality (as compared to CFGs) over which lexical constraints can be stated. The 'grammar' consists of a lexicon where each lexical item is associated with a finite number of structures for which that item is the 'head'. We here assume familiarity with Tree Adjoining Grammars, which are naturally 'lexicalized' according to this definition2. Each elementary tree is constrained to have at least one terminal at its frontier which serves as 'head' (or 'anchor'). Sentences of a TAG language are derived from the composition of an S-rooted initial tree with other elementary trees by two operations: substitution (the same operation used by context-free grammars) or adjunction, which is more powerful.

2 Categorial grammars are also 'lexicalized'.

Schabes, Abeillé, Joshi 1988 show that context-free grammars cannot in general be lexicalized (using substitution only as the combining operation). They also show that lexicalized grammars are interesting from a computational point of view, since lexicalization simplifies parsing techniques, because the parser uses only a relevant subset of the entire grammar: in a first stage, the parser selects the set of elementary structures associated with the lexical items in the input sentence, and in a second stage the sentence is parsed with respect to this set. As shown by Schabes, Joshi 1989, a parser's performance is thus improved. We show here that such 'lexicalization' should be extended to other components of the grammar as well, thus challenging the usual distinction between 'lexical' and 'syntactic' rules. Further parsing simplification is therefore expected.

1.2 'Lexicalizing' lexical rules

As has often been noticed, rules (or transitivity alternations) such as passive, particle hopping, middle, dative shift ...
are subject to lexical idiosyncrasies. There are of course syntactic and semantic constraints governing such phenomena, but lexical ones seem to be at stake too. If one considers double object constructions, passivization of the second NP is regularly ruled out on syntactic grounds. Passivization of the first NP, on the other hand, is subject to lexical restrictions, as the example of 'cost', opposed to 'envy' or 'spare', shows:

They envy John his new car.
John is envied his new car.
The mistake cost Mary a chance to win.
?* Mary was cost a chance to win.
The judge kindly spared John the ordeal.
John was kindly spared the ordeal.

One might argue that such differences may be due to some semantic constraints, but even verbs with similar meaning may exhibit striking differences. For example, in French, 'regarder' in its figurative reading (to concern) and 'concerner', which is a true synonym in this context, behave differently:

Cette affaire regarde Jean
* Jean est regardé par cette affaire
Cette affaire concerne Jean
Jean est concerné par cette affaire (M. Gross 1975)

It also seems a lexical phenomenon that "change" but not "transform" allows for the ergative alternation in English:

The witch changed/transformed John into a wolf
John changed into a wolf
* John transformed into a wolf (G. Lakoff 1970)

To take another example, dative shift (or there-insertion) is often thought of as applying to a semantically restricted set of verbs (e.g. verbs of communication or of change of possession, for dative), but this does not predict the difference between 'tell', which allows for it, and 'announce' or 'explain', which do not3:

John told his ideas to Mary
John told Mary his ideas
John explained his ideas to Mary
* John explained Mary his ideas

3 To dismiss 'announce' or 'explain' on the mere basis of their Latin origin would not do, since 'offer', which comes from Latin as well, does exhibit dative shift.

Lexicalist frameworks such as GPSG, which handles such phenomena by metarules (defined on 'lexical' PS rules), or LFG, which defines them at the f-structure level (i.e. between 'lexical forms'), could capture such restrictions. D. Flickinger 1987 handles them explicitly with a hierarchical lexicon in HPSG, considering such rules to hold between two word classes (verbs here) and to apply by default unless they are explicitly blocked in the lexicon. But all these representations rely on a clear-cut distinction between lexical and syntactic rules, and it is not clear how they could be extended to the latter.

2 LEXICAL CONSTRAINTS ON SYNTACTIC RULES

The distinction between 'syntactic' rules4, which do not usually change argument structure nor meaning of the sentence and are supposed to apply regularly on syntactic structures, and 'lexical' rules, which alter argument structure, may change the meaning of the predicate and may exhibit some lexical idiosyncrasies, usually overlooks the fact that both are subject to lexical constraints. There have often been discussions about whether certain rules (e.g. passive or extraposition) should be considered of one kind or the other. But it has seldom been realized, to the best of our knowledge, how often 'syntactic' rules are prevented from applying on what seem purely lexical grounds5. Our discussion crucially relies on idiomatic or semi-idiomatic constructions. We believe that a sizable grammar of natural language, as well as any realistic natural language application, cannot ignore them, since their frequency is quite high in real texts (M. Gross 1989). We first present examples of such lexical constraints on topicalization, pronominalization and wh-question for both English and French idioms. We then show that similar constraints can be found in non idiomatic sentences.

4 We use the term 'rule' for convenience. It does not matter for our purpose whether these phenomena are captured by derivation rules as such or by constraints on the well-formedness of output structures.
5 An interesting exception being Kaplan and Zaenen 1989's proposal that wh-movement and topicalization be constrained at the f-structure level, i.e. by LFG's 'lexical forms'.

2.1 Flexibility of idiomatic constructions

Idioms are usually divided into two sets (e.g. J. Bresnan 1982, T. Wasow et al. 1982): 'fixed' ones (not subject to any syntactic rule) and flexible ones (presumably subject to all). However, there is quite a continuum between both. Let us take two French idioms usually considered as 'fixed': 'casser la croûte' (to have a bite) and 'demander la lune' (to ask for the impossible). It is true that passivization and wh-question do not apply to either. But pronominalization for the former, and cleft-extraction (c'est ... que) for the latter, do6:

Paul a cassé la croûte (Paul had a bite)
# Quelle croûte casse-t-il ?
# C'est une petite croûte qu'il a cassée.

6 # marks that the sentence is not possible with the desired idiomatic interpretation. There may be some variations among speakers about acceptability judgements on such sentences (and on some of the following ones). Such variability is indeed a property of lexical phenomena.
We believe that a sizable grammar of natural language, as well as any realistic natural language application, cannot ignore them, since their frequency is quite high in real texts (M. Gross 1989). We first present examples of such lexical constraints on topicalization, pronomi~aliTation and wh- question for both English and French idioms. We then show that similar constraints can be found in non idiomatic sentences. 2.1 Flexibility of idiomatic constructions Idioms are usually divided into two sets (eg J. Bresnan 1982, T. Wasow et al. 1982): 'fLxed' ones (not subject to any syntactic rule) and flexible ones (presumably subject to all). However, there is quite a continuum between both. Let us take two French idioms usually considered as "fixed': 'casser la croflte' (to have a bite) and 'demander ia lune' (to ask for the impossible). It is true that passivation or wh-question do not apply to either. But pronominalization for the former, cleft-extraction (c'est que) for the latter do6: Paul a casse la crotite (Paul had a bite) # Quelle crofite casse-t-il ? # C'est une petite cro0te qu'il a cassee. derivation rules as such or by constraints on the well- formedness of ou~ut structures. 5 An interesting exception being Kaplan and Zaenen 1989's proposal that wh-movement and topicalization be constrained at the f-structure level, ie by LFG's 'lexical forms'. 6 # marks that the sentence is not possible with the desired idiomatic interpretation. There may be some variations among speakers about acceptability judgements on such sentences (and on some of the following ones). Such variability is indeed a property of lexical phenomena. 293 ? Paul est en train de casser une petite cro~te et j'en casserais bien une anssi. (Paul is having a little bite, I wouldn't mind having one too) Jeanne demande la lune # Ouelle lune demande-t-elle ? C'est la lune qu'clle demande ! # Jeanne demande la lune et Paule la demande aussi. (Jeanne is askin~ for the moon and i'm asking for it too) These idioms are thus not completely fixed (as opposed to idioms such as 'casser sa pipe' or 'kick the bucket'), and some grammatical function must be assigned to their frozen NPs. But the differences among them are somewhat unexpected: 'casser la cro~te' (where the noun can be modified and take several determiners) does not allow for more rules than 'demander la lune' (where the frozen NP is completely fixed). If one now takes an idiom usually considered as flexible, 'briser la glace' (to break the ice), which does passivize, we notice the same distribution as with 'casser la crof, e': Paul a bris6 la glace # Ouelle glace a-t-il bris6e ? # C'est la glace qu'il brise 77 Jean a bris6 la glace hier et c'est ~ moi de la briser aujourd'hul. (Paul broke the ice yesterday and I have to break it today) Passive is allowed but not wh-question, nor cleft extraction. It is difficult to dismiss such phenomena as rare exceptions. Looking at numerous idioms shows that one combination of such rules is not more frequent than the other. It is also difficult to fred a clear semantic principle at work here. Similar restrictions seem to be at work in English. If one takes some English idioms usually considered as 'flexible' (or even not idiomatic at all): NP0 give hell/the boot to NP1. The main verb 'give' seems to behave syntactically and semantically as in non idiomatic constructions: Dative shift applies and we have the regular semantic alternation : NP1 get hell/the boot (from NP0), with identical meaning. 
But it is not the case that all expecte rules apply: passive is blocked, pronominalization on the object too: # Hell was given to Mary (by John) # The boot was given to Mary (by John) # Alice gave hell to Paul yesterday and she is giving it to Oscar now. # Oscar gave the boot to Mary, and he will give it to Bob too. 294 Syntactic rules may also apply differently to distinct 'flexible' idioms. It is easy to lind idioms which do passivize but don't allow for pronominaliTation or topicaliTation in the same way: They hit the bull's eye. The bull's eye, they hit. ? John hit the bull's eye and Paul hit it too. They buried the hatchet. 77 The hatchet, they buried. # John buried the hatchet and Paul buried it/one too. For relativation also, there might be similar differences: The strings that Chris pulled helped hime get the job (Wasow et al. 1982) # The bull's eye that John hit helped him get the job. # The hatchet that he buried helped him get the job. Distinguishing between fixed and flexible idioms is thus not sufficient. Because different rules apply to them differently, without a clear hierarchy (contrary to Fraser 1970), one should distinguish as many different types of flexibility as there are possible combinations of such rules. Similarly, if one wants to follow T. Wasow et al. 1982 's suggestion that some kind of compositional semantics should be held responsible for the syntactic flexibility of idioms, as many degrees of compositionality should be defined as there are combinations of syntactic properties. Direct encoding of the latter is thus preferable, and such a semantic 'detour' does not seem to help. This does not mean that no regularities could be found for idioms' syntax but that they have to be investigated at a more lexical level. 2.2 Some lexical constraints on non Idiomatic constructions Going back to non idiomatic constructions, it seems that their syntactic properties may be subject to similar lexical idiosyncrasies. If one considers double objects constructions, It seems a lexical phenomenon that wh-question on the second N-P is allowed with 'give' or 'spare', and not with 'envy' or 'cost', and that topicalization is allowed with 'spare' only: They envy John his new car * What/* Which car do they envy John 7 * This brand new car, everyone envies John The mistake cost Mary a chance to win * What/ *Which chance did the mistake cost Mary ? * This unique chance, the mistake cost Mary The judge' spared John the ordeal What / Which ordeal did the judge spare John ? This ordeal, the judge kindly spared John If one now considers the first NP, topicMi-ation appfies differently to: * Mary, the mistake cost a chance to win .9 John, you have always envied his extraordinary luck John, the judge kindly spared the ordeal In French, as noted by M. Gross 1969, properties usually thought of as applying to aLl 'direct objects'(passivation, Que-question and Le- cliticizatlon) may apply in fact unpredictingiy. Although the objects of a verb like 'almer' (love) take objects undergoing the three of them, the object of 'valoir'(be worth) only allows for Que- question and Lc-¢llti¢i|TagiOiX, that of 'co~]tter' (cost) only for Que-quesfion and that of 'passer' (spend (time)) only allows for Le-cliticization: Each elementary tree in a Tag is lexicalized in the sense that it is headed by (at least) one lexical item. The category of a word in the lexicon is the name of the tree it selects. We only consider here sentential trees for the sake of simpficity. 
3 LEXICALIZED RULES IN A TREE ADJOINING GRAMMAR

3.1 Tree Families

Each elementary tree in a TAG is lexicalized in the sense that it is headed by (at least) one lexical item. The category of a word in the lexicon is the name of the tree it selects. We only consider here sentential trees for the sake of simplicity. What lexical heads select is in fact a set of such elementary trees called a 'Tree Family' (Abeillé 1988, Abeillé et al. 1990), each tree corresponding to a certain constituent structure (initial trees for wh-questions, auxiliary trees for relative clauses...). This is the level at which syntactic generalizations can be stated, since each elementary tree may bear specific constraints independently of any lexical item8. A Tree Family consists in fact of all the constituent structure trees which are possibly allowed for a given predicate9. Examples of trees in the n0Vn1 Family (verbs taking two NP arguments) are the following10:

[Tree diagrams for the elementary trees α1-α4 of the n0Vn1 Family omitted.]

8 Further subdividing these Tree Families, similarly to M. Gross 1975's verb tables for French, and to D. Flickinger 1987's word classes for English, will help reduce the number of features, and thus the amount of seemingly idiosyncratic information, associated with each verb. However, as noted by both authors, lexical idiosyncrasies will never be eliminated altogether.
9 Tree Family names (n0V, n0Vn1...) are somewhat similar to 'lexical forms' in LFG in the sense that they capture both the predicate argument structure and the associated grammatical functions (which we note by indices: 0 for subject, 1 for first object...). Notice that the Tree Family name does not change when lexical rules apply.
10 ↓ marks a substitution node, ◊ marks the head. We use here standard TAG trees for commodity of exposition, although recent independent linguistic work suggests slightly modifying them, challenging for example the distinction between VP and V levels (see Abeillé, in preparation).

Each tree is identified by a Tree Family name associated with a feature bundle corresponding to the rules it involves. For example, α1, α2, α3 and α4 are respectively marked11:

α1 (n0Vn1): passive = -; Wh-0 = -; Wh-1 = -; erg = -
α2 (n0Vn1): passive = +; Wh-0 = -; Wh-1 = -
α3 (n0Vn1): passive = -; Wh-1 = -; erg = +
α4 (n0Vn1): passive = +; Wh-1 = +; Wh-0 = -

11 One might explicitly define metarules, or links, between such trees: a passive rule, for example, changes the passive feature of the tree and interverts the features bearing on N0 and N1. Work is currently being done along this line with T. Becker, Y. Schabes and K. Vijay-Shanker.

A given tree can belong to several tree families at the same time, which helps factorizing the grammar in a parsing perspective. For example, α3 can also be considered as belonging to the n0V Family (for verbs with one NP argument) with a different feature bundle: passive = -; Wh-0 = -. The lexical items heading the tree constrain its interpretation, e.g. 'sleep' will interpret α3 as n0V, while 'bake' or 'walk' interpret it as n0Vn1. Lexical constraints on syntactic and lexical rules are handled by having the head select its own subset of trees in its tree family. For example, 'resemble' selects only active trees, 'rumored' only passive ones, and 'love' selects both12:
[love],V: n0Vn1 [erg = -]
[resemble],V: n0Vn1 [passive = -; erg = -]
[rumored],V: n0Vn1 [passive = +]
[donate],V; to,P: n0VPn1 [dative = -; erg = -]
[give],V; to,P: n0VPn1 [erg = -]
[spare],V: n0Vn1n2

12 We note with square brackets [] the set of inflected forms of a lexical item. For example, [give] = give, gives, gave, giving, given. We use a restriction principle to rule out erg = + whenever passive = + (or dative = +), and vice versa, so the ergative feature does not have to appear in the lexicon for 'rumored'.

These features work as follows: when nothing is said about a feature, it means that the predicate selects trees with the feature being plus or minus; when a feature is marked plus, it means that only trees with this feature plus are selected (i.e. that the corresponding rule is 'forced' to apply). Such 'lexicalization' of syntactic rules applies similarly in idiomatic and non idiomatic constructions.

3.2 Idioms in a Lexicalized Tree Adjoining Grammar

TAGs seem a natural framework to represent structures which at the same time are semantically non compositional and should be assigned regular syntactic structures (Abeillé and Schabes 1989, 1990). Idioms thus fall into the same grammar as non idiomatic constructions. The only specificity of idioms is that they are selected by a multicomponent head (called 'anchor') and may select elementary trees which are more extended than non idiomatic constructions. Here are some examples of elementary trees for 'kick the bucket', 'bury the hatchet' and 'take NP1 into account':

[Tree diagrams for the three idiom trees omitted.]

The lexical anchors are respectively 'kick', 'the' and 'bucket' for α1, 'bury', 'the' and 'hatchet' for α2, and 'take', 'into' and 'account' for α3. The idiomatic interpretation of sentences such as 'John kicked the bucket', as opposed to their literal reading, is straightforwardly based on their distinct derivation trees13:

[Derivation trees for the literal and the idiomatic derivation omitted.]

13 The derived trees are the same (modulo the syntactic features explained above).

Idiomatic and non idiomatic elementary trees are gathered into tree families according to the same principles. Here are some examples of the trees belonging to the Family of idioms with a frozen object (n0VDN1):

[Tree diagrams for the n0VDN1 Family omitted.]

Notice that the tree family name tells not only about the argument structure but also about the head being multicomponent or not (all head elements are noted with capital letters). Usually, no part of a multicomponent head can be omitted, and trees that are possible for this argument structure but in which all head elements could not be inserted will be ruled out.
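As a concrete illustration of the selection mechanism of Section 3.1 (a head keeps only the trees of its family that are compatible with its features), here is a minimal sketch; the family encoding and all names are illustrative assumptions, not part of the grammar described here.

```python
# Hypothetical encoding of the n0Vn1 Family with the feature bundles
# given above; unmentioned features are treated as selecting both values.

N0VN1_FAMILY = [
    {"name": "a1", "passive": "-", "Wh-0": "-", "Wh-1": "-", "erg": "-"},
    {"name": "a2", "passive": "+", "Wh-0": "-", "Wh-1": "-"},
    {"name": "a3", "passive": "-", "Wh-1": "-", "erg": "+"},
    {"name": "a4", "passive": "+", "Wh-1": "+", "Wh-0": "-"},
]

def select_trees(family, head_features):
    """Return the trees of the family compatible with the head's features."""
    return [t["name"] for t in family
            if all(t.get(f, v) == v for f, v in head_features.items())]

print(select_trees(N0VN1_FAMILY, {"erg": "-"}))                  # 'love'
print(select_trees(N0VN1_FAMILY, {"passive": "-", "erg": "-"}))  # 'resemble'
print(select_trees(N0VN1_FAMILY, {"passive": "+"}))              # 'rumored'
```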
What-questions (noted Wh-i), for example, are generally disallowed with frozen nominals (and thus not noted for each lexical entry), whereas questions with wh-determiners (noted Wh-Ni) are not:

John took a trip to Spain
# What did John take ?
? Which trip to Spain did John take ? (Abeillé et al. 1990)

In fact, as has been noted by M. Gross 1989 for French, Wh-Ni questions seem to be ruled out when the determiner of the argument is completely fixed, as the following contrasts show:

John spilled the/those beans
John buried the/#this/#a hatchet
Which bean(s) did John spill ?
# Which hatchet did John bury ?

This generalization can be captured since the Tree Family names will be different (with D for frozen determiners, and d for non-frozen ones):

[spill],V; [beans],N: n0VdN1
[bury],V; the,D; hatchet,N: n0VDN1

The trees for the Wh-N questions will thus belong only to the corresponding 'd' Families, and not to the 'D' ones. Similarly, idioms bear syntactic features constraining the elementary trees of the Tree Family they select. In the n0VDN1 Tree Family, for example, 'kick the bucket' selects only α1 and the trees corresponding to wh-movement on N0; 'bury the hatchet' also selects the trees for passive (and possibly topicalization on N1):

[bury],V; the,D; hatchet,N: n0VDN1 [Wh-N1 = -]
[kick],V; the,D; bucket,N: n0VDN1 [passive = -; Wh-N1 = -; Top-N1 = -]

There are some idioms which exist only in the passive form, or in the question form, and the corresponding trees are directly selected. In French, 'être pris par le temps' (to be very busy) lacks its active counterpart (* Le temps prend Jean), and 'Quelle mouche a piqué NP ?' (What's eating NP ?) lacks its non-interrogative counterpart, although it allows for passive: 'Par quelle mouche a-t-il été piqué ?' (M. Gross 1989).

[prendre],V; le,D; temps,N: DN0Vn1 [passive = +]
[piquer],V; mouche,N: N0Vn1 [Wh-N0 = +]

CONCLUSION

It has been shown that taking idiomatic or semi-idiomatic constructions into account in a French or English grammar forces one to define some lexical constraints on syntactic rules such as wh-question, pronominalization and topicalization. Such a lexical treatment has been exemplified using Lexicalized Tree Adjoining Grammars. An interesting point about TAGs is that, due to their extended domain of locality, they enable one to consider as 'lexical' syntactic rules bearing on constituent structures, and not only rules changing the syntactic category of a predicate (as D. Dowty 1978) or rules changing the argument structure of a predicate (as in T. Wasow 1977 or D. Flickinger 1987).

REFERENCES

Abeillé A., 1988. "Parsing French with Tree Adjoining Grammar", Proceedings of COLING'88, Budapest.
Abeillé A., Schabes Y., 1989. "Parsing idioms with Lexicalized TAGs", Proceedings of the European ACL meeting, Manchester.
Abeillé A., Schabes Y., 1990. "Non compositional discontinuous constituents in Lexicalized TAG", Proceedings of the international workshop on discontinuous constituency, Tilburg.
Abeillé A., Bishop K., Cote S., Schabes Y., 1990. A Lexicalized Tree Adjoining Grammar for English, Technical Report, CIS Dept., University of Pennsylvania, Philadelphia.
Bresnan J., 1982. "Passive in lexical theory", in Bresnan (ed), The Mental Representation of Grammatical Relations, MIT Press.
Dowty D., 1978. "Governed transformations as lexical rules in a Montague grammar", Linguistic Inquiry, 9:3.
Flickinger D., 1987. Lexical Rules in the Hierarchical Lexicon, PhD Dissertation, Stanford University.
Gross M., 1969. "Remarques sur la notion d'objet direct en français", Langue française, n°3, Paris.
Gross M., 1975. Méthodes en syntaxe, Hermann, Paris.
Gross M., 1989. "Les expressions figées en français", Technical Report, LADL, University Paris 7, Paris.
Kaplan R., Zaenen A., 1989. "Long distance dependencies, constituent structure and functional uncertainty", in Baltin & Kroch (eds), Alternative Conceptions of Phrase Structure, Chicago Press.
Lakoff G., 1970. Irregularity in Syntax, Holt, Rinehart and Winston, New York.
Schabes Y., Abeillé A., Joshi A., 1988. "Parsing strategies with 'lexicalized' grammars", Proceedings of COLING'88, Budapest.
Wasow T., 1977. "Transformations and the lexicon", in P. Culicover et al. (eds), Formal Syntax, Academic Press, New York.
Wasow T., Sag I., Nunberg G., 1982. "Idioms: an interim report", Proceedings of the XIIIth International Congress of Linguists, Tokyo.
BOTTOM-UP PARSING EXTENDING CONTEXT-FREENESS IN A PROCESS GRAMMAR PROCESSOR

Massimo Marino
Department of Linguistics - University of Pisa
Via S. Maria 36, I-56100 Pisa - ITALY
Bitnet: [email protected]

ABSTRACT

A new approach to bottom-up parsing that extends Augmented Context-Free Grammar to a Process Grammar is formally presented. A Process Grammar (PG) defines a set of rules suited for bottom-up parsing and conceived as processes that are applied by a PG Processor. The matching phase is a crucial step for process application, and a parsing structure for efficient matching is also presented. The PG Processor is composed of a process scheduler that allows immediate constituent analysis of structures, and behaves in a non-deterministic fashion. On the other side, the PG offers means for implementing specific parsing strategies, improving the lack of determinism innate in the processor.

1. INTRODUCTION

Bottom-up parsing methods are usually preferred because of their property of being driven from both the input's syntactic/semantic structures and reduced constituent structures. Different strategies have been realized for handling the construction of structures, e.g., parallel parsers, backtracking parsers, augmented context-free parsers (Aho et al., 1972; Grishman, 1976; Winograd, 1983). The aim of this paper is to introduce a new approach to bottom-up parsing starting from a well-known and established framework - parallel bottom-up parsing in immediate constituent analysis, where all possible parses are considered - making use of an Augmented Phrase-Structure Grammar (APSG). In such an environment we must perform efficient searches in the graph the parser builds, and limit as much as possible the building of structures that will not be in the final parse tree. For the efficiency of the search we introduce a Parse Graph Structure, based on the definition of adjacency of the subtrees, that provides an easy method of evaluation for deciding at any step whether a matching process can be accomplished or not. The control of the parsing process is in the hands of an APSG called Process Grammar (PG), where grammar rules are conceived as processes that are applied whenever proper conditions, detected by a process scheduler, exist. This is why the parser, called PG Processor, works following a non-deterministic parallel strategy, and only the Process Grammar has the power of altering and constraining this behaviour by means of some Kernel Functions that can modify the control structures of the PG Processor, thus improving determinism of the parsing process, or avoiding construction of useless structures. Some of the concepts introduced in this paper, such as some definitions in Section 2, are a development from Grishman (1976), which can also serve as an introductory reading regarding the description of a parallel bottom-up parser which is, even if under a different aspect, the core of the PG Processor.

2. PARSE GRAPH STRUCTURE

The Parse Graph Structure (PGS) is built by the parser while applying grammar rules. If s = a1 a2 ... an is an input string, the initial PGS is composed of a set of terminal nodes <0,$>, <1,a1>, <2,a2>, ..., <n,an>, <n+1,$>, where nodes 0 and n+1 represent border markers for the sentence. All subsequent non-terminal nodes are numbered starting from n+2.

Definition 2.1. A PGS is a triple (NT,NN,T) where NT is the set of the terminal node numbers {0, 1, ..., n, n+1}; NN is the set of the non-terminal node numbers {n+2, ...}; and T is the set of the subtrees.
The elements of NN and NT are numbers identifying nodes of the PGS whose structure is defined below, and throughout the paper we refer to nodes of the PGS by means of such node numbers.

Definition 2.2. If k ∈ NN, the node i ∈ NT labeling ai at the beginning of the clause covered by k is said to be the left corner leaf of k, lcl(k). If k ∈ NT then lcl(k) = k.

Definition 2.3. If k ∈ NN, the node j ∈ NT labeling aj at the end of the clause covered by k is said to be the right corner leaf of k, rcl(k). If k ∈ NT then rcl(k) = k.

Definition 2.4. If k ∈ NN, the node h ∈ NT that follows the right corner leaf of k, rcl(k), is said to be the anchor leaf of k, al(k), and al(k) = h = rcl(k)+1. If k ∈ NT-{n+1} then al(k) = k+1.

Definition 2.5. If k ∈ NT, the set of the anchored nodes of k, an(k), is an(k) = {j ∈ NT∪NN | al(j) = k}. From this definition it follows that for every k ∈ NT-{0}, an(k) contains at the initial time the node number (k-1).

Definition 2.6.
a. If k ∈ NT, the subtree rooted in k, T(k), is represented by T(k) = <k,lcl(k),rcl(k),an(k),cat(k)>, where k is the root node; lcl(k) = rcl(k) = k; an(k) = {(k-1)} initially; cat(k) = ak, the terminal category of the node.
b. If k ∈ NN, the subtree rooted in k, T(k), is represented by T(k) = <k,lcl(k),rcl(k),sons(k),cat(k)>, where k is the root node; sons(k) = {s1, ..., sp}, si ∈ NT∪NN, i = 1, ..., p, is the set of the direct descendants of k; cat(k) = A, a non-terminal category assigned to the node.

From the above definitions the initial PGS for a sentence s = a1 a2 ... an is: NT = {0,1,...,n,n+1}, NN = {}, T = {T(0),T(1),...,T(n),T(n+1)}; and: T(0) = <0,0,0,{},$>, T(i) = <i,i,i,{i-1},ai> for i = 1,...,n, and T(n+1) = <n+1,n+1,n+1,{n},$>. With this PGS the parser starts its work reducing new nodes from the already existing ones. If for some k ∈ NN, T(k) = <k,lcl(k),rcl(k),{s1,...,sp},A>, and T(si) = <si,lcl(si),rcl(si),{si1,...,sit},zi> ∈ T, for i = 1,...,p, are the direct descendants of k, then k has been reduced from s1,...,sp by some grammar rule whose reduction rule, as we shall see later, has the form (A ← z1...zp), and the following holds: lcl(k) = lcl(s1), rcl(s1) = lcl(s2)-1, rcl(s2) = lcl(s3)-1, ..., rcl(sp-1) = lcl(sp)-1, rcl(sp) = rcl(k).
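A minimal sketch of the initial PGS of Definitions 2.1-2.6 may help fix the data structure; the dict layout and all names are illustrative assumptions, not the paper's implementation.

```python
# Build terminal subtrees <k, lcl, rcl, an, cat> for '$ a1 ... an $'.
# Field names (lcl, rcl, an, cat) follow Definitions 2.2-2.6.

def initial_pgs(words):
    n = len(words)
    cats = ["$"] + list(words) + ["$"]
    trees = {}
    for k in range(n + 2):
        trees[k] = {
            "lcl": k, "rcl": k,
            "an": set() if k == 0 else {k - 1},  # anchored nodes (Def. 2.5)
            "cat": cats[k],
        }
    return {"NT": set(range(n + 2)), "NN": set(), "T": trees}

pgs = initial_pgs(["the", "cat", "sleeps"])
assert pgs["T"][2]["an"] == {1}   # node 2 initially anchors node 1
```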
From the relations just stated we can give the following definition:

Definition 2.7. If {s1,...,sp} is a set of nodes in the PGS, then their subtrees T(s1),...,T(sp) are said to be adjacent when rcl(si) = lcl(si+1)-1 or, alternatively, al(si) = lcl(si+1), for i = 1,...,p-1.

During a parsing process a great effort is made in finding a set of adjacent subtrees that match a right-hand side of a reduction rule. Let (A ← z1 z2 z3) be a reduction rule; then the parser should start a match process to find all possible sets of adjacent subtrees such that their categories match z1 z2 z3. The parser scans the input string left-to-right, so reductions grow on the left of the scanner pointer, and for the efficiency of the match process the matcher must start from the last scanned or built node z3, finding afterwards z2 and z1, respectively, sailing in the PGS right-to-left and passing through adjacent subtrees. Steps through adjacent subtrees are easily accomplished by using the sets of the anchored nodes in the terminal nodes. It follows from the above definitions that if k ∈ NN then the subtrees adjacent to T(k) are given by an(lcl(k)), whereas if k ∈ NT then the adjacent subtrees are given by an(k). The lists of the anchored nodes provide an efficient way to represent the relation of adjacency between nodes. These sets, stored only in the terminal nodes, provide an efficient data structure useful for the matcher to accomplish its purpose. Figure 1 shows a parse tree at a certain time of a parse, where under each terminal node there is the corresponding list of the anchored nodes; its non-terminal subtrees are: T(9) = <9,1,2,{1,2},a9>, T(10) = <10,2,2,{2},a10>, T(11) = <11,2,3,{10,3},a11>, T(12) = <12,1,3,{9,3},a12>, T(13) = <13,4,5,{4,5},a13>, T(14) = <14,3,5,{3,4,5},a14>.

[Figure 1. A parse tree with the sets of the anchored nodes -- diagram omitted.]

A useful structure that can be derived from these sets is an adjacency tree, recursively defined as follows:

Definition 2.8. If (NT,NN,T) is a PGS for an input sentence s, and |s| = n, then the adjacency tree for the PGS is so built:
- n+1 is the root of the adjacency tree;
- for every k ∈ NT-{0,1}∪NN, the sons of k are the nodes in an(lcl(k)) unless an(lcl(k)) = {0}.

Figure 2 shows the adjacency tree obtained from the partial parse tree in Figure 1.

[Figure 2. Adjacency Tree -- diagram omitted.]

Any passage from a node k to one of its sons h in the adjacency tree represents a passage from a subtree T(k) to one of its adjacent subtrees T(h) in the PGS. Moreover, during a match process this means that a constituent of the right-hand side has been consumed, and on matching the first symbol the match process is finished. The adjacency tree also provides further useful information for optimizing the search during a match. For every node k, if we consider the longest path from k to a leaf, its length is an upper bound for the length of the right-hand side still to consume, and since the sons of k are the nodes in an(lcl(k)), the longest path is always given by the sequence of the terminal nodes from node 1 to node lcl(k)-1. Thus its length is just lcl(k)-1.

Property 2.1. If (NT,NN,T) is a PGS, (A ← z1...zp) is a reduction rule whose right-hand side has to be matched, and T(k) ∈ T is such that cat(k) = zp, then:
a. the string z1 ... zp is matchable iff p ≤ lcl(k);
b. for i = p,...,1, zi is partially matchable to a node h ∈ NN∪NT iff cat(h) = zi and i ≤ lcl(h).
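The following schematic sketch shows how a right-to-left matcher could exploit the anchored-node sets and the shortcut of Property 2.1, assuming the PGS layout of the previous sketch extended with non-terminal entries that carry their lcl pointer; all names are illustrative, not taken from the paper.

```python
def adjacent_to(pgs, k):
    """Subtrees adjacent to T(k): an(lcl(k)) (= an(k) for terminals)."""
    return pgs["T"][pgs["T"][k]["lcl"]]["an"]

def match(pgs, rhs, k):
    """Yield reduction sets (Definition 2.9) for rhs = [z1..zp] with right
    corner at node k, walking right-to-left through adjacent subtrees."""
    t = pgs["T"][k]
    if t["cat"] != rhs[-1] or len(rhs) > t["lcl"]:   # Property 2.1 pruning
        return
    if len(rhs) == 1:
        yield [k]
    else:
        for h in adjacent_to(pgs, k):
            for partial in match(pgs, rhs[:-1], h):
                yield partial + [k]
```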
Property 2. I. along with the adjacency relation provides a method for an efficient navigation within the PGS among the subtrees. This navigation is performed by the matcher in the PGS as visiting the adjacency tree in a pre-order fashion. It is easy to see that a pre-order visit of the adjacency tree scans all possible sequences of the adjacent subtrees in the PGS, but Property 2.1 provides a shortcut for avoiding useless passages when matchable conditions do not hold. When a match ends the matcher returns one or more sets of nodes satisfying the following conditions: Definition 2.9. A set RSet = {n I .... ,np} is a match for a string zl...zpiff cat(nl) ffi z i, for i = 1,...,p, and T(nl) is adjacent to T(ni, l), for i = 1 .... ,p-1. The set RSet is called a reduction set. The adjacency tree shows the hypothetical search space for searching the reduction sets in a PGS, thus it is not a representation of what memory is actually required to store the useful data for such a search. A more suitable representation is an adjacency directed graph defined by means of the lists of the anchored nodes in the terminal nodes, and by the pointers to the left comer leaf in the non- terminal nodes. 301 digraph for the parse tree of Figure 1. 3. PROCESS GRAMMAR The Process Grammar is an extension of the Augmented Context-Free Grammar such as APSG, oriented to bottom- up parsing. Some relevant features make a Process Grammar quite different from classical APSG. 1. The parser is a PG processor that tries to apply the rules in a bottom-up fashion. It does not have any knowledge about the running grammar but for the necessary structures to access its rules. Furthermore, it sees only its internal state, the Parse Graph Structure, and works with a non- deterministic strategy. 2. The rules are conceived as processes that the PG processor schedules somehow. Any rule defines a reduction rule that does not represent a rewriting rule, but rather a statement for search and construction of new nodes in a bottom-up way within the Parse Graph Structure. 3. The rules are augmented with some sequences of operations to be performed as in the classical APSG. In general, augmentations such as tests and actions concern manipulation of linguistic data at syntactic and/or semantic level. In this paper we are not concerned with this aspect (an informal description about this is in Marino (1989)), rather we examine some aspects concerning parsing strategies by means of the augmentations. In a Process Grammar the rules can have knowledge of the existence of other rules and the purpose for which they are defined. They can call some functions that act as filters on the control structures of the parser for the scheduling of the processes, thus altering the state of the processor and forcing alternative applications. This means that any rule has the power of changing the state of the processor requiring different scheduling, and the processor is a blind operator that works following a loose strategy such as the non-deterministic one, whereas the grammar can drive the processor altering its state. In such a way the lack of determinism of the processor can be put in the Process Grammar, implementing parsing strategies which are transparent to the processor. Definition 3.1. A Process Grammar PG is a 6-tuple (VT,Vs,S,R,Vs,F) where: . V r is the set of terminal symbols; - V N is the set of non-terminal symbols; - S¢ V N is the Root Symbol of PG; - R = {r 1 .... ,rt} is the set of the rules. 
Any rule r i in R is of the form r i = <red(ri),st(ri),t(ri),a(Q>, where red(ri) is a reduction rule (A~---a), A~ Vr~, ct~ (VruVN)+; st(r) is the state of the rule that can be active or inactive; t(Q and a(Q are the tests and the actions, respectively; - V s is a set of special symbols that can occur in a reduction rule and have a special meaning. A special symbol is e a, a null category that can occur only in the left-hand side of a reduction rule. Therefore, a reduction rule can also have the form (e~¢---a), and in the following we refer to it as e- reduction; - F = {fl ..... f] is a set of functions the rules can call within their augmentations. Such a definition extends classical APSG in some specific ways: first, a Process Grammar is suited for bottom-up parsing; second, rules have a state concerning the applicability of a rule at a certain time; third, we extend the CF structure of the reduction rule allowing null left-hand sides by means of e-reductions; fourth, the set F is the strategic side that should provide the necessary functions to perform operations on the processor structures. As a matter of fact, the set F can be further structured giving the PG a wider complexity and power. In this paper we cannot treat a formal extended definition for F due to space restrictions, but a brief outline can be given. The set F can be defined as F=Fr~uFt,. In F~ are all those functions devoted to operations on the processor structures (Kernel Functions), and, in the case of a feature-based system, in Ft, are all the functions devoted to the management of feature structures (Marino, 1989). In what follows we are also concerned with the combined use of e-reductions and the function RA, standing for Rule Activation, devoted to the immediate scheduling of a rule. RAe Fx~ ' and a call to it means that the 302 specified role must be applied, involving the scheduling process we describe in Section 4. Before we introduce the PG processor we must give a useful definition: Definition 32. Let reR be a rule with t(r)=[f,1;...;f.~], a(r)=[fl;...;f ] be sequences of operations in its augmentations, f,~ ..... f~,ft ..... feF. Let {n 1 ..... rip) be a reduction set for red(r) = (A~z r..zv), and he Nr~ be the new node for A such that T(h) is the new subtree created in the PGS, then we define the Process Environment for t(r) and a(r), denoted briefly by ProcEnv(r), as: ProcEnv(r) = {h,n 1 .... ,n.} If red(r) is an e-reduction then ProcEnv(r) = {nl .... ,np}. This definition states the operative range for the augmentations of any rule is limited to the nodes involved by the match of the reduction rule. 4. PG PROCESSOR Process Scheduler. The process scheduler makes possible the scheduling of the proper rules to run whenever a terminal node is consumed in input or a new non-terminal node is added to the PGS by a process. By proper rules we mean all the rules satisfying Property 2.1.a. with respect to the node being scanned or built. These rules are given by the sets def'med in the following definition: Definia'on 4.1. Vce VsuV r such that 3 r~ R where red(r) = (Ac---ac), AeVNu{e~}, being c the right comer of the reduction rule, and lacl _< L, being L the size of the longest right-hand side having c as the right comer, the sets P(c,i), P,(c,i) for i = 1 ..... 
L, can be built as follows:

P(c,i) = {r ∈ R | red(r) = (A ← αc), 1 ≤ |αc| ≤ i, st(r) = active}
Pε(c,i) = {r ∈ R | red(r) = (εA ← αc), 1 ≤ |αc| ≤ i, st(r) = active}

Whenever a node h ∈ NT ∪ NN has been scanned or built and k = lcl(h), the process scheduler has to schedule the rules in P(cat(h),k) ∪ Pε(cat(h),k). In the following this union is also denoted by Π(cat(h),k). Such a rule scheduling allows an efficient realization of the immediate-constituent-analysis approach within a bottom-up parser by means of a partitioning of the rules in a Process Grammar. The process scheduler sets up a process descriptor for each rule in Π(cat(h),k), in which the data necessary for applying a process in the proper environment are supplied. In a Process Grammar we can have three main kinds of rules: rules that are activated by others by means of the function RA; ε-reduction rules; and standard rules that do not fall in the previous cases. This categorization implies that processes are assigned a priority depending on their kind: activated rules have the highest priority, ε-reduction rules an intermediate priority, and standard rules the lowest priority. Rules become scheduled processes whenever a process descriptor for them is created and inserted in a priority queue by the process scheduler. The priority queue is divided into three stacks, one for each kind of rule, and they form one of the structures of the processor state.

Definition 4.2. A process descriptor is a triple PD = [r,h,C] where: r ∈ R is the rule involved; h ∈ NT ∪ NN ∪ {NIL} is either the right corner node from which the matcher starts, or NIL; C is a set of adjacent nodes or the empty set. A process descriptor of the form [r, NIL, {n1,...,nc}] is built for an activated rule r and pushed onto the stack s1. A process descriptor of the form [r, h, {}] is built for all the other rules and is pushed either onto the stack s2, if r is an ε-reduction rule, or onto the stack s3, if r is a standard rule. Process descriptors of these latter forms are handled by the process scheduler, whereas process descriptors for activated rules are created and queued only by the function RA.

State of Computation. The PG processor operates by means of an operation Op on some internal structures that define the processor state ProcState, and on the parsing structures accessible through the process environment ProcEnv. The whole state of computation is therefore given by:

[Op, ProcState, ProcEnv] = [Op, pt, [s1,s2,s3], PD, pn, RSet]

where pt ∈ NT is the input pointer to the last terminal node scanned and pn ∈ NN is the pointer to the last non-terminal node added to the PGS. For a sentence s = a1...an the computation starts from the initial state [begin, 0, [NIL,NIL,NIL], NIL, n+1, {}] and terminates when the state becomes [end, n, [NIL,NIL,NIL], NIL, pn, {}]. The aim of this section is not to give a complete description of the processor cycle in a parsing process, but an analysis of the activation mechanism of the processes by means of two main cases of rule scheduling and processing.

Scheduling and Processing of Standard Rules. Whenever the state of computation becomes [scan, pt, [NIL,NIL,NIL], NIL, pn, {}], the processor scans the next terminal node, performing the following operations:

scan:
  sc1  if pt = n then Op ← end
  sc2  else pt ← pt + 1;
  sc3    schedule(Π(cat(pt), lcl(pt)));
  sc4    Op ← activate.

Step sc4 makes the processor enter the state in which it determines the first non-empty highest-priority stack from which the process descriptor for the next process to be activated must be popped.
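A software reading of the partitioning and of the three-stack queue may help. The following Python sketch reuses the Rule objects from the earlier sketch; the data layout and function names are illustrative assumptions, not the paper's implementation. Descriptors for activated rules would be pushed onto s1 by RA only, so the scheduler below fills only s2 and s3.

    # Hypothetical sketch of Definition 4.1 and of the priority stacks.
    from collections import defaultdict

    def build_partition(rules):
        """Index rules by the right corner c of red(r) = (A <- alpha c),
        so that P(c,i) and P_eps(c,i) can be read off by filtering."""
        by_corner = defaultdict(list)
        for r in rules:
            by_corner[r.red.rhs[-1]].append(r)
        return by_corner

    def schedule(h_node, h_cat, k, by_corner, s2, s3):
        """Build a process descriptor [r, h, {}] for every active rule in
        Pi(cat(h), k) and push it on the stack matching its kind."""
        for r in by_corner.get(h_cat, []):
            if r.state == "active" and len(r.red.rhs) <= k:
                pd = [r, h_node, set()]
                (s2 if r.is_e_reduction() else s3).append(pd)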
Suppose that cat(pt) = zp and Π(zp, lcl(pt)) = {r}, where r is a standard rule such that red(r) = (A ← z1...zp). At this point the state is [activate, pt, [NIL, NIL, [r,pt,{}]], NIL, pn, {}] and the processor has to try reduction for the process in the stack s3; thus Op ← reduce, performing the following statements:

reduce:
  r1   PD ← pop(s3);                      [reduce, pt, [NIL,NIL,NIL], [r,pt,{}], pn, {}]
  r2   C ← match(red(r), pt);
  r3   PD ← [r, pt, C];                   [reduce, pt, [NIL,NIL,NIL], [r,pt,C], pn, {}]
  r4   for every rset ∈ C:
  r5     RSet ← rset;                     [reduce, pt, [NIL,NIL,NIL], [r,pt,{}], pn, RSet]
  r6     if t(r) then pn ← pn + 1;
  r7       add_subtree(pn, red(r), RSet);
  r8       a(r);
  r9       schedule(Π(cat(pn), lcl(pn)));  [reduce, pt, [NIL,s2,s3], [r,pt,{}], pn, RSet]
  r10  Op ← activate.

Step r9, where the process scheduler produces process descriptors for all the rules in Π(A, lcl(pn)), implies the immediate analysis of the new constituent added to the PGS.

Scheduling and Processing of Rules Activated by ε-Reduction Rules. Consider the case when an ε-reduction rule r activates an inactive rule r' such that red(r) = (εA ← z1...zp), a(r) = [RA(r')], red(r') = (A ← zk...zh), 1 ≤ k ≤ h ≤ p, and st(r') = inactive. When the operation activate has checked that an ε-reduction rule has to be activated, then Op ← ε-reduce; the state of computation becomes [ε-reduce, pt, [NIL, [r,m,{}], NIL], NIL, pn, {}] and the following statements are performed:

ε-reduce:
  er1  PD ← pop(s2);                      [ε-reduce, pt, [NIL,NIL,NIL], [r,m,{}], pn, {}]
  er2  C ← match(red(r), m);
  er3  PD ← [r, m, C];                    [ε-reduce, pt, [NIL,NIL,NIL], [r,m,C], pn, {}]
  er4  for every rset ∈ C:
  er5    RSet ← rset;                     [ε-reduce, pt, [NIL,NIL,NIL], [r,m,{}], pn, RSet]
  er6    if t(r) then a(r) = [RA(r')];    [ε-reduce, pt, [[r',NIL,{nk,...,nh}],NIL,NIL], [r,m,{}], pn, RSet]
  er7  Op ← activate.

In this case, unlike what the process scheduler does, the function RA performs at step er6 the scheduling of a process descriptor in the stack s1, where a subset of ProcEnv(r) is passed as the ProcEnv(r'). Therefore, when an ε-reduction rule r activates another rule r', step er2 does the work also for r', and RA just has to identify the ProcEnv of the activated rule, inserting it in the process descriptor. Afterwards, the operation activate checks that the highest-priority stack s1 is not empty; it therefore pops the process descriptor [r', NIL, {nk,...,nh}] and Op ← h-reduce, which skips the match process, applying the rule r' immediately:

h-reduce:
  hr1  RSet ← C;                          [h-reduce, pt, [NIL,NIL,NIL], [r',NIL,{}], pn, RSet]
  hr2 through hr6 as r6 through r10.

From the above descriptions it turns out that the operation activate plays a central role in deciding which operation must run next, depending on the state of the three stacks. The operation activate just has to check whether some process descriptor is in the first non-empty highest-priority stack, and afterwards to set the proper operation. The following statements describe this work, and Figure 5 depicts graphically the connections among the operations defined in this section.

activate:
  a1  if s1 = NIL
  a2    then if s2 = NIL
  a3      then if s3 = NIL
  a4        then Op ← scan
  a5        else Op ← reduce
  a6      else Op ← ε-reduce
  a7    else PD ← pop(s1);                PD = [r, NIL, C]
  a8      Op ← h-reduce.

Figure 5. Operations Transition Diagram

5. EXAMPLE

It is well known that bottom-up parsers have problems in managing rules with common right-hand sides, like X → ABCD, X → BCD, X → CD, X → D, since some or all of these rules can be fired and build unwanted nodes.
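The dispatch performed by activate is simple enough to be restated in a few lines of code. The following Python sketch mirrors steps a1-a8; the return convention is an illustrative assumption of this presentation.

    # Hypothetical sketch of the activate operation of Figure 5.
    def activate(s1, s2, s3):
        if s1:                           # activated rules: highest priority
            return "h-reduce", s1.pop()  # PD = [r, NIL, C]
        if s2:                           # epsilon-reduction rules next
            return "e-reduce", None
        if s3:                           # standard rules last
            return "reduce", None
        return "scan", None              # all stacks empty: scan next terminal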
A strategy called top-down filtering has been proposed to circumvent this problem, and it is adopted within bottom-up parsers (Kay, 1982; Pratt, 1975; Slocum, 1981; Wirén, 1987), where it simulates a top-down parser working together with the bottom-up parser. The PG Processor must face this problem as well, and the example we give is a subset of Process Grammar rules that tries to resolve it. The kind of solution proposed can be placed in the family of top-down filters as well, taking advantage first of all of the use of ε-reduction rules. Unfortunately, the means described so far are still insufficient to solve our problem, so the following definitions introduce some functions that extend the Process Grammar and the control over the PGS and the PG Processor.

Definition 5.1. Let r be a rule of R with red(r) = (A ← z1...zp), and let RSet = {n1,...,np} be a reduction set for red(r). Taking two nodes ni, nj ∈ RSet, where ni ∈ NN, such that cat(ni) = zi, cat(nj) = zj, and T(ni), T(nj) are adjacent, i.e., either j = i+1 or j = i-1, the function Add_Son_Rel of FK, when called in a(r) as Add_Son_Rel(zi, zj), has the effect of creating a new parent-son relation between ni, the parent, and nj, the son, altering the sets sons(ni) and either lcl(ni) or rcl(ni) as follows:

a) sons(ni) ← sons(ni) ∪ {nj}
b) lcl(ni) ← lcl(nj) if j = i-1
c) rcl(ni) ← rcl(nj) if j = i+1

Such a function has the power of altering the structure of a subtree in the PGS, extending its coverage to one of its adjacent subtrees.

Definition 5.2. The function RE of FK, standing for Rule Enable, when called in the augmentations of some rule r as RE(r'), where r, r' are in R, sets the state of r' to active, masking the original state set in the definition of r'. Without entering into greater detail, the function RE can have the side effect of scheduling the just-enabled rule r' whenever the call to RE follows the call Add_Son_Rel(X,Y) for some categories X ∈ VN, Y ∈ VN ∪ VT, where the right corner of red(r') is X.

Definition 5.3. The function RD of FK, standing for Rule Disable, when called in the augmentations of some rule r as RD(r'), where r, r' are in R, sets the state of r' to inactive, masking the original state set in the definition of r'.

We are now ready to state the problem as follows: given, for instance, the following set P1 of productions:

P1 = {X → ABCD, X → BCD, X → CD, X → D}

we want to define a set of PG rules having the same coverage as the productions in P1, with the feature of building in any case just one node X in the PGS. Such a set of rules is shown in Figure 6, and its aim is to create links between the node X and the other constituents only when the corresponding case occurs and is detected. All the possible cases are depicted in Figure 7 in chronological order of building. The only active rule is r0, which is fired whenever a D is inserted in the PGS; thus a new node X is created by r0 (case (a)). Since the next possible case is to have a node C adjacent to the node X, the only action of r0 enables the rule r1, whose work is to find such an adjacency in the PGS by means of the ε-reduction rule red(r1) = (ε1 ← C X). If such a C exists, r1 is scheduled and applied; the actions of r1 create a new link between X and C (case (b)), and the rule r2 is enabled in preparation for the third possible case, where a node B is adjacent to the node X. The actions of r1 disable r1 itself before ending their work. Because of the side effect of RE cited above, the rule r2 is always scheduled, and whenever a node B exists, it is applied.
At this point it is clear how the mechanism works, and cases (c) and (d) are handled in the same way by the rules r2 and r3, respectively. As the example shows, whenever the rules r1, r2, r3 are scheduled their task is carried out in two phases. The first phase is the match process of the ε-reduction rules; at this stage it is as when a top-down parser searches lower-level constituents for expanding the higher-level constituent. If this search succeeds, the second phase is when the appropriate links are created by means of the actions, and the advantage of this solution is that the search process terminates in a natural way, without searching for and proposing useless relations between constituents.

red(r0) = (X ← D)      st(r0) = active    a(r0) = [RE(r1)]
red(r1) = (ε1 ← C X)   st(r1) = inactive  a(r1) = [Add_Son_Rel(X,C); RE(r2); RD(r1)]
red(r2) = (ε1 ← B X)   st(r2) = inactive  a(r2) = [Add_Son_Rel(X,B); RE(r3); RD(r2)]
red(r3) = (ε1 ← A X)   st(r3) = inactive  a(r3) = [Add_Son_Rel(X,A); RD(r3)]

Figure 6. The Process Grammar of the example

Figure 7. All the possible cases of the example: (a) X covering D; (b) X covering C D; (c) X covering B C D; (d) X covering A B C D

We end this section by pointing out that this same approach can be used in the dual case of this example, with a set P2 of productions like:

P2 = {X → A, X → AB, X → ABC, X → ABCD}

The exercise of finding a corresponding set of PG rules is left to the reader.

6. RELATED WORKS

Some comparisons can be made with related works on three main levels: the data structure PGS, the Process Grammar, and the PG Processor. The PGS can be compared with the chart (Kaplan, 1973; Kay, 1982). The PGS embodies much of the information the chart has. As a matter of fact, our PGS can be seen as a denotational variant of the chart, and it is managed in a different way by the PG Processor, since in the PGS we mainly use classical relations between the nodes of the parse trees: the dominance relation between a parent and a son node, encoded in the non-terminal nodes, and the left-adjacency relation between subtrees, encoded in the terminal nodes. Note that if we add the right-adjacency relation to the PGS we obtain a structure fully comparable to the chart. The Process Grammar can embody many kinds of information. Its structure comes from the general structure stated for the APSG, being very close to the ATN Grammar structure. On the other hand, our approach proposes that grammar rules contain directives relative to the control of the parsing process. This is a feature not in line with the current trend of keeping control separate from linguistic restrictions expressed in a declarative way, and it can be found in parsing systems making use of grammars based on situation-action rules (Winograd, 1983); furthermore, our way of managing grammar rules, i.e., operations on the states and the activation and scheduling mechanisms, is very similar to that realized in Marcus (1980).

7. DISCUSSION AND CONCLUSIONS

The PG Processor is bottom-up based, and it has to take advantage of all the available sources of information, which are just the input sentence and the grammar structure. A strong improvement in the parsing process is determined by how the rules of a Process Grammar are organized. Take, for instance, a grammar where the only active rules are ε-reduction rules. Within the activation model they merely have to activate the inactive rules needed next, after having determined a proper context for them.
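As a concrete illustration of such an organization, the rule chain of the Section 5 example (Figure 6) might be encoded as follows. This is a hypothetical sketch reusing the Rule and Reduction objects from the earlier sketch; RE, RD and add_son_rel are stubs standing in for the kernel functions of Definitions 5.1-5.3.

    def RE(rule):             # Rule Enable (Definition 5.2), stub
        rule.state = "active"

    def RD(rule):             # Rule Disable (Definition 5.3), stub
        rule.state = "inactive"

    def add_son_rel(env, parent_cat, son_cat):
        # In the PGS this would link two adjacent subtrees (Definition
        # 5.1); here it only records the created parent-son relation.
        env.setdefault("links", []).append((parent_cat, son_cat))

    r3 = Rule("r3", Reduction(EPSILON, ["A", "X"]), state="inactive")
    r2 = Rule("r2", Reduction(EPSILON, ["B", "X"]), state="inactive")
    r1 = Rule("r1", Reduction(EPSILON, ["C", "X"]), state="inactive")
    r0 = Rule("r0", Reduction("X", ["D"]))          # the only active rule

    r0.actions = [lambda env: RE(r1)]
    r1.actions = [lambda env: (add_son_rel(env, "X", "C"), RE(r2), RD(r1))]
    r2.actions = [lambda env: (add_son_rel(env, "X", "B"), RE(r3), RD(r2))]
    r3.actions = [lambda env: (add_son_rel(env, "X", "A"), RD(r3))]

Each action both extends the coverage of the single X node and moves the enabled/disabled frontier one rule further, which is exactly the chained-activation behaviour discussed here.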
This can be extended to chains of activations at different levels of context in a sentence, thus limiting both the calls to the matcher and the proliferation of nodes in the PGS. This case can be represented by writing (εA ← αγβ) ⇒ (A ← γ), reading it as: if the ε-reduction in the left-hand side applies, then activate the rule with the reduction in the right-hand side. This realizes a mechanism that works as a context-sensitive reduction of the form (αAβ ← αγβ), easily extendable to the general case. This is not the only reason for the presence of the ε-reduction rules in the Process Grammar. It also becomes apparent from the example that the ε-reduction rules are a powerful tool that, extending the context-freeness of the reduction rules, allows the realization of a wide range of techniques, especially when their use is combined with Kernel Functions such as RA, yielding a powerful means for the control of the parsing process. Consequently, a parser driven by the input (for the main scheduling) and by both the PGS and the rules (for more complex phenomena) can be a valid framework for solving, as much as possible, classical efficiency problems such as minimal activation of rules and minimal node generation. Our description is implementation-independent and open to improvements and extensions, and a first advantage is that it can be a valid approach for realizing efficient implementations of the PG Processor.

Extending the Process Grammar. In this paper we have described a Process Grammar where rules are augmented with simple tests and actions. An extension of this structure that we have not described here, and that can offer further performance in the parsing process, is the introduction in the PG of recovery actions, to be applied whenever one of the two possible cases of process failure is detected, in either the match process or the tests. Consider, for instance, the reduction rule. Its final aim is to find a process environment for the rule when scheduled. This means that whenever some failure condition occurs and a process environment cannot be provided, the recovery actions would have to manage the control of what to do next, undertaking some recovery task. It is easy to add such an extension to the PG, modifying accordingly the reduction operations of the PG processor. Other extensions concern the set FK, by adding further control and process-management functions. Functions such as RE and RD can be defined for changing the state of the rules during a parsing process; thus a Process Grammar can be partitioned into clusters of rules that can be enabled or disabled under proper circumstances detected by 'low-level' (ε-reduction) rules. Finally, there can also be cutting functions that stop local partial parses, or even halt the PG processor, accepting or rejecting the input; e.g., when a fatal condition has been detected making the input unparsable, the PG processor might be halted, thus avoiding the complete parse of the sentence and even starting a recovery process. The reader can refer to Marino (1988) and Marino (1989) for an informal description regarding the implementation of such extensions.

Conclusions. We have presented a complete framework for efficient bottom-up parsing.
Efficiency is gained by means of: a structured representation of the parsing structure, the Parse Graph Structure, that allows efficient matching of the reduction rules; the Process Grammar, which extends APSG by means of the process-based conception of the grammar rules and by the presence of Kernel Functions; and the PG Processor, which implements a non-deterministic parser whose behaviour can be altered by the Process Grammar, increasing the determinism of the whole system. The mechanism of rule activation that can be realized in a Process Grammar is context-sensitive-based, but this does not increase the computational effort, since the processes involved in the activations receive their process environments (which are computed only once) from the activating rules. At present we cannot tell what degree of determinism can be obtained, but we infer that the partitioning of a Process Grammar into clusters of rules, and the driving role the ε-reductions can have, are two basic aspects whose importance should be highlighted in the future.

ACKNOWLEDGMENTS

The author is thankful to Giorgio Satta, who made helpful comments and corrections on the preliminary draft of this paper.

REFERENCES

Aho, Alfred V. and Ullman, Jeffrey D. (1972). The Theory of Parsing, Translation, and Compiling. Volume 1: Parsing. Prentice Hall, Englewood Cliffs, NJ.
Aho, Alfred V., Hopcroft, John E. and Ullman, Jeffrey D. (1974). The Design and Analysis of Computer Algorithms. Addison-Wesley.
Grishman, Ralph (1976). A Survey of Syntactic Analysis Procedures for Natural Language. American Journal of Computational Linguistics, Microfiche 47, pp. 2-96.
Kaplan, Ronald M. (1973). A General Syntactic Processor. In Randall Rustin, ed., Natural Language Processing, Algorithmics Press, New York, pp. 193-241.
Kay, Martin (1982). Algorithm Schemata and Data Structures in Syntactic Processing. In Barbara J. Grosz, Karen Sparck Jones and Bonnie Lynn Webber, eds., Readings in Natural Language Processing, Morgan Kaufmann, Los Altos, pp. 35-70. Also CSL-80-12, Xerox PARC, Palo Alto, California.
Marcus, Mitchell P. (1980). A Theory of Syntactic Recognition for Natural Language. MIT Press, Cambridge, MA.
Marino, Massimo (1988). A Process-Activation Based Parsing Algorithm for the Development of Natural Language Grammars. Proceedings of 12th International Conference on Computational Linguistics, Budapest, Hungary, pp. 390-395.
Marino, Massimo (1989). A Framework for the Development of Natural Language Grammars. Proceedings of International Workshop on Parsing Technologies, CMU, Pittsburgh, PA, August 28-31 1989, pp. 350-360.
Pratt, Vaughan R. (1975). LINGOL - A Progress Report. Proceedings of 4th IJCAI, Tbilisi, Georgia, USSR, pp. 422-428.
Slocum, Jonathan (1981). A Practical Comparison of Parsing Strategies. Proceedings of 19th ACL, Stanford, California, pp. 1-6.
Winograd, Terry (1983). Language as a Cognitive Process. Vol. 1: Syntax. Addison-Wesley, Reading, MA.
Wirén, Mats (1987). A Comparison of Rule-Invocation Strategies in Context-Free Chart Parsing. Proceedings of 3rd Conference of the European Chapter of the ACL, Copenhagen, Denmark, pp. 226-233.
A HARDWARE ALGORITHM FOR HIGH SPEED MORPHEME EXTRACTION AND ITS IMPLEMENTATION

Toshikazu Fukushima, Yutaka Ohyama and Hitoshi Miyai
C&C Systems Research Laboratories, NEC Corporation
1-1, Miyazaki 4-chome, Miyamae-ku, Kawasaki City, Kanagawa 213, Japan
([email protected], [email protected], [email protected])

ABSTRACT

This paper describes a new hardware algorithm for morpheme extraction and its implementation on a specific machine (MEX-I), as the first step toward achieving natural language parsing accelerators. It also shows the machine's performance, 100-1,000 times faster than a personal computer. This machine can extract morphemes from 10,000 character Japanese text by searching an 80,000 morpheme dictionary in 1 second. It can treat multiple text streams, which are composed of character candidates, as well as one text stream. The algorithm is implemented on the machine in linear time in the number of candidates, while conventional sequential algorithms run in combinatorial time.

1 INTRODUCTION

Recent advancement in natural language parsing technology has especially extended the word processor market and the machine translation system market. For further market extension, or new market creation, for natural language applications, parsing speed-up as well as improved parsing accuracy is required. First, parsing speed-up directly reduces the system response time required in such interactive natural language applications as natural language interfaces, speech recognition, Kana-to-Kanji conversion (Kana characters are combined consonant and vowel symbols used in written Japanese; Kanji characters are Chinese ideographs), which is the most popular Japanese text input method, and so on. Second, it also increases the advantage of such applications as machine translation, document proofreading and automatic indexing, which are used to treat large amounts of documents. Third, it makes practical those parsing methods based on larger scale dictionaries or knowledge databases, which are necessary to improve parsing accuracy.

Until now, in the natural language processing field, the speed-up has depended mainly on performance improvements achieved in sequential processing computers and on the development of sequential algorithms. Recently, because of the further speed-up requirement, parallel processing computers have been designed and parallel parsing algorithms (Matsumoto, 1986) (Haas, 1987) (Rytter, 1987) (Fukushima, 1990b) have been proposed. However, there are many difficult problems blocking efficient practical use of parallel processing computers. One of the problems is that access conflicts occur when several processors read or write a common memory simultaneously. Another is the bottleneck problem, wherein communication between any two processors is restricted because of hardware scale limitations. On the other hand, in the pattern processing field, various kinds of accelerator hardware have been developed. They are designed for special purposes, not for general purposes. A hardware approach has not yet been tried in the natural language processing field.

The authors propose developing natural language parsing accelerators, a hardware approach to the parsing speed-up (Fukushima, 1989b) (Fukushima, 1990a). This paper describes a new hardware algorithm for high speed morpheme extraction and its implementation on a specific machine. This morpheme extraction machine is designed as the first step toward achieving the natural language parsing accelerators.

2 MACHINE DESIGN STRATEGY

2.1 MORPHEME EXTRACTION

Morphological analysis methods are generally composed of two processes: (1) a morpheme extraction process and (2) a morpheme determination process. In process (1), all morphemes which are considered as probably being used to construct the input text are extracted, as candidates, by searching a morpheme dictionary; they are selected mainly by morpheme conjunction constraints. The morphemes which actually construct the text are determined in process (2). The authors selected morpheme extraction as the first process to be implemented on specific hardware, for the following three reasons. First, the speed-up requirement for the morphological analysis process is very strong in Japanese
This morpheme extraction machine is de- signed as the first step toward achieving the nat- ura] language parsing accelerators. 2 MACHINE DESIGN STRATEGY 2.1 MORPHEME EXTRACTION Morphological analysis methods are generally composed of two processes: (1) a morpheme ex- traction process and (2) a morpheme determina- tion process. In process (1), all morphemes, which are considered as probably being use<] to construct input text, are extracted by searching a morpheme dictionary. These morphemes are extracted as candidates. Therefore, they are selected mainly by morpheme conjunction constraint. Morphemes which actually construct the text are determined in process (2). The authors selected morpheme extraction as the first process to be implemented on specific hardware, for the following three reasons. First is that the speed-up requirement for the morpho- logical analysis process is very strong in Japanese 307 Input Text . . . . . . . . ~.p)i~ C. ...... ~ Iverb ! ! i ' ' i I noun ; I i ,1", ; ~'~,~: I noun ~MorphemeExtraction~l fi~ inoun ~.~ Process ..,) ,ti~ inou n ~ i Morpheme Dictionary !~; postposition i ..... su,,x !~, :verb I I I I : , ~,~ noun i . . . . . . . . . . d '" . . . . . . . . "1 i ~)f :suffix Extracted = ' Morphemes i i~#~. :noun = , . . . . . . . . . / I I . . . . . !vo, ~)f ! no,,n ; . . . . . . . . . I Figure h Morpheme Extraction Process for Japanese Text 2.2 STRATEGY DISCUSSION In conventional morpheme extraction methods, which are the software methods used on sequential processing computers, the comparison operation between one key string in the morpheme dictio- nary and one sub-string of input text is repeated. This is one to one comparison. On the other hand, many to one comparison or one to many compar- ison is practicable in parallel computing. Content- addressable mem- ories (.CAMs) (Chlsvln, 1989) (Yamada, 1987) re- allze the many to one comparison. One sub-string of input text is simultaneously compared with all key strings stored in a CAM. However, presently available CAMs have only a several tens of kilo- bit memory, which is too small to store data for a more than 50,000 morpheme dictionary. The above mentioned parallel processing com- puters realize the one to many comparison. On the parallel processing computers, one processor searches the dictionary at one text position, while another processor searches the same dictionary at the next position at the same time (Nakamura, 1988). However, there is an access conflict prob- lem involved, as already mentioned. The above discussion has led the authors to the following strategy to design the morpheme extrac- tion machine (Fukushima, 1989a). This strategy is to shorten the one to one comparison cycle. Simple architecture, which will be described in the next section, can realize this strategy. text parsing systems. This process is necessary for natural language parsing, because it is the first step in the parsing. However, it is more labo- rious for Japanese and several other languages, which have no explicit word boundaries, than for Engllsh and many European languages (Miyazald, 1983) (Ohyama, 1986) (Abe, 1986). English text reading has the advantage of including blanks be- tween words. Figure 1 shows an example of the morpheme extraction process for Japanese text. 
Because of the disadvantage inherent in reading difficulty involved in all symbols being strung to- gether without any logical break between words, the morpheme dictionary, including more than 50,000 morphemes in Japanese, is searched at al- most all positions of Japanese text to extract mor- phemes. The authors' investigation results, indi- cating that the morpheme extraction process re- quires using more than 70 % of the morphologi- cal analysis process time in conventional Japanese parsing systems, proves the strong requirement for the speed-up. The second reason is that the morpheme ex- traction process is suitable for being implemented on specific hardware, because simple character comparison operation has the heaviest percentage weight in this process. The third reason is that this speed-up will be effective to evade the com- mon memory access conflict problem mentioned in Section 1. 308 3 A HARDWARE ALGO- RITHM FOR MOR- PHEME EXTRACTION 3.1 FUNDAMENTAL ARCHITECTURE A new hardware algorithm for the morpheme extraction, which was designed with the strategy mentioned in the previous section, is described in this section. The fundamental architecture, used to imple- ment the algorithm, is shown in Fig. 2. The main components of this architecture are a dictionary block, a shift register block, an index memory, an address generator and comparators. The dictionary block consists of character mem- ories (i.e. 1st character memory, 2nd character memory, ..., N-th character memory). The n-th character memory (1 < n < N) stores n-th charac- ters of all key strings ]-n th~ morpheme dictionary, as shown in Fig. 3. In Fig. 3, "iI~", "~f", "@1:~ ", "~", "~", and so on are Japanese mor- phemes. As regarding morphemes shorter than the key length N, pre-deflned remainder symbols /ill in their key areas. In Fig. 3, '*' indicates the remainder symbol. The shift register block consists of character reg- isters (i.e. 1st character register, 2nd character reg- ister,..., N-th character register). These registers Address~'~._____J Index J,,~ enerator~/'--"--] Memory cM ~*(~,comlpStrator~*~ lstCRli iiiiiiiiiii i iii i!ii; ! ii!ili! i;i I I' ,i TI N-th CM mparator~ , ............. ...... ..--.-.-~.-~ Mazcn ~lg Dictionary Block CM --- Character Memory .... t N-th CR,I Text Register Block CR = Character Register Figure 2: Fundamental Architecture .j Index Memory I il: IIm~ ~= [in * I1: I1~ I1~ * I1: I 1 2 | • ! ! * "3(" "X'li. ...... l "X" • !, *Ii ...... ~, * ii li. 3 4 N-th Character Memory Figure 3: Relation between Character Memories and Index Memory 2 3 ~: 4 J~ Shift Shift 7, 8 Ul I~1 L~ (a) (b) (c ggg gg (d) (e) Figure 4: Movement in Shift Register Block store the sub-string of input text, which can be shifted, as shown in Fig. 4. The index memory re- ceives a character from the 1st character register. Then, it outputs the top address and the number of morphemes in the dictionary, whose 1st char- acter corresponds to the input character. Because morphemes are arranged in the incremental order of their key string in the dictionary, the pair for the top address and the number expresses the address range in the dictionary. Figure 3 shows the rela- tion between the index memory and the character memories. For example, when the shift register block content is as shown in Fig. 4(a), where '~' is stored in the 1st character register, the index memory's output expresses the address range for the morpheme set {"~", "~", "~]~", "~]~ ~[~", "~]~", ..., "~J"} in Fig. 3. 
The address generator sets the same address in all the character memories, and changes their addresses simultaneously within the address range which the index memory expresses. The dictionary block thus outputs all the characters constructing one morpheme (a key string of length N) simultaneously at one address. The comparators are N in number (i.e. 1st comparator, 2nd comparator, ..., N-th comparator). The n-th comparator compares the character in the n-th character register with the one from the n-th character memory. When the two characters correspond, a match signal is output. In this comparison, the remainder symbol operates as a wild card: the comparator also outputs a match signal when the n-th character memory outputs the remainder symbol. Otherwise, it outputs a no-match signal.

The algorithm, implemented on the above described fundamental architecture, is as follows.

• Main procedure
Step 1: Load the top N characters from the input text into the character registers in the shift register block.
Step 2: While the text end mark has not arrived at the 1st character register, implement Procedure 1.

• Procedure 1
Step 1: Obtain the address range for the morphemes in the dictionary whose 1st character corresponds to the character in the 1st character register. Then set the top address of this range as the current address for the character memories.
Step 2: While the current address is in this range, implement Procedure 2.
Step 3: Perform a shift operation on the shift register block.

• Procedure 2
Step 1: Judge the result of the simultaneous comparisons at the current address. When all the comparators output match signals, the detection of one morpheme is indicated. When at least one comparator outputs a no-match signal, there is no detection.
Step 2: Increase the current address.

For example, Fig. 4(a) shows the sub-string in the shift register block immediately after Step 1 of the Main procedure. Step 3 of Procedure 1 causes the movements (a)→(b), (b)→(c), (c)→(d), (d)→(e), and so on, and Step 1 and Step 2 of Procedure 1 are implemented in each of the states (a), (b), (c), (d), (e). In state (a), the index memory's output expresses the address range of the morphemes in Fig. 3 that begin with the character in the 1st register, and Step 1 of Procedure 2 is then repeated at each address in that range.

Figure 5 shows two examples of Step 1 of Procedure 2. In Fig. 5(a), all of the eight comparators output match signals as the result of the simultaneous comparisons; this means that the dictionary morpheme has been detected at the top position of the sub-string. In Fig. 5(b), seven comparators output match signals, but the comparator at the 2nd position outputs a no-match signal, owing to the discord between the two characters there; this means that the morpheme has not been detected at this position.

Figure 5: Simultaneous Comparison in Fundamental Architecture

3.2 EXTENDED ARCHITECTURE

The architecture described in the previous section treats one stream of text. In this section the architecture is extended to treat multiple text streams, and the algorithm for extracting morphemes from multiple text streams is proposed. Generally, in character recognition results or speech recognition results, there is a certain amount of ambiguity, in that a character or a syllable has multiple candidates. Such multiple candidates form multiple text streams. Figure 6(a) shows an example of multiple text streams, expressed by a two-dimensional matrix. One dimension corresponds to the position in the text; the other corresponds to the candidate level. Candidates on the same level form one stream. For example, in Fig. 6(a), the character at the 3rd position has three candidates, and the 1st, 2nd and 3rd candidate levels each form one stream.

Figure 6(b) shows an example of the morphemes extracted from the multiple text streams shown in Fig. 6(a). In the morpheme extraction process for multiple text streams, the key strings in the morpheme dictionary are compared with combinations of the various candidates; one of the extracted morphemes, for example, is composed of the 2nd candidate at the 1st position, the 1st candidate at the 2nd position and the 3rd candidate at the 3rd position. The architecture described in the previous section can easily be extended to treat multiple text streams.
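Before turning to the extended architecture, the one-stream algorithm above can be restated in software form. The following Python sketch is illustrative only: the index memory is simulated by binary search over the sorted dictionary, and the prefix test stands in for the N parallel comparators (with the remainder symbol's wild-card behaviour); all function names are assumptions of this presentation.

    from bisect import bisect_left, bisect_right

    def build_index(morphemes):
        """Simulate the index memory: map each first character to the
        address range of the dictionary keys starting with it."""
        keys = sorted(morphemes)
        index = {}
        for ch in {k[0] for k in keys}:
            index[ch] = (bisect_left(keys, ch),
                         bisect_right(keys, ch + "\uffff"))
        return keys, index

    def extract(text, keys, index, N=8):
        found = []
        for pos in range(len(text)):               # Step 3: shift operation
            lo, hi = index.get(text[pos], (0, 0))  # Procedure 1, Step 1
            window = text[pos:pos + N]
            for addr in range(lo, hi):             # Procedure 2 per address
                # prefix test = N comparators, remainder symbol as wild card
                if window.startswith(keys[addr]):
                    found.append((pos, keys[addr]))
        return found

For instance, extract("abcab", *build_index(["ab", "abc", "b"])) returns [(0, 'ab'), (0, 'abc'), (1, 'b'), (3, 'ab'), (4, 'b')].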
Figure 5: Simultaneous Comparison in Fundamen- tal Architecture 3.2 EXTENDED ARCHITECTURE The architecture described in the previous sec- tion treats one stream of text string. In this sec- tion, the architecture is extended to treat multi- ple text streams, and the algorithm for extract- ing morphemes from multiple text streams is pro- posed. Generally, in character recognition results or speech recognition results, there is a certain amount of ambignJty, in that a character or a syl- lable has multiple candidates. Such multiple can- didates form the multiple text streams. Figure 6(a) shows an example of multiple text streams, expressed by a two dimensional matrix. One di- mension corresponds to the position in the text. The other dimension corresponds to the candi- date level. Candidates on the same level form one stream. For example, in Fig. 6(a), the character at the 3rd position has three candidates: the 1st candidate is '~', the 2nd one is '~' and the 3rd one is ']~'. The 1st level stream is "~]:~:.~...". The 2nd level stream is "~R...". The 3rd level stream is "~R ~... ". Figure 6(b) shows an example of the morphemes extracted from the multiple text streams shown in Fig. 6(a)..In the morpheme extraction process for the multiple text streams, the key strings in the morpheme dictionary are compared with the com- binations of various candidates. For example, "~ ~", one of the extracted morphemes, is com- posed of the 2nd candidate at the 1st position, the 1st candidate at the 2nd position and the 3rd candidate at the 3rd position. The architecture described in the previous section can be easily ex- tended to treat multiple text streams. Figure 7 310 (a) Multiple Text Streams *-Position in Text--* 1234 Candidate Level 2 ;1~ ~ ~ ~verb ! .~ inoun [] inoun i~ I~ i noun (b) Extracted [p) i suffix Morphemes [~]i .,~ !noun noun noun I verb ~ : i nou. • '~ iverb i . . . . . . . . . • Figure 6: Morpheme Extraction from Multiple Text Streams Address~. ] Index '1~ enerator Memory . . . . . ..I , • "'1 I b[ 1st CM ~'( comlpStrator}*~ li '1 I ======================= I! I , 2nd , I~';, I 2ndCM I'~(Comparator)' ~ . . . . . . . . . . . . . . . Shift Register ._.....~ Block "':'."'11" ..... li; . . . . . . . . . I;: . . . . . !l N-th CM [k.C~C°m;arat°r~ 2-N CR . ......... ~ .... bl~¥E~i,;h-~:: D,cttonary Block 'g 1st Le~el 2ndlLevel M~h Level Stream St[earn Stream CM = Character Memory m-n CR = m-th Level n-th Character Register Figure 7: Extended Architecture 311 shows the extended architecture. This extended architecture is different from the fundamental ar- chitecture, in regard to the following three points. First, there are M sets of character registers in the shift register block. Each set is composed of N character registers, which store and shift the sub-string for one text strearn. Here, M is the number of text streams. N has already been in- troduced in Section 3.1. The text streams move simultaneously in all the register sets. Second, the n-th comparator compares the char- a~'ter from the n-th character memory with the M characters at the n-th position in the shift regis- ter block. A match signal is output, when there is correspondence between the character from the memory and either of the M characters in the reg- isters. Third, a selector is a new component. It changes the index memory's input. It connects one of the registers at the 1st position to sequential index memory inputs in turn. This changeover occurs M times in one state of the shift register block. 
Regarding the algorithm described in Section 3.1, the following modification enables treating multiple text streams. Procedure 1 and Pro- cedure 1.5, shown below, replace the previous Procedure 1. • Procedure 1 Step 1: Set the highest stream to the current level. Step 2: While the current level has not ex- ceeded the lowest stream, implement Procedure 1.5. Step 3: Accomplish a shift operation to the shift register block. • Procedure 1.5 Step 1: Obtain the address range for the morphemes in the dictionary, whose 1st character corresponds to the character in the register at the 1st position with the current level. Then, set the top address for this range to the current address for the character memories. Step 2: While the current address is in this range, implement Procedure 2. Step 3: Lower the current level. Figure 8 shows an example of Step 1 for Proce- dure 2. In this example, all of the eight compara- tors output the match signal as a result of simulta- neous comparisons, when the morpheme from the dictionary is "~:". Characters marked with a circle match the characters from the dictionary. This means that the morpheme "~:" has been detected. When each character has M candidates, the worst case time complexity for sequential mor- pheme extraction algorithms is O(MN). On the other hand, the above proposed algorithm (Fukushima's algorithm) has the advantage that the time complexity is O(M). Sub-Strings Key String for Multiple Text Streams from Dictionary Block in Shift Regoster Block Comparators ,,~ "o l®l L 4 ~ ,=*(~ i i ! ! ! ~. 1 2 3 "~/i" is detected. Figure 8: Simultaneous Comparison in Extended Architecture ,-- MEX-I PC-9801VX Hamaguchi's hardware algorithm (Ham~guchi, 1988), proposed for speech recognition systems, is similax to Fukushima's algorithm. In Hamaguchi's algorithm, S bit memory space expresses a set of syllables, when there are S different kinds of syl- lables ( S = 101 in Japanese). The syllable candi- dates at the saxne position in input phonetic text are located in one S bit space. Therefore, H~n- aguchi's algorithm shows more advantages, as the full set size of syllables is sm~ller s~nd the num- ber of syllable candidates is larger. On the other ha~d, Fukushima's ~Igorithm is very suitable for text with a large character set, such as Japanese (more than 5,000 different chaxacters are com- puter re~able in Japanese). This algorithm ~Iso has the advantage of high speed text stream shift, compared with conventions/algorithms, including Hamaguchi's. 4 A MORPHEME EX- TRACTION MACHINE 4.1 A MACHINE OUTLINE This section describes a morpheme extraction machine, called MEX-I. It is specific hardware which realizes extended architecture and algo- rithm proposed in the previous section. It works as a 5ackend machine for NEC Per- sons/Computer PC-9801VX (CPU: 80286 or V30, clock: 8MHz or 10MHz). It receives Japanese text from the host persona/computer, m~d returns mor- phemes extracted from the text after a bit of time. 312 Figure 9: System Overall View Figure 9 shows an overall view of the system, in- cluding MEX-I and its host persona/ computer. MEX-Iis composed of 12 boards. Approximately 80 memory IC chips (whose total memory storage capacity is approximately 2MB) and 500 logic IC chips are on the boards. The algorithm parameters in MEX-I axe as fol- low. The key length (the maximum morpheme length) in the dictionary is 8 (i.e. N = 8 ). The maximum number of text streams is 3 (i.e. M = 1, 2, 3). The dictionary includes approxi- mately 80,000 Japanese morphemes. 
This dictionary size is popular in Japanese word processors. The data length for the memories and the registers is 16 bits, corresponding to the character code in Japanese text.

4.2 EVALUATION

MEX-I works with a 10 MHz clock (i.e. the clock cycle is 100 ns). Procedure 2, described in Section 3.1, including the simultaneous comparisons, is implemented in three clock cycles (i.e. 300 ns). The entire implementation time for morpheme extraction therefore approximates A × D × L × M × 300 ns. Here, D is the number of morphemes in the dictionary, L is the length of the input text, M is the number of text streams, and A is the indexing coefficient. This coefficient is the average ratio of the number of compared morphemes to the number of all morphemes in the dictionary.

The implementation time measurement results, obtained for various kinds of Japanese text, are plotted in Fig. 10. The horizontal scale in Fig. 10 is the L × M value, which corresponds to the number of characters in all the text streams. The vertical scale is the measured implementation time. The above mentioned 80,000 morpheme dictionary was used in this measurement. These results show performance wherein MEX-I can extract morphemes from 10,000 character Japanese text by searching an 80,000 morpheme dictionary in 1 second.

Figure 10: Implementation Time Measurement Results (implementation time in seconds against the number of candidates in the text streams (= L × M), with reference lines for A = 0.001, 0.003 and 0.005 and data points for newspapers, technical reports and novels)

Figure 11 shows an implementation time comparison with four conventional sequential algorithms. The conventional algorithms were run on an NEC Personal Computer PC-98XL2 (CPU: 80386, clock: 16 MHz), with the 80,000 morpheme dictionary on a memory board. Implementation time was measured for four different Japanese text samples, each forming one text stream of 5,000 characters. In these measurement results, MEX-I runs approximately 1,000 times as fast as the morpheme extraction program using the simple binary search algorithm. It runs approximately 100 times as fast as a program using the digital search algorithm, which has the highest speed among the four algorithms.

Morpheme Extraction Methods                               Text1  Text2  Text3  Text4
Programs based on sequential algorithms [sec]:
  Binary search method (Knuth, 1973)                        564    642    615    673
  Binary search method checking top character index         133    153    147    155
  Ordered hash method (Amble, 1974)                         406    440    435    416
  Digital search method (Knuth, 1973)
    with tree structure index                                52     56     54     54
MEX-I [sec]                                                0.56   0.50   0.51   0.44

Figure 11: Implementation Time Comparison for 5,000 Character Japanese Text

5 CONCLUSION

This paper proposes a new hardware algorithm for high speed morpheme extraction, and also describes its implementation on a specific machine. This machine, MEX-I, is designed as the first step toward achieving natural language parsing accelerators, which are a new approach to speeding up parsing. The implementation time measurement results show performance wherein MEX-I can extract morphemes from 10,000 character Japanese text by searching an 80,000 morpheme dictionary in 1 second. When the input is one stream of text, it runs 100-1,000 times faster than morpheme extraction programs on personal computers. It can treat multiple text streams, which are composed of character candidates, as well as one stream of text.
The proposed algorithm is implemented on it in linear time in the number of candidates, while conventional sequential algorithms run in combinatorial time. This is advantageous for character recognition and speech recognition. Its architecture is so simple that the authors believe it is suitable for VLSI implementation; actually, its VLSI implementation is in progress. A high speed morpheme extraction VLSI will improve the performance of such text processing applications in practical use as Kana-to-Kanji conversion Japanese text input methods and spelling checkers on word processors, machine translation, automatic indexing for text databases, text-to-speech conversion, and so on, because the morpheme extraction process is necessary for these applications.

The development of various kinds of accelerator hardware for the other processes in parsing is work for the future. The authors believe that the hardware approach not only improves conventional parsing methods, but also enables new parsing methods to be designed.
3rd International Conference of Logic Programming, Lecture Notes in Computer Science: 396-409. Miyazakl, M., Goto, S., Ooyaxna, Y. and ShiraJ, S. (1983). "Linguistic Processing in a Japanese- text-to-speech-system", International Conference on Text Processing with a Large Character Set: 315-320. Nak~mura, O., Tanaka, A. and Kikuchi, H. (1988). "High-Speed Processing Method for the 314 Morpheme Extraction Algorithm" (in Japanese), Proc. 37th National Convention oJ Information Processing Society of Japan: 1002-1003. Ohyama, Y., Fukushim~, T., Shutoh, 2". and Shutoh, M. (1986). "A Sentence Analysis Method for a Japanese Book Reading Machine for the Blind", Proc. ~4th Annual Meeting of Association for Computational Linguistics: 165--172. Russell, G. J., Ritchie, G. D., Pulmaa, S. G. and Black, A. W. (1986). "A Dictionary and Morpho- logical Analyser for English", Proc. llth Interna- tional Conference on Computational Linguistics: 277-279. Rytter, W. (1987). "Parallel Time O(log n) Recognition of Unambiguous Context-free Lan- guages", Information and Computation, 75: 75-- 86. Yamad~, H., Hirata, M., Nag~i, H. and Tal~- h~hi, K. (1987). "A High-speed String-search En- gine", IEEE Journal of Solid-state Circuits, SC- ~(5): 829-834.