A CCG APPROACH TO FREE WORD ORDER LANGUAGES

Beryl Hoffman*
Dept. of Computer and Information Sciences
University of Pennsylvania
Philadelphia, PA 19104
(hoffman@linc.cis.upenn.edu)

INTRODUCTION

In this paper, I present work in progress on an extension of Combinatory Categorial Grammars, CCGs, (Steedman 1985) to handle languages with freer word order than English, specifically Turkish. The approach I develop takes advantage of CCGs' ability to combine the syntactic as well as the semantic representations of adjacent elements in a sentence in an incremental manner. The linguistic claim behind my approach is that free word order in Turkish is a direct result of its grammar and lexical categories; this approach is not compatible with a linguistic theory involving movement operations and traces.

A rich system of case markings identifies the predicate-argument structure of a Turkish sentence, while the word order serves a pragmatic function. The pragmatic functions of certain positions in the sentence roughly consist of a sentence-initial position for the topic, an immediately pre-verbal position for the focus, and post-verbal positions for backgrounded information (Erguvanli 1984). The most common word order in simple transitive sentences is SOV (Subject-Object-Verb). However, all of the permutations of the sentence seen below are grammatical in the proper discourse situations.

(1) a. Ayşe gazeteyi okuyor.
       Ayşe newspaper-acc read-present.
       Ayşe is reading the newspaper.
    b. Gazeteyi Ayşe okuyor.
    c. Ayşe okuyor gazeteyi.
    d. Gazeteyi okuyor Ayşe.
    e. Okuyor gazeteyi Ayşe.
    f. Okuyor Ayşe gazeteyi.

Elements with overt case marking generally can scramble freely, even out of embedded clauses. This suggests a CCG approach where case-marked elements are functions which can combine with one another and with verbs in any order.

*I thank Young-Suk Lee, Michael Niv, Jong Park, Mark Steedman, and Michael White for their valuable advice.
This work was partially supported by ARO DAAL03-89-C-0031, DARPA N00014-90-J-1863, NSF IRI 90-16592, Ben Franklin 91S.3078C-1.

Karttunen (1986) has proposed a Categorial Grammar formalism to handle free word order in Finnish, in which noun phrases are functors that apply to the verbal basic elements. Our approach treats case-marked noun phrases as functors as well; however, we allow verbs to maintain their status as functors in order to handle object incorporation and the combining of nested verbs. In addition, CCGs, unlike Karttunen's grammar, allow the operations of composition and type raising, which have been useful in handling a variety of linguistic phenomena including long distance dependencies and nonconstituent coordination (Steedman 1985) and will play an essential role in this analysis.

AN OVERVIEW OF CCGs

In CCGs, grammatical categories are of two types: curried functors and basic categories to which the functors can apply. A category such as X/Y represents a function looking for an argument of category Y on its right and resulting in the category X. A basic category such as X serves as a shorthand for a set of syntactic and semantic features.

A short set of combinatory rules serves to combine these categories while preserving a transparent relation between syntax and semantics. The application rules allow functors to combine with their arguments.

Forward Application (>):  X/Y  Y  =>  X
Backward Application (<): Y  X\Y  =>  X

In addition, CCGs include composition rules to combine two functors syntactically and semantically. If these two functors have the semantic interpretations F and G, the result of their composition has the interpretation λx.F(G(x)).
Forward Composition (>B):           X/Y  Y/Z  =>  X/Z
Backward Composition (<B):          Y\Z  X\Y  =>  X\Z
Forward Crossing Composition (>Bx): X/Y  Y\Z  =>  X\Z
Backward Crossing Composition (<Bx): Y/Z  X\Y  =>  X/Z

FREE WORD ORDER IN CCGs

Representing Verbs:

In this analysis, we represent both verbs and case-marked noun phrases as functors. In Karttunen's analysis (1986), although a verb is a basic element rather than a functor, its arguments are specified as subcategorization features of its basic element category. We choose to directly represent a verb's subcategorization in its functor category. An advantage of this approach is that at the end of a parse, we do not need an extra process to check if all the arguments of a verb have been found; this falls out of the combination rules. Also, certain verbs need to act as active functors in order to combine with objects without case marking.

Following a suggestion of Mark Steedman, I define the verb to be an uncurried function which specifies a set of arguments that it can combine with in any order. For instance, a transitive verb looking for a nominative case noun phrase and an accusative case noun phrase has the category S|{Nn, Na}. The slash | in this function is undetermined in direction; direction is a feature which can be specified for each of the arguments, notated as an arrow above the argument. Since Turkish is not strictly verb final, most verbs will not specify the direction features of their arguments.

The use of uncurried notation allows great freedom in word order among the arguments of a verb. However, we will want to use the curried notation for some functors to enforce a certain ordering among the functors' arguments. For example, object nouns or clauses without case marking cannot scramble at all and must remain in the immediately pre-verbal position.
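The directed application and composition rules above can be sketched in code. This is an illustrative reconstruction, not part of the paper: the `Cat` representation and function names are ours.

```python
# Sketch of directed CCG combinators; categories are nested records.
from dataclasses import dataclass

@dataclass(frozen=True)
class Cat:
    result: object          # a Cat or a basic category name (str)
    arg: object = None      # None for basic categories
    slash: str = ""         # "/" seeks its argument rightward, "\\" leftward

def forward_apply(f, a):
    """Forward Application (>): X/Y  Y  =>  X"""
    if isinstance(f, Cat) and f.slash == "/" and f.arg == a:
        return f.result
    return None

def forward_compose(f, g):
    """Forward Composition (>B): X/Y  Y/Z  =>  X/Z"""
    if (isinstance(f, Cat) and isinstance(g, Cat)
            and f.slash == "/" and g.slash == "/" and f.arg == g.result):
        return Cat(f.result, g.arg, "/")
    return None

# An English-style transitive verb (S\NP)/NP applied to an NP yields S\NP:
tv = Cat(Cat("S", "NP", "\\"), "NP", "/")
print(forward_apply(tv, "NP"))   # the category S\NP
```

The backward rules are mirror images, checking the leftward slash instead.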
Thus, verbs which can take a so-called incorporated object will also have a curried functor category such as S|{Nn, Nd}|{N} (with a leftward direction arrow on the N argument), forcing the verb to first apply to a noun without case marking to its immediate left before combining with the rest of its arguments.

Representing Nouns:

The interaction between case marking and the ability to scramble in Turkish supports the theory that case-marked nouns act as functors. Following Steedman (1985), order-preserving type-raising rules are used to convert nouns in the grammar into functors over the verbs. The following rules are obligatorily activated in the lexicon when case-marking morphemes attach to the noun stems.

Type Raising Rules:
(>) N + case  =>  (v|{...}) | {v|{Ncase, ...}}   (verb sought to the right)
(<) N + case  =>  (v|{...}) | {v|{Ncase, ...}}   (verb sought to the left)

The first rule indicates that a noun in the presence of a case morpheme becomes a functor looking for a verb on its right; this verb is also a functor looking for the original noun with the appropriate case on its left. After the noun functor combines with the appropriate verb, the result is a functor which is looking for the remaining arguments of the verb. v is actually a variable for a verb phrase at any level, e.g. the verb of the matrix clause or the verb of an embedded clause. The notation ... is also a variable which can unify with one or more elements of a set.

The second type-raising rule indicates that a case-marked noun is looking for a verb on its left. Our CCG formalism can model a strictly verb-final language by restricting the noun phrases of that language to the first type-raising rule. Since most, but not all, case-marked nouns in Turkish can occur behind the verb, certain pragmatic and semantic properties of a Turkish noun determine whether it can type-raise using either rule or is restricted to only the first rule.

The Extended Rules:

We can extend the combinatory rules for uncurried functions as follows. The sets indicated by braces in these rules are order-free, i.e.
Y in the following rules can be any element in the set.¹

Forward Application' (>):  X|{Y, ...}  Y  =>  X|{...}
Backward Application' (<): Y  X|{Y, ...}  =>  X|{...}

Using these new rules, a verb can apply to its arguments in any order, or, as in most cases, the case-marked noun phrases which are type-raised functors can apply to the appropriate verbs.

Certain coordination constructions (such as SO and SOV, SOV and SO) force us to allow two type-raised noun phrases which are looking for the same verb to combine together. Since both noun phrases are functors, the application rules above do not apply. The following composition rules are proposed to allow the combining of two functors.

Forward Composition' (>B):  X|{Y, ...1}  Y|{...2}  =>  X|{...1, ...2}
Backward Composition' (<B): Y|{...1}  X|{Y, ...2}  =>  X|{...1, ...2}

The following example demonstrates these rules in analyzing sentence (1)b in the scrambled word order Object-Subject-Verb:²

Gazeteyi                     Ayşe                        okuyor
v1|{...1}|{v1|{Na, ...1}}    v2|{...2}|{v2|{Nn, ...2}}   S|{Nn, Na}
----------------------------------------------------- >B
(v1|{...1})|{v1|{Nn, Na, ...1}}
----------------------------------------------------- >
S

¹We assume that a category X|{ } where { } is the empty set rewrites by some clean-up rule to just X.
²The bindings of the first composition are v2 = v1, {...2} = {Na, ...1}.

LONG DISTANCE SCRAMBLING

In complex Turkish sentences with clausal arguments, elements of the embedded clauses can be scrambled to positions in the main clause, i.e. long distance scrambling. Long distance scrambling appears to be no different from local scrambling as a syntactic and pragmatic operation. Generally, long distance scrambling is used to move an element into the sentence-initial topic position or to background it by moving it behind the matrix verb.

(2) a. Fatma [Ayşe'nin gittiğini] biliyor.
       Fatma [Ayşe-gen go-ger-3sg-acc] know-prog.
       Fatma knows that Ayşe went away.
    b. Ayşe'nin Fatma [gittiğini] biliyor.
       Ayşe-gen Fatma [go-ger-acc] know-prog.
    c. Fatma [gittiğini] biliyor Ayşe'nin.
       Fatma [go-ger-acc] know-prog Ayşe-gen.
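The order-free application rules above can be sketched with a verb category held as a result plus an unordered argument set; application removes any one member of the set. This encoding is our own illustration, not the paper's implementation.

```python
# Set-valued CCG application: X|{..., Y, ...}  Y  =>  X|{...}
def apply_set(functor, arg):
    """Order-free application: consume any one member of the argument set."""
    result, args = functor
    if arg in args:
        return (result, args - {arg})
    return None

# A transitive verb S|{Nn, Na}: its arguments may arrive in either order.
verb = ("S", frozenset({"Nn", "Na"}))
step1 = apply_set(verb, "Na")   # accusative NP first ...
step2 = apply_set(step1, "Nn")  # ... then nominative
print(step2)                    # ('S', frozenset())  -- all arguments found
```

A clean-up rule (footnote 1 above) would rewrite the final `('S', frozenset())` to plain S; the "all arguments found" check indeed falls out of combination, with no extra completeness pass.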
The composition rules allow noun phrases to combine regardless of whether or not they are the arguments of the same verb. The same rules allow two verbs to combine together. In the following, the semantic interpretation of a category is expressed following the syntactic category.

gittiğini (go-ger-acc)      biliyor (knows)
Sge:(go' y)|{Ng:y}          S:(know' p x)|{Nn:x, Sge:p}
------------------------------------------------- <B
S:(know' (go' y) x)|{Ng:y, Nn:x}

As the two verbs combine, their arguments collapse into one argument set in the syntactic representation. However, the verbs' respective arguments are still distinct within the semantic representation of the sentence. The predicate-argument structure of the subordinate clause is embedded into the semantic representation of the matrix clause.

Long distance scrambling in Turkish is quite free; however, there are many pragmatic and processing constraints. A syntactic restriction may be needed to explain why elements in certain adjunct clauses (though not all) are very hard to long distance scramble. To account for these clauses, we can assign the head of the restricted adjunct clause a curried functor category such as X|X|{arguments...} rather than X|{X, arguments...}. The curried category forces the adjunct head to combine with all of its arguments in the adjunct clause before combining with the constituent it modifies. This blocks long distance scrambling out of that adjunct clause.

As mentioned before, another use for curried functors is with object nouns or clauses without case marking, which are forced to remain in the immediately pre-verbal position. A matrix verb can have a category such as S|{Nn}|{S2} to allow it to combine with a subordinate clause without case marking (S2) to its immediate left. However, to restrict a type-raised Nn from interposing between the matrix verb and the subordinate clause, we must restrict type-raised noun phrases and verbs from composing together.
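The verb-verb combination above, where two argument sets collapse into one, can be sketched as set-union composition over the same set-valued encoding. Again this is our illustrative encoding, not the paper's code.

```python
# Set-valued composition: X|{Y, ...1}  Y|{...2}  =>  X|{...1 U ...2}
def compose_set(f, g):
    """Compose when g's result category is among f's arguments."""
    rx, args1 = f
    ry, args2 = g
    if ry in args1:
        return (rx, (args1 - {ry}) | args2)
    return None

# 'biliyor' (knows) seeks a nominative subject and a gerundive clause Sge;
# 'gittiğini' (go-ger-acc) is an Sge still seeking its genitive subject.
know = ("S", frozenset({"Nn", "Sge"}))
go = ("Sge", frozenset({"Ng"}))
print(compose_set(know, go))   # one verb complex seeking {Nn, Ng}
```

The two verbs' arguments are now a single syntactic set, mirroring the derivation above; keeping them distinct semantically would require carrying the interpretations alongside, which this sketch omits.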
A language specific restriction, allowing composition only if (X = v|...) or (Y = v|...), is proposed, similar to the one placed on the Dutch grammar by Steedman (1985), to handle this case.

CONCLUSIONS

What I have described above is work in progress in developing a CCG account of free word order languages. We introduced an uncurried functor notation which allowed a greater freedom in word order. Curried functors were used to handle certain restrictions in word order. A uniform analysis was given for the general linguistic facts involving both local and long distance scrambling. I have implemented a small grammar in Prolog to test the ideas presented in this paper.

Further research is necessary in the handling of long distance scrambling. The restriction placed on the composition rules in the last section should be based on syntactic and semantic features. Also, we may want to represent subordinate clauses with case marking as type-raised functions over the matrix verb in order to distinguish them from clauses without case marking.

As a related area of research, prosody and pragmatic information must be incorporated into any account of free word order languages. Steedman (1990) has developed a categorial system which allows intonation to contribute information to the parsing process of CCGs. Further research is necessary to decide how best to use intonation and pragmatic information within a CCG model to interpret Turkish.

References

[1] Erguvanli, Eser Emine. 1984. The Function of Word Order in Turkish Grammar. University of California Press.
[2] Karttunen, Lauri. 1986. 'Radical Lexicalism'. Paper presented at the Conference on Alternative Conceptions of Phrase Structure, July 1986, New York.
[3] Steedman, Mark. 1985. 'Dependency and Coordination in the Grammar of Dutch and English', Language, 61:523-568.
[4] Steedman, Mark. 1990. 'Structure and Intonation', MS-CIS-90-45, Computer and Information Science, University of Pennsylvania.
1992
44
INFORMATION STATES AS FIRST CLASS CITIZENS

Jørgen Villadsen
Centre for Language Technology, University of Copenhagen
Njalsgade 80, DK-2300 Copenhagen S, Denmark
Internet: [email protected]

ABSTRACT

The information state of an agent is changed when a text (in natural language) is processed. The meaning of a text can be taken to be this information state change potential. The inference of a consequence makes explicit something already implicit in the premises -- i.e. that no information state change occurs if the (assumed) consequence text is processed after the (given) premise texts have been processed. Elementary logic (i.e. first-order logic) can be used as a logical representation language for texts, but the notion of an information state (a set of possibilities -- namely first-order models) is not available from the object language (it belongs to the meta language). This means that texts with other texts as parts (e.g. propositional attitudes with embedded sentences) cannot be treated directly. Traditional intensional logics (i.e. modal logic) allow (via modal operators) access to the information states from the object language, but the access is limited and interference with (extensional) notions like (standard) identity, variables etc. is introduced. This does not mean that the ideas present in intensional logics will not work (possibly improved by adding a notion of partiality), but rather that often a formalisation in the simple type theory (with sorts for entities and indices, making information states first class citizens -- like individuals) is more comprehensible, flexible and logically well-behaved.

INTRODUCTION

Classical first-order logic (hereafter called elementary logic) is often used as a logical representation language. For instance, elementary logic has proven very useful when formalising mathematical structures like in axiomatic set theory, number theory etc.
Also, in natural language processing (NLP) systems, "toy" examples are easily formalised in elementary logic:

Every man lies. John is a man. So, John lies. (1)

∀x(man(x) → lie(x)), man(john) ⊢ lie(john) (2)

The formalisation is judged adequate since the model theory of elementary logic is in correspondence with intuitions (when some logical maturity is gained and some logical innocence is lost) -- moreover the proof theory gives a reasonable notion of entailment for the "toy" examples.

Extending this success story to linguistically more complicated cases is difficult. Two problematic topics are:

Anaphora
It must be explained how, in a text, a dependent manages to pick up a referent that was introduced by its antecedent.

Every man lies. John is a man. So, he lies. (3)

Attitude reports
Propositional attitudes involve reports about cognition (belief/knowledge), perception etc.

Mary believes that every man lies. John is a man. So, Mary believes that John lies. (4)

It is characteristic that if one starts with the "toy" examples in elementary logic it is very difficult to make progress on the above-mentioned problematic topics. Much of the work on the first topic comes from the last decade -- in case of the last topic pioneering work by Hintikka, Kripke and Montague started in the sixties. The aim of this paper is to show that by taking an abstract notion of information states as starting point the "toy" examples and the limitations of elementary logic are better understood. We argue that information states are to be taken seriously in logic-based approaches to NLP. Furthermore, we think that information states can be regarded as sets of possibilities (structural aspects can be added, but should not be taken as stand-alone).

Information states are at the meta-level only when elementary logic is used. Information states are still mainly at the meta-level when intensional logics (e.g.
modal logic) are used, but some manipulations are available at the object level. This limited access is problematic in connection with (extensional) notions like (standard) identity, variables etc. Information states can be put at the object level by using a so-called simple type theory (a classical higher-order logic based on the simply typed λ-calculus) -- this gives a very elegant framework for NLP applications.

The point is not that elementary or the various intensional logics are wrong -- on the contrary they include many important ideas -- but for the purpose of understanding, integrating and implementing a formalisation one is better off with a simple type theory (stronger type theories are possible, of course).

AGENTS AND TEXTS

Consider an agent processing the texts t1, ..., tn. By processing we mean that the agent accepts the information conveyed by the texts. The texts are assumed to be declarative (purely informative) and unambiguous (uniquely informative). The texts are processed one by one (dynamically) -- not considered as a whole (statically). The dynamic interpretation of texts seems more realistic than the static interpretation.

By a text we consider (complete) discourses -- although as examples we use only single (complete) sentences. We take the completeness to mean that the order of the texts is irrelevant. In general texts have expressions as parts whose order is important -- the completeness requirement only means that the (top level) texts are complete units.

INFORMATION STATES

We first consider an abstract notion of an information state (often called a knowledge state or a belief state). The initial information state I0 is assumed known (or assumed irrelevant). The information states of the agent change as follows:

I0 --τ1--> I1 --τ2--> I2 --τ3--> ... --τn--> In

where τi is the change in the information state when the text ti is processed.
An obvious approach is to identify information states with the set of texts already processed -- hence nothing is lost. Some improvements are possible (normalisation and the like). Since the texts are concrete objects they are easy to treat computationally. We call this approach the syntactical approach. An orthogonal approach (the semantical approach) identifies information states with sets of possibilities. This is the approach followed here. Note that a possibility need not be a so-called "possible world" -- partiality and similar notions can be introduced, see Muskens (1989). A combination of the two approaches might be the optimal solution. Many of these aspects are discussed in Konolige (1986).

Observe that the universal and empty sets are understood as opposites: the empty set of possibilities and the universal set of texts represent the (absolute) inconsistent information state; and the universal set of possibilities and the empty set of texts represent the (absolute) initial information state. Other notions of consistency and initiality can be defined.

A partial order on information states ("getting better informed") is easily obtained. For the syntactical approach this is trivial -- more texts make one better informed. For the semantical approach one could introduce previously eliminated possibilities in the information state, but we assume eliminative information state changes: τ(I) ⊆ I for all I (this does not necessarily hold for non-monotonic logics / belief revision / anaphora(?) -- see Groenendijk and Stokhof (1991) for further details).

Given the texts t1, ..., tn the agent is asked whether a text t can be inferred; i.e. whether processing t after processing t1, ..., tn would change the information state or not -- here τ is the identity function.

ELEMENTARY LOGIC

When elementary logic is used as a logical representation language for texts, information states are identified with sets of models.
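The semantical approach with eliminative changes can be sketched directly: an information state is a set of possibilities, processing a text filters the set, and a text is entailed exactly when processing it is the identity function. The encoding of possibilities below is our own toy illustration of the paper's abstract setup.

```python
# Information states as sets of possibilities; eliminative updates.
def update(state, prop):
    """Process a text: keep only the possibilities where it holds (τ(I) ⊆ I)."""
    return frozenset(w for w in state if prop(w))

def entails(state, prop):
    """A text is inferred iff processing it would not change the state."""
    return update(state, prop) == state

# Toy possibilities: pairs (john_is_man, john_lies).
I0 = frozenset([(True, True), (True, False), (False, True), (False, False)])
every_man_lies = lambda w: (not w[0]) or w[1]   # man -> lies
john_is_man = lambda w: w[0]

I1 = update(I0, every_man_lies)
I2 = update(I1, john_is_man)
print(entails(I2, lambda w: w[1]))   # True: "John lies" changes nothing
```

Since updates only remove possibilities, state inclusion gives the "getting better informed" partial order for free; the empty state is the inconsistent one.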
Let the formulas φ1, ..., φn, φ be the translations of the texts t1, ..., tn, t. The information state when t1, ..., tn have been processed is the set of all models in which φ1, ..., φn are all true. t1, ..., tn entails t if the model set corresponding to the processing of t1, ..., tn does not change when t is processed. I.e., alternatively, consider a particular model M -- if φ1, ..., φn are all true in M then φ must be true in M as well (this is the usual formulation of entailment).

Hence, although any proof theory for elementary logic matches the notion of entailment for "toy" example texts, the notion of information states is purely a notion of the model theory (hence in the meta-language; not available from the object language). This is problematic when texts have other texts as parts, like the embedded sentence in propositional attitudes, since a direct formalisation in elementary logic is ruled out.

TRADITIONAL APPROACH

When traditional intensional logics (e.g. modal logics) are used as logical representation languages for texts, information states are identified with sets of possible worlds relative to a model M = (W, ...), where W is the considered set of possible worlds. The information state when t1, ..., tk have been processed is, relative to a model, the set of possible worlds in which φ1, ..., φk are all true.

The truth definition for a formula φ allows for modal operators, say ◇, such that if φ is ◇ψ then φ is true in the possible worlds Wφ ⊆ W if ψ is true in the possible worlds Wψ ⊆ W, where Wφ = f◇(Wψ) for some function f◇ : P(W) → P(W) (hence M = (W, f◇, ...)). For the usual modal operator ◇ the function f◇ reduces to a relation R◇ : W × W such that:

Wφ = f◇(Wψ) = ∪_{wψ ∈ Wψ} {wφ | R◇(wψ, wφ)}

By introducing more modal operators the information states can be manipulated further (a small set of "permutational" and "quantificational" modal operators would suffice -- compare combinatory logic and variable-free formulations of predicate logic).
However, the information states as well as the possible worlds are never directly accessible from the object language. Another complication is that the f◇ function cannot be specified in the object language directly (although equivalent object language formulas can often be found -- cf. the correspondence theory for modal logic). Perhaps the most annoying complication is the possible interference with (extensional) notions like (standard) identity, where Leibniz's Law fails (for non-modally closed formulas) -- see Muskens (1989) for examples. If variables are present the inference rule of ∀-Introduction fails in a similar way.

SIMPLE TYPE THEORY

The above-mentioned complications become even more evident if elementary logic is replaced by a simple type theory while keeping the modal operators (cf. Montague's Intensional Logic). The λ-calculus in the simple type theory allows for an elegant compositionality methodology (category to type correspondence over the two algebras). Often the higher-order logic (quantificational power) facilities of the simple type theory are not necessary -- or so-called general models are sufficient.

The complication regarding variables mentioned above manifests itself in the way that β-reduction does not hold for the λ-calculus (again, see Muskens (1989) and references therein). Even more damaging: the (simply typed!) λ-calculus is not Church-Rosser (due to the limited α-renaming capabilities of the modal operators).

What seems needed is a logical representation language in which the information states are explicitly manipulable, like the individuals in elementary logic. This point of view is forcefully defended by Cresswell (1990), where the possibilities of the information states are optimised using the well-known technique of indexing. Hence we obtain an ontology of entities and indices.
In recent papers we have presented and discussed a categorial grammar formalism capable of (in a strictly compositional way) parsing and translating natural language texts, see Villadsen (1991a,b,c). The resulting formulas are terms in a many-sorted simple type theory. An example of a translation (simplified):

Mary believes that John lies. (5)

λi.believe(i, Mary, (λj.lie(j, John))) (6)

Adding partiality along the lines in Muskens (1989) is currently under investigation.

ACKNOWLEDGMENTS

Reports work done while at the Department of Computer Science, Technical University of Denmark.

REFERENCES

M. J. Cresswell (1990). Entities and Indices. Kluwer Academic Publishers.

J. Groenendijk and M. Stokhof (1991). Two Theories of Dynamic Semantics. In J. van Eijck, editor, Logics in AI - 91, Amsterdam. Springer-Verlag (Lecture Notes in Computer Science 478).

K. Konolige (1986). A Deduction Model of Belief. Pitman.

R. Muskens (1989). Meaning and Partiality. PhD thesis, University of Amsterdam.

J. Villadsen (1991a). Combinatory Categorial Grammar for Intensional Fragment of Natural Language. In B. Mayoh, editor, Scandinavian Conference on Artificial Intelligence - 91, Roskilde. IOS Press.

J. Villadsen (1991b). Categorial Grammar and Intensionality. In Annual Meeting of the Danish Association for Computational Linguistics - 91, Aalborg. Department of Computational Linguistics, Århus Business School.

J. Villadsen (1991c). Anaphora and Intensionality in Classical Logic. In Nordic Computational Linguistics Conference - 91, Bergen. To appear.
Spatial Lexicalization in the Translation of Prepositional Phrases

Arturo Trujillo*
Computer Laboratory
University of Cambridge
Cambridge CB2 3QG, England
[email protected]

Abstract

A pattern in the translation of locative prepositional phrases between English and Spanish is presented. A way of exploiting this pattern is proposed in the context of a multilingual machine translation system under development.

Introduction

Two of the main problems in machine translation (MT) are ambiguity and lexical gaps. Ambiguity occurs when a word in the source language (SL) has more than one translation into the target language (TL). Lexical gaps occur when a word in one language cannot be translated directly into another language. This latter problem is viewed by some as the key translation problem (Kameyama et al., 1991).

A case in point is the translation of prepositional phrases (PP). The following entry for the translations into Spanish of the preposition along demonstrates this (entry taken from (Garcia-Pelayo, 1988)).

along: por (by), a lo largo de (to the length of), según (according to)

Both problems occur here: there are three different translations for the same English preposition, and the second of these is a phrase used to describe a sense of along which is not encoded as one word in Spanish.

Lexicalization Patterns

It is argued in (Talmy, 1985) that languages differ in the type of information they systematically encode in lexical units. That is, languages exhibit distinct lexicalization patterns. For instance, in a sentence where both the direction and manner of motion are expressed, English will encode motion and manner in the same verb, whereas in Spanish a distinct lexicalization of these two meaning components will be favoured (Ibid. p. 69):

Spa. El globo subió por la chimenea flotando
Lit. the balloon moved-up through the chimney floating
Eng.
The balloon floated up the chimney

*This work was funded by the UK Science and Engineering Research Council.

Here Spanish subió encodes 'move + up' whereas English floated encodes 'move + floating'. Capturing lexicalization patterns of this sort can help us make certain generalizations about lexical gaps and ambiguities in MT. In the rest of this paper two lexicalization patterns for English locative prepositional phrases (PP) will be presented. It will be shown how they allow us to simplify the bilingual lexicon of a transfer based, multilingual MT system under development.

Evidence

The two lexicalization patterns under analysis can be illustrated using the following three sentences (loc = location, dest = destination):

Eng. She ran under(loc) the bridge (in circles)
Spa. Corrió debajo del puente (en círculos)
Lit. Ran-she under of-the bridge

Eng. She ran under(path+loc) the bridge (to the other side)
Spa. Corrió por debajo del puente (hasta el otro lado)
Lit. Ran-she along under of-the bridge

Eng. She ran under(dest+loc) the bridge (and stopped there)
Spa. Corrió hasta debajo del puente (y allí se detuvo)
Lit. Ran-she to under of-the bridge

In the first sentence there is a direct translation of the English sentence. In this case the features encoded by the English and Spanish PPs are the same. In the second sentence the English preposition encodes the path followed by the runner and the location of this path with respect to the bridge; in Spanish such a combination needs to be expressed by the two prepositions por and debajo de. In the third example the English preposition expresses the destination of the running and the location of that destination with respect to the bridge; this has to be expressed by the two Spanish prepositions hasta and debajo de.

Other English prepositions which allow either two or three of these readings in locative expressions are shown in the table below.
P        | location  | path 'along P'  | destination 'to P'
---------|-----------|-----------------|-------------------
behind   | detrás de | por detrás de   | hasta detrás de
below    | debajo de | por debajo de   | hasta debajo de
inside   | dentro de | por dentro de   | hasta dentro de
outside  | fuera de  | por fuera de    | hasta fuera de
under    | debajo de | por debajo de   | hasta debajo de
between  | entre     | por entre       | -
near     | cerca de  | -               | hasta cerca de

From the table the following generalization can be made: whatever the translation P of the locative sense of an English preposition is, its path-incorporating sense is translated as por P and its destination-incorporating sense is translated as hasta P.

In short, certain English prepositions are ambiguous between encoding location, path + location or destination + location. This is not the case in Spanish. When translating from English such ambiguities cannot be preserved very naturally. In particular, whenever it is necessary to preserve them (e.g. for legal documents), a disjunction of each individual sense must be used in the TL sentence. In certain cases, however, it may be the case that only one of these readings is allowed.

Disambiguation

As far as the selection of the appropriate target language (TL) preposition is concerned, the constituent which the PP modifies plays a major role in determining which readings of a preposition sense are allowed.

Deciding whether the preposition is used in a spatial sense, as opposed to a temporal or causative sense, is determined by the semantics of the noun phrase (NP) within it, e.g. under the table, under the regime, under three minutes, under pressure, under development, under the bridge; that is, a place denoting NP gives rise to a spatial PP.

There are two cases to consider in disambiguating spatial senses. In the case of the PP attaching to a noun, the sense selected will be the location one. For example:

Eng. The park outside the city
Spa. El parque fuera de la ciudad

The second case is when the PP modifies a verb.
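The generalization drawn from the table can be stated as a simple mapping: given the locative translation P of an English preposition, render the path sense as "por P" and the destination sense as "hasta P". The sketch below is our own illustration of that regularity, using translations taken from the table; it does not handle the gaps (between/near lacking a reading).

```python
# Locative translations from the table; path and destination are derived.
locative = {
    "behind": "detrás de",
    "below": "debajo de",
    "inside": "dentro de",
    "outside": "fuera de",
    "under": "debajo de",
}

def translate(prep, sense):
    """Derive the Spanish PP head for a given spatial sense of `prep`."""
    p = locative[prep]
    if sense == "location":
        return p
    if sense == "path":
        return "por " + p          # path-incorporating sense
    if sense == "destination":
        return "hasta " + p        # destination-incorporating sense
    raise ValueError(sense)

print(translate("under", "path"))          # por debajo de
print(translate("behind", "destination"))  # hasta detrás de
```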
For this case it is necessary to consider the semantics of the verb in question. Verbs of motion such as walk, crawl, run, swim, row, gallop, march, fly, drive, jump and climb allow location, path and destination readings. For instance:

Eng. The diver swam below the boat
Spa. El buceador nadó debajo de / por debajo de / hasta debajo del bote

Verbs which do not express motion such as stand, sit, rest, sleep, live and study usually require the location sense of the preposition:

Eng. The diver rested below the boat
Spa. El buceador descansó debajo del bote

This second analysis is oversimplistic since some readings depend on other semantic features of the verb, preposition and complement NP involved. However, these can be incorporated into the strategy explained below. One last point to note is that not all the prepositions presented allow all three readings. This will be taken into consideration when making the generalizations in the encoding of the above observation.

Encoding

Representation for Prepositions

As exemplified above, the translation of a preposition depends on three sources of information: 1) the word modified by the PP determines whether the sense of the preposition may include a path or a destination component, 2) the preposition itself determines how many spatial senses it allows, 3) the NP complement of the preposition determines whether it is being used spatially, temporally, causatively, etc.

To encode these three sources, prepositions will be represented as three-place relations. The pattern for a prepositional entry is shown in 1); a possible entry for below is shown in 2).

1) P[modified, preposition, complement]
2) below[motion-verb, [path,dest], place]

The notation here is an informal representation of the typed feature structures described in (Briscoe et al., 1992) and (Copestake, 1992). The argument types in 1) can be explained as follows.
'Modified' is a type which subsumes 'events' (denoted by verbs) and 'objects' (denoted by nouns); the type 'event' is further subdivided into 'motion-verb' and 'non-motion-verb'. 'Preposition' is a type which subsumes properties which depend on the preposition itself; for the examples presented this type will encode whether the preposition can express a path or a destination (the extra square brackets indicate a complex type). Finally, 'complement' subsumes a number of types corresponding to the semantic field of the complement NP; these include 'spatial' with subtype 'place', 'temporal', and 'causative'.

The instantiated entry in 2) corresponds to the use of below in the diver swam below the boat. Such instantiations would be made by the grammar by structure sharing of the semantic features from the modified constituent and from the complement NP. In this way the three translations of below would only be produced when the semantic features of the modified constituent and complement NP unify with the first and third arguments respectively.

Bilingual Lexical Rules

To encode the regularity of the translations presented, bilingual lexical rules will be introduced. These rules take as input a bilingual lexical entry and give as output a bilingual lexical entry. An oversimplified rule to generate the 'path' sense for a preposition that allows such a reading is given below (P = variable ranging over prepositions, e = the empty type, lugar = place, camino = path).

Rule:
P_Eng[motion-verb, [path,-], place] <-> P_Spa[verbo-movimiento, e, lugar] de
=>
P_Eng[motion-verb, [path,-], place] <-> por[verbo-movimiento, camino, lugar] P_Spa[verbo-movimiento, e, lugar] de

A similar rule would encode the 'destination' sense generalization. The bilingual lexical rules work by extending the bilingual lexicon automatically before any translation takes place; this gives rise to a static transfer component with faster performance but more memory consumption.
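The effect of the 'path' rule just stated can be sketched in a few lines of code. The entry format below (dictionaries with `form` and `senses`/`sense` fields) is an assumption made for the sketch, not the paper's typed-feature-structure implementation.

```python
# Sketch of the 'path' bilingual lexical rule: an entry whose English side
# allows a path reading and whose Spanish side is a bare locative (sense is
# the empty type 'e') yields a new entry with "por" prefixed on the Spanish
# side and the sense instantiated to 'camino' (path).

def path_rule(entry):
    eng, spa = entry
    if "path" in eng["senses"] and spa["sense"] == "e":
        new_spa = {"form": "por " + spa["form"], "sense": "camino"}
        return (eng, new_spa)
    return None  # rule does not apply; no new entry is produced

# The entry for 'below' from the text: below <-> debajo de, locative sense.
below_entry = ({"form": "below", "senses": ["path", "dest"]},
               {"form": "debajo de", "sense": "e"})

expanded = path_rule(below_entry)
print(expanded[1]["form"])   # por debajo de
```

A 'destination' rule would be the same sketch with "hasta" and a 'destino' sense, mirroring the hasta P column of the table above.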
Only those entries which unify with the input part of a rule actually produce a new bilingual entry. An example of the 'path' rule being applied is shown below.

Input: below[motion-verb, [path,dest], place] <-> debajo[verbo-movimiento, e, lugar] de

Output: below[motion-verb, [path,dest], place] <-> por[verbo-movimiento, camino, lugar] debajo[verbo-movimiento, e, lugar] de

Note that not all prepositions in the table above allow all three readings; for this the allowed readings are stated in the second argument of the preposition.

Related Research

In (Copestake et al., 1992) the notion of a tlink is introduced. These are typed feature structures which encode generalizations about the type of transfer relations that occur in the bilingual lexicon. That is, each bilingual entry corresponds to one tlink. Because tlinks are represented as a hierarchy of types, the amount of data stored in the bilingual lexicon is minimal. The bilingual lexical rules presented here will further refine the idea of a tlink by minimizing the number of bilingual lexical entries that have to be coded manually, since the bilingual lexical rules can be seen as operating over tlinks (and hence bilingual lexical entries) to give new tlinks.

The grammatical formalism used broadly resembles earlier versions of HPSG. The idea of bilingual lexical rules is partly inspired by the lexical rules introduced within this framework in (Pollard & Sag, 1992).

Conclusion

We have argued that ambiguities and lexical mismatches found in English-Spanish translation of PPs can be dealt with using ideas from cross-linguistic studies of lexicalization patterns, and suggested a use of the relevant linguistic insights for MT applications. This consisted of encoding prepositions as three-place relations, and of having bilingual lexical rules which operate over the bilingual lexicon to expand it.
By formulating regularities in this way consistency and compactness in the bilingual lexicon, and therefore in the transfer module, are achieved.

The next steps will include the implementation of the mechanism to drive the bilingual lexical rules, the refining and testing of the semantic classification, the isolation of further regularities and the investigation of other types of PPs.

Acknowledgements

Many thanks to Ted Briscoe, Antonio Sanfilippo, Ann Copestake and two anonymous reviewers. Thanks also to Trinity Hall, Cambridge, for a travel grant. All remaining errors are mine.

References

Briscoe, T.; Copestake, A., and de Paiva, V., editors. 1992 (forthcoming). Default Inheritance in Unification Based Approaches to the Lexicon. Cambridge University Press, Cambridge, England.

Copestake, A.; Jones, B.; Sanfilippo, A.; Rodriguez, H.; Vossen, P.; Montemagni, S., and Marinal, E. 1992. Multilingual lexical representations. Technical Report 043, ESPRIT BRA-3030 ACQUILEX Working Paper, Commission of the European Communities, Brussels.

Copestake, A. 1992. The ACQUILEX LKB: Representation issues in semi-automatic acquisition of large lexicons. In Proceedings 3rd Conference on Applied Natural Language Processing, Trento, Italy.

Garcia-Pelayo, R. 1988. Larousse Gran Diccionario Español-Inglés English-Spanish. Larousse, Mexico DF, Mexico.

Kameyama, M.; Ochitani, R., and Peters, S. 1991. Resolving translation mismatches with information flow. In Proceedings ACL-91, Berkeley, CA.

Pollard, C., and Sag, I. 1992 (forthcoming). Agreement, Binding and Control: Information Based Syntax and Semantics Vol. II. Lecture Notes. CSLI, Stanford, CA, USA.

Talmy, L. 1985. Lexicalization patterns: semantic structure in lexical forms. In Shopen, T., editor, Language Typology and Syntactic Description Vol. III: Grammatical Categories and the Lexicon. Cambridge University Press, Cambridge, England.
1992
46
METONYMY: REASSESSMENT, SURVEY OF ACCEPTABILITY, AND ITS TREATMENT IN A MACHINE TRANSLATION SYSTEM

Shin-ichiro Kamei* & Takahiro Wakao
Computing Research Laboratory
New Mexico State University
Las Cruces, New Mexico 88003
Tel: 505-646-5466  Fax: 505-646-6218
Internet: [email protected] & [email protected]
* visiting researcher from NEC Corporation in Japan

ABSTRACT

In this article we outline a basic approach to treating metonymy properly in a multilingual machine translation system. This is the first attempt at treating metonymy in a machine translation environment. The approach is guided by the differences in acceptability of metonymy which were obtained by our comparative survey among three languages, English, Chinese, and Japanese. The characteristics of the approach are as follows: (1) Influences of the context, individuals, and familiarity with metonymy are not used. (2) An actual acceptability of each metonymic expression is not realized directly. (3) Grouping metonymic examples into patterns is determined by the acceptability judgement of the speakers surveyed as well as the analysts' intuition. (4) The analysis and generation components treat metonymy differently using the patterns. (5) The analysis component accepts a wider range of metonymy than the actual results of the survey, and the generation component treats metonymy more strictly than the actual results.

We think that the approach is a starting point for more sophisticated approaches to translation in a multilingual machine translation environment.

INTRODUCTION

Among others, both Lakoff and Johnson (1980), and Fass (1991) divide metonymic expressions into several fixed patterns such as Part-For-Whole and Container-For-Content. Sentence (1) is a typical Container-For-Content metonymy and "this glass" is replaced with "the liquid in this glass" in its metonymic reading.

(1) "He drank this glass."
One of the things that has been less focused on in previous literature on metonymy is the problem of generation, typically in a machine translation system. For example, even though the analysis component of a machine translation system produces a correct metonymic reading for sentence (1), i.e. "the liquid in this glass" for "this glass", if the result of the analysis component is translated directly in word-for-word manner, such an output sentence may not be natural in the target language. On the other hand, it may not be appropriate either for the generation component to produce a sentence which is a direct translation of the original metonymy if the target language does not allow such an expression.

We think it is necessary for a multilingual machine translation system to have not only understanding of metonymy, which most previous works on metonymy have focused on, but also proper ways to handle generation of metonymy. In order to find out ways to treat metonymy properly in a multilingual environment, we have conducted a survey on acceptability of various examples of metonymy among English, Chinese, and Japanese. The patterns of previous works (Fass 1991, Lakoff and Johnson 1980, Yamanashi 1987) seem to be obtained from the intuition of the analysts. However, we think that the patterns which are based on the analysts' intuition to begin with should be supported and determined more precisely by the result of this kind of survey. An analysis based on actual data allows us to establish a clear set of patterns and sub-groups, for example to decide whether we require either Producer-For-Product (Lakoff and Johnson 1980) or Artist-for-Artform (Fass 1991), or both of them.

A SURVEY OF METONYMY

A comparative survey on acceptability of metonymic expressions in English, Chinese and Japanese has been conducted.
All of the 25 sentences which are used in the survey are taken from metonymy examples in English in previous works (Lakoff and Johnson 1980, Fass 1991, Yamanashi 1987). We asked native speakers of the three languages to score the acceptability of each sentence. Direct translations were used for Chinese and Japanese. The different languages show differences in acceptability (for the details, see Kamei and Wakao 1992). Based on both intuitive analyses and the result of the survey, we have established four major patterns, and several sub-groups for the first pattern (Locating), as shown in Appendix A. The patterns are 1) Locating, 2) Emphasis of one aspect, 3) Abstract and collective entity for its consisting concrete items, and 4) Information conveyer for information giver.

For example, sentence (2) belongs to the second group of the Locating pattern (Producer for Product). Examples of "Ford", "Picasso", "Steinbeck" and "Bach" also belong to this group (see Appendix A 1.2). These sentences are fully acceptable in English and Japanese; however, their acceptability is low in Chinese, and sentence (2) is completely unacceptable.

(2) "He read Mao."

On the other hand, sentence (3) belongs to the fourth pattern, information conveyer and giver. The tendency of the pattern is that those examples in this pattern are acceptable in English and Chinese, but not in Japanese.

(3) "The sign said fishing was prohibited here."

AN APPROACH TO TRANSLATING METONYMY

An important point to realize is that actual computational treatment of metonymic expressions is determined by the acceptability of the pattern to which the expression belongs. Another important point is that the analysis and generation components of a machine translation system should treat metonymy differently.
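The pattern-based, component-specific treatment just described can be sketched as a small decision procedure. The acceptability table below is a hypothetical stand-in, loosely following the tendencies reported for the survey; the function names and string placeholders are illustrative, not the authors' implementation.

```python
# Per-pattern acceptability, indexed by the languages in which the pattern
# tends to be acceptable (illustrative values following the survey's trends).
ACCEPTABILITY = {
    "producer-for-product": {"english", "japanese"},   # e.g. "He read Mao."
    "information-conveyer": {"english", "chinese"},    # e.g. "The sign said ..."
}

def analysis_accepts(pattern):
    # Analysis is permissive: accept the metonymy if ANY surveyed language
    # accepts the pattern, and let inference find the metonymic reading.
    return bool(ACCEPTABILITY.get(pattern))

def generate(pattern, target_lang, direct, paraphrase):
    # Generation is strict: translate the metonymy directly only if the
    # TARGET language accepts the pattern; otherwise use the inferred reading.
    if target_lang in ACCEPTABILITY.get(pattern, set()):
        return direct
    return paraphrase

print(analysis_accepts("producer-for-product"))   # True
print(generate("producer-for-product", "chinese",
               "He read Mao", "He read Mao's works"))   # He read Mao's works
```

The asymmetry is deliberate: the same acceptability table makes analysis accept a wider range of metonymy than any single language licenses, while generation defers to the target language.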
We believe that the main factors for treating metonymy correctly in a multilingual machine translation system are 1) its universality, which can be a guideline for the analysis component, 2) language dependency, which can be used for generation, and 3) others such as the context, culture, and familiarity. We think that it seems unrealistic to expect an actual machine translation system to cope well with the third of these factors at present. Given the lack of such knowledge, our basic heuristics for treating metonymy are as follows:

Even if some language shows the tendency of unacceptability, if one or more languages show acceptance in the group to which the expression belongs in the result of the survey, the system should accept it for analysis, and come up with some metonymic reading using its inference mechanism (Iverson and Helmreich 1992, Fass 1991). Given such information, the generation component should look at the tendency of each language. If the target language allows a metonymic expression which corresponds to the original form, then the system should produce a direct translation, since the translation preserves the naturalness. However, if the target language does not allow a metonymic expression which corresponds to the original form, then the system should use the result of the metonymic inference and come up with an acceptable translation.

We think that these basic heuristics are a good starting point for more sophisticated approaches to translation in a multilingual environment. We intend as our next step to implement our ideas using existing systems such as the ULTRA MT system (Wilks and Farwell 1990) and the Metallel metonymic analysis program (Iverson and Helmreich 1992).

APPENDIX A

Some of the metonymic sentences used in the survey.

1. Locating
1.1 Container for Content
Dave drank the glasses.
The kettle is boiling.
1.2 Producer for Product
He bought a Ford.
He's got a Picasso in his room.
Anne read Steinbeck.
Ted played Bach.
He read Mao.
2. Emphasis of one aspect
We need a couple of strong bodies for our team.
There are a lot of good heads in the university.
3. Abstract entity for concrete entity
Exxon has raised its price again.
Washington is insensitive to the needs of the people.
4. Information conveyer for information giver
The T.V. said it was very crowded at the festival.
The sign said fishing was prohibited here.

REFERENCES

Fass, Dan. (1991). met*: A Method for Discriminating Metonymy and Metaphor by Computer. Computational Linguistics, 17 (1): 49-90.

Iverson, Eric and Helmreich, Stephen. (1992). Metallel: An Integrated Approach to Non-literal Phrase Interpretation. Memoranda in Computer and Cognitive Science, MCCS-92-231, Computing Research Laboratory, New Mexico State University, Las Cruces, NM.

Kamei, Shin-ichiro and Wakao, Takahiro. (1992). Metonymy: reassessment, survey of acceptability, and its treatment in a machine translation system. Memoranda in Computer and Cognitive Science, MCCS-92-236, Computing Research Laboratory, New Mexico State University, Las Cruces, NM.

Lakoff, George and Johnson, Mark. (1980). Metaphors We Live By. London: Chicago University Press.

Wilks, Yorick and Farwell, David. (1990). A White Paper on Research in Pragmatic-based Machine Translation. Memoranda in Computer and Cognitive Science, MCCS-90-188, Computing Research Laboratory, New Mexico State University, Las Cruces, NM.

Yamanashi, Masa-aki. (1987). Metonymic interpretation and associative processes in natural language. In Language and Artificial Intelligence, Makoto Nagao (ed): 77-86. Amsterdam: Elsevier Science Publishers B.V.
1992
47
A BASIS FOR A FORMALIZATION OF LINGUISTIC STYLE

Stephen J. Green
Department of Computer Science
University of Waterloo
Waterloo, Ontario, Canada N2L 3G1

INTRODUCTION

Style in language is more than just surface appearance; on the contrary, it is an essential part of the meaning conveyed by the writer. A computational theory of style could be of great use in many computational linguistics applications. A system that is 'stylistically aware' could analyze the writer's stylistic intent and understand the complex interaction of choices that produce a certain effect. In applications such as machine translation, a computational theory of stylistics would allow the preservation or modification of stylistic effects across languages. The theory would also be useful in computer-aided language instruction where, along with vocabulary and grammar, the individual writing style of the student could be analysed and amended. The work described in this paper will be incorporated into the Nigel grammar of the Penman system to provide a fine degree of stylistic control in language generation.

Drawing on both classical and contemporary rhetorical theory, we view style as goal directed: that is, texts are written for a purpose and this purpose dictates the stylistic choices that must be made. We find a computational counterpart to this view in the work of Hovy (1988), who used style as one of the pragmatic factors controlling generation in his PAULINE system. More recently, DiMarco (1990), the basis for this research, attempted to codify many of the elements of style that had previously been defined only descriptively and informally. DiMarco presented a vocabulary of stylistic terms that was used to construct a syntactic stylistic grammar at three levels: primitive elements, abstract elements, and stylistic goals. At the base level of the grammar, the primitive elements describe the stylistic effects of individual sentence components.
These primitive elements are then combined at a level of more abstract descriptions. These abstract elements comprise a stylistic 'metalanguage' that allows each element to be used to characterize a group of stylistically similar sentences. At the top level are the stylistic goals, such as clarity or concreteness, that are realized by patterns of the abstract elements.

The primitive-element level of DiMarco's stylistic grammar is divided into two views, connective and hierarchic. Here I will focus on the connective view, for which the stylistic effect of a sentence component is determined by considering its degree of cohesiveness within the sentence. The degrees of cohesiveness, or connectivity, vary on a scale from conjunct⁰ (neither connective nor disconnective) through conjunct⁴ (excessively connective).¹

In more recent work, DiMarco and Hirst (1992) have provided a more formal basis for their theory of linguistic style, a basis that has its roots in the established linguistic theory of Halliday and Hasan (1976) and Halliday (1985). I am extending and refining their preliminary classifications of the primitive elements to provide a sounder basis for the entire computational theory of style. I will show how the connective primitive elements can be firmly tied to linguistic theory and how their properties are transmitted through the levels of the stylistic grammar.

A BASIS FOR LINGUISTIC STYLE

Drawing on the work of Halliday and Hasan (1976), a seminal work in textual cohesion, I will show how intrasentence cohesion, and its related stylistic effects, can be derived from the textual cohesive relations that Halliday and Hasan describe. Although there are undoubtedly significant stylistic effects at the text level, I feel that the codification of style at the sentence level has not yet been fully explored. For the most part, these cohesive relations function as well at the sentence level as they do at the text level. This is illustrated in Quirk et al. (1985), where all of the relations that Halliday and Hasan describe for texts are also demonstrated within single sentences.

Halliday and Hasan enumerate four major types of cohesive relations for English: ellipsis, substitution, reference, and conjunction. They classify

¹There is also a scale of disconnectivity, or 'antijunctness', but I will not be using it in this discussion.
This is illustrated in Quirk et al. (1985), where all of the relations that Halliday and Hasan describe for texts are also demonstrated within sin- gle sentences. Halliday and Hasan enumerate four major types of cohesive relations for English: ellipsis, sub- stitution, reference, and conjunction. They classify IThere is also a scale of disconnectivity, or 'anti- junctness', but I will not be using it in this discussion. these relations in terms of their cohesive strengths relative to one another: ellipsis and substitution axe the most cohesive relations, followed by reference, with conjunction being the least cohesive. One of the main objectives of my research is determining how all of these cohesive relations can be incorpo- rated into the scale of 'conjunctness' described ear- lier. In this paper, I will deal only with ellipsis. 2 Halliday and Hasan consider substitution to be equally as cohesive as ellipsis. I argue that el- lipsis is more cohesive, after Quirk etal. (1985, p. 859) who state that for substitution and ellip- sis "there are generally strong preferences for the most economical variant, viz the one which exhibits the greatest degree of reduction." Thus, the ellip- tical relations are more cohesive, due to the fact that they are generally more reduced. In DiMaxco and Hirst, all forms of eRipsis are given a classifica- tion of conjunct s (strongly connective), but here I will look at the three types of ellipsis separately, as- signing each its own degree of cohesiveness, s This assignment is made using by considering the most reducing relations to be the most cohesive, in the spirit of the above quote from Quirk et al. Since Halliday and Hasan provide a ranking for the four types of cohesive relation, and since ellipsis is con- sidered to be the most cohesive relation, all of the degrees assigned for the different types of ellipsis will be ranked in the top half of the scale of cohe- siveness. 
The first type of ellipsis which Halliday and Hasan deal with is nominal ellipsis. This occurs most often when a common noun is elided from a nominal group and some other element of the nominal group takes the place of this elided noun. An example of this occurs in (1), where the noun expedition is elided, and the numerative two takes its place.

(1) The first expedition was quickly followed by another two ∅.⁴

This is the least concise form of ellipsis, since only a single noun is elided. As such, it is given the lowest classification in this category: conjunct³ (moderately-strong connective).

Next, we have verbal ellipsis. In instances of verbal ellipsis, any of the operators in the verbal group may be elided, as opposed to nominal ellipsis, where only the noun is elided. As Halliday and Hasan point out, many forms of verbal ellipsis are very difficult to detect, due to the complexity of the English verbal group. Because of this, I will deal only with two simple cases of verbal ellipsis: those in which the verbal group is removed entirely, as in (2), and those in which the verbal group consists of only modal operators, as in (3).

(2) You will speak to whoever I tell you to ∅.⁵
(3) It may come or it may not ∅.⁶

Both of these sentences are quite concise, as all, or nearly all, of the verbal group is elided. Verbal ellipsis is generally more concise than nominal ellipsis, and thus it has a higher level of cohesiveness: conjunct⁴.

²When identifying the kinds of ellipsis, I use the terms defined by Halliday and Hasan and Quirk et al. All examples are taken from the appropriate sections of these references.

³I will be using a wider scale of cohesiveness than the one used by DiMarco and Hirst. Here conjunct⁵, rather than conjunct⁴, becomes the classification for the excessively connective. This change is made to allow for the description of more-subtle stylistic effects than is currently possible.

⁴Adapted from Quirk et al. example 12.54, p. 900.
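The degree assignments made so far can be pictured as a small lookup table on the extended conjunct scale. The numbers follow the text; the representation itself is an illustrative sketch, not part of the stylistic grammar.

```python
# Cohesiveness degrees assigned in this section, on the conjunct scale
# (higher = more cohesive; 5 is reserved for the excessively connective).
CONJUNCT = {
    "nominal-ellipsis": 3,   # moderately-strong connective
    "verbal-ellipsis": 4,    # strongly connective
}

def more_cohesive(a, b):
    """True if ellipsis type `a` is assigned a higher degree than `b`."""
    return CONJUNCT[a] > CONJUNCT[b]

print(more_cohesive("verbal-ellipsis", "nominal-ellipsis"))   # True
```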
Finally, we look at clausal ellipsis, in which an entire clause is elided from a sentence. We see an example of this in (4).

(4) You can borrow my pen if you want ∅.⁷

Since this form is more concise than either of the previous two verbal forms, we accord it a still higher level of cohesiveness: conjunct⁵. This classification gives clausal ellipsis a degree of cohesiveness verging on the extreme. The excessive amount of missing information tends to become conspicuous by its absence. Here we are beginning to deviate from what would be considered normal usage, creating an effect that DiMarco (1990) would call stylistic discord.

I will now present a short example to demonstrate how the effects of a foundation based on functional theory are built up through the three levels of the stylistic grammar.

A SIMPLE EXAMPLE

I will use the functional basis of style described above to illustrate how small variations in sentence structure can lead to larger variations in the stylistic goals of a sentence. This will be demonstrated by tracing the analysis of an example sentence through the levels of description of the stylistic grammar.

The first step in the analysis determines which connective primitive elements are present in the sentence and where they occur in our scale of cohesiveness. Next, the primitive elements are used to determine which abstract elements are present. Finally the abstract elements are examined to determine the stylistic goals of the sentence.

We start with sentence (4) as above. This sentence contains several connective primitive elements, the most prominent being the conjunct⁵

⁵Quirk et al. example 12.64, p. 908.
⁶Adapted from Halliday and Hasan example 4:57, p. 170.
⁷Quirk et al. example 12.65, p. 909.
(Although I have completed the analysis for the other cohesive relations, here I am using the preliminary classifications given by DiMaxco and Hirst (1992) for the other connective elements.) Apart from the terminal ellipsis, all of these connective elements are concordant, that is, they represent constructions that conform to normal us- age. The terminal ellipsis, due to its excessively high level of cohesiveness, is weakly discordant, a slight deviation from normal usage. Thus, this sen- tence contains initial and medial concords, followed by a terminal discord. In the terms of the stylis- tic grammar, this shift from concord to discord is formalized in the abstract element of dissolution. The presence of dissolution characterizes the stylis- tic goal of concreteness, which is associated with sentences that suggest an effect of specificity by an emphasis on certain components of a sentence. In this sentence, the emphasis is created by the ter- minal discord. The clausal ellipsis requires that a great deal of information be recovered by the reader and because of this it leaves her feeling that the sentence is unfinished. The next example, sentence (5), is a modifica- tion of (4) and is an example of verbal ellipsis, as in (2). (5) You can borrow my pen if you want to Q. In this sentence, all of the previous connective el- ements remain except for the terminal clausal el- lipsis. This ellipsis has been replaced by a ver- bal ellipsis that is conjunct 4, strongly but not ex- cessively cohesive. This replacement consequently eliminates the terminal discord present in the pre- vious sentence, changing it to a strong concord. Thus, (5) has initial, medial, and terminal con- cords, making it a fully concordant sentence. At the level of abstract elements, such a sentence is said to be centroschematic, that is, a sentence with a central, dominant clause with optional depen- dent clauses and complex subordination. 
Centroschematic sentences characterize the stylistic goal of clarity, which is associated with sentences that suggest plainness, preciseness, and predictability. In this sentence, the effect of predictability is created by removing the terminal discord, thus resolving the unfulfilled expectations of (4).

Thus, using the cohesive relations of Halliday and Hasan, it is possible, as I have shown, to provide a formal basis for the connective primitive elements of the syntactic stylistic grammar. These primitive elements can now be used as the components of more-precise abstract elements, with subtle variations in the primitive elements allowing more-expressive variations in the abstract elements that constitute a sentence. These variations at the abstract-element level of the grammar are mirrored at the level of stylistic goals by large variations in the overall goals attributed to a sentence.

CONCLUSION

The research presented above is a part of a larger group project on the theory and applications of computational stylistics. I have completed the integration of all the connective primitive elements with Halliday and Hasan's theory of cohesion. My next step is to perform the same kind of analysis for the hierarchic primitive elements, giving them a solid basis in functional theory. In addition, I have completed refinements to the abstract elements, making them more expressive, and I will be able to formulate their definitions in terms of the new primitive elements.

The full theory of style will be implemented in a functionally-based stylistic analyzer by Pat Hoyt. This control of stylistic analysis combined with my work on the Penman generation system will allow us to begin exploring the myriad of applications that require an understanding of the subtle but significant nuances of language.

ACKNOWLEDGMENTS

This work was supported by the University of Waterloo and the Information Technology Research Centres.
My thanks to Chrysanne DiMarco, Graeme Hirst, and Cameron Shelley for their comments on an earlier version of this paper, and to the anonymous referees for their helpful criticisms.

REFERENCES

DiMarco, Chrysanne (1990). Computational stylistics for natural language translation. PhD thesis, University of Toronto.

DiMarco, Chrysanne and Hirst, Graeme (1992). "A computational approach to style in language." Manuscript submitted for publication.

Halliday, Michael (1985). An introduction to functional grammar. Edward Arnold.

Halliday, Michael and Hasan, Ruqaiya (1976). Cohesion in English. Longman.

Hovy, Eduard H. (1988). Generating natural language under pragmatic constraints. Lawrence Erlbaum Associates.

Quirk, Randolph, Greenbaum, Sidney, Leech, Geoffrey, and Svartvik, Jan (1985). A comprehensive grammar of the English language. Longman.
1992
48
ELABORATION IN OBJECT DESCRIPTIONS THROUGH EXAMPLES

Vibhu O. Mittal
Department of Computer Science
University of Southern California
Los Angeles, CA 90089

USC/Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292

Abstract

Examples are often used along with textual descriptions to help convey particular ideas, especially in instructional or explanatory contexts. These accompanying examples reflect information in the surrounding text, and in turn, also influence the text. Sometimes, examples replace possible (textual) elaborations in the description. It is thus clear that if object descriptions are to be generated, the system must incorporate strategies to handle examples. In this work, we shall investigate some of these issues in the generation of object descriptions.

INTRODUCTION

There is little doubt that people find examples very beneficial in descriptions of new or complex objects, relations, or processes. Various studies have shown that the inclusion of examples in instructional material significantly increases user comprehension (for e.g., (Houtz, Moore & Davis, 1973; MacLachlan, 1986; Pirolli, 1991; Reder, Charney & Morgan, 1986; Tennyson & Park, 1980)). Users like examples because examples tend to put abstract, theoretical information into concrete terms they can understand. Few generation systems have attempted to make significant use of examples, however. In particular, most systems have not integrated examples in the textual descriptions, but have used them mostly on their own, independently of the explanation that may also have been provided at that point. However, examples cannot be generated in isolation, but must form an integral part of the description, supporting the text they help to illustrate.

Most previous work (especially in the context of tutoring systems) focused on the issue of finding useful examples (for e.g., Rissland's CEG system (1981) and Ashley's HYPO system (Ashley, 1991; Rissland & Ashley, 1986; Rissland, 1983)). Work by Woolf and her colleagues considered issues in the generation of tutorial discourse, including the use of examples (Woolf & McDonald, 1984; Woolf & Murray, 1987), but their
Most previous work (especially in the context of tutoring systems) focused on the issue of finding useful examples (e.g., Rissland's CEG system (1981) and Ashley's HYPO system (Ashley, 1991; Rissland & Ashley, 1986; Rissland, 1983)). Work by Woolf and her colleagues considered issues in the generation of tutorial discourse, including the use of examples (Woolf & McDonald, 1984; Woolf & Murray, 1987), but their analysis did not address specific issues of integrated example and language generation.

In this paper, we build upon some of these studies and describe the issues in generating descriptions which include examples in a coordinated, coherent fashion, such that they complement and support each other.

AN EXAMPLE

Consider for instance the example in Figure 1, from a well known introductory book on the programming language LISP. It describes an object (a data structure) called a "list." There are a number of issues that can be immediately seen to be relevant:

1. Should the system choose to elaborate on the object attributes in text, or through the use of examples? For instance, the information in Figure 1 could also have been expressed textually as: "A list always begins with a left parenthesis. Then come zero or more pieces of data (called the elements of a list), and a right parenthesis. Data elements can be of any LISP type, including numbers, symbols and strings." In the figure, the examples are used to elaborate on two aspects of the data elements: the variable number of the elements, and the different types to which these elements may belong. In some contexts, the examples tend to re-iterate certain aspects (in this case, the number was mentioned in the explanation as well), while in others, the examples tend to elaborate on aspects that are not mentioned explicitly in the description (in our case, the type information).

2. Should the system use one example, or multiple examples?
Consider for instance, the following exam- ple of a LISP list: (FORMAT T "~2% ~ A ~ A - A" 12345678 ' James ' Smith (address person) ) It is not entirely obvious that single examples of the type above arc always the most appropriate ones, A list always begins with a left parenthesis. Then come zero or more pieces of data (called the elements of a list) and a right parenthesis. Some examples of lists are: (AARDVARK) ;;; an atom (RED YELLOW GREEN BLUE);;; many atoms (2 3 5 11 19) ;;; numbers (3 FRENCH FRIES) ;;; atoms & numbers A list may contain other lists as elements. Given the three lists: (BLUE SKY) (GREEN GRASS) (BROWN EARTH) we can make a list by combining them all with a parentheses. ((BLUE SKY) (GREEN GRASS) (BROWN EARTH)) Figure 1: A description of the object LIST using ex- amples (From (Touretzky, 1984), p.35) even though such examples are frequently seen in technical reference material. The system must there- fore be able to make reasonable decisions regarding the granularity of information to be included in each example and structure its presentation accordingly. 3. If there are multiple examples that are to be pre- sented, their order of presentation is important too. Studies has shown that users tend to take into ac- count the sequence of the examples as a source of implicit information about the examples (Carnine, 1980; Litchfield, IMiscoll & Dempsey, 1990; Ten- nyson, Steve & Boutwell, 1975). For instance, in Figure 1, the first and second examples taken to- gether illustrate the point that the number of data elements is not important. 4. When are 'prompts' necessary? Examples often have attention focusing devices such as arrows, marks, or as in the Figure 1, extra text, associated with them. These help the user disambiguate the salient from the irrelevant. What information should be included in the prompts, and in the case of text, how should be be phrased? 5. How should the example be positioned with respect to the text? 
Studies of instructional texts reveal that examples can occur before the text (and the text elaborates upon the example), within the text, or (as in our figure) after the text (Feldman, 1972).

There are other issues that need to be considered in an integrated framework. Some of those that affect most of the issues raised above are the audience type, the knowledge type (whether the concept being described is a noun or a relation, for instance) and the text type (tutorial vs. reference vs. report, etc.). The issue of how the examples are selected (generated vs. retrieved) is also an important issue, but we shall not discuss that here.

[Figure 2: Plan skeleton for listing the main features of a LIST.]

[Figure 3: Partial text plan for generating the LIST examples.]

STATUS OF WORK

We are investigating these issues by implementing a system that can generate examples within explanatory contexts (within the EES framework (Neches, Swartout & Moore, 1985; Swartout & Smoliar, 1987)) using the Moore-Paris planner (1992, 1991) for discourse generation. Our initial system is for the automatic generation of documentation for small subsets of programming languages. One reason for this choice is that it allows us to study a variety of example-rich texts in a relatively unambiguous domain. A partial text plan generated by our planner for the description given in Figure 1 is given in Figures 2 and 3. It shows some of the communicative goals that the planner needs to be able to satisfy in order to generate some of the simple object descriptions in our application. These descriptions can make use of examples (instead of text) to list and describe feature elaborations, or use them in conjunction with a textual description to clarify and illustrate various points.
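One way to picture the kind of text plan shown in Figures 2 and 3 is as a tree of communicative goals, where the planner may choose to realize a goal either as text or as an example. The sketch below is purely illustrative: the node names, goal labels, and `realization` attribute are our invention, not the actual Moore-Paris planner representation.

```python
# Illustrative sketch (NOT the actual Moore-Paris planner API): a text
# plan as a tree of communicative goals, where each goal may be realized
# either as text or as an example. All names here are invented.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PlanNode:
    goal: str                    # communicative goal, e.g. 'describe-syntax'
    realization: str = 'text'    # 'text' or 'example'
    children: List['PlanNode'] = field(default_factory=list)

# Hypothetical fragment of a plan for describing the object LIST.
plan = PlanNode('describe-list', children=[
    PlanNode('describe-syntax', children=[
        PlanNode('elaborate-number-of-elements', realization='example'),
        PlanNode('elaborate-element-types', realization='example'),
    ]),
])

def examples_in(node):
    """Collect the goals the planner chose to realize as examples."""
    found = [node.goal] if node.realization == 'example' else []
    for child in node.children:
        found.extend(examples_in(child))
    return found

print(examples_in(plan))
```

Walking such a tree makes explicit which elaborations were delegated to examples rather than text, which is exactly the choice point discussed under issue 1 above.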
Among the issues that we plan to study are the differences between opportunistic generation of examples and top-down planning of text with examples, and the effects arising from differences in the knowledge type, the text type and other sources of information.

Acknowledgments

Thanks to Cécile Paris for critical discussions, different perspectives and bright ideas. This work was supported in part by the NASA-Ames grant NCC 2-520 and under DARPA contract DABT63-91 42-0025.

References

Ashley, K. D. (1991). Reasoning with cases and hypotheticals in HYPO. International Journal of Man-Machine Studies, 34(6), 753-796.

Carnine, D. W. (1980). Two Letter Discrimination Sequences: High-Confusion-Alternatives First versus Low-Confusion-Alternatives First. Journal of Reading Behaviour, XII(1), 41-47.

Feldman, K. V. (1972). The effects of the number of positive and negative instances, concept definitions, and emphasis of relevant attributes on the attainment of mathematical concepts. In Proceedings of the Annual Meeting of the American Educational Research Association, Chicago, Illinois.

Houtz, J. C., Moore, J. W., & Davis, J. K. (1973). Effects of Different Types of Positive and Negative Examples in Learning "non-dimensioned" Concepts. Journal of Educational Psychology, 64(2), 206-211.

Litchfield, B. C., Driscoll, M. P., & Dempsey, J. V. (1990). Presentation Sequence and Example Difficulty: Their Effect on Concept and Rule Learning in Computer-Based Instruction. Journal of Computer-Based Instruction, 17(1), 35-40.

MacLachlan, J. (1986). Psychologically Based Techniques for Improving Learning within Computerized Tutorials. Journal of Computer-Based Instruction, 13(3), 65-70.

Moore, J. D. & Paris, C. L. (1991). Discourse Structure for Explanatory Dialogues. Presented at the Fall AAAI Symposium on Discourse Structure in Natural Language Understanding and Generation.

Moore, J. D. & Paris, C. L. (1992).
User models and dialogue: An integrated approach to producing effective explanations. To appear in User Modeling and User-Adapted Interaction.

Neches, R., Swartout, W. R., & Moore, J. D. (1985). Enhanced Maintenance and Explanation of Expert Systems Through Explicit Models of Their Development. IEEE Transactions on Software Engineering, SE-11(11), 1337-1351.

Pirolli, P. (1991). Effects of Examples and Their Explanations in a Lesson on Recursion: A Production System Analysis. Cognition and Instruction, 8(3), 207-259.

Reder, L. M., Charney, D. H., & Morgan, K. I. (1986). The Role of Elaborations in Learning a Skill from an Instructional Text. Memory and Cognition, 14(1), 64-78.

Rissland, E. L. (1981). Constrained Example Generation. COINS Technical Report 81-24, Department of Computer and Information Science, University of Massachusetts, Amherst, MA.

Rissland, E. L. (1983). Examples in Legal Reasoning: Legal Hypotheticals. In Proceedings of the International Joint Conference on Artificial Intelligence, (pp. 90-93), Karlsruhe, Germany.

Rissland, E. L. & Ashley, K. D. (1986). Hypotheticals as Heuristic Device. In Proceedings of the National Conference on Artificial Intelligence (AAAI-86), (pp. 289-297).

Swartout, W. & Smoliar, S. W. (1987). Explaining the link between causal reasoning and expert behavior. In Proceedings of the Symposium on Computer Applications in Medical Care, Washington, D.C.

Tennyson, R. D. & Park, O. (1980). The Teaching of Concepts: A Review of Instructional Design Research Literature. Review of Educational Research, 50(1), 55-70.

Tennyson, R. D., Steve, M., & Boutwell, R. (1975). Instance Sequence and Analysis of Instance Attribute Representation in Concept Acquisition. Journal of Educational Psychology, 67, 821-827.

Touretzky, D. S. (1984). LISP: A Gentle Introduction to Symbolic Computation. New York: Harper & Row Publishers.

Woolf, B. & McDonald, D. D. (1984).
Context-Dependent Transitions in Tutoring Discourse. In Proceedings of the Third National Conference on Artificial Intelligence (AAAI-84), (pp. 355-361).

Woolf, B. & Murray, T. (1987). A Framework for Representing Tutorial Discourse. In Proceedings of the Tenth International Joint Conference on Artificial Intelligence, (pp. 189-192).
MONOTONIC SEMANTIC INTERPRETATION*

Hiyan Alshawi and Richard Crouch
SRI International Cambridge Computer Science Research Centre
23 Millers Yard, Cambridge CB2 1RQ, U.K.
hiyan@cam.sri.com  rc@cam.sri.com

ABSTRACT

Aspects of semantic interpretation, such as quantifier scoping and reference resolution, are often realised computationally by non-monotonic operations involving loss of information and destructive manipulation of semantic representations. The paper describes how monotonic reference resolution and scoping can be carried out using a revised Quasi Logical Form (QLF) representation. Semantics for QLF are presented in which the denotations of formulas are extended monotonically as QLF expressions are resolved.

1. INTRODUCTION

The monotonicity property of unification based grammar formalisms is perhaps the most important factor in their widespread use for grammatical description and parsing. Monotonicity guarantees that the grammatical analysis of a sentence can proceed incrementally by combining information from rules and lexical entries in a nondestructive way. By contrast, aspects of semantic interpretation, such as reference and quantifier scope resolution, are often realised by non-monotonic operations involving loss of information and destructive manipulation of semantic representations. A 'two-level' approach to semantic interpretation tends to result (Bronneberg et al. 1980), where an initial, underspecified representation is transformed into a separate, specified representation.

The goal of the work described here is to provide a model for semantic interpretation that is fully monotonic in both linguistic and contextual aspects of interpretation, and which employs just one level of semantic representation: Quasi Logical Form (QLF).
Contextual resolution of underspecified QLF expressions involves the instantiation of QLF meta-variables. The semantics for the QLF formalism makes the denotation of a QLF formula a partial function to truth-values, with resolution leading to a monotonic extension of the denotation function. We believe that there are several advantages to the approach taken, including:

• Order independence of resolution operations
• Production of partial interpretations
• Simpler interactions between phenomena
• Reversibility for synthesis/generation

The QLF formalism is a development of Alshawi 1990. As before, underspecified QLFs are produced on the basis of a unification grammar. Previously, QLF resolution was only partially monotonic; full monotonicity required changes to the original QLF formalism and the resolution and scoping processes. These changes have been implemented in a further development of the Core Language Engine (Alshawi 1992), although we will ignore most implementation issues in the present paper.

The paper is organized as follows. Section 2 provides the syntax of the QLF language and Section 3 gives some illustrative examples of monotonic QLF resolution. Sections 4 and 5 present the semantics of the QLF formalism. Section 6 discusses the relationship between monotonic interpretation, Pereira's categorial semantics (Pereira 1990), and context change approaches to semantics. Section 7 mentions some benefits of using QLF-like representations in implementing natural language systems.

2. SYNTAX OF MONOTONIC QLF

We give here a syntactic description of the QLF constructs for terms and formulas.¹

*This work on the Core Language Engine was carried out under CLARE, a collaborative project involving BP Research, British Aerospace, British Telecom, Cambridge University, SRI International and the UK Defence Research Agency. The project is funded in part by the UK Department of Trade and Industry.
¹The notation we use in implementations is slightly different but equivalent to that presented here.

A QLF term must be one of the following:

• a term variable: X, Y, ...
• a term index: +i, +j, ...
• a constant term: 7, mary1, ...
• an expression of the form: term(Idx,Cat,Restr,Quant,Reft)

The term index, Idx, uniquely identifies the term expression. Cat is a list of feature-value equations, for example <type=pro,num=sing,...>. Restr is a first-order, one-place predicate. For a resolved term, Quant will be a generalized quantifier (a cardinality predicate holding of two properties) and Reft, the term's 'referent', will be a constant or term index. For an 'unresolved' term, Quant and Reft may be meta-variables (_x, _y, ...). (QLF terms may also be functional applications, though we will ignore these here.)

A QLF formula must be one of the following:

• the application of a predicate to arguments: Predicate(Argument1,...,Argumentn)
• an expression of the form: form(Category,Restriction,Resolution)
• a formula with scoping constraints: Scope:Formula

Predicate is a first or higher-order predicate, including the usual logical operators and, not, etc. An argument may be a term, a formula or a lambda abstract. Lambda abstracts take the form Var'Body, where Body is a formula or an abstract and Var is a variable ranging over individuals or relations. Restriction is a higher-order predicate. Resolution is a formula (the 'referent' of the form expression), or is a meta-variable if the form expression is unresolved. Scope is either a meta-variable when scoping information is underspecified or a (possibly empty) list of term indices, e.g. [+i,+j] if term +i outscopes +j. The terms identified by the indices must occur within Formula.
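The distinction between resolved and unresolved slots can be made concrete with a small data-structure sketch. This is not the CLE implementation (which is not shown in the paper); it is a minimal illustration, in Python, of a QLF term whose Quant and Reft slots are meta-variables that resolution instantiates exactly once, never overwrites.

```python
# Minimal sketch (not the CLE implementation) of a QLF term with
# meta-variables. A meta-variable starts unbound; resolution binds it
# monotonically: once bound, it is never destructively changed.

from dataclasses import dataclass

class MetaVar:
    """An uninstantiated meta-variable (_q, _r, _s, ...)."""
    def __init__(self, name):
        self.name = name
        self.value = None          # None = still unresolved
    def bind(self, value):
        # Monotonicity: a meta-variable may be instantiated at most once.
        assert self.value is None, "meta-variable already bound"
        self.value = value

@dataclass
class Term:
    """term(Idx, Cat, Restr, Quant, Reft)"""
    idx: str        # unique term index, e.g. '+b'
    cat: dict       # category features, e.g. {'type': 'q', 'lex': 'every'}
    restr: str      # one-place restriction predicate, here just a name
    quant: MetaVar  # generalized quantifier, or meta-variable
    reft: MetaVar   # referent (constant or index), or meta-variable

# Unresolved term for 'every boy': quantifier and referent still open.
q, r = MetaVar('_q'), MetaVar('_r')
boy = Term('+b', {'type': 'q', 'lex': 'every'}, 'boy', q, r)

# Resolution instantiates the meta-variables; nothing is destroyed.
q.bind('forall')
r.bind('+b')
```

The `bind` assertion is the syntactic face of monotonicity discussed in Section 5: a more resolved QLF differs from a less resolved one only by having more meta-variables instantiated.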
The degree to which a QLF is unresolved corresponds approximately to the extent to which meta-variables (appearing above as Quant, Reft, Scope, and Resolution) are instantiated to the appropriate kind of object level expressions (though see Section 5 for an explicit characterization of unresolved QLFs and partial interpretations).

3. EXAMPLE QLF RESOLUTIONS

Resolution of QLFs through the instantiation of meta-variables has been applied to a wide range of phenomena. These include pronouns, definite descriptions, implicit or vague relations, ellipsis and temporal relations (see Alshawi 1990 for an account of some kinds of reference resolution in an earlier QLF formalism). For concreteness, we present a few illustrative examples of monotonic QLF resolution.² We do not attempt to describe the mechanism by which the resolutions are chosen.

It will become evident that the notation is closer to (the syntactic structure of) natural language than is the case for traditional logical formalisms. For example, terms usually correspond to noun phrases, with information about whether e.g. they are pronominal, quantified or proper names included in the term's category. This makes the QLF representation easier to read than it might seem at first, once its initial unfamiliarity is overcome.

Quantification: Every boy met a tall girl illustrates the representation of quantification. The basic QLF analysis might be (ignoring tense):

_s:meet(term(+b,<type=q,lex=every>,boy,_q,_x),
        term(+g,<type=q,lex=a>,
             Y'and(girl(Y),tall(Y)),_r,_y)).

A resolved structure could be obtained by instantiating the quantifier meta-variables _q and _r to forall and exists,³ and the scoping meta-variable _s to [+b,+g] for the '∀∃' reading:

[+b,+g]:meet(term(+b,<type=q,lex=every>,
                  boy,forall,+b),
             term(+g,<type=q,lex=a>,
                  Y'and(girl(Y),tall(Y)),exists,+g)).
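The truth conditions of this resolved QLF can be checked with a small executable analogue, in which generalized quantifiers are modeled, as in the paper, as predicates over two properties (restriction and body). The domain, the predicates, and the particular model below are invented for illustration.

```python
# Executable sketch of the forall/exists truth conditions for the
# resolved QLF of "Every boy met a (tall) girl". The model (domain,
# BOY, GIRL, MET) is invented for illustration.

DOMAIN = {'b1', 'b2', 'g1'}
BOY = {'b1', 'b2'}
GIRL = {'g1'}
MET = {('b1', 'g1'), ('b2', 'g1')}

def forall(restr, body):
    # Generalized quantifier: every domain element in the restriction
    # also satisfies the body.
    return all(body(x) for x in DOMAIN if restr(x))

def exists(restr, body):
    # Some domain element satisfies both restriction and body.
    return any(body(x) for x in DOMAIN if restr(x))

# Wide-scope forall, as fixed by the scope list [+b,+g]:
# discharge +b first, then +g inside its body.
value = forall(lambda b: b in BOY,
               lambda b: exists(lambda g: g in GIRL,
                                lambda g: (b, g) in MET))
print(value)
```

In this model every boy met g1, so the ∀∃ reading comes out true; removing ('b2', 'g1') from MET would make it false, while the ∃∀ reading would impose the stronger condition that a single girl was met by every boy.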
In a restriction-body notation for generalized quantifiers, the truth conditional content of this resolved expression corresponds to:

forall(B,boy(B),
       exists(G,and(girl(G),tall(G)),
              meet(B,G))).

²Although the QLF framework can support a variety of alternative semantic analyses for specific phenomena, to provide concrete illustrations one or other analysis needs to be chosen. In the following examples, it should be possible to separate particular analyses from the general points we wish to make about monotonic interpretation.

³The benefits of being able to resolve determiners to quantifiers are discussed in Alshawi 1990. For example, determiners like some (plural) could be resolved to collective or distributive quantifiers, three could be interpreted as meaning either 'exactly three' or 'at least three', and if need be, bare plurals like dogs could be variously interpreted as meaning 'some dogs', 'all dogs' or 'most dogs'.

Anaphora: Every boy claims he met her illustrates the treatment of anaphora (in a context
Vague Relations: An unresolved QLF expres- sion representing the noun phrase a woman on a bus might be a term containing a form that arises from the the prepositional phrase modification: term(+w,<lexsa,..>, X'and(woman(X), form(<type=prep,lex=on>, R'R(+w,term(+b,<lex=a,..>, bus,_q2,_b)), _f)), _ql,_w). Informally, the form is resolved by applying its re- striction, R'R(... ) to an appropriate salient pred- icate, and instantiating the form's meta~variable, f, with the result. In this case, the appropriate predicate might be inside, so that _f is instant~ ated to inside(+w,term(+b,<lex=a,..>,bus,_q2,_b)). Tense: One way of treating tense is by means of a temporal relation form in the restriction of an event term. For John slept we might have: _s:sleep(term(+e,<type=event>, E-form(<type=trel,tense=past>, R'and(event(E),R(E)), _t), _ql,_e), term(+j,<type=name>, J'name(J,'John'),_q2,_j)). 4 Here we simplify the issues arising out of tile semantics of intensional, sentential complement verbs like claim. Since the tense on the temporal relation category is past, the resolution says that the event occurred before a particular speech time, t7: [+el : sleep( term(+e, <type=event>, E~f orm (<type=trel, t enseffipast >, R'and(event (E) ,R(E) ), and (event (E), precede (E, t7) ) ), exists ,+e), t erm(+j, <typefname>, J'name (J, ' John ' ), exists, j ohnl ) ). The resolution and(event(E) ,precede(E,tT)) is the result of applying the form's restriction K'and(event (E), R(E)) to a contextually derived predicate, in this case El'precede(El,tT). QLF is not committed to an event based treat- ment of tense. An alternative that has also been implemented is to treat the verbal predication sleep(...) as a temporal form, whose category specifies tense and aspect information. Ellipsis: A more complex example, involving el- lipsis and quantification, is provided by Each boy claimed he was clever, and so did John. 
A partially resolved QLF, but one in which the ellipsis is still unresolved, might be as follows (ignoring tense and event variables):

and(claim(term(+b,<lex=every>,boy,exists,+b),
          clever(term(+h,<lex=he>,
                      male,exists,+b))),
    form(<type=vpellipsis>,
         P'P(term(+j,<type=name>,
                  J'name(J,'John'),exists,john)),
         _e)).

This is a conjunction of the QLF for the antecedent clause (Each boy claimed he was clever under a bound pronoun reading) with a form expression for the verb phrase ellipsis. Solutions for instantiating the meta-variable _e for the ellipsis are the result of applying a property P1, derived from the antecedent clause, to the term with index +j.

The sentence has two readings: a sloppy reading where John claims that he is clever, and a strict one where John claims that each of the boys is clever. The choice between a strict or sloppy reading depends on how the term he is reinterpreted in the ellipsis resolution. Intuitively, strict identity involves referring to the same object as before, whereas sloppy identity involves referring to a relevantly similar object.

In QLF, a strict reading results from re-interpreting the ellipsis pronoun as co-indexed with the original, i.e. taking P1 to be:

X'claim(X,clever(+h)).

Constraints on legitimate scoping (Section 5) force +b and +h to take wide scope over both the antecedent and ellipsis. The sloppy reading results from re-indexing the ellipsis pronoun so that it has the same restriction and category as the original, but is resolved to +j and has a new index +h1. This corresponds to taking P1 to be:

X'claim(X,clever(term(+h1,<lex=he>,
                      male,exists,+j))).

More generally, in Crouch and Alshawi 1992 we explore the claim that solutions to verb phrase ellipsis have the general form:

P1 = X1'...Xi'S[X1/s1,...,Xi/si,...,tn/sn].
That is, P1 is formed out of an antecedent clause QLF S by abstracting over the 'parallel elements' s1...si, perhaps with some additional substitutions for terms si+1...sn in S (E[a/b] is the expression E with a substituted for b). This seems to be sufficient to cover the range of examples treated by Dalrymple, Shieber and Pereira (1991), but that is a specific linguistic claim about verb phrase ellipsis in English and not central to the present paper.

4. SEMANTICS FOR QLF

In this section we outline the semantics of the QLF language in a way that is as close as possible to classical approaches that provide the semantics in terms of a function from models to truth values. The main difference is that denotation functions will be partial functions for some unresolved QLF formulas, reflecting the intuition that these are 'partial interpretations'. The denotation of a QLF expression will be extended monotonically as it is further resolved, a fully resolved formula receiving a total function as its denotation. The semantics is not intended to describe the resolution process.

Before giving evaluation rules for the QLF language, we first present a simplified version of the semantics for fully instantiated QLF expressions. This is for expository purposes only; the full QLF semantics does not depend on the simplified version.

4.1 SIMPLIFIED SEMANTICS

We will use the notation [[E]]m for the truth value of an expression E with respect to a model m (but will leave m implicit). m includes an interpretation function I for mapping constants and predicates into domain individuals and relations. Also left implicit is a function assigning values to variables, which is required for the evaluation of lambda abstracts as characteristic functions.

Constructs in the 'standard' predicate logic subset of QLF receive their semantics with the usual evaluation rules, for example:

• [[P(a1,...,an)]] = 1 iff I(a1) ... I(an) are in the relation I(P), and 0 otherwise.
• [[and(F1,F2)]] = 1 iff [[F1]]=1 and [[F2]]=1, and 0 otherwise.

The evaluation rule for a formula F with a scoping variable instantiated to [I,J,...] and containing a term T=term(I,C,R,Q,A) is as follows:

• [[[I,J,...]:F]]=1 iff [[Q(R',F')]]=1, and 0 otherwise, where
  R' is X'(and(R(X),X=A))[X/I], and
  F' is X'([J,...]:and(F,X=A))[X/T,X/I].

This evaluation rule states that a formula with a scoping constraint list may be evaluated by 'discharging' the term for the first index on the list with respect to a formula with a reduced scoping constraint. The rule discharges the term by abstracting over occurrences of the term and its index, and applying the generalized quantifier Q to the term's restriction and the abstract derived from the formula. In Section 5 we will say more about the ramifications of adopting this type of quantifier evaluation rule. Note that this rule is also applicable to resolved terms such as pronouns for which Q has been resolved to exists and T is a constant or a scoped variable.

The denotation assigned to a resolved formula form(C,R,F') in which the resolution variable has been instantiated to a formula F' is simply:

• [[form(C,R,F')]]=1 iff [[F']]=1, and 0 otherwise.

4.2 QLF SEMANTICS

As mentioned earlier, the denotation of a formula F in the QLF language will be a possibly partial function ([[...]]) from models to truth values. Again we use the notation [[F]]m for the truth value of a formula F with respect to a model m (explicit reference to a variable assignment function is again suppressed). For interpretation to be monotonic, we want [[G]] to be an extension of [[F]] whenever G is a more resolved version of F, and in particular for [[G]] to be total if G is fully resolved.

We will define [[...]] for QLFs in terms of a relation W between formulas, models and truth values.
Evaluation rules will be given for W(F,m,v), but since more than one rule may apply (or a rule may apply in more than one way), W will in general be a relation. The relationship between [[...]] and W for a formula F is as follows:

• [[F]]m=1 iff W(F,m,1) but not W(F,m,0);
• [[F]]m=0 iff W(F,m,0) but not W(F,m,1);
• [[F]]m undefined iff W(F,m,1) and W(F,m,0).

Henceforth we will leave the model argument m implicit. The evaluation rules for W will generally take the form W(F,v) if W(F',v), where F' contains one fewer unresolved expression than F (so that it is possible for the process of rule application to terminate). The use of if rather than iff in these rules means that it is possible for rules producing more than one value v to apply and hence for [[F]] to be partial.

The model provides an interpretation function I mapping constants and predicates to individuals and relations. We will also need to assume a relation S(C,H) (for 'salient') between QLF categories C and QLF expressions H standing for individuals, quantifiers, or predicates, but the precise nature of the salience relation and the way it changes during a discourse are not important for the evaluation rules for QLF given here. The intuitive motivation for S is that the category in an unresolved QLF expression restricts the set of possible referents for that expression. S is discussed further in Section 5.

We are now in a position to present the evaluation rules, which we number Q1, Q2, etc. For standard connectives we have the obvious evaluation rules, for example:

Q1 W(and(F,G),1) if W(F,1) and W(G,1).
Q2 W(and(F,G),0) if W(F,0) or W(G,0).
Q3 W(not(F),1) if W(F,0).
Q4 W(not(F),0) if W(F,1).

Two rules applicable to a formula F containing a term with uninstantiated referent and quantifier meta-variables:

Q5 W(F,v) if W(F[exists/_q,A/_r],v) and W(R(A),1), where:
   F is a formula containing the term T=term(I,C,R,_q,_r), and
   A is a term such that S(C,A).
and Q6 W(F,v) if W(F[Q/_q, I/_r],v), where: F is a formula containing the term T=term(l,C,R,_q,_r), and Q is a quantifier such that S(C,Q). (The substitutions for the meta-variables _r and _q are to be read as part of the evaluation rule.) A rule applicable to a formula F in which a (pos- sibly unscoped) quantified term occurs: Q7 W(F,v) if W(Q (R',F') ,v), where: F is a formula containing the term T=term(I,C,R,Q,A), R' is X" (and(R(X), X=A) ) IX/I], and F' is X'(a_nd(F,X=A))[X/T, X/I]. A rule applicable to a formula with an instantiated seoping constraint Q8 W(EI,J .... ] :F,v) if W(Q(R' ,F'),v), where: F is a formula containing the term T=term(I,C,R,Q,h), R' is X'(and(R(X),X=A))[X/I], and F' is X'([J .... ] :and(F,X=A))[X/T, X/I]. We also need a trivial rule for a formula with an uninstantiated scoping constraint so that it re- duces to application of other rules: Q9 W(_s:F,v) if W(F,v). Two rules are applicable to form expressions, cor- responding to the cases of an uninstantiated or instantiated resolution meta-variable: Q10 W(F,v) if W(F[R(P)/_r],v) where: F is a formula form(C,R,_r) P is a predicate such that S(C,P). Qll W(forra(C,R,F'),v)if W(F',v) where F' is a QLF formula. In a more complete description of the semantics we would also have to state that the evaluation rules provided give the only way of determining membership of the relation W. 5. NOTES ON THE SEMANTICS Monotonlclty: In this paper we are using monotonicity in two senses which (by design) turn out to be consistent. The first is a syntactic no- tion for QLF representations (instantiation rather than destructive manipulation), while the second is semantic: 1. 2. F1 is a more resolved version of F2 if F1 can be obtained by instantiating zero or more meta- variables in F2. F1 is a less partial interpretation than F2 if [IF1]] is an extension of [[F2]]. 
The claim of monotonicity for QLF is that for formulas F1 and F2, if F1 is a more resolved version of F2 then F1 is a less partial interpretation than F2.

Scoping Constraints: The quantification rules, (Q7) and (Q8), (i) select a term from a formula, (ii) discharge all occurrences of the term and its index in the formula and the term's restriction, replacing them by a variable, and (iii) apply the term's quantifier to the discharged restriction and formula. The difference between (Q7) and (Q8) is simply that the latter also discharges the head of the scoping list, in this case by removing it rather than by replacing it. (Keep in mind that the discharge and replacement operations take place at the level of the evaluation rules for QLF; they are not applied to the QLF expressions representing natural language meanings themselves.)

As with Lewin's scoping algorithm (Lewin 1990), there are no constraints built explicitly into the QLF semantics on where a quantification rule for a term may be applied, or indeed on the number of times it may be applied. However, several constraints arise out of (a) the absence of any semantic rules for evaluating isolated terms, term indices or scope lists, and (b) the requirement that a term be selected from a formula so that its quantifier is known. The emergent conditions on legitimate scoping are:

1. No term may be quantified-in more than once: The first application of the quantifier rule discharges the term. Subsequent applications of the rule lower down in the evaluation would fail to select an undischarged term.

2. When a term's index occurs in a scope list, the quantifier rule for the term must be applied at that point: It must be applied to discharge the head of the scope list, and by (1) above cannot additionally be applied anywhere else.

3.
All occurrences of a term's index must occur within the scope of the application of the term's quantifier rule: The quantification rule will only discharge indices within the formula to which it is applied. Any occurrences of the index outside the formula will be undischarged, and hence unevaluable.
4. If a term R occurs within the restriction of a term H, and R is to be given wide scope over the restriction, then R must also be given wide scope over H: Otherwise, suppose H is given wide scope over R. Term H will first be discharged, replacing the term, and with it its restriction, in the formula to which the rule is applied. Then the quantification rule for R needs to be applied to the discharged formula, but the formula will not contain an occurrence of the term R, making the rule inapplicable.

The last two constraints have often been attributed to restrictions on free variables and vacuous quantification. The attribution is problematic since open formulas and vacuously quantified formulas are both logically well defined, and without suspect appeal to the syntax of the logical formalism they cannot be ruled out as linguistically ill-formed. By contrast, QLF makes these violations semantically unevaluable.

Unscoped Terms: When a term's index is not mentioned in any scope list, the term may be quantified in at any point within the formula. For anaphoric terms whose referent has been resolved to some individual constant, it does not matter where the quantification rule is applied; since the term existentially quantifies over things identical to a single object, the scope of the quantification is immaterial. It is thus convenient to leave anaphoric terms like this unscoped in QLF. Although this makes the QLF look (syntactically) as though it is not fully resolved, semantically it is.
For other unscoped terms, alternative applications of the quantifier rule may well lead to distinct truth conditions, and in these cases the QLF is genuinely unresolved.

Context Dependence: Fully resolved QLFs are context-independent in the same sense that holds for closed formulas in traditional predicate logic (i.e. if the interpretation of the constant symbols in the language is fixed). Unresolved QLFs behave more like open formulas, and there is an analogy between assignments to unbound variables in predicate logic and possible resolutions of meta-variables admitted by the salience relation S. S(C,H) should be thought of as providing QLF expressions whose denotations are possible referents for unresolved expressions with category C. (It would have been possible to define S as a direct relation between categories and referents, but this complicates the statement of its role in resolution and in the semantic definitions.) We used S above in the definition of QLF semantics, but it is also central to NL processing: being able to compute S can clearly play an important role in the process of reference resolution during NL interpretation and in the process of building descriptions during NL synthesis. (The computational analogue of S was implemented as a collection of 'resolution rules' in Alshawi 1990.)

An important question is what to allow as possible expressions in the range of S. One observation is that as the range is widened, more NL resolution phenomena are covered. A rough summary is as follows:

• constants: intersentential pronouns
• predicate constants: compound nouns, prepositions
• quantifiers: vague determiners
• indices: bound variable, intrasentential pronouns
• predicates built from NP restrictions: one-anaphora
• predicates built from previous QLFs: intersentential ellipsis
• predicates built from current QLF: intrasentential ellipsis

6.
RELATED APPROACHES

Viewed from a slightly different perspective, monotonic interpretation has a number of points of contact with Pereira's categorial semantics (Pereira 1990). Put briefly, in categorial semantics, semantic evaluation is represented as deduction in a functional calculus that derives the meanings of sentences from the meanings of their parts. Considerable emphasis is placed on the nature of these semantic derivations, as well as on the final results of the derivations (the 'logical forms' of sentences).

One significant advantage of this approach is that constraints on legitimate scoping emerge naturally from a consideration of permissible derivations of sentence meaning, rather than arising artificially from syntactic constraints imposed on logical forms. Derivations involving quantified terms first introduce an assumption that allows one to derive a simple term from a quantified term. This assumption is later discharged by the application of a quantifier. Conditions on the appropriate introduction and discharge of assumptions in natural deduction systems impose restrictions on the way that quantifiers may legitimately be applied. For example, a quantifier assumption may not be discharged if it depends on further assumptions that have not themselves been discharged. This prevents the occurrence of free variables in logical form, but without appeal to the syntax of logical form.

The discharge of terms and term indices when evaluating QLF closely parallels the discharge of quantifier assumptions in categorial semantics. Indeed, the terms and the indices are precisely the assumptions introduced by quantified expressions, and which need to be discharged. Furthermore, the different orders in which quantifier assumptions may be discharged in categorial derivation correspond to the choices that the quantifier rules permit for discharging quantified terms.
Where monotonic interpretation and categorial semantics part company is on the degree of explicitness with which semantic derivations are represented. In categorial semantics, derivation is a background process that builds up logical forms, but is not explicitly represented in the semantic formalism. By contrast, the annotation of QLFs with scope lists provides an extra level of information about how the derivations proceed. In particular, they indicate which evaluation rules should be applied where.

QLF thus provides a (usually partial) specification of a semantic derivation, showing (a) what the initial 'premises' are (roughly, lexical meanings, although these too may only be partially specified), and (b) the rules by which the 'premises' are combined. QLF resolution amounts to further instantiating this specification. This view of QLF can be contrasted with Logical Form as it is normally understood, which represents the results of carrying out a semantic derivation.

The difference between specifying a derivation and carrying it out is what makes resolution order independent in monotonic interpretation. Making a resolution to QLF only specifies when and how an expression should be evaluated during semantic derivation; it does not carry out that part of the derivation. Where no distinction is drawn between making a resolution and carrying out the corresponding step of the derivation, the order of resolution can be important. Thus, for Dalrymple, Shieber and Pereira (1991), where this distinction is not drawn, the precise interleaving of scope and ellipsis resolution determines the interpretation of the sentence. In QLF, resolutions dictate the order in which various steps of the derivation are carried out, but the resolution order does not reflect the derivation order.
Distinguishing between specifying and performing a derivation also means that a monotonic treatment of ellipsis resolution does not need to resort to higher-order unification. Dalrymple, Shieber and Pereira use higher-order unification to 'unpick' the composition of constituent meanings obtained in the semantic derivation from the ellipsis antecedent. Some of these meanings are then put back together to produce a predicate that can be applied to the ellipsis arguments. Since monotonic resolution does not carry out the final composition of meanings, but merely sets out conditions on how it is to take place, there is no need to unpick composed meanings and put them back together again.

It is worth pointing out that monotonic interpretation is compatible with approaches to meaning as a transition between contexts or information states, and where the order in which transitions are made is significant (e.g. Veltman 1991). In such a framework, monotonic interpretation would amount to making decisions about which transitions to take when, but would not involve putting those decisions into action. The monotonicity in monotonic interpretation thus refers to the way in which alternative derivations of sentence meanings may be chosen, but not to the semantic effects of those sentence meanings.

7. IMPLEMENTATION BENEFITS

A description of the language processing mechanisms to which we have applied the monotonic semantics model is beyond the scope of this paper. However, we believe that the QLF representation presented here brings significant advantages to implementing mechanisms for reference resolution, scoping, preference and generation.

Reference and Scoping: The order independence of resolution operations allows for a variety of control structures in implementing a resolution mechanism.
We find it convenient to make a bottom-up pass through QLFs making reference resolutions, followed by a stage of scoping resolution, and to iterate over this should any of the resolutions introduce further unresolved expressions. The salience relation S can be implemented as procedures that search for properties, objects or indices in context. Scoping proceeds simply by the non-deterministic instantiation of scoping constraints, subject to the restrictions imposed on evaluable QLFs (Section 5), plus techniques for ignoring logically equivalent scopings, as for example described by Moran (1988).

Preference and Disambiguation: A resolved QLF preserves all the information in the original unresolved QLF, and also records the correspondence between resolved and unresolved expressions. This makes it possible to define preference metrics that can be used for ranking alternative interpretations independently of the search strategies used to derive them. For example, in the case of scoping, these metrics can combine information about how far a quantifier was 'raised' with information about the surface form of its determiner. Preference ranking over alternative resolutions facilitates automatic disambiguation of input. Interactive disambiguation can make use of generation from resolved QLFs for confirmation by a user.

Generation: There is a strong connection between monotonicity and reversibility in language processing systems. Monotonicity of unification means that algorithms such as head-driven generation (Shieber et al. 1990) can be applied to grammars developed for analysis. We use a variant of this algorithm for generating from QLFs, and the monotonicity of semantic interpretation means that the grammar used for generating from unresolved QLFs (the normal 'output' of the grammar) can also be used for generation from resolved QLFs.
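A caricature of the control regime described under Reference and Scoping above — a reference-resolution pass followed by non-deterministic instantiation of scoping constraints — might look as follows. The data structures (an index-to-(category, referent) map, and a salience table keyed by category) are invented for illustration; they are not those of the actual implementation.

```python
from itertools import permutations

def resolve_references(terms, salient):
    """One bottom-up pass: fill each unresolved referent (None) with the
    most salient candidate for that expression's category."""
    return {i: (cat, ref if ref is not None else salient[cat][0])
            for i, (cat, ref) in terms.items()}

def candidate_scopings(indices):
    """Non-deterministically instantiate a scope list _s as a permutation
    of the term indices; in practice candidates would then be filtered by
    the evaluability conditions of Section 5 and by Moran-style pruning
    of logically equivalent scopings."""
    return [list(p) for p in permutations(indices)]
```

Here candidate_scopings(['i1', 'i2']) enumerates the two relative scopes of two terms; iterating the two passes, as described above, would re-run them whenever a resolution introduces further unresolved expressions.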
In parallel to the distinction between grammatical analysis (of NL into unresolved QLFs) and interpretation, we make the distinction between grammatical synthesis (of NL from QLFs) and description. Description is the process of deriving a QLF from which synthesis proceeds by taking a fact (e.g. a database assertion) as input. We hope to report on our approach to description elsewhere. However, one of the principles of QLF-based description is that while interpretation instantiates referent fields in underspecified QLFs, description involves instantiating category and restriction fields for QLFs in which referent fields are already instantiated. The preference metrics applied to rank alternative interpretations can be applied equally well to ranking resolved QLFs produced by a nondeterministic description process, so there is a sense in which the preference mechanism can also be made reversible.

REFERENCES

Alshawi, H. 1990. "Resolving Quasi Logical Forms". Computational Linguistics 16:133-144.
Alshawi, H., ed. 1992 (in press). The Core Language Engine. Cambridge, Massachusetts: The MIT Press.
Bronneberg, W.J.H.J., H.C. Bunt, S.P.J. Landsbergen, R.J.H. Scha, W.J. Schoenmakers and E.P.C. van Utteren. 1980. "The Question Answering System PHLIQA1". In L. Bolc (ed.), Natural Language Question Answering Systems. Macmillan.
Crouch, R. and H. Alshawi. 1992. "Ellipsis and Distributivity in Monotonic Interpretation", Technical Report, SRI International, Cambridge, UK.
Dalrymple, M., S. M. Shieber, and F. C. N. Pereira. 1991. "Ellipsis and Higher-Order Unification". Linguistics and Philosophy, 14:399-452.
Lewin, I. 1990. "A Quantifier Scoping Algorithm without a Free Variable Constraint", Proceedings of COLING 1990.
Moran, D. B. 1988. "Quantifier Scoping in the SRI Core Language Engine". Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics, 33-40.
Pereira, F. C. N. 1990.
"Categorial Semantics and Scoping", Computational Linguistics 16:1 1-10. Shieber, S. M., G. van Noord, F. C. N. Pereira, and R. C. Moore. 1990. "Semantic-Head-Driven Gen- eration". Computational Linguistics 16:30-43. Veltman, F. 1990. "Defaults in Update Semantics", in H. Kamp (ed), Conditionals, Defaults and Belief Revision, DYANA deliverable R2.5.A. 39
1992
THE EXPRESSION OF LOCAL RHETORICAL RELATIONS IN INSTRUCTIONAL TEXT*

Keith Vander Linden
Department of Computer Science
University of Colorado
Boulder, CO 80309-0430
Internet: [email protected]

INTRODUCTION

Given the prevalence of the use of rhetorical relations in the generation of text (Hovy, 1989; Moore and Paris, 1988; Scott and Souza, 1990), it is surprising how little work has actually been done on the grammatical realization of these relations. Most systems, based on Mann and Thompson's formulation of Rhetorical Structure Theory (Mann and Thompson, 1988), have adopted simplified solutions to their expression. If, for example, an action, X, and a purpose for that action, Y, must be expressed, a standard form such as "Do X in order to Y" will be generated. In reality, the purpose relation can be and is expressed in a myriad of different ways depending upon numerous functional considerations. Consider the following examples:

(1a) Follow the steps in the illustration below, for desk installation. (code)1
(1b) To install the phone on a desk, follow the steps in the illustration below.
(1c) Follow the steps in the illustration below, for installing the phone on a desk.
(1d) For the desk, follow the steps in the illustration below.

These examples of purpose expressions illustrate two issues of choice at the rhetorical level. First, the purpose clauses/phrases can occur either before or after the actions which they motivate. Second, there are four grammatical forms to choose from (all found in our corpus). In (1a), we see a "for" prepositional phrase with a nominalization ("installation") as the complement, in (1b), a "to" infinitive form (tnf), in (1c), a "for" preposition with a gerund phrase as a complement, and

*This work was supported in part by NSF Grant IRI-9109859.
1My convention will be to add a reference to the end of all examples that have come from our corpus, indicating which manual they came from.
(code) and (exc) will stand for examples from the Code-a-Phone and Excursion manuals respectively (Code-a-phone, 1989; Excursion, 1989). All other examples are contrived.

in (1d), a "for" preposition with a simple object as the complement. Although all these forms are grammatical and communicate the same basic information, the form in (1a) was used in the corpus. I am interested in the functional reasons for this choice.

Another aspect of this analysis to notice is that, contrary to the way rhetorical structure theory has been used in the past, I have allowed phrases, as well as clauses, to enter into rhetorical relations. This enables me to address the use of phrases, such as those in (1a), (1c), and (1d), which hold rhetorical relations with other spans of text.

The proper treatment of alternations such as these is crucial in the generation of understandable text. In the following sections, I will discuss a methodology for identifying such alternations and include samples of those I have found in a corpus of instructional text. I will then discuss how to formalize and implement them.

IDENTIFYING ALTERNATIONS

I identified alternations by studying the linguistic forms taken on by various rhetorical relations in a corpus of instructional text. The corpus, currently around 1700 words of procedural text from two cordless telephone manuals, was large enough to expose consistent patterns of instructional writing. I plan to expand the corpus, but at this point, the extent to which my observations are valid for other types of instructions is unclear.
To manage this corpus, a text database system was developed which employs three interconnected tables: the clause table, which represents all the relevant information concerning each clause (tense, aspect, etc.), the argument table, which represents all the relevant information concerning each argument to each clause (subjects, objects, etc.), and the rhetorical relation table, which represents all the rhetorical relations between text spans using Mann and Thompson's formalism. I used this tool to retrieve all the clauses and phrases in the corpus that encode a particular local rhetorical relation. I then hypothesized functional reasons for alternations in form and tested them with the data. I considered a hypothesis successful if it correctly predicted the form of a high percentage of the examples in the corpus and was based on a functional distinction that could be derived from the generation environment2.

I have analyzed a number of local rhetorical relations and have identified regularities in their expression. We will now look at some representative examples of these alternations which illustrate the various contextual factors that affect the form of expression of rhetorical relations. A full analysis of these examples and a presentation of the statistical evidence for each result can be found in Vander Linden (1992a).

PURPOSES

One important factor in the choice of form is the availability of the lexicogrammatical tools from which to build the various forms. The purpose relation, for example, is expressed whenever possible as a "for" prepositional phrase with a nominalization as the complement. This can only be done, however, if a nominalization exists for the action being expressed. Consider the following examples from the corpus:

(2a) Follow the steps in the illustration below, for desk installation.
(code)
(2b) End the second call, and tap FLASH to return to the first call. (code)
(2c) The OFF position is primarily used for charging the batteries. (code)

Example (2a) is a typical purpose clause stated as a "for" prepositional phrase. Example (2b) would have been expressed as a prepositional phrase had a nominalization for "return" been available. Because of this lexicogrammatical gap in English, a "to" infinitive form is used. There are reasons that a nominalization will not be used even if it exists, one of which is shown in (2c). Here, the action is not the only action required to accomplish the purpose, so an "-ing" gerund is used.

This preference for the use of less prominent grammatical forms (in this case, phrases rather than clauses) marks the purposes as less important than the actions themselves and is common in instructions and elsewhere (Cumming, 1991).

2In the process of hypothesis generation, I have frequently made informal psycholinguistic tests such as judging how "natural" alternate forms seem in the context in which a particular form was used, and have gone so far as to document this process in more complete discussions of this work (Vander Linden et al., 1992a), but these tests do not constitute the basis of my criteria for a successful hypothesis.

PRECONDITIONS

Another issue that affects form is the textual context. Preconditions, for example, change form depending upon whether or not the action the precondition refers to has been previously discussed. Consider the following examples:

(3a) When you hear dial tone, dial the number on the Dialpad [4]. (code)
(3b) When the 7010 is installed and the battery has charged for twelve hours, move the OFF/STBY/TALK [8] switch to STBY. (code)

Preconditions typically are expressed as in (3a), in present tense as material actions.
If, however, they are repeat mentions of actions prescribed earlier in the text, as is the case in (3b), they are expressed in present tense as conditions that exist upon completion of the action. I call this the terminating condition form. In this case, the use of this form marks the fact that the readers don't have to redo the action.

RESULTS

Obviously, the content of the process being described affects the form of expression. Consider the following examples:

(4a) When the 7010 is installed and the battery has charged for twelve hours, move the OFF/STBY/TALK [8] switch to STBY. The 7010 is now ready to use. (code)
(4b) 3. Place the handset in the base. The BATTERY CHARGE INDICATOR will light. (exc)

Here, the agent that performs the action determines, in part, the form of the expression. In (4a), the action is being performed by the reader which leads to the use of a present tense, relational clause. In (4b), on the other hand, the action is performed by the device itself which leads to the use of a future tense, action clause. This use of future tense reflects the fact that the action is something that the reader isn't expected to perform.

CLAUSE COMBINING

User modeling factors affect the expression of instructions, including the way clauses are combined. In the following examples we see actions being combined and ordered in different ways:

(5a) Remove the handset from the base and lay it on its side. (exc)
(5b) Listen for dial tone, then make your next call. (code)
(5c) Return the OFF/STBY/TALK switch to STBY after your call. (code)

Two sequential actions are typically expressed as separate clauses conjoined with "and" as in (5a), or, if they could possibly be performed simultaneously, with "then" as in (5b). If, on the other hand, one of the actions is considered obvious to the reader, it will be rhetorically demoted as in (5c), that is, stated in precondition form as a phrase following the next action.
The manual writer, in this example, is emphasizing the actions peculiar to the cordless phone and paying relatively little attention to the general skills involved in using a standard telephone, of which making a call is one.

IMPLEMENTING ALTERNATIONS

This analysis of local rhetorical relations has resulted in a set of interrelated alternations, such as those just discussed, which I have formalized in terms of system networks from systemic-functional grammar (Halliday, 1976)3.

I am currently implementing these networks as an extension to the Penman text generation architecture (Mann, 1985), using the existing Penman system network tools. My system, called IMAGENE, takes a non-linguistic process structure such as that produced by a typical planner and uses the networks just discussed to determine the form of the rhetorical relations based on functional factors. It then uses the existing Penman networks for lower level clause generation.

IMAGENE starts by building a structure based on the actions in the process structure that are to be expressed and then passes over it a number of times making changes as dictated by the system networks for rhetorical structure. These changes, including various rhetorical demotions, marking nodes with their appropriate forms, ordering of clauses/phrases, and clause combining, are implemented as systemic-type realization statements for text. IMAGENE finally traverses the completed structure, calling Penman once for each group of nodes that constitute a sentence. A detailed discussion of this design can be found in Vander Linden (1992b). IMAGENE is capable, consequently, of producing instructional text that conforms to a formal, corpus-based notion of how realistic instructional text is constructed.

3System networks are decision structures in the form of directed acyclic graphs, where each decision point represents a system that addresses one of the alternations.

REFERENCES

Code-a-phone (1989).
Code-A-Phone Owner's Guide. Code-A-Phone Corporation, P.O. Box 5678, Portland, OR 97228.
Cumming, Susanna (1991). Nominalization in English and the organization of grammars. In Proceedings of the IJCAI-91 Workshop on Decision Making Throughout the Generation Process, August 24-25, Darling Harbor, Sydney, Australia.
Excursion (1989). Excursion 8100. Northwestern Bell Phones, A USWest Company.
Halliday, M. A. K. (1976). System and Function in Language. Oxford University Press, London. Ed. G. R. Kress.
Hovy, Eduard H. (1989). Approaches to the planning of coherent text. Technical Report ISI/RR-89-245, USC Information Sciences Institute.
Mann, William C. (1985). An introduction to the Nigel text generation grammar. In Benson, James D., Freedle, Roy O., and Greaves, William S., editors, Systemic Perspectives on Discourse, volume 1, pages 84-95. Ablex.
Mann, William C. and Thompson, Sandra A. (1988). Rhetorical structure theory: A theory of text organization. In Polanyi, Livia, editor, The Structure of Discourse. Ablex.
Moore, Johanna D. and Paris, Cécile L. (1988). Constructing coherent text using rhetorical relations. Submitted to the Tenth Annual Conference of the Cognitive Science Society, August 17-19, Montreal, Quebec.
Scott, Donia R. and Souza, Clarisse Sieckenius de (1990). Getting the message across in RST-based text generation. In Dale, Robert, Mellish, Chris, and Zock, Michael, editors, Current Research in Natural Language Generation, chapter 3. Academic Press.
Vander Linden, Keith, Cumming, Susanna, and Martin, James (1992a). The expression of local rhetorical relations in instructional text. Technical Report CU-CS-585-92, the University of Colorado.
Vander Linden, Keith, Cumming, Susanna, and Martin, James (1992b). Using system networks to build rhetorical structures. In Dale, R., Hovy, E., Rösner, D., and Stock, O., editors, Aspects of Automated Natural Language Generation. Springer Verlag.
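The purpose and precondition alternations catalogued in this paper lend themselves to a compact decision sketch. The boolean features and the return labels below are a paraphrase of the corpus observations, not IMAGENE's actual system networks or realization statements.

```python
# Sketch of the purpose and precondition alternations; feature names
# are invented for illustration.

def purpose_form(has_nominalization, sole_action):
    """Prefer 'for' + nominalization (2a); use a gerund when the action
    is not the only one required to accomplish the purpose (2c); fall
    back to a 'to' infinitive when no nominalization exists (2b)."""
    if has_nominalization and sole_action:
        return 'for-nominalization'
    if not sole_action:
        return 'for-gerund'
    return 'to-infinitive'

def precondition_form(previously_prescribed):
    """First mentions surface as material actions (3a); repeat mentions
    of prescribed actions as terminating conditions (3b)."""
    return 'terminating-condition' if previously_prescribed \
        else 'material-action'
```

For instance, purpose_form(True, True) selects the "for" + nominalization pattern of (2a), and precondition_form(True) selects the terminating condition form of (3b).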
GENERATING A SPECIFIC CLASS OF METAPHORS

Mark Alan Jones1
Department of Computer and Information Sciences
University of Delaware, Newark DE 19716
Internet: [email protected]

1 Introduction

Although some progress has been made in the area of metaphor understanding, little has been made in metaphor generation. Current solutions rely upon a rather direct encoding of alternatives. There is no computational theory that can account for metaphor generation from basic principles. Although generating all types of metaphors from basic principles is very difficult, there is a subset of metaphors that are prevalent in natural expressions and perhaps more amenable to computational approaches. We call these transparently-motivated (T-M) metaphors (Jones and McCoy 1992). Interestingly, metaphors in general, as well as the class described here, can be used to achieve important textual goals (e.g., brevity, conceptual fit, focus, perspective).

Metaphorical expressions often reflect conceptual models which are the basis for how we understand the world. Mark Johnson (1987) has made some important observations about the building blocks of thought, most notably that they are tied closely to our bodily experience. Among the building blocks he has described are attraction, blockage and containment. Consider describing the purchase of shares of stock as, "I took $2500 out of my money market account and put it into Exxon common stock." The speaker did not literally put money into the stock, but rather bought stock with the money. This metaphor is based on the simplifying concepts that represent investments as containers which can hold money. When we write and talk we automatically use non-literal expressions that reflect our common conceptual groundings. These lead to very natural and easily understood expressions because we (speaker and audience) share these common conceptual groundings.
2 Transparently-Motivated Metaphor

All metaphors have a mapping between two domains. The more literal domain is called the tenor domain, the less literal is the metaphorical domain. In the stock example these are the financial domain and the containment domain respectively. T-M metaphors are similar to conventional metaphors (Lakoff and Johnson 1980, Lakoff 1987) in that they are both based upon familiar conceptual motivations. However, conventional metaphors are also defined in contrast to novel and dead metaphors. This distinction appeals to knowledge about the history of expressions. In this sense, T-M metaphors are broader in scope, however, in other ways they are more constrained.

1This work is supported by Grant #H133E80015 from the National Institute on Disability and Rehabilitation Research. Support has also been provided by The Nemours Foundation.

Several qualities show that the stock example above is transparently-motivated. It is based on the bodily grounding of containment. It conveys the verb-phrase action message of a purchase being conducted, which is more than merely the mapping from containment to purchase. This mapping is not even highlighted, rather it is merely used to convey the purchase message.

Consider two counterexamples. The expression, "Men are wolves" (Black 1962), is not a transparently-motivated metaphor. Most importantly, its meaning is primarily the mapping itself; the mapping is not employed transparently to yield another specific meaning.

Consider describing the starting of a business in terms of gardening. "You must plant the seed in fertile soil, give it plenty of water..." Such allegorical metaphors are not T-M because they do not have a referent from the tenor domain which remains unchanged by the metaphor (e.g., "shares of stock"). The scope of this work is further constrained to metaphors that convey a verb-phrase meaning.
The bounds of T-M metaphors that primarily convey other meanings, such as reference, are less clear. So, transparently-motivated metaphors: 1) Are based on universal groundings that are often linked to bodily experience; 2) Convey a message (via the mapping) that is something more than the mapping; 3) Are subtle in the way that they do not draw attention to themselves as blatant metaphors. In fact, at first glance, these metaphors are often not recognized as non-literal; 4) Retain a referent from the tenor domain.

3 Uses of Metaphor

Metaphor is not merely a device that adds a flowery flavor to text. Rather, metaphor can be harnessed to achieve textual goals that may be difficult to achieve with literal statements. A simple goal that metaphor can achieve is that of being concise. More complex uses are explained below.

3.1 Conceptual Fit

We noted that the building blocks of thought may lead us to speak metaphorically. It is reasonable to conclude that metaphors based on such building blocks are easier to understand because they reflect human conceptualizations.

Two observations are salient. People naturally tend to describe things in a more concrete manner, even when the issue at hand is rather abstract. Such behavior generally yields more natural and understandable text. This explains why expressions like "grasping an idea" are common. Second, it is natural to talk about things in the light of basic building blocks of thought that are commonly shared by a community. Therefore, describing a stock transaction as putting a token in a container may be more intuitive to the audience.

3.2 Focus of Attention

The traditional view of focus of attention is that it is something that is accomplished syntactically, such as by making the desired focus the subject of the sentence (as can be done with a passive construction). However, further inspection reveals that there is a relationship between semantic types and level of focus.
Jones and McCoy (1992) show evidence for the intuitive proposition that the semantic types of words/concepts affect the perceived level of focus attributed to those words/concepts. We introduce a simple focus hierarchy to model the effects of semantic types on focus levels. Items at the top of the focus hierarchy, because of their semantic qualities, are more likely to be focused upon than those below. Generally, concepts which are very concrete and volitional are toward the top while more amorphous and abstract things are below.

Given the focus hierarchy, which explains inherent focus level according to the semantic type of an object, it is interesting to note that one effect of metaphorical statements can be to alter the perceived semantic type of an object (and therefore potentially raise the perceived focus level). Consider the metaphorical statement "AI is no stranger to object-oriented paradigms" (Elliot 1991). Notice that the phrase "is no stranger" has the effect of conceptually personifying the objects involved (i.e., AI and object-oriented paradigms) since it is a phrase that, literally, can only be used with humans (or perhaps other animate objects). Compare the perceived focus level with that in a more literal rendition of the sentence such as: "AI and object-oriented paradigms have previously been incorporated together."

In the traditional view of focus of attention a word is treated as having a static semantics. However, metaphor can make the semantic type of objects more flexible. By using a verb that only applies to humans, as above, the objects are pushed up the focus hierarchy towards the position that humans occupy.

3.3 Perspective

While the notion of perspective on an item is related to focus, they are distinct. Rather than concentrating on which object is focused on, perspective has to do with how an object is viewed.
A given perspective on an item causes certain aspects of that item to be highlighted (and not others) (McCoy 1989). Consider a couple with young children attending a party with all of their children's paraphernalia in tow. One tells the other, "It is time for us to pull up stakes." Here, the leaving is metaphorically described via the camping domain, where leaving is an involved process. This use of metaphor has highlighted or put a particular perspective on the leaving that emphasizes the work involved.

4 Approach to a Solution

T-M metaphors are a promising sub-class of metaphors in which to work, because they carry special requirements that restrict the possible search space from which they can be generated. We have begun preliminary work to specify the basic structures and methodologies that together can generate good metaphors. Input to the system has two parts. The first part is the literal statement of what should be expressed, in a formal form. For example, describe the leave role of the object party. The second part of the input to the system is a specification of the goal that the metaphor is to achieve.

4.1 General Approach

The idea behind the approach is to identify related domains of the tenor domain that are appropriate as metaphorical domains. Both the tenor and metaphorical domains share some roles that are defined by their common ancestor in the "is-a" hierarchy. Specifically, we require that they share the role that is the focus of the metaphor (that aspect of an action which is being referred to metaphorically). We can identify an ancestor of the tenor domain from which the tenor domain inherits the role in question. The metaphorical domain also will share this ancestor. In addition to sharing the common ancestor, a reasonable metaphorical domain must have the following qualities:

• Be universal, or considered very familiar (with respect to the user model).
If the audience is ignorant of the metaphorical domain, there is little hope of the expression's success.

• Have the potential to achieve additional goals (e.g., focus).

• Have specialized lexical expressions in the metaphorical domain for the role being described. This is necessary because the lexical expression used to describe the role is the only information that conveys the mapping. This restriction would not apply to non-literal expressions that explicitly state the mapping. Without the specialized lexical expression, a T-M metaphor cannot be generated.

These specifications constrain what potential metaphorical domains will be considered. By limiting the candidate domains, the space and search time requirements will be held down.

Consider how we can generate the metaphorical expression conveying "leave the party," while at the same time emphasizing the effort that it takes to leave (as in section 3.3). A party can be described, via "is-a" links of the abstraction hierarchy, as a human process. Such a process can have a termination. For partying, leaving and saying goodbye to everyone can be considered part of the termination of this process. After ascending the "is-a" hierarchy to the proper level of abstraction, where the key role (termination) is specified, we can search for a candidate metaphorical domain that shares this ancestor. There are several possible metaphorical domains that meet these criteria. To narrow this choice several considerations come into play. One of these considerations is whether the possible metaphorical domains have specialized expressions available for the role that the metaphor involves (termination). Two possible metaphorical domains that meet these criteria are camping, with "pulling up stakes," and electrical equipment, with "pull the plug." Both domains are potential candidates.
However, now the constraints imposed by the goal of the metaphor must be considered: emphasize the complexity of the termination. Will an allusion to camping rhetorically make leaving the party appear more involved? Here we must appeal to more detailed knowledge about the termination of the camping experience. In this case we find that the termination of a camping experience is not trivial; it requires a moderate amount of work (compared to the party and electrical domains). Therefore the camping domain may be chosen. Other metaphorical goals (e.g., focus) will cause different reasoning to be done in this final stage.

4.2 Specific Conceptual Mappings

The previous approach may work well for some T-M metaphors, but notice there is a severe restriction on the relationship between the tenor and metaphorical domains: they must have the role involved in the expression in common. Here we discuss how the previous method can benefit from additional information, which can link two very different domains. Recall from section 3.1 that conceptual fit is a motivation for metaphor generation. The method introduced here helps implement the principle that it is useful to describe things in terms of shared conceptual roots. If a system is to generate metaphors that follow from conceptual roots, those roots must be represented in the system. We will need metaphorical domain selection rules and related mapping information to capture the conceptual roots by reflecting such common metaphorical behaviors as those pointed to in Lakoff and Johnson's work (Lakoff and Johnson 1980, Lakoff 1987). Selection rules will encode such familiar patterns as "describe progress in terms of a vehicle moving toward a goal" and "describe securities in terms of containers for money."

Consider describing the progress with a publication or career, with the intention of being as intuitive as possible (conceptual fit). A rule encoding the notion "describe progress in terms of a vehicle moving toward a goal" would be triggered. Closely attached to this rule is information about how the mapping from tenor domain to metaphorical domain should relate. Such information would include the mappings in Table 1.

  Tenor Domain              Met. Domain
  progress                  forward
  negative progress         backward
  no progress               still
  unsatisfactory progress   slow

Table 1: Progress in terms of a moving vehicle

Notice that these expressions for progress in the domain of physical motion are natural and probably more frequent than the "literal" forms found on the left side of Table 1. This may be because people understand progress in terms of motion. In this way metaphor generation can yield a more conceptually appropriate expression, which may actually be easier to understand than its literal counterpart.

There is potential for abstracting the information in the table. Note that the moving object has some starting point, some goal and some points on its path. With time involved, it also has speed. With a sophisticated model of this behavior in the metaphorical domain available, the four mappings in Table 1 could be derived. Interestingly, a more general structure matched with reasoning in the metaphorical domains could derive other expressions. With the knowledge that energy is required to move objects, and given that a prototypical moving object is a car that runs on gas, we could hope to generate "My career is running out of gas" from general knowledge and principles.

REFERENCES

Max Black (1962). Models and Metaphors. Cornell University Press, Ithaca, NY.
Lance B. Elliot (1991). The bandwagon blues. AI Expert, 6(5):11-13.
Mark Johnson (1987). The Body in the Mind: The Bodily Basis of Reason and Imagination. University of Chicago Press, Chicago, IL.
Mark A. Jones and Kathleen F. McCoy (1992). Transparently-Motivated Metaphor Generation. In R. Dale, E. Hovy, D. Rosner and O. Stock, eds., Aspects of Automated Natural Language Generation: The 6th International Workshop on Natural Language Generation Proceedings, Trento, Italy. 231-246.
George Lakoff (1987). Women, Fire and Dangerous Things: What Categories Reveal About the Mind. University of Chicago Press, Chicago, IL.
George Lakoff and Mark Johnson (1980). Metaphors We Live By. University of Chicago Press, Chicago, IL.
Kathleen F. McCoy (1989). Generating context sensitive responses to object-related misconceptions. Artificial Intelligence, 41:157-195.
SEXTANT: EXPLORING UNEXPLORED CONTEXTS FOR SEMANTIC EXTRACTION FROM SYNTACTIC ANALYSIS

Gregory Grefenstette
Computer Science Department, University of Pittsburgh, Pittsburgh, PA 15260
[email protected]

Abstract

For a very long time, it has been considered that the only way of automatically extracting similar groups of words from a text collection for which no semantic information exists is to use document co-occurrence data. But, with robust syntactic parsers that are becoming more frequently available, syntactically recognizable phenomena about word usage can be confidently noted in large collections of texts. We present here a new system called SEXTANT which uses these parsers and the finer-grained contexts they produce to judge word similarity.

BACKGROUND

Many machine-based approaches to term similarity, such as found in TRUMP (Jacobs and Zernick 1988) and FERRET (Mauldin 1991), can be characterized as knowledge-rich in that they presuppose that known lexical items possess Conceptual Dependence (CD)-like descriptions. Such an approach necessitates a great amount of manual encoding of semantic information and suffers from the drawbacks of cost (in terms of initial coding, coherence checking, maintenance after modifications, and costs derivable from a host of other software engineering concerns); of domain dependence (a semantic structure developed for one domain would not be applicable to another; for example, sugar would have very different semantic relations in a medical domain than in a commodities exchange domain); and of rigidity (even within a well-established domain, new subdomains spring up, e.g. AIDS; can hand-coded systems keep up with new discoveries and new relations with an acceptable latency?). In the Information Retrieval community,
researchers have consistently considered that "the linguistic apparatus required for effective domain-independent analysis is not yet at hand," and have concentrated on counting document co-occurrence statistics (Peat and Willet 1991), based on the idea that words appearing in the same document must share some semantic similarity. But document co-occurrence suffers from two problems: granularity (every word in the document is considered potentially related to every other word, no matter what the distance between them) and co-occurrence (for two words to be seen as similar they must physically appear in the same document; as an illustration, consider the words tumor and tumour: these words certainly share the same contexts, but would never appear in the same document). In general different words used to describe similar concepts might not be used in the same document, and are missed by these methods.

Recently, a middle ground between these two approaches has begun to be broken. Researchers such as (Evans et al. 1991) and (Church and Hanks 1990) have applied robust grammars and statistical techniques over large corpora to extract interesting noun phrases and subject-verb, verb-object pairs. (Hearst 1992) has shown that certain lexical-syntactic templates can reliably extract hyponym relations from text. (Ruge 1991) shows that modifier-head relations in noun phrases extracted from a large corpus provide a useful context for extracting similar words. The common thread of all these techniques is that they require no hand-coded domain knowledge, but they examine more cleanly defined contexts than simple document co-occurrence methods.
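The word-association statistic of (Church and Hanks 1990) mentioned above can be sketched as pointwise mutual information over pair counts. The verb/object counts below are invented for illustration and do not come from any of the corpora discussed:

```python
import math
from collections import Counter

# Toy verb/object pair counts (invented for illustration).
pairs = Counter({("drink", "beer"): 8, ("drink", "wine"): 6,
                 ("drink", "book"): 1, ("read", "book"): 9})
total = sum(pairs.values())

def pmi(v, n):
    """Mutual information I(v;n) = log2 [ Pr(v,n) / (Pr(v) Pr(n)) ]."""
    p_vn = pairs[(v, n)] / total
    p_v = sum(c for (v2, _), c in pairs.items() if v2 == v) / total
    p_n = sum(c for (_, n2), c in pairs.items() if n2 == n) / total
    return math.log2(p_vn / (p_v * p_n))

print(pmi("drink", "beer"))  # positive: pair occurs more often than chance
print(pmi("drink", "book"))  # negative: pair occurs less often than chance
```

A positive score marks a pair seen together more often than its marginal frequencies predict; the syntactic approaches above differ mainly in how the pairs are harvested, not in the statistic itself.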
Similarly, our SEXTANT[1] uses fine-grained syntactically derived contexts, but derives its measures of similarity from considering not the co-occurrence of two words in the same context, but rather the overlapping of all the contexts associated with words over an entire corpus. Calculation of the amount of shared weighted contexts produces a similarity measure between two words.

[1] Semantic EXtraction from Text via Analyzed Networks of Terms

SEXTANT

SEXTANT can be run on any English text, without any pre-coding of domain knowledge or manual editing of the text. The input text passes through the following steps: (I) Morphological analysis. Each word is morphologically analyzed and looked up in a 100,000 word dictionary to find its possible parts of speech. (II) Grammatical Disambiguation. A stochastic parser assigns one grammatical category to each word in the text. These first two steps use CLARIT programs (Evans et al. 1991). (III) Noun and Verb Phrase Splitting. Each sentence is divided into verb and noun phrases by a simple regular grammar. (IV) Syntagmatic Relation Extraction. A four-pass algorithm attaches modifiers to nouns, noun phrases to noun phrases and verbs to noun phrases (Grefenstette 1992a). (V) Context Isolation. The modifying words attached to each word in the text are isolated for all nouns. Thus the context of each noun is given by all the words with which it is associated throughout the corpus. (VI) Similarity matching. Contexts are compared by using similarity measures developed in the Social Sciences, such as a weighted Jaccard measure.

As an example, consider the following sentence extracted from a medical corpus:

  Cyclophosphamide markedly prolonged induction time and suppressed peak titer irrespective of the time of antigen administration.

Each word is looked up in an online dictionary.
After grammatical ambiguities are removed by the stochastic parser, the phrase is divided into noun phrases (NP) and verb phrases (VP), giving:

  NP cyclophosphamide (sn) -- markedly (adv)
  VP prolong (vt-past)
  NP induction (sn) time (sn) -- and (cnj)
  VP suppress (vt-past)
  NP peak (sn) titer (sn) irrespective-of (prep) the (d) time (sn) of (prep) antigen (sn) administration (sn)

Once each sentence in the text is divided into phrases, intra- and inter-phrase structural relations are extracted. First noun phrases are scanned from left to right (NPLR), hooking up articles, adjectives and modifier nouns to their head nouns. Then, noun phrases are scanned right to left (NPRL), connecting nouns over prepositions. Then, starting from verb phrases, phrases are scanned before the verb phrase for an unconnected head which becomes the subject (VPRL), and likewise to the right of the verb for objects (VPLR), producing for the example:

  VPRL cyclophosphamide , prolong < SUBJ
  NPRL time , induction < NN
  VPLR prolong , time < DOBJ
  VPRL cyclophosphamide , suppress < SUBJ
  NPRL titer , peak < NN
  VPLR suppress , titer < DOBJ
  NPLR titer , time < NNPREP
  NPRL administration , antigen < NN

Next SEXTANT extracts a user-specified set of relations that are considered as each word's context for similarity calculations. For example, one set of relations extracted by SEXTANT for the above sentence can be:

  cyclophosphamide  prolong-SUBJ
  time              induction
  time              prolong-DOBJ
  cyclophosphamide  suppress-SUBJ
  titer             peak
  titer             suppress-DOBJ
  titer             time
  administration    antigen
  time              administration

In this example, the word time is found modified by the words induction, prolong-DOBJ and administration, while administration is only considered by this set of relations to be modified by antigen. Over the whole corpus of 160,000 words, one can consider what modifies administration.
Isolating these modifiers gives a list such as:

  administration  androgen
  administration  antigen
  administration  aortic
  administration  examine
  administration  associate-DOBJ
  administration  associate-SUBJ
  administration  azathioprine
  administration  carbon-dioxide
  administration  case
  administration  cause-SUBJ
  ...

At this point SEXTANT compares all the other words in the corpus, using a user-specified similarity measure such as the Jaccard measure, to find which words are most similar to which others. For example, the words found as most similar to administration in this medical corpus were the following words, in order of most to least similar:

  administration: injection, treatment, therapy, infusion, dose, response, ...

As can be seen, the sense of administration as in the "administration of drugs and medicines" is clearly extracted here, since administration in this corpus is most similarly used as other words such as injection and therapy having to do with dispensing drugs and medicines. One of the interesting aspects of this approach, contrary to the coarse-grained document co-occurrence approach, is that administration and injection need never appear in the same document for them to be recognized as semantically similar. In the case of this corpus, administration and injection were considered similar because they shared the following modifiers: acid, follow-DOBJ, growth, prior, produce-IOBJ, dose, extract, increase-SUBJ, intravenous, treat-IOBJ, associate-SUBJ, associate-DOBJ, rapid, cause-SUBJ, antigen, adrenalectomy, aortic, hormone, subside-IOBJ, alter-IOBJ, folic-acid and folate. It is hard to select any one word which would indicate that these two words were similar, but the fact that they do share so many words, and more so than other words, indicates that these words share close semantic characteristics in this corpus.
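The comparison in step (VI) can be sketched with a plain, unweighted Jaccard measure over modifier sets. The sets below are toy abbreviations of the lists above, and the weighting scheme of the actual system (described in Grefenstette 1992a) is omitted:

```python
def jaccard(ctx_a, ctx_b):
    """Unweighted Jaccard similarity: |shared contexts| / |all contexts|."""
    a, b = set(ctx_a), set(ctx_b)
    if not a | b:
        return 0.0
    return len(a & b) / len(a | b)

# Toy modifier contexts, abbreviated from the corpus-derived lists above.
administration = {"antigen", "dose", "intravenous", "cause-SUBJ", "hormone"}
injection = {"dose", "intravenous", "cause-SUBJ", "hormone", "rapid"}
treatment = {"dose", "case", "examine"}

print(jaccard(administration, injection))  # 4 shared modifiers over 6 total
print(jaccard(administration, treatment))  # 1 shared modifier over 7 total
```

No single shared modifier drives the score; it is the proportion of overlap across all contexts that ranks injection above treatment here, mirroring the argument in the text.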
When the same procedure is run over a corpus of library science abstracts, administration is recognized as closest to:

  administration: graduate, office, campus, education, director, ...

Similarly circulation was found to be closest to flow in the medical corpus and to date in the library corpus. Cause was found to be closest to etiology in the medical corpus and to determinant in the library corpus. Frequently occurring words, possessing enough context, are generally ranked by SEXTANT with words intuitively related within the defining corpus.

DISCUSSION

While finding similar words in a corpus without any domain knowledge is interesting in itself, such a tool is practically useful in a number of areas. A lexicographer building a domain-specific dictionary would find such a tool invaluable, given a large corpus of representative text for that domain. Similarly, a Knowledge Engineer creating a natural language interface to an expert system could use this system to cull similar terminology in a field. We have shown elsewhere (Grefenstette 1992b), in an Information Retrieval setting, that expanding queries using the closest terms to query terms derived by SEXTANT can improve recall and precision. We find that one of the most interesting results from a linguistic point of view is the possibility of automatically creating corpus-defined thesauri, as can be seen above in the differences between relations extracted from medical and from information science corpora. In conclusion, we feel that this fine-grained approach to context extraction from large corpora, and similarity calculation employing those contexts, even using imperfect syntactic analysis tools, shows much promise for the future.

References

(Church and Hanks 1990) K.W. Church and P. Hanks. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1), Mar 90.
(Evans et al. 1991) D.A. Evans, S.K. Henderson, R.G. Lefferts, and I.A. Monarch.
A summary of the CLARIT project. TR CMU-LCL-91-2, Carnegie-Mellon, Nov 91.
(Grefenstette 1992a) G. Grefenstette. SEXTANT: Extracting semantics from raw text, implementation details. TR CS92-05, University of Pittsburgh, Feb 92.
(Grefenstette 1992b) G. Grefenstette. Use of syntactic context to produce term association lists for text retrieval. SIGIR'92, Copenhagen, June 21-24 1992. ACM.
(Hearst 1992) M.A. Hearst. Automatic acquisition of hyponyms from large text corpora. COLING'92, Nantes, France, July 92.
(Jacobs and Zernick 1988) P. S. Jacobs and U. Zernick. Acquiring lexical knowledge from text: A case study. In Proceedings Seventh National Conference on Artificial Intelligence, 739-744, Morgan Kaufmann.
(Mauldin 1991) M. L. Mauldin. Conceptual Information Retrieval: A case study in adaptive parsing. Kluwer, Norwell, 91.
(Peat and Willet 1991) H.J. Peat and P. Willet. The limitations of term co-occurrence data for query expansion in document retrieval systems. JASIS, 42(5), 1991.
(Ruge 1991) G. Ruge. Experiments on linguistically based term associations. In RIAO'91, 528-545, Barcelona, Apr 91. CID, Paris.
A CLASS-BASED APPROACH TO LEXICAL DISCOVERY

Philip Resnik*
Department of Computer and Information Science, University of Pennsylvania
Philadelphia, Pennsylvania 19104, USA
Internet: [email protected]

1 Introduction

In this paper I propose a generalization of lexical association techniques that is intended to facilitate statistical discovery of facts involving word classes rather than individual words. Although defining association measures over classes (as sets of words) is straightforward in theory, making direct use of such a definition is impractical because there are simply too many classes to consider. Rather than considering all possible classes, I propose constraining the set of possible word classes by using a broad-coverage lexical/conceptual hierarchy [Miller, 1990].

2 Word/Word Relationships

Mutual information is an information-theoretic measure of association frequently used with natural language data to gauge the "relatedness" between two words x and y. It is defined as follows:

  I(x;y) = log [ Pr(x,y) / (Pr(x) Pr(y)) ]    (1)

As an example of its use, consider Hindle's [1990] application of mutual information to the discovery of predicate argument relations. Hindle investigates word co-occurrences as mediated by syntactic structure. A six-million-word sample of Associated Press news stories was parsed in order to construct a collection of subject/verb/object instances. On the basis of these data, Hindle calculates a co-occurrence score (an estimate of mutual information) for verb/object pairs and verb/subject pairs. Table 1 shows some of the verb/object pairs for the verb drink that occurred more than once, ranked by co-occurrence score, "in effect giving the answer to the question 'what can you drink?'" [Hindle, 1990], p. 270.

Word/word relationships have proven useful, but are not appropriate for all applications.
For example, the selectional preferences of a verb constitute a relationship between a verb and a class of nouns rather than an individual noun.

* This work was supported by the following grants: ARO DAAL 03-89-C-0031, DARPA N00014-90-J-1863, NSF IRI 90-16592, Ben Franklin 91S.3078C-1. I am indebted to Eric Brill, Henry Gleitman, Lila Gleitman, Aravind Joshi, Christine Nakatani, and Michael Niv for helpful discussions, and to George Miller and colleagues for making WordNet available.

  Co-occurrence score   verb    object
  11.75                 drink   tea
  11.75                 drink   Pepsi
  11.75                 drink   champagne
  10.53                 drink   liquid
  10.20                 drink   beer
  9.34                  drink   wine

Table 1: High-scoring verb/object pairs for drink (part of Hindle 1990, Table 2).

3 Word/Class Relationships

3.1 A Measure of Association

In this section, I propose a method for discovering class-based relationships in text corpora on the basis of mutual information, using for illustration the problem of finding "prototypical" object classes for verbs. Let V = {v1, v2, ..., vl} and N = {n1, n2, ..., nm} be the sets of verbs and nouns in a vocabulary, and C = {c | c ⊆ N} the set of noun classes; that is, the power set of N. Since the relationship being investigated holds between verbs and classes of their objects, the elementary events of interest are members of V x C. The joint probability of a verb and a class is estimated from verb/object pair counts as

  Pr(v,c) ≈ Σ_{n ∈ c} count(v,n) / Σ_{v' ∈ V} Σ_{n' ∈ N} count(v',n')    (2)

Given v ∈ V, c ∈ C, define the association score

  A(v,c) ≡ Pr(c|v) log [ Pr(v,c) / (Pr(v) Pr(c)) ]    (3)
         = Pr(c|v) I(v;c)    (4)

The association score takes the mutual information between the verb and a class, and scales it according to the likelihood that a member of that class will actually appear as the object of the verb.[1]

3.2 Coherent Classes

A search among a verb's object nouns requires at most |N| computations of the association score, and can thus be done exhaustively. An exhaustive search among object classes is impractical, however, since the number of classes is exponential.
Clearly some way to constrain the search is needed. I propose restricting the search by imposing a requirement of coherence upon the classes to be considered. For example, among possible classes of objects for open, the class {closet, locker, store} is more coherent than {closet, locker, discourse} on intuitive grounds: every noun in the former class describes a repository of some kind, whereas the latter class has no such obvious interpretation.

The WordNet lexical database [Miller, 1990] provides one way to structure the space of noun classes, in order to make the search computationally feasible. WordNet is a lexical/conceptual database constructed on psycholinguistic principles by George Miller and colleagues at Princeton University. Although I cannot judge how well WordNet fares with regard to its psycholinguistic aims, its noun taxonomy appears to have many of the qualities needed if it is to provide basic taxonomic knowledge for the purpose of corpus-based research in English, including broad coverage and multiple word senses.

Given the WordNet noun hierarchy, the definition of "coherent class" adopted here is straightforward. Let words(w) be the set of nouns associated with a WordNet class w.[2]

Definition. A noun class c ∈ C is coherent iff there is a WordNet class w such that words(w) ∩ N = c.

  A(v,c)   verb    object class
  3.58     drink   (beverage, [beverage, ...])
  2.05     drink   (intoxicant, [alcohol, ...])

Table 2: Object classes for drink

4 Preliminary Results

An experiment was performed in order to discover the "prototypical" object classes for a set of 115 common English verbs. The counts of equation (2) were calculated by collecting a sample of verb/object pairs from the Brown corpus.[4] Direct objects were identified using a set of heuristics to extract only the surface object of the verb. Verb inflections were mapped down to the base form and plural nouns mapped down to singular.
For example, the sentence John ate two shiny red apples would yield the pair (eat, apple). The sentence These are the apples that John ate would not provide a pair for eat, since apple does not appear as its surface object.

Given each verb, v, the "prototypical" object class was found by conducting a best-first search upwards in the WordNet noun hierarchy, starting with WordNet classes containing members that appeared as objects of the verb. Each WordNet class w considered was evaluated by calculating A(v, {n ∈ N | n ∈ words(w)}). Classes having too low a count (fewer than five occurrences with the verb) were excluded from consideration.

The results of this experiment are encouraging. Table 2 shows the object classes discovered for the verb drink (compare to Table 1), and Table 3 the highest-scoring object classes for several other verbs. Recall from the definition in Section 3.2 that each WordNet class w in the tables appears as an abbreviation for {n ∈ N | n ∈ words(w)}; for example, (intoxicant, [alcohol, ...]) appears as an abbreviation for {whisky, cognac, wine, beer}. As a consequence of this definition, noun classes that are "too small" or "too large" to be coherent are excluded, and the problem of search through an exponentially large space of classes is reduced to search within the WordNet hierarchy.[3]

[1] Scaling mutual information in this fashion is often done; see, e.g., [Rosenfeld and Huang, 1992].
[2] Strictly speaking, WordNet as described by [Miller, 1990] does not have classes, but rather lexical groupings called synonym sets. By "WordNet class" I mean a pair (word, synonym-set).
[3] A related possibility being investigated independently by Paul Kogut (personal communication) is to assign to each noun and verb a vector of feature/value pairs based upon the word's classification in the WordNet hierarchy, and to classify nouns on the basis of their feature-value correspondences.
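Equations (2)-(4) can be sketched directly over a toy table of verb/object counts. The counts and the two candidate classes below are invented for illustration; they are not the paper's Brown corpus data:

```python
import math
from collections import Counter

# Toy verb/object counts count(v, n); values invented for illustration.
counts = Counter({
    ("drink", "tea"): 4, ("drink", "beer"): 3, ("drink", "wine"): 3,
    ("eat", "apple"): 5, ("eat", "bread"): 4,
})
total = sum(counts.values())  # denominator of equation (2)

def pr_vc(v, c):
    """Equation (2): estimated joint probability of verb v and noun class c."""
    return sum(counts[(v, n)] for n in c) / total

def pr_v(v):
    return sum(k for (v2, _), k in counts.items() if v2 == v) / total

def pr_c(c):
    return sum(k for (_, n), k in counts.items() if n in c) / total

def assoc(v, c):
    """Equations (3)-(4): A(v,c) = Pr(c|v) * I(v;c)."""
    joint = pr_vc(v, c)
    if joint == 0.0:
        return 0.0  # class never appears as an object of v
    return (joint / pr_v(v)) * math.log2(joint / (pr_v(v) * pr_c(c)))

beverage = {"tea", "beer", "wine"}
food = {"apple", "bread"}
print(assoc("drink", beverage), assoc("drink", food))
```

In the full method, candidate classes c are not enumerated freely as here but are restricted to the coherent classes induced by the WordNet hierarchy, and the best-first search evaluates assoc at each ancestor.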
5 Acquisition of Verb Properties

More work is needed to improve the performance of the technique proposed here. At the same time, the ability to approximate a lexical/conceptual classification of nouns opens up a number of possible applications in lexical acquisition. What such applications have in common is the use of lexical associations as a window into semantic relationships. The technique described in this paper provides a new, hierarchical source of semantic knowledge for statistical applications. This section briefly discusses one area where this kind of knowledge might be exploited.

  A(v,c)   verb    object class
  1.94     ask     (question, [question, ...])
  0.16     call    (someone, [person, ...])
  2.39     climb   (stair, [step, ...])
  3.64     cook    (repast, [repast, ...])
  0.27     draw    (cord, [cord, ...])
  3.58     drink   (beverage, [beverage, ...])
  1.76     eat     (nutrient, [food, ...])
  0.30     lose    (sensory-faculty, [sense, ...])
  1.28     play    (part, [character, ...])
  2.48     pour    (liquid, [liquid, ...])
  1.03     pull    (cover, [covering, ...])
  1.23     push    (button, [button, ...])
  1.18     read    (written-material, [writing, ...])
  2.69     sing    (music, [music, ...])

Table 3: Some "prototypical" object classes

Diathesis alternations are variations in the way that a verb syntactically expresses its arguments [Levin, 1989]. For example, 1(a,b) shows an instance of the indefinite object alternation, and 2(a,b) shows an instance of the causative/inchoative alternation.

  1 a. John ate lunch.
    b. John ate.
  2 a. John opened the door.
    b. The door opened.

[4] The version of the Brown corpus used was the tagged corpus found as part of the Penn Treebank.
[5] Nouns outside the scope of WordNet that were tagged as proper names were mapped to the token pname, a subclass of classes (someone, [person]) and (location, [location]).
Such phenomena are of particular interest in the study of how children learn the semantic and syntactic properties of verbs, because they stand at the border of syntax and lexical semantics. There are numerous possible explanations for why verbs fall into particular classes of alternations, ranging from shared semantic properties of verbs within a class, to pragmatic factors, to "lexical idiosyncrasy."

Statistical techniques like the one described in this paper may be useful in investigating relationships between verbs and their arguments, with the goal of contributing data to the study of diathesis alternations, and, ideally, in constructing a computational model of verb acquisition. For example, in the experiment described in Section 4, the verbs participating in "implicit object" alternations[6] appear to have higher association scores with their "prototypical" object classes than verbs for which implicit objects are disallowed. Preliminary results, in fact, show a statistically significant difference between the two groups. Might such shared information-theoretic properties of verbs play a role in their acquisition, in the same way that shared semantic properties might?

On a related topic, Grimshaw has recently suggested that the syntactic bootstrapping hypothesis for verb acquisition [Gleitman, 1991] be extended in such a way that alternations such as the causative/inchoative alternation (e.g. 2(a,b)) are learned using class information about the observed subjects and objects of the verb, in addition to subcategorization information.[7] I hope to extend the work on verb/object associations described here to other arguments of the verb in order to explore this suggestion.

[6] The indefinite object alternation [Levin, 1989] and the specified object alternation [Cote, 1992].
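The group comparison reported above can be sketched as a two-sample test over per-verb association scores. The scores below are hypothetical stand-ins, not the experiment's actual values, and the paper does not specify which test was used; Welch's t statistic is one reasonable choice:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples of scores."""
    va, vb = variance(a), variance(b)  # sample variances
    return (mean(a) - mean(b)) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical A(v, best-class) scores for verbs that do / do not
# allow implicit objects (illustrative values only).
implicit_ok = [3.58, 3.64, 2.69, 2.48, 2.39]
implicit_disallowed = [0.16, 0.27, 0.30, 1.03, 1.18]
print(welch_t(implicit_ok, implicit_disallowed))
```

A large positive statistic would support the claim that verbs allowing implicit objects associate more strongly with their prototypical object classes; a proper analysis would of course use the real scores for all 115 verbs.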
6 Conclusions

The technique proposed here provides a way to study statistical associations beyond the level of individual words, using a broad-coverage lexical/conceptual hierarchy to structure the space of possible noun classes. Preliminary results, on the task of discovering "prototypical" object classes for a set of common English verbs, appear encouraging, and applications in the study of verb argument structure are apparent. In addition, assuming that the WordNet hierarchy (or some similar knowledge base) proves appropriately broad and consistent, the approach proposed here may provide a model for importing basic taxonomic knowledge into other corpus-based investigations, ranging from computational lexicography to statistical language modelling.

References

[Cote, 1992] Sharon Cote. Discourse functions of two types of null objects in English. Presented at the 66th Annual Meeting of the Linguistic Society of America, Philadelphia, PA, January 1992.

[Gleitman, 1991] Lila Gleitman. The structural sources of verb meanings. Language Acquisition, 1, 1991.

[Hindle, 1990] Donald Hindle. Noun classification from predicate-argument structures. In Proceedings of the 28th Annual Meeting of the ACL, 1990.

[Levin, 1989] Beth Levin. Towards a lexical organization of English verbs. Technical report, Dept. of Linguistics, Northwestern University, November 1989.

[Miller, 1990] George Miller. WordNet: an on-line lexical database. International Journal of Lexicography, 4(3), 1990. (Special Issue).

[Rosenfeld and Huang, 1992] Ronald Rosenfeld and Xuedong Huang. Improvements in stochastic language modelling. In Mitch Marcus, editor, Fifth DARPA Workshop on Speech and Natural Language, February 1992. Arden House Conference Center, Harriman, NY.

7 Jane Grimshaw, keynote address, Lexicon Acquisition Workshop, University of Pennsylvania, January 1992.
Sense-Linking in a Machine Readable Dictionary

Robert Krovetz
Department of Computer Science
University of Massachusetts, Amherst, MA 01003

Abstract

Dictionaries contain a rich set of relationships between their senses, but often these relationships are only implicit. We report on our experiments to automatically identify links between the senses in a machine-readable dictionary. In particular, we automatically identify instances of zero-affix morphology, and use that information to find specific linkages between senses. This work has provided insight into the performance of a stochastic tagger.

1 Introduction

Machine-readable dictionaries contain a rich set of relationships between their senses, and indicate them in a variety of ways. Sometimes the relationship is provided explicitly, such as with a synonym or antonym reference. More commonly the relationship is only implicit, and needs to be uncovered through outside mechanisms. This paper describes our efforts at identifying these links.

The purpose of the research is to obtain a better understanding of the relationships between word meanings, and to provide data for our work on word-sense disambiguation and information retrieval. Our hypothesis is that retrieving documents on the basis of word senses (instead of words) will result in better performance. Our approach is to treat the information associated with dictionary senses (part of speech, subcategorization, subject area codes, etc.) as multiple sources of evidence (cf. Krovetz [3]). This process is fundamentally a divisive one, and each of the sources of evidence has exceptions (i.e., instances in which senses are related in spite of being separated by part of speech, subcategorization, or morphology).
Identifying related senses will help us to test the hypothesis that unrelated meanings will be more effective at separating relevant from nonrelevant documents than meanings which are related.

We will first discuss some of the explicit indications of sense relationships as found in usage notes and deictic references. We will then describe our efforts at uncovering the implicit relationships via stochastic tagging and word collocation.

2 Explicit Sense Links

The dictionary we are using in our research, the Longman Dictionary of Contemporary English (LDOCE), is a dictionary for learners of English as a second language. As such, it provides a great deal of information about word meanings in the form of example sentences, usage notes, and grammar codes. The Longman dictionary is also unique among learner's dictionaries in that its definitions are generally written using a controlled vocabulary of approximately 2200 words. When exceptions occur they are indicated by means of a different font. For example, consider the definition of the word gravity:

• gravity n 1b. worrying importance: He doesn't understand the gravity of his illness - see GRAVE 2
• grave adj 2. important and needing attention and (often) worrying: This is grave news -- The sick man's condition is grave

These definitions serve to illustrate how words can be synonymous 1 even though they have different parts of speech. They also indicate how the Longman dictionary not only indicates that a word is a synonym, but sometimes specifies the sense of that word (indicated in this example by the superscript following the word 'GRAVE'). This is extremely important because synonymy is not a relation that holds between words, but between the senses of words. Unfortunately these explicit sense indications are not always consistently provided. For example, the definition of 'marbled' provides an explicit indication of the appropriate sense of 'marble' (the stone instead of the child's toy), but this is not done within the definition of 'marbles'.
LDOCE also provides explicit indications of sense relationships via usage notes. For example, the definition for argument mentions that it derives from both senses of argue - to quarrel (to have an argument), and to reason (to present an argument). The notes also provide advice regarding similar looking variants (e.g., the difference between distinct and distinctive, or the fact that an attendant is not someone who attends a play, concert, or religious service). Usage notes can also specify information that is shared among some word meanings, but not others (e.g., the note for venture mentions that both verb and noun carry a connotation of risk, but this isn't necessarily true for adventure).

Finally, LDOCE provides explicit connections between senses via deictic reference (links created by 'this', 'these', 'that', 'those', 'its', 'itself', and 'such a/an'). That is, some of the senses use these words to refer to a previous sense (e.g., 'the fruit of this tree', or 'a plant bearing these seeds'). These relationships are important because they allow us to get a better understanding of the nature of polysemy (related word meanings). Most of the literature on polysemy only provides anecdotal examples; it usually does not provide information about how to determine whether word meanings are related, what kind of relationships there are, or how frequently they occur. The grouping of senses in a dictionary is generally based on part of speech and etymology, but part of speech is orthogonal to a semantic relationship (cf. Krovetz [3]), and word senses can be related etymologically, but be perceived as distinct at the present time (e.g., the 'cardinal' of a church and 'cardinal' numbers are etymologically related).

1 We take two words to be synonymous if they have the same or closely related meanings.
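Definitions containing such back-references can be located mechanically. The marker list below follows the text; the matching code itself is only an illustrative sketch, not the procedure actually used in this work.

```python
import re

# Deictic markers that create explicit links between senses in LDOCE.
DEICTIC = re.compile(r"\b(this|these|that|those|itself|its|such an?)\b",
                     re.IGNORECASE)

def has_deictic_reference(definition):
    """True if a definition appears to refer back to an earlier sense."""
    return DEICTIC.search(definition) is not None

print(has_deictic_reference("the fruit of this tree"))   # True
print(has_deictic_reference("a large evergreen tree"))   # False
```

A real extractor would also need to exclude non-anaphoric uses of these words (e.g., 'that' as a complementizer), which is one reason the matching here is only a first approximation.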
By examining deictic reference we gain a better understanding of senses that are truly related, and it also helps us to understand how language can be used creatively (i.e., how senses can be productively extended). Deictic references are also important in the design of an algorithm for word-sense disambiguation (e.g., exceptions to subcategorization).

The primary relations we have identified so far are: substance/product (tree:fruit or wood, plant:flower or seeds), substance/color (jade, amber, rust), object/shape (pyramid, globe, lozenge), animal/food (chicken, lamb, tuna), count-noun/mass-noun, 2 language/people (English, Spanish, Dutch), animal/skin or fur (crocodile, beaver, rabbit), and music/dance (waltz, conga, tango). 3

3 Zero-Affix Morphology

Deictic reference provides us with different types of relationships within the same part of speech. We can also get related senses that differ in part of speech, and these are referred to as instances of zero-affix morphology or functional shift. The Longman dictionary explicitly indicates some of these relationships by homographs that have more than one part of speech. It usually provides an indication of the relationship by a leading parenthesized expression. For example, the word bay is defined as N,ADJ, and the definition reads '(a horse whose color is) reddish-brown'. However, out of the 41122 homographs defined, there are only 695 that have more than one part of speech. Another way in which LDOCE provides these links is by an explicit sense reference for a word outside the controlled vocabulary; the definition of anchor (v) reads: 'to lower an anchor 1 (1) to keep (a ship) from moving'. This indicates a reference to sense 1 of the first homograph.

Zero-affix morphology is also present implicitly, and we conducted an experiment to try to identify instances of it using a probabilistic tagger [2]. The hypothesis is that if the word that's being defined (the definiendum) occurs within the text of its own definition, but occurs with a different part of speech, then it will be an instance of zero-affix morphology. The question is: How do we tell whether or not we have an instance of zero-affix morphology when there is no explicit indication of a suffix? Part of the answer is to rely on subjective judgment, but we can also support these judgments by making an analogy with derivational morphology. For example, the word wad is defined as 'to make a wad of'. That is, the noun bears the semantic relation of formation to the verb that defines it. This is similar to the effect that the morpheme -ize has on the noun union in order to make the verb unionize (cf. Marchand [5]).

The experiment not only gives us insight into semantic relatedness across part of speech, it also enabled us to determine the effectiveness of tagging. We initially examined the results of the tagger on all words starting with the letter 'W'; this letter was chosen because it provided a sufficient number of words for examination, but wasn't so small as to be trivial. There were a total of 1141 words that were processed, which amounted to 1309 homographs and 2471 word senses; of these senses, 209 were identified by the tagger as containing the definiendum with a different part of speech. We analyzed these instances and the result was that only 51 of the 209 instances were found to be correct (i.e., actual zero-morphs).

2 These may or may not be related; consider 'computer vision' vs. 'visions of computers'. The related senses are usually indicated by the defining formula: 'an example of this'.
3 The related senses are sometimes merged into one; for example, the definition of foxtrot is '(a piece of music for) a type of formal dance...'
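The tagging-based test just described can be sketched as follows. The part-of-speech tags and the representation of a tagged definition are illustrative only; the experiment itself used the output of Church's stochastic tagger rather than hand-supplied tags.

```python
# Sketch of the zero-affix test: flag a sense as a candidate zero-morph
# when the headword (definiendum) reappears in its own definition text
# with a different part of speech.

def zero_morph_candidate(headword, headword_pos, tagged_definition):
    """tagged_definition: list of (word, pos) pairs for the definition text."""
    return any(word == headword and pos != headword_pos
               for word, pos in tagged_definition)

# The verb 'wad' is defined as 'to make a wad of': the noun 'wad' appears
# inside the verb sense's definition, so the sense is a zero-morph candidate.
definition = [("to", "TO"), ("make", "VB"), ("a", "DT"),
              ("wad", "NN"), ("of", "IN")]
print(zero_morph_candidate("wad", "VB", definition))  # True
```

As the results above show, the weak point of this test in practice is the tagger itself: a mistagged occurrence of the definiendum produces a false positive.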
The instances that are indicated as correct are currently based on our subjective judgment; we are in the process of examining them to identify the type of semantic relation and any analog to a derivational suffix. The instances that were not found to be correct (78 percent of the total) were due to incorrect tagging; that is, we had a large number of false positives because the tagger did not correctly identify the part of speech. We were surprised that the number of incorrect tags was so high given the performance figures cited in the literature (more than a 90 percent accuracy rate). However, the figures reported in the literature were based on word tokens, and 60 percent of all word tokens have only one part of speech to begin with. We feel that the performance figures should be supplemented with the tagger's performance on word types as well. Most word types are rare, and the stochastic methods do not perform as well on them because they do not have sufficient information. Church has plans for improving the smoothing algorithms used in his tagger, and this would help on these low frequency words. In addition, we conducted a failure analysis and it indicated that 91% of the errors occurred in idiomatic expressions (45 instances) or example sentences (98 instances). We therefore eliminated these from further processing and tagged the rest of the dictionary. We are still in the process of analyzing these results.

4 Derivational Morphology

Word collocation is one method that has been proposed as a means for identifying word meanings. The basic idea is to take two words in context, and find the definitions that have the most words in common. This strategy was tried by Lesk using the Oxford Advanced Learner's Dictionary [4]. For example, the word 'pine' can have two senses: a tree, or sadness (as in 'pine away'), and the word 'cone' may be a geometric structure, or a fruit of a tree.
Lesk's program computes the overlap between the senses of 'pine' and 'cone', and finds that the senses meaning 'tree' and 'fruit of a tree' have the most words in common. Lesk gives a success rate of fifty to seventy percent in disambiguating the words over a small collection of text. Later work by Becker on the New OED indicated that Lesk's algorithm did not perform as well as expected [1].

The difficulty with the word overlap approach is that a wide range of vocabulary can be used in defining a word's meaning. It is possible that we will be more likely to have an overlap in a dictionary with a restricted defining vocabulary. When the senses to be matched are further restricted to be morphological variants, the approach seems to work very well. For example, consider the definitions of the words 'appreciate' and 'appreciation':

• appreciate 1. to be thankful or grateful for 2. to understand and enjoy the good qualities of 3. to understand fully 4. to understand the high worth of 5. (of property, possessions, etc.) to increase in value

• appreciation 1. judgment, as of the quality, worth, or facts of something 2. a written account of the worth of something 3. understanding of the qualities or worth of something 4. grateful feelings 5. rise in value, esp. of land or possessions

The word overlap approach pairs up sense 1 with sense 4 (grateful), sense 2 with sense 3 (understand; qualities), sense 3 with sense 3 (understand), sense 4 with sense 1 (worth), and sense 5 with sense 5 (value; possessions). The matcher we are using ignores closed class words, and makes use of a simple morphological analyzer (for inflectional morphology). It ignores words found in example sentences (preliminary experiments indicated that this didn't help and sometimes made matches worse), and it also ignores typographical codes and usage labels (formal/informal, poetic, literary, etc.).
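A toy version of this overlap matcher, using the abbreviated definitions above, might look as follows. The stopword list and tokenization are crude stand-ins for the closed-class filter and morphological analyzer described in the text.

```python
# Pair a sense of a morphological variant with the root sense sharing the
# most open-class words.  Definitions abbreviated from the LDOCE entries
# quoted in the text.

STOPWORDS = {"to", "be", "or", "for", "and", "the", "of", "a", "as", "in", "esp"}

def content_words(definition):
    """Lowercase, strip punctuation, drop closed-class words."""
    return {w.strip(",.()") for w in definition.lower().split()} - STOPWORDS

def best_match(sense, candidates):
    """Return the 1-based index of the candidate with maximal word overlap."""
    words = content_words(sense)
    return max(range(len(candidates)),
               key=lambda i: len(words & content_words(candidates[i]))) + 1

appreciate_1 = "to be thankful or grateful for"
appreciation = [
    "judgment, as of the quality, worth, or facts of something",
    "a written account of the worth of something",
    "understanding of the qualities or worth of something",
    "grateful feelings",
    "rise in value, esp. of land or possessions",
]
print(best_match(appreciate_1, appreciation))  # 4: 'grateful' is shared
```

On this pair the toy matcher reproduces the first pairing given in the text (appreciate 1 with appreciation 4), since 'grateful' is the only shared open-class word.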
It also doesn't try to make matches between word senses that are idiomatic (these are identified by font codes). We are currently in the process of determining the effectiveness of the approach. The experiment involves comparing the morphological variations for a set of queries used in an information retrieval test collection. We have manually identified all variations of the words in the queries as well as the root forms. Those variants that appear in LDOCE will be compared against all root forms and the result will be examined to see how well the overlap method was able to identify the correct sense of the variant with the correct sense of the root.

5 Conclusion

The purpose of this work is to gain a better understanding of the relationships between word meanings, and to help in development of an algorithm for word sense disambiguation. Our approach is based on treating the information associated with dictionary senses (part of speech, subcategorization, subject area codes, etc.) as multiple sources of evidence (cf. Krovetz [3]). This process is fundamentally a divisive one, and each of the sources of evidence has exceptions (i.e., instances in which senses are related in spite of being separated by part of speech, subcategorization, or morphology). Identifying the relationships we have described will help us to determine these exceptions.

References

[1] Becker B., "Sense Disambiguation using the New Oxford English Dictionary", Masters Thesis, University of Waterloo, 1989.

[2] Church K., "A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text", in Proceedings of the 2nd Conference on Applied Natural Language Processing, pp. 136-143, 1988.

[3] Krovetz R., "Lexical Acquisition and Information Retrieval", in Lexical Acquisition: Building the Lexicon Using On-Line Resources, U. Zernik (ed), pp. 45-64, 1991.
[4] Lesk M., "Automatic Sense Disambiguation Using Machine Readable Dictionaries: How to tell a Pine Cone from an Ice Cream Cone", Proceedings of SIGDOC, pp. 24-26, 1986.

[5] Marchand H., "On a Question of Contrary Analysis with Derivationally Connected but Morphologically Uncharacterized Words", English Studies, 44, pp. 176-187, 1963.
Efficiency, Robustness and Accuracy in Picky Chart Parsing*

David M. Magerman
Stanford University
Stanford, CA 94305
[email protected]

Carl Weir
Paramax Systems
Paoli, PA 19301
[email protected]

ABSTRACT

This paper describes Picky, a probabilistic agenda-based chart parsing algorithm which uses a technique called probabilistic prediction to predict which grammar rules are likely to lead to an acceptable parse of the input. Using a suboptimal search method, Picky significantly reduces the number of edges produced by CKY-like chart parsing algorithms, while maintaining the robustness of pure bottom-up parsers and the accuracy of existing probabilistic parsers. Experiments using Picky demonstrate how probabilistic modelling can impact upon the efficiency, robustness and accuracy of a parser.

1. Introduction

This paper addresses the question: Why should we use probabilistic models in natural language understanding? There are many answers to this question, only a few of which are regularly addressed in the literature.

The first and most common answer concerns ambiguity resolution. A probabilistic model provides a clearly defined preference rule for selecting among grammatical alternatives (i.e. the highest probability interpretation is selected). However, this use of probabilistic models assumes that we already have efficient methods for generating the alternatives in the first place. While we have O(n^3) algorithms for determining the grammaticality of a sentence, parsing, as a component of a natural language understanding tool, involves more than simply determining all of the grammatical interpretations of an input. In order for a natural language system to process input efficiently and robustly, it must process all intelligible sentences, grammatical or not, while not significantly reducing the system's efficiency.

This observation suggests two other answers to the central question of this paper.

* Special thanks to Jerry Hobbs and Bob Moore at SRI for providing access to their computers, and to Salim Roukos, Peter Brown, and Vincent and Steven Della Pietra at IBM for their instructive lessons on probabilistic modelling of natural language.
Probabilistic models offer a convenient scoring method for partial interpretations in a well-formed substring table. High probability constituents in the parser's chart can be used to interpret ungrammatical sentences. Probabilistic models can also be used for efficiency by providing a best-first search heuristic to order the parsing agenda.

This paper proposes an agenda-based probabilistic chart parsing algorithm which is both robust and efficient. The algorithm, Picky, 1 is considered robust because it will potentially generate all constituents produced by a pure bottom-up parser and rank these constituents by likelihood. The efficiency of the algorithm is achieved through a technique called probabilistic prediction, which helps the algorithm avoid worst-case behavior. Probabilistic prediction is a trainable technique for modelling where edges are likely to occur in the chart-parsing process. 2 Once the predicted edges are added to the chart using probabilistic prediction, they are processed in a style similar to agenda-based chart parsing algorithms. By limiting the edges in the chart to those which are predicted by this model, the parser can process a sentence while generating only the most likely constituents given the input.

In this paper, we will present the Picky parsing algorithm, describing both the original features of the parser and those adapted from previous work. Then, we will compare the implementation of Picky with existing probabilistic and non-probabilistic parsers. Finally, we will report the results of experiments exploring how Picky's algorithm copes with the tradeoffs of efficiency, robustness, and accuracy. 3

1 Pearl = probabilistic Earley-style parser (P-Earl). Picky = probabilistic CKY-like parser (P-CKY).
2 Some familiarity with chart parsing terminology is assumed in this paper. For terminological definitions, see [9], [10], [11], or [17].
3 Sections 2 and 3, the descriptions of the probabilistic models used in Picky and the Picky algorithm, are similar in content to the corresponding sections of Magerman and Weir [13]. The experimental results and discussions which follow in sections 4-6 are original.

2. Probabilistic Models in Picky

The probabilistic models used in the implementation of Picky are independent of the algorithm. To facilitate the comparison between the performance of Picky and its predecessor, Pearl, the probabilistic model implemented for Picky is similar to Pearl's scoring model, the context-free grammar with context-sensitive probability (CFG with CSP) model. This probabilistic model estimates the probability of each parse T given the words in the sentence S, P(T|S), by assuming that each non-terminal and its immediate children are dependent on the non-terminal's siblings and parent and on the part-of-speech trigram centered at the beginning of that rule:

P(T|S) ≈ ∏_{A ∈ T} P(A → α | C → βAγ, a0 a1 a2)    (1)

where C is the non-terminal node which immediately dominates A, a1 is the part-of-speech associated with the leftmost word of constituent A, and a0 and a2 are the parts-of-speech of the words to the left and to the right of a1, respectively. See Magerman and Marcus 1991 [12] for a more detailed description of the CFG with CSP model.

3. The Parsing Algorithm

A probabilistic language model, such as the aforementioned CFG with CSP model, provides a metric for evaluating the likelihood of a parse tree. However, while it may suggest a method for evaluating partial parse trees, a language model alone does not dictate the search strategy for determining the most likely analysis of an input.
Since exhaustive search of the space of parse trees produced by a natural language grammar is generally not feasible, a parsing model can best take advantage of a probabilistic language model by incorporating it into a parser which probabilistically models the parsing process. Picky attempts to model the chart parsing process for context-free grammars using probabilistic prediction.

Picky parses sentences in three phases: covered left-corner phase (I), covered bidirectional phase (II), and tree completion phase (III). Each phase uses a different method for proposing edges to be introduced to the parse chart. The first phase, covered left-corner, uses probabilistic prediction based on the left-corner word of the left-most daughter of a constituent to propose edges. The covered bidirectional phase also uses probabilistic prediction, but it allows prediction to occur from the left-corner word of any daughter of a constituent, and parses that constituent outward (bidirectionally) from that daughter. These phases are referred to as "covered" because, during these phases, the parsing mechanism proposes only edges that have non-zero probability according to the prediction model, i.e. that have been covered by the training process. The final phase, tree completion, is essentially an exhaustive search of all interpretations of the input, according to the grammar. However, the search proceeds in best-first order, according to the measures provided by the language model. This phase is used only when the probabilistic prediction model fails to propose the edges necessary to complete a parse of the sentence.

The following sections will present and motivate the prediction techniques used by the algorithm, and will then describe how they are implemented in each phase.

3.1. Probabilistic Prediction

Probabilistic prediction is a general method for using probabilistic information extracted from a parsed corpus to estimate the likelihood that predicting an edge at a certain point in the chart will lead to a correct analysis of the sentence. The Picky algorithm is not dependent on the specific probabilistic prediction model used. The model used in the implementation, which is similar to the probabilistic language model, will be described. 4

The prediction model used in the implementation of Picky estimates the probability that an edge proposed at a point in the chart will lead to a correct parse to be:

P(A → αBβ | a0 a1 a2),    (2)

where a1 is the part-of-speech of the left-corner word of B, a0 is the part-of-speech of the word to the left of a1, and a2 is the part-of-speech of the word to the right of a1.

To illustrate how this model is used, consider the sentence

The cow raced past the barn.    (3)

The word "cow" in the word sequence "the cow raced" predicts NP → det n, but not NP → det n PP, since PP is unlikely to generate a verb, based on training material. 5 Assuming the prediction model is well trained, it will propose the interpretation of "raced" as the beginning of a participial phrase modifying "the cow," as in

The cow raced past the barn mooed.    (4)

However, the interpretation of "raced" as a past participle will receive a low probability estimate relative to the verb interpretation, since the prediction model only considers local context.

4 It is not necessary for the prediction model to be the same as the language model used to evaluate complete analyses. However, it is helpful if this is the case, so that the probability estimates of incomplete edges will be consistent with the probability estimates of completed constituents.
5 Throughout this discussion, we will describe the prediction process using words as the predictors of edges.
In the implementation, due to sparse data concerns, only parts-of-speech are used to predict edges. Given more robust estimation techniques, a probabilistic prediction model conditioned on word sequences is likely to perform as well or better.

The process of probabilistic prediction is analogous to that of a human parser recognizing predictive lexical items or sequences in a sentence and using these hints to restrict the search for the correct analysis of the sentence. For instance, a sentence beginning with a wh-word and auxiliary inversion is very likely to be a question, and trying to interpret it as an assertion is wasteful. If a verb is generally ditransitive, one should look for two objects to that verb instead of one or none. Using probabilistic prediction, sentences whose interpretations are highly predictable based on the trained parsing model can be analyzed with little wasted effort, generating sometimes no more than ten spurious constituents for sentences which contain between 30 and 40 constituents! Also, in some of these cases every predicted rule results in a completed constituent, indicating that the model made no incorrect predictions and was led astray only by genuine ambiguities in parts of the sentence.

3.2. Exhaustive Prediction

When probabilistic prediction fails to generate the edges necessary to complete a parse of the sentence, exhaustive prediction uses the edges which have been generated in earlier phases to predict new edges which might combine with them to produce a complete parse. Exhaustive prediction is a combination of two existing types of prediction, "over-the-top" prediction [11] and top-down filtering.

Over-the-top prediction is applied to complete edges. A completed edge A → α will predict all edges of the form B → βAγ. 6 Top-down filtering is used to predict edges in order to complete incomplete edges.
An edge of the form A → αB0B1B2β, where a B1 has been recognized, will predict edges of the form B0 → γ before B1 and edges of the form B2 → δ after B1.

3.3. Bidirectional Parsing

The only difference between phases I and II is that phase II allows bidirectional parsing. Bidirectional parsing is a technique for initiating the parsing of a constituent from any point in that constituent. Chart parsing algorithms generally process constituents from left-to-right. For instance, given a grammar rule

A → B1 B2 ... Bn    (5)

a parser generally would attempt to recognize a B1, then search for a B2 following it, and so on. Bidirectional parsing recognizes an A by looking for any Bi. Once a Bi has been parsed, a bidirectional parser looks for a Bi-1 to the left of the Bi, a Bi+1 to the right, and so on.

Bidirectional parsing is generally an inefficient technique, since it allows duplicate edges to be introduced into the chart. As an example, consider a context-free rule NP → DET N, and assume that there is a determiner followed by a noun in the sentence being parsed. Using bidirectional parsing, this NP rule can be predicted both by the determiner and by the noun. The edge predicted by the determiner will look to the right for a noun, find one, and introduce a new edge consisting of a completed NP. The edge predicted by the noun will look to the left for a determiner, find one, and also introduce a new edge consisting of a completed NP. Both of these NPs represent identical parse trees, and are thus redundant. If the algorithm permits both edges to be inserted into the chart, then an edge XP → α NP β will be advanced by both NPs, creating two copies of every XP edge.

6 In the implementation of Picky, over-the-top prediction for A → α will only predict edges of the form B → Aγ. This limitation on over-the-top prediction is due to the expensive bookkeeping involved in bidirectional parsing. See the section on bidirectional parsing for more details.
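The chart-level check that keeps the two redundant completed NPs from both entering the chart can be sketched as follows; the tuple representation of an edge is a hypothetical simplification.

```python
# Minimal illustration of duplicate-edge rejection in bidirectional parsing:
# the rule NP -> DET N over the same span can be completed both from the
# determiner and from the noun, yielding the same edge twice.

chart = set()  # completed edges as (label, start, end)

def add_edge(edge):
    """Insert an edge unless an identical one is already in the chart."""
    if edge in chart:
        return False  # redundant copy -- would propagate duplicate XP edges
    chart.add(edge)
    return True

np_from_det = ("NP", 0, 2)   # predicted by the determiner, extended rightward
np_from_noun = ("NP", 0, 2)  # predicted by the noun, extended leftward

print(add_edge(np_from_det))   # True  -- first copy enters the chart
print(add_edge(np_from_noun))  # False -- duplicate is rejected
```

A set membership test like this makes the check constant-time on average, though a real chart would key edges on more than just label and span (e.g. the rule and dot position).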
These duplicate XP edges can themselves be used in other rules, and so on. To avoid this propagation of redundant edges, the parser must ensure that no duplicate edges are introduced into the chart. 79icky does this simply by verifying every time an edge is added that the edge is not already in the chart. Although eliminating redundant edges prevents exces- sive inefficiency, bidirectional parsing may still perform more work than traditional left-to-right parsing. In the previous example, three edges are introduced into the chart to parse the NP -+ DET N edge. A left-to-right parser would only introduce two edges, one when the determiner is recognized, and another when the noun is recognized. The benefit of bidirectional parsing can be seen when probabilistic prediction is introduced into the parser. Freqneatly, the syntactic structure of a constituent is not determined by its left-corner word. For instance, in the sequence V NP PP, the prepositional phrase PP can modify either the noun phrase NP or the entire verb phrase V NP. These two interpretations require different VP rules to be predicted, but the decision about which rule to use depends on more than just the verb. The cor- rect rule may best be predicted by knowing the preposi- tion used in the PP. Using probabilistic prediction, the decision is made by pursuing the rule which has the high- est probability according to the prediction model. This rule is then parsed bidirectionally. If this rule is in fact the correct rule to analyze the constituent, then no other 42 predictions will be made for that constituent, and there will be no more edges produced than in left-to-right pars- ing. Thus, the only case where bidirectional Parsing is less efficient than left-to-right parsing is when the pre- diction model fails to capture the elements of context of the sentence which determine its correct interpretation. 3.4. 
The Three Phases of Picky

Covered Left-Corner. The first phase uses probabilistic prediction based on the part-of-speech sequences from the input sentence to predict all grammar rules which have a non-zero probability of being dominated by that trigram (based on the training corpus), i.e.

P(A → Bβ | a0 a1 a2) > 0    (6)

where a1 is the part-of-speech of the left-corner word of B. In this phase, the only exception to the probabilistic prediction is that any rule which can immediately dominate the preterminal category of any word in the sentence is also predicted, regardless of its probability. This type of prediction is referred to as exhaustive prediction. All of the predicted rules are processed using a standard best-first agenda processing algorithm, where the highest scoring edge in the chart is advanced.

Covered Bidirectional. If an S spanning the entire word string is not recognized by the end of the first phase, the covered bidirectional phase continues the parsing process. Using the chart generated by the first phase, rules are predicted not only by the trigram centered at the left-corner word of the rule, but by the trigram centered at the left-corner word of any of the children of that rule, i.e.

P(A → α B β | b0 b1 b2) > 0    (7)

where b1 is the part-of-speech associated with the leftmost word of constituent B. This phase introduces incomplete theories into the chart which need to be expanded to the left and to the right, as described in the bidirectional parsing section above.

Tree Completion. If the bidirectional processing fails to produce a successful parse, then it is assumed that there is some part of the input sentence which is not covered well by the training material. In the final phase, exhaustive prediction is performed on all complete theories which were introduced in the previous phases but which are not predicted by the trigrams beneath them (i.e. P(rule | trigram) = 0).
In this phase, edges are only predicted by their left-corner word. As mentioned previously, bidirectional parsing can be inefficient when the prediction model is inaccurate. Since all edges to which the prediction model assigns non-zero probability have already been predicted, the model can no longer provide any information for future predictions. Thus, bidirectional parsing in this phase is very likely to be inefficient. Edges already in the chart will be parsed bidirectionally, since they were predicted by the model, but all new edges will be predicted by the left-corner word only.

Since it is already known that the prediction model will assign a zero probability to these rules, these predictions are instead scored based on the number of words spanned by the subtree which predicted them. Thus, this phase processes longer theories by introducing rules which can advance them. Each new theory which is proposed by the parsing process is exhaustively predicted for, using the length-based scoring model.

The final phase is used only when a sentence is so far outside of the scope of the training material that none of the previous phases are able to process it. This phase of the algorithm exhibits the worst-case exponential behavior that is found in chart parsers which do not use node packing. Since the probabilistic model is no longer useful in this phase, the parser is forced to propose an enormous number of theories. The expectation (or hope) is that one of the theories which spans most of the sentence will be completed by this final process. Depending on the size of the grammar used, it may be unfeasible to allow the parser to exhaust all possible predictions before deciding an input is ungrammatical. The question of when the parser should give up is an empirical issue which will not be explored here.
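The trigram-conditioned prediction of equations (6) and (7) amounts to a lookup in a table of training counts: a rule is predicted only if it was observed dominating the trigram during training. A toy sketch follows (the rules, counts, and function names are invented for illustration, not taken from Picky):

```python
# Hypothetical sketch of trigram-conditioned rule prediction:
# a rule is predicted only if P(rule | trigram) > 0 in training counts.

from collections import defaultdict

# counts[trigram][rule] = how often rule dominated trigram in training
counts = defaultdict(lambda: defaultdict(int))

def train(trigram, rule):
    counts[trigram][rule] += 1

def predict(trigram):
    """Rules with P(rule | trigram) > 0, highest probability first."""
    seen = counts.get(trigram, {})
    total = sum(seen.values())
    if not total:
        return []
    return sorted(((r, c / total) for r, c in seen.items()),
                  key=lambda rc: -rc[1])

# Toy training data (invented): after the trigram V DET N,
# NP -> DET N was observed twice, NP -> N once.
train(("V", "DET", "N"), "NP -> DET N")
train(("V", "DET", "N"), "NP -> DET N")
train(("V", "DET", "N"), "NP -> N")
assert predict(("V", "DET", "N"))[0][0] == "NP -> DET N"
assert predict(("P", "DET", "N")) == []   # unseen trigram: no predictions
```

An unseen trigram yields no predictions at all, which is why the later phases fall back to exhaustive, left-corner-only prediction.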
Post-processing: Partial Parsing. Once the final phase has exhausted all predictions made by the grammar, or more likely, once the probability of all edges in the chart falls below a certain threshold, Picky determines the sentence to be ungrammatical. However, since the chart produced by Picky contains all recognized constituents, sorted by probability, the chart can be used to extract partial parses. As implemented, Picky prints out the most probable completed S constituent.

4. Why a New Algorithm?

Previous research efforts have produced a wide variety of parsing algorithms for probabilistic and non-probabilistic grammars. One might question the need for a new algorithm to deal with context-sensitive probabilistic models. However, these previous efforts have generally failed to address both efficiency and robustness effectively.

For non-probabilistic grammar models, the CKY algorithm [9] [17] provides efficiency and robustness in polynomial time, O(Gn^3). CKY can be modified to handle simple P-CFGs [2] without loss of efficiency. However, with the introduction of context-sensitive probability models, such as the history-based grammar [1] and the CFG with CSP models [12], CKY cannot be modified to accommodate these models without exhibiting exponential behavior in the grammar size G. The linear behavior of CKY with respect to grammar size is dependent upon being able to collapse the distinctions among constituents of the same type which span the same part of the sentence. However, when using a context-sensitive probabilistic model, these distinctions are necessary. For instance, in the CFG with CSP model, the part-of-speech sequence generated by a constituent affects the probability of constituents that dominate it. Thus, two constituents which generate different part-of-speech sequences must be considered individually and cannot be collapsed.
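The collapsing argument can be made concrete: a context-free chart keys its equivalence classes by category and span alone, whereas a model in which the generated part-of-speech sequence matters must keep one entry per sequence. A toy illustration with invented data:

```python
# Toy illustration: chart keys under a context-free model vs. a
# context-sensitive probability model.

# Two NP analyses over words 2..5 that generate different POS sequences.
analyses = [
    ("NP", (2, 5), ("DET", "ADJ", "N")),
    ("NP", (2, 5), ("DET", "N", "N")),
]

# CKY keys cells by (category, span) only:
cfg_chart = {(cat, span) for cat, span, pos in analyses}

# A context-sensitive model must also key on the POS sequence:
csp_chart = {(cat, span, pos) for cat, span, pos in analyses}

assert len(cfg_chart) == 1   # both analyses collapse into one entry
assert len(csp_chart) == 2   # both must be kept distinct
```

Since the number of distinct part-of-speech sequences over a span can grow with the span, the number of uncollapsible entries grows accordingly, which is where the exponential behavior comes from.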
Earley's algorithm [6] is even more attractive than CKY in terms of efficiency, but it suffers from the same exponential behavior when applied to context-sensitive probabilistic models. Still, Earley-style prediction improves the average case performance of an exponential chart-parsing algorithm by reducing the size of the search space, as was shown in [12]. However, Earley-style prediction has serious impacts on robust processing of ungrammatical sentences. Once a sentence has been determined to be ungrammatical, Earley-style prediction prevents any new edges from being added to the parse chart. This behavior seriously degrades the robustness of a natural language system using this type of parser.

A few recent works on probabilistic parsing have proposed algorithms and devices for efficient, robust chart parsing. Bobrow [3] and Chitrao [4] introduce agenda-based probabilistic parsing algorithms, although neither describe their algorithms in detail. Both algorithms use a strictly best-first search. As both Chitrao and Magerman [12] observe, a best-first search penalizes longer and more complex constituents (i.e. constituents which are composed of more edges), resulting in thrashing and loss of efficiency. Chitrao proposes a heuristic penalty based on constituent length to deal with this problem. Magerman avoids thrashing by calculating the score of a parse tree using the geometric mean of the probabilities of the constituents contained in the tree. Moore [14] discusses techniques for improving the efficiency and robustness of chart parsers for unification grammars, but the ideas are applicable to probabilistic grammars as well. Some of the techniques proposed are well-known ideas, such as compiling ε-transitions (null gaps) out of the grammar and heuristically controlling the introduction of predictions.

The Picky parser incorporates what we deem to be the most effective techniques of these previous works into one parsing algorithm.
New techniques, such as probabilistic prediction and the multi-phase approach, are introduced where the literature does not provide adequate solutions. Picky combines the standard chart parsing data structures with existing bottom-up and top-down parsing operations, and includes a probabilistic version of top-down filtering and over-the-top prediction. Picky also incorporates a limited form of bi-directional parsing in a way which avoids its computationally expensive side-effects. It uses an agenda processing control mechanism with the scoring heuristics of Pearl. With the exception of probabilistic prediction, most of the ideas in this work individually are not original to the parsing technology literature. However, the combination of these ideas provides robustness without sacrificing efficiency, and efficiency without losing accuracy.

5. Results of Experiments

The Picky parser was tested on 3 sets of 100 sentences which were held out from the rest of the corpus during training. The training corpus consisted of 982 sentences which were parsed using the same grammar that Picky used. The training and test corpora are samples from MIT's Voyager direction-finding system.[7] Using Picky's grammar, these test sentences generate, on average, over 100 parses per sentence, with some sentences generating over 1,000 parses.

The purpose of these experiments is to explore the impact of varying Picky's parsing algorithm on parsing accuracy, efficiency, and robustness. For these experiments, we varied three attributes of the parser: the phases used by the parser, the maximum number of edges the parser can produce before failure, and the minimum probability parse acceptable.
In the following analysis, the accuracy rate represents the percentage of the test sentences for which the highest probability parse generated by the parser is identical to the "correct" parse tree indicated in the parsed test corpus.[8]

Efficiency is measured by two ratios, the prediction ratio and the completion ratio. The prediction ratio is defined as the ratio of the number of predictions made by the parser during the parse of a sentence to the number of constituents necessary for a correct parse. The completion ratio is the ratio of the number of completed edges to the number of predictions during the parse of a sentence.

[Footnote 7: Special thanks to Victor Zue at MIT for the use of the speech data from MIT's Voyager system.]

[Footnote 8: There are two exceptions to this accuracy measure. If the parser generates a plausible parse for a sentence which has multiple plausible interpretations, the parse is considered correct. Also, if the parser generates a correct parse, but the parsed test corpus contains an incorrect parse (i.e. if there is an error in the answer key), the parse is considered correct.]

Robustness cannot be measured directly by these experiments, since there are few ungrammatical sentences and there is no implemented method for interpreting the well-formed substring table when a parse fails. However, for each configuration of the parser, we will explore the expected behavior of the parser in the face of ungrammatical input.

Since Picky has the power of a pure bottom-up parser, it would be useful to compare its performance and efficiency to that of a probabilistic bottom-up parser. However, an implementation of a probabilistic bottom-up parser using the same grammar produces on average over 1000 constituents for each sentence, generating over 15,000 edges without generating a parse at all! This supports our claim that exhaustive CKY-like parsing algorithms are not feasible when probabilistic models are applied to them.

5.1.
Control Configuration

The control for our experiments is the configuration of Picky with all three phases and with a maximum edge count of 15,000. Using this configuration, Picky parsed the 3 test sets with an 89.3% accuracy rate. This is a slight improvement over Pearl's 87.5% accuracy rate reported in [12].

Recall that we will measure the efficiency of a parser configuration by its prediction ratio and completion ratio on the test sentences. A perfect prediction ratio is 1:1, i.e. every edge predicted is used in the eventual parse. However, since there is ambiguity in the input sentences, a 1:1 prediction ratio is not likely to be achieved. Picky's prediction ratio is approximately 4.3:1, and its ratio of completed edges to predicted edges is nearly 1.3:1. Thus, although the prediction ratio is not perfect, on average for every edge that is predicted more than one completed constituent results.

This is the most robust configuration of Picky which will be attempted in our experiments, since it includes bidirectional parsing (phase II) and allows so many edges to be created. Although there was not a sufficient number or variety of ungrammatical sentences to explore the robustness of this configuration further, one interesting example did occur in the test sets. The sentence

How do I how do I get to MIT?

is an ungrammatical but interpretable sentence which begins with a restart. The Pearl parser would have generated no analysis for the latter part of the sentence and the corresponding sections of the chart would be empty. Using bidirectional probabilistic prediction, Picky produced a correct partial interpretation of the last 6 words of the sentence, "how do I get to MIT?" One sentence does not make for conclusive evidence, but it represents the type of performance which is expected from the Picky algorithm.

5.2. Phases vs. Efficiency

Each of Picky's three phases has a distinct role in the parsing process.
Phase I tries to parse the sentences which are most standard, i.e. most consistent with the training material. Phase II uses bidirectional parsing to try to complete the parses for sentences which are nearly completely parsed by Phase I. Phase III uses a simplistic heuristic to glue together constituents generated by phases I and II. Phase III is obviously inefficient, since it is by definition processing atypical sentences. Phase II is also inefficient because of the bidirectional predictions added in this phase. But phase II also amplifies the inefficiency of phase III, since the bidirectional predictions added in phase II are processed further in phase III.

  Phases     Pred. Ratio   Comp. Ratio   Coverage   %Error
  I             1.95          1.02        75.7%       2.3%
  I,II          2.15          0.94        77.0%       2.3%
  II            2.44          0.86        77.3%       2.0%
  I,III         4.01          1.44        88.3%      11.7%
  III           4.29          1.40        88.7%      11.3%
  I,II,III      4.30          1.28        89.3%      10.7%
  II,III        4.59          1.24        89.7%      10.3%

Table 1: Prediction and completion ratios and accuracy statistics for Picky configured with different subsets of Picky's three phases.

In Table 1, we see the efficiency and accuracy of Picky using different subsets of the parser's phases. Using the control parser (phases I, II, and III), the parser has a 4.3:1 prediction ratio and a 1.3:1 completion ratio. By omitting phase III, we eliminate nearly half of the predictions and half the completed edges, resulting in a 2.15:1 prediction ratio. But this efficiency comes at the cost of coverage, which will be discussed in the next section. By omitting phase II, we observe a slight reduction in predictions, but an increase in completed edges. This behavior results from the elimination of the bidirectional predictions, which tend to generate duplicate edges. Note that this configuration, while slightly more efficient, is less robust in processing ungrammatical input.

5.3. Phases vs.
Accuracy

For some natural language applications, such as a natural language interface to a nuclear reactor or to a computer operating system, it is imperative for the user to have confidence in the parses generated by the parser. Picky has a relatively high parsing accuracy rate of nearly 90%; however, 10% error is far too high for fault-intolerant applications.

  Phase     No.   Accuracy   Coverage   %Error
  I + II    238     97%        77%        3%
  III        62     60%        12%       40%
  Overall   300    89.3%      89.3%     10.7%

Table 2: Picky's parsing accuracy, categorized by the phase which the parser reached in processing the test sentences.

Consider the data in Table 2. While the parser has an overall accuracy rate of 89.3%, it is far more accurate on sentences which are parsed by phases I and II, at 97%. Note that 238 of the 300 test sentences, or 79%, are parsed in these two phases. Thus, by eliminating phase III, the percent error can be reduced to 3%, while maintaining 77% coverage. An alternative to eliminating phase III is to replace the length-based heuristic of this phase with a secondary probabilistic model of the difficult sentences in this domain. This secondary model might be trained on a set of sentences which cannot be parsed in phases I and II.

5.4. Edge Count vs. Accuracy

In the original implementation of the Picky algorithm, we intended to allow the parser to generate edges until it found a complete interpretation or exhausted all possible predictions. However, for some ungrammatical sentences, the parser generates tens of thousands of edges without terminating. To limit the processing time for the experiments, we implemented a maximum edge count which was sufficiently large so that all grammatical sentences in the test corpus would be parsed. All of the grammatical test sentences generated a parse before producing 15,000 edges. However, some sentences produced thousands of edges only to generate an incorrect parse.
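The cutoff itself is simply a guard on the best-first agenda loop: once the edge budget is spent, the input is declared ungrammatical. A hypothetical sketch (not Picky's actual code):

```python
# Hypothetical sketch of the maximum-edge-count cutoff in a
# best-first agenda loop.

import heapq

def parse(agenda, advance, is_goal, max_edges=15000):
    """agenda: heap of (-score, edge); advance(edge) -> new heap items."""
    built = 0
    while agenda:
        _, edge = heapq.heappop(agenda)   # highest-scoring edge first
        if is_goal(edge):
            return edge
        for item in advance(edge):
            built += 1
            if built > max_edges:
                return None               # budget spent: give up
            heapq.heappush(agenda, item)
    return None

# Toy search space (invented): S0 -> S1 -> GOAL.
adv = lambda e: {"S0": [(-0.5, "S1")], "S1": [(-0.9, "GOAL")]}.get(e, [])
assert parse([(-1.0, "S0")], adv, lambda e: e == "GOAL") == "GOAL"
assert parse([(-1.0, "S0")], adv, lambda e: e == "GOAL", max_edges=1) is None
```

Lowering `max_edges` trades coverage for error rate, which is exactly the pattern the following experiment measures.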
In fact, it seemed likely that there might be a correlation between very high edge counts and incorrect parses. We tested this hypothesis by varying the maximum edge count. In Table 3, we see an increase in efficiency and a decrease in accuracy as we reduce the maximum number of edges the parser will generate before declaring a sentence ungrammatical.

  Maximum       Pred.   Comp.
  Edge Count    Ratio   Ratio   Coverage   %Error
  15,000        4.30    1.35     89.3%     10.7%
  1,000         3.69    0.93     83.3%      7.0%
  500           3.08    0.82     80.3%      5.3%
  300           2.50    0.86     79.3%      2.7%
  150           1.95    0.92     66.0%      1.7%
  100           1.60    0.84     43.7%      1.7%

Table 3: Prediction and completion ratios and accuracy statistics for Picky configured with different maximum edge counts.

By reducing the maximum edge count by a factor of 50, from 15,000 to 300, we can nearly cut in half the number of predictions and edges generated by the parser. And while this causes the accuracy rate to fall from 89.3% to 79.3%, it also results in a significant decrease in error rate, down to 2.7%. By decreasing the maximum edge count down to 150, the error rate can be reduced to 1.7%.

5.5. Probability vs. Accuracy

Since a probability represents the likelihood of an interpretation, it is not unreasonable to expect the probability of a parse tree to be correlated with the accuracy of the parse. However, based on the probabilities associated with the "correct" parse trees of the test sentences, there appears to be no such correlation. Many of the test sentences had correct parses with very low probabilities (10^-10), while others had much higher probabilities (10^-2). And the probabilities associated with incorrect parses were not distinguishable from the probabilities of correct parses.

The failure to find a correlation between probability and accuracy in this experiment does not prove conclusively that no such correlation exists. Admittedly, the training corpus used for all of these experiments is far smaller than one would hope for estimating the CFG with CSP model parameters.
Thus, while the model is trained well enough to steer the parsing search, it may not be sufficiently trained to provide meaningful probability values.

6. Conclusions

There are many different applications of natural language parsing, and each application has a different cost threshold for efficiency, robustness, and accuracy. The Picky algorithm introduces a framework for integrating these thresholds into the configuration of the parser in order to maximize the effectiveness of the parser for the task at hand. An application which requires a high degree of accuracy would omit the Tree Completion phase of the parser. A real-time application would limit the number of edges generated by the parser, likely at the cost of accuracy. An application which is robust to errors but requires efficient processing of input would omit the Covered Bidirectional phase.

The Picky parsing algorithm illustrates how probabilistic modelling of natural language can be used to improve the efficiency, robustness, and accuracy of natural language understanding tools.

REFERENCES

1. Black, E., Jelinek, F., Lafferty, J., Magerman, D. M., Mercer, R. and Roukos, S. 1992. Towards History-based Grammars: Using Richer Models of Context in Probabilistic Parsing. In Proceedings of the February 1992 DARPA Speech and Natural Language Workshop. Arden House, NY.
2. Brown, P., Jelinek, F., and Mercer, R. 1991. Basic Method of Probabilistic Context-free Grammars. IBM Internal Report. Yorktown Heights, NY.
3. Bobrow, R. J. 1991. Statistical Agenda Parsing. In Proceedings of the February 1991 DARPA Speech and Natural Language Workshop. Asilomar, California.
4. Chitrao, M. and Grishman, R. 1990. Statistical Parsing of Messages. In Proceedings of the June 1990 DARPA Speech and Natural Language Workshop. Hidden Valley, Pennsylvania.
5. Church, K. 1988. A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text.
In Proceedings of the Second Conference on Applied Natural Language Processing. Austin, Texas.
6. Earley, J. 1970. An Efficient Context-Free Parsing Algorithm. Communications of the ACM, Vol. 13, No. 2, pp. 94-102.
7. Gale, W. A. and Church, K. 1990. Poor Estimates of Context are Worse than None. In Proceedings of the June 1990 DARPA Speech and Natural Language Workshop. Hidden Valley, Pennsylvania.
8. Jelinek, F. 1985. Self-organizing Language Modeling for Speech Recognition. IBM Report.
9. Kasami, T. 1965. An Efficient Recognition and Syntax Algorithm for Context-Free Languages. Scientific Report AFCRL-65-758, Air Force Cambridge Research Laboratory. Bedford, Massachusetts.
10. Kay, M. 1980. Algorithm Schemata and Data Structures in Syntactic Processing. CSL-80-12, October 1980.
11. Kimball, J. 1973. Principles of Surface Structure Parsing in Natural Language. Cognition, 2, 15-47.
12. Magerman, D. M. and Marcus, M. P. 1991. Pearl: A Probabilistic Chart Parser. In Proceedings of the European ACL Conference, March 1991. Berlin, Germany.
13. Magerman, D. M. and Weir, C. 1992. Probabilistic Prediction and Picky Chart Parsing. In Proceedings of the February 1992 DARPA Speech and Natural Language Workshop. Arden House, NY.
14. Moore, R. and Dowding, J. 1991. Efficient Bottom-Up Parsing. In Proceedings of the February 1991 DARPA Speech and Natural Language Workshop. Asilomar, California.
15. Sharman, R. A., Jelinek, F., and Mercer, R. 1990. Generating a Grammar for Statistical Training. In Proceedings of the June 1990 DARPA Speech and Natural Language Workshop. Hidden Valley, Pennsylvania.
16. Seneff, Stephanie 1989. TINA. In Proceedings of the August 1989 International Workshop in Parsing Technologies. Pittsburgh, Pennsylvania.
17. Younger, D. H. 1967. Recognition and Parsing of Context-Free Languages in Time n^3. Information and Control, Vol. 10, No. 2, pp. 189-208.
1992
A Functional Approach to Generation with TAG[1]

Kathleen F. McCoy, K. Vijay-Shanker, & Gijoo Yang
Department of Computer and Information Sciences
University of Delaware
Newark, Delaware 19716, USA
email: [email protected], [email protected]

Abstract

It has been hypothesized that Tree Adjoining Grammar (TAG) is particularly well suited for sentence generation. It is unclear, however, how a sentence generation system based on TAG should choose among the syntactic possibilities made available in the grammar. In this paper we consider the question of what needs to be done to generate with TAGs and explain a generation system that provides the necessary features. This approach is compared with other TAG-based generation systems. Particular attention is given to Mumble-86 which, like our system, makes syntactic choice on sophisticated functional grounds.

1 Introduction

Joshi (1987) described the relevance of Tree Adjoining Grammar (TAG) (Joshi, 1985; Schabes, Abeille & Joshi, 1988) to Natural Language Generation. In particular, he pointed out how the unique factoring of recursion and dependencies provided by TAG made it particularly appropriate to derive sentence structures from an input provided by a text planning component. Of particular importance is the fact that (all) syntactic dependencies and function argument structure are localized in TAG trees.

Shieber and Schabes (1991) discuss using Synchronous TAG for generation. Synchronous TAG provides a formal foundation to make explicit the relationship between elementary syntactic structures and their corresponding semantic counterparts, both expressed as elementary TAG trees. This relationship is made explicit by pairing the elementary trees in the syntactic and logical form languages, and associating the corresponding nodes.
Shieber and Schabes (1990) describe a generation algorithm which "parses" an input logical form string, recording the adjoining and substitution operations necessary to build the string from its elementary components. The corresponding syntactic structure is then generated by doing the same set of operations (in reverse) on the corresponding elementary structures in the grammar describing the natural language.

[Footnote 1: This work is supported in part by Grant #H133E80015 from the National Institute on Disability and Rehabilitation Research. Support was also provided by the Nemours Foundation. We would like to thank John Hughes for his many comments and discussions concerning this work.]

Note that the generation methodology proposed for synchronous TAG (and the hypothetical generator alluded to in (Joshi, 1987)) takes as input the logical form semantic representation and produces a syntactic representation of a natural language sentence which captures that logical form. While the correspondence between logical form and the natural language syntactic form is certainly an important and necessary component of any sentence generation system, it is unclear how finer distinctions can be made in this framework. That is, synchronous TAG does not address the question of which syntactic rendition of a particular logical form is most appropriate in a given circumstance. This aspect is particularly crucial from the point of view of generation. A full-blown generation system based on TAG must choose between various renditions of a given logical form on well-motivated grounds.

Mumble-86 (McDonald & Pustejovsky, 1985; Meteer et al., 1987) is a sentence generator based on TAG that is able to take more than just the logical form representation into account.
Mumble-86 is one of the foremost sentence generation systems and it (or its predecessors) has been used as the sentence generation component of a number of natural language generation projects (e.g., (McDonald, 1983; McCoy, 1989; Conklin & McDonald, 1982; Woolf & McDonald, 1984; Rubinoff, 1986)). After briefly describing the methodology in Mumble-86, we will point out some problematic aspects of its design. We will then describe our architecture which is based on interfacing TAG with a rich functional theory provided by functional systemic grammar (Halliday, 1970; Halliday, 1985; Fawcett, 1980; Hudson, 1981).[2] We pay particular attention to those aspects which distinguish our generator from Mumble-86.

2 Mumble-86

Mumble-86 generates from a specification of what is to be said in the form of an "L-Spec" (Linguistic Specification). An L-Spec captures the content of what is to be generated along with the goals and rhetorical force to be achieved. While the form of the L-Spec is dependent on the particular application, for the purposes of this discussion we can think of it as a set of logical form expressions that describe the content to be expressed.

[Footnote 2: The particular suitability of TAG as a grammatical formalism to be used in conjunction with a systemic grammar is discussed in (McCoy, Vijay-Shanker & Yang, 1990).]

Mumble-86 uses a dictionary-like mechanism to transform a piece of the L-Spec into an elementary TAG tree which realizes that piece. The translation process itself (performed in the dictionary) may be influenced by contextual factors (including pragmatic factors which are recorded as a side-effect of grammar routines), and by the goals recorded in the L-Spec itself. It is in this way that the system can make fine-grained decisions concerning one realization over another.

Once a TAG tree is chosen to realize the initial subpiece, that structure is traversed in a left to right fashion.
Grammar routines are run during this traversal to ensure grammaticality (e.g., subject-verb agreement) and to record contextual information to be used in the translation of the remaining pieces of the L-Spec. In addition to the grammar routines, as the initial tree is traversed, at each place where new information could be added into the evolving surface structure (called attachment points), the remaining L-Spec is consulted to see if it contains an item whose realization could be adjoined or substituted at that position.

In order for this methodology to work, (McDonald & Pustejovsky, 1985) point out that they have to make some strong assumptions about the logical form input to their generator. Notice that the methodology described always starts generating from an initial tree and other auxiliary or initial trees are adjoined or substituted into that initial structure.[3] As a result, in generating an embedded sentence, the generator must start with the innermost clause in order to ensure that the first tree chosen is an initial (and not an auxiliary) tree. Consider, for example, the generation of the sentence "Who did you think hit John". Mumble-86 must start generating from the clause "Who hit John" which is (roughly) captured in the tree shown in Figure 4. This surface structure would then be traversed. At the point labeled fr-node (an attachment point) the auxiliary tree representing "you think" in Figure 2 would be adjoined in.

[Footnote 3: An initial tree is a minimal non-recursive structure in TAG, while an auxiliary tree is a minimal recursive structure. Thus, an auxiliary tree is characterized as having a leaf node (which is termed the foot node) which has the same label as the root node. The tree in Figure 2 is an auxiliary tree. The adjoining operation essentially inserts an auxiliary tree into another tree. For instance, the tree in Figure 5 is the result of adjoining the auxiliary tree shown in Figure 2 into the initial tree shown in Figure 4 at the node labeled fr-node.]

Notice, however, that if Mumble-86 must work from the inner-most clause out, then the initial L-Spec must be in a particular form which is not consistent with the "logician's usual representation of sentential complement verbs as higher operators" (McDonald & Pustejovsky, 1985)[p. 101] (also noted by (Shieber & Schabes, 1991)). Instead Mumble-86 requires an alternative logical form representation which amounts to breaking the more traditional logical form into smaller pieces which reference each other. Mumble-86 must be told which of these pieces is the embedded piece that the processing should start with.[4]

Notice that this architecture is particularly problematic for certain kinds of verbs that take indirect questions. For instance, it would preclude the proper generation of sentences involving "wonder" (as in "I wonder who hit John"). Verbs which require the question to remain embedded are problematic for Mumble-86 since the main verb (wonder) would not be available when its inclusion in the surface structure needs to be determined.[5]

An additional requirement on the logical form input to the generator is that the lambda expression (representing a wh-question) and the expression containing the matrix trace be present in a single layer of specification. This, they claim, is necessary to generate an appropriate sentence form without the necessity of looking arbitrarily deep into the representation. This would mean that for sentences such as "Who do you think hit John", the lambda expression would have to come with the "hit John" part of the input. We will show that our system does not place either of these restrictions on the logical form input and yet is able to generate the appropriate sentence without looking arbitrarily deep into the input specification.

One can notice a few features of the system just described. First, because the dictionary translation process is context sensitive, the generation methodology is able to take more than just logical form into account. Note, however, that it is unclear what the theory is behind the realizations made. In addition, these decisions are encoded procedurally, thus the theory is rather difficult to abstract. It is also the case that Mumble-86 makes no distinction between decisions that are made for functional reasons and those that are made for syntactic reasons. Both kinds of information must be recorded (procedurally) in grammar routines so that they can be taken into account during subsequent translations. While the fact that the grammar is procedurally encoded and that functional
First, because the dictionary translation process is context sensitive, the generation methodology is able to take more than just logical form into account. Note, however, that it is unclear what the theory is behind the realizations made. In addition, these decisions are encoded procedurally and thus the theory is rather difficult to abstract. It is also the case that Mumble-86 makes no distinction between decisions that are made for functional reasons and those that are made for syntactic reasons. Both kinds of information must be recorded (procedurally) in grammar routines so that they can be taken into account during subsequent translations. While the fact that the grammar is procedurally encoded and that functional and syntactic decisions are mixed does not affect the power of the generator, we argue that it does make development and maintenance of the system rather difficult. Functional decisions (e.g., that a particular item should be made prominent) and syntactic decisions (e.g., number agreement) rely on two different bodies of work which should be able to evolve independently of each other. There is no separation of these two different influences in Mumble-86.

4The task of ordering the elements of logical form is considered by Mumble-86 to be part of a component which is also responsible for ensuring that what is given to Mumble is actually expressible in the language (e.g., English). This component is described in (Meteer, 1991).

5This is because the logical form for an embedded question and a non-embedded question cannot be distinguished in the kind of input required by Mumble-86, and the main verb (wonder) is not able to pass any information down to the embedded clause since it is realized after the embedded clause.

The generation process in Mumble-86 is syntax driven. From the input L-Spec an initial (elementary) TAG tree is chosen. This structure is then traversed and grammar routines are initiated.
At each possible attachment point during the traversal, the semantic structure (L-Spec) is consulted to see if it contains an item whose realization could be adjoined or substituted at that position. Thus the syntactic surface structure drives the processing.

As a side effect of the above processing strategy, Mumble-86 creates a strictly left-to-right realization of surface structure. While this side effect is deliberate for reasons of psychological validity, it can be problematic for generating some connectives (as is pointed out in (McKeown & Elhadad, 1991)). This is because Mumble-86 does not have access to the content of the items being conjoined at the time the connective is generated.

In the remainder of this paper we describe a sentence generation system which we have developed. In some ways it is similar to Mumble-86, but there are several major differences:

• The realization of the input in our system is based on systemic functional linguistics (Halliday, 1970; Halliday, 1985; Fawcett, 1980; Hudson, 1981). This is a linguistic theory which states that a generated sentence is obtained as a result of a series of functional choices which are made in a parallel fashion along several different functional domains. The choices are represented as a series of networks, with traversal of the networks dependent on the given input along with several knowledge sources which encode information about how various concepts can be linguistically realized. The bulk of the work in systemic linguistics has been devoted to describing what/how functional choice affects surface form. We adopt this work from systemic linguistics, but unlike other implementations, we use a formal syntactic framework (TAG) to express the syntactic constraints.

• Our method is not syntax directed, but follows a functional decomposition called for by the systemic grammar.
• There is a clear separation between the functional and the syntactic aspects of sentence generation which actually allows these two aspects of generation to be developed independently.

• We do not place any constraints on the logical form input. Our methodology calls for nothing different from what is required for a standard systemic grammar (whose input is based on a typical logical form representation).

• The methodology which we describe allows sentence generation to proceed in a semantic head-driven fashion (Shieber, Van Noord, Pereira & Moore, 1990). This is the case even for the embedded sentences discussed earlier which had to be worked "inside out" in Mumble-86.

3 Generator Architecture

There are many different ways of implementing a TAG-based generator. We consider the principles that we take to be common to any TAG generator and indicate how these principles have influenced our architecture. We present various aspects of our architecture and contrast them with choices that have been made in Mumble-86 and Synchronous TAG. Our approach is motivated by arguments presented in (McCoy, Vijay-Shanker & Yang, 1990), but the details of the processing presented there have changed significantly. Our basic processing strategy is detailed in (Yang, McCoy & Vijay-Shanker, 1991); the work presented here is an extension of that strategy.

In order for a TAG generator to be robust, it must have a methodology for deciphering the input and associating various pieces of the input with TAG trees. In Mumble-86 this is accomplished through dictionary look-up along with querying the input at various points during the surface structure traversal. In contrast, we use a systemic grammar traversal for this purpose. In a TAG, each elementary tree lexicalizes a predicate and contains unexpanded nodes for the required arguments. Thus any TAG based generation system should incorporate the notions of semantic head-driven generation.
Our approach, based on systemic grammars, does this because the functional decomposition that results from traversal of a systemic grammar at a single rank identifies the head and establishes necessary arguments. Thus it perfectly matches the information captured in an elementary TAG tree.

Once the input has been deciphered, a TAG generator must use this to select a tree. Given that a systemic grammar is being used in our case, we must have a method for associating TAG trees with the network traversal. The traversal of a systemic grammar at a single rank establishes a set of functional choices that can be used to select a TAG tree. The selection process in any TAG-based generator can be considered as providing a classification of TAG trees on functional grounds. We make this explicit by providing a network (called the TAG network)6 which is traversed to select a TAG tree. The network itself can be thought of as a decision tree whose choice points are functional features chosen in the systemic network traversal.

6In fact we view a systemic network in a similar fashion, as a classification of all functional choices expressible in a language.

[Figure 1. Input for "Who did you think hit John"]

So far we have identified how the head can be lexicalized and placed in an appropriate tree with respect to its arguments. This is accomplished by a traversal of a systemic network at one rank followed by a TAG network traversal based on the functional choices made. Of course, the arguments themselves must also be realized. This is accomplished by a recursive network (systemic followed by TAG) traversal (focused on the piece of input associated with the particular argument being realized). The recursive network traversals will also result in the realization of a TAG tree.
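The realize-the-head-then-recursively-realize-each-argument strategy just described can be sketched in miniature. This is our own illustrative reduction, not the authors' code: the systemic and TAG network traversals shrink to a dictionary lookup, TAG trees become format-string templates, and all names (`descend`, `ascend`, `TREES`) are invented:

```python
# Hypothetical TAG-tree templates, keyed by (process, speech-act); a real
# TAG network traversal would select among many trees per predicate.
TREES = {
    ("think", "declarative"): "{actor} think {phenomenon}",
    ("hit", "declarative"): "{actor} hit {actee}",
}

def descend(lspec):
    """Descent: pick a tree for the head, then spawn one recursive
    traversal (a 'subregion' in the paper's terms) per argument."""
    if isinstance(lspec, str):             # an already-lexicalized argument
        return {"tree": lspec, "subregions": {}}
    template = TREES[(lspec["proc"], lspec.get("sact", "declarative"))]
    subs = {role: descend(val) for role, val in lspec["args"].items()}
    return {"tree": template, "subregions": subs}

def ascend(region):
    """Ascent: combine the argument realizations into the mother tree."""
    parts = {role: ascend(sub) for role, sub in region["subregions"].items()}
    return region["tree"].format(**parts)

lspec = {"proc": "think",
         "args": {"actor": "you",
                  "phenomenon": {"proc": "hit",
                                 "args": {"actor": "Mary",
                                          "actee": "John"}}}}
print(ascend(descend(lspec)))   # -> you think Mary hit John
```

Each recursive `descend` call plays the role of a network traversal spawned for one argument; the real system records each result in its own region and combines trees by substitution and adjoining rather than string formatting.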
We record information collected during a single (rank) network traversal in a data structure called a region. Thus, an initial region will be created and will record all features necessary for the selection of a tree realizing the head and argument placement. The selected tree (and other structures discussed below) will be recorded in the region. Each argument will itself be realized in a subregion which will be associated with the recursive network traversal spawned by the piece of input associated with that argument. Thus we have separate regions for each independent piece of input. This is in contrast to Mumble-86's use of the evolving surface structure in which all grammatical information is recorded.

Once all arguments have been realized as elementary trees in the individual regions, the trees selected in the individual regions must be combined with the tree in the initial region. For this we use the standard TAG operations of adjoining and substitution.

Essentially, our generation methodology consists of two phases:

1. The descent process - where a systemic network traversal is used to collect a set of features which are used to select a TAG tree that realizes the head and into which the arguments can be fit. The traversal is also responsible for spawning the creation of subregions in which the arguments (and modifiers) are realized.

2. The ascent process - where the trees created in the individual subregions are combined with the tree in the mother region, resulting in the final realization of the whole.

[Figure 2. Initial tree selected in region r1]

In our system the systemic network traversal basically replaces the dictionary look-up phase found in Mumble-86,7 which translates the input L-Spec into surface structure. In addition, our system does not walk a surface structure (i.e., the actual tree chosen).
In Mumble-86 the surface structure walk spawned grammar routines and caused additional pieces of the L-Spec to be translated into surface structure. Our methodology relies on the systemic network traversal to spawn realizations of the decomposed subpieces. The syntactic aspects of the grammar routines are now incorporated into our TAG network and grammar. Thus our methodology keeps a clearer separation between functional and syntactic aspects of the generation process.

The processing in our system will be explained with an example. Consider the simplified input given in Figure 1.8 See (Yang, McCoy & Vijay-Shanker, 1991) for a more detailed description of the processing.

7The systemic grammar also replaces the grammar routines of Mumble-86 responsible for recording contextual information for subsequent translations. In addition, the part of the dictionary look-up concerned with syntactic realization (i.e., the actual tree chosen) is handled by our TAG component.

8This input is simplified in that it is basically a standard logical form input with lexical items specified. In general the input is a set of features which drive the traversal of the functional systemic networks.

[Figure 3. Tree selected in Actor region r2]

3.1 The Descent Process

The input given (along with other knowledge sources traditionally associated with a systemic network) will be used to drive the traversal of a functional systemic network. The purpose of this traversal is twofold: (1) to identify the head/argument structure of the sentence to be realized, and (2) to identify a set of functional features which can be used to choose a tree which appropriately realizes the head/argument structure. Traditionally a systemic network consists of a number of networks of functional choices which are traversed in parallel. Each network considers choices along one functional domain.
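A toy rendering of such parallel networks follows. This is our own sketch with invented feature names; the mood and transitivity networks discussed next are far richer than these single predicates:

```python
def mood_chooser(inp):
    # As the text describes: a wh-questioned item that is NOT an argument
    # of the top-level process still yields a declarative top-level form.
    if inp.get("speech_act") == "wh-question" and inp.get("wh_is_top_arg"):
        return "wh-question-form"
    return "declarative-form"

def transitivity_chooser(inp):
    # A mental process sets up the expectation of an actor and a phenomenon.
    if inp.get("proc_type") == "mental":
        return "mental-process"
    return "material-process"

NETWORKS = [mood_chooser, transitivity_chooser]

def traverse(inp):
    """Run the parallel networks over one input, collecting chosen features.
    The collected set would then drive the TAG network traversal."""
    return {chooser(inp) for chooser in NETWORKS}

features = traverse({"speech_act": "wh-question",
                     "wh_is_top_arg": False,    # questioned item is embedded
                     "proc_type": "mental"})
print(sorted(features))   # -> ['declarative-form', 'mental-process']
```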
One such network is the mood network, which is responsible for, among other things, determining what kind of speech act should be generated for the top-level element. This network must notice, for example, that the speech-act specified is wh-questioning, but that the item being questioned is not one of the arguments to the top level process. Thus a standard declarative form should be chosen for the realization of this top level element.

Standard implementations of systemic grammar (Davey, 1978; Mann & Matthiessen, 1985; Patten, 1988; Fawcett, 1990), upon traversal of the mood network to this point, would evaluate a set of realization operations which manipulate an eventual surface string. For instance, upon identifying that a declarative form is needed, the subject would be ordered before the finite. We argue in (McCoy, Vijay-Shanker & Yang, 1990) that it is more practical to replace the use of such realization operators with a more formal grammatical system (and that the use of such a system is perfectly consistent with the tenets of systemic linguistics). Thus during the network traversal, our system simply collects the chosen features, and these are used to drive the traversal of a TAG network whose traversal results in the selection of a tree.

[Figure 4. Tree selected in Phenomenon region r3]

At the same time the mood network is traversed, so would be other networks. The transitivity network is concerned with identifying the head argument structure of the item being realized. In this case, it would consider the fact that the item to be realized has a "process" which is mental. This identification results in the expectation of two arguments - an actor (doing the mental process) and a phenomenon (that thing the process is about). Each of these identified arguments must be realized individually.
This is accomplished via the preselect operation.9 This operation causes a recursive network traversal (whose results are recorded in a subregion) to be done, focused on the input for the identified sub-element. The features collected during the functional systemic network traversal are used to drive the traversal of the TAG network, which results in the selection of a tree realizing the indicated features. Features such as that the process is mental and that the speech act is declarative would cause the selection of a tree for the mother region such as the tree in Figure 2. Similar processing would then take place in the two subregions, each eventually resulting in trees such as those shown in Figures 3 and 4.

9From the realization operations used in systemic grammars (particularly Nigel), we need only the preselect and the conflate operations because all structure building operations are incorporated into TAG. The conflation operation is used to map functional features (e.g., agent, phenomenon) into grammatical functions (e.g., subject, complement). Note that in the networks from systemic grammars, we take only the functional part and thus avoid having choice points that exist for purely syntactic reasons.

[Figure 5: Final tree: Who did you think hit John?]

3.2 The Ascent Process

In a TAG generator, after the input has been decomposed and elementary trees associated with each subpiece of the input, the chosen trees must be put together. Therefore, every TAG generator must provide a means to determine where the substitution or adjunction must take place. In order to do this, with each tree there must be a mapping of grammatical functions to nodes in the tree. In our case, we associate a mapping table with each tree.
For instance, the mapping table associated with the tree shown in Figure 2 would indicate that the phenomenon (which would have been conflated with complement) is associated with the node labeled n1 in the tree. In the simplest case the tree which realizes the phenomenon would be substituted at the node labeled n1 in the tree in the mother region.

A data structure similar to a mapping table is used by the other TAG generators as well. In synchronous TAG the mapping table corresponds to the explicit node-for-node mapping between elementary logical form and syntactic trees. The mapping table in Mumble-86 is implicit in the schemas which create the surface structure tree (during the dictionary look-up phase) since they place L-Spec elements in the appropriate place in the surface structure they create.

A more complex case arises when an argument node is a foot node of an auxiliary tree. Suppose an auxiliary tree, β, was chosen in a region and a tree, γ, was chosen in a subregion to realize the argument specified by the foot node of β. Rather than substituting γ in β, β is adjoined into a node in γ. This node is the node in γ that heads the subtree realizing the function specified for the subregion. For this reason, each tree in a region also has associated with it a pointer we call an fr-node which points to the node heading this subtree (functional root). In Regions r1 and r2 the functional root is also the root of the tree. Notice in Region r3 that the functional root is the embedded S node. This fr-node is chosen because the tree chosen in the region is a wh-question tree, due to the fact that (according to the input) the phenomenon is being questioned. There is nothing in the phenomenon itself, however, that specifies that its speech-act should be wh-questioning.

[Figure 6. Standard tree for "John tried to win"]
Thus the portion of the tree under the embedded S node captures the predicate argument structure which realizes the phenomenon as is specified in the input. If it were the case that the phenomenon was specified to be a wh-question (as in "Mary wondered who hit John") then the root node would be chosen as the fr-node. The fr-node comes into play when the trees in the individual regions are combined via adjunction during the ascent process.

Other TAG generators have analogues to our fr-node. In synchronous TAG it is implicit in the mapping between the nodes in the two kinds of trees. In Mumble-86, it is the attachment points on surface structure. The point is that if trees might be adjoined into, any TAG generator must specify where adjoining might take place, and this specification depends (at least in part) on the functional content that the tree is intended to capture.

Going back to our example, in combining trees in the subregions with the tree chosen in the initial region r1, the agent tree would be combined with the tree in region r1 using straight substitution. The location of the substitution would be determined by the address given for the agent in the mapping table for the tree in region r1. The mapping table also indicates that the phenomenon should be placed at n1 in the tree in Figure 2. Notice, however, that n1 is the foot node. This is an indication to the processor that the final tree in region r1 should result from adjoining the tree in r1 into the tree in the subregion r3 (Figure 4). The place of adjoining is specified by the fr-node in the phenomenon tree in region r3. The result of this adjoining is shown in Figure 5.10

10The details of how the AUX is inserted can be found in (Yang, 1991). It is inserted in the region r1 as a result of a feature disparity on the nodes of the tree resulting from the adjoining operation just described. The same disparity would not occur in indirect questions (e.g., "I wonder who hit John").
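The combination step just described, adjoining the region r1 auxiliary tree into the phenomenon tree at its fr-node, can be illustrated with a minimal sketch of TAG adjoining. This is our own toy code, not the authors' implementation; the `Node` class and the tree shapes are simplified stand-ins for the trees of Figures 2, 4, and 5:

```python
class Node:
    """A bare TAG tree node; `foot` marks the foot node of an auxiliary tree."""
    def __init__(self, label, children=None, foot=False):
        self.label = label
        self.children = children or []
        self.foot = foot

    def leaves(self):
        if not self.children:
            return [self.label]
        return [w for c in self.children for w in c.leaves()]

def adjoin(host, target, aux):
    """Replace `target` (a node inside `host`) with a copy of the auxiliary
    tree `aux`, re-attaching the excised subtree at aux's foot node."""
    def copy(n):
        if n.foot:
            return target               # the excised subtree hangs off the foot
        return Node(n.label, [copy(c) for c in n.children])
    def rewrite(n):
        if n is target:
            return copy(aux)
        return Node(n.label, [rewrite(c) for c in n.children], n.foot)
    return rewrite(host)

# Initial wh-question tree (roughly Figure 4): who [S hit John];
# the embedded S is the fr-node where adjoining takes place.
inner = Node("S", [Node("hit"), Node("NP", [Node("John")])])
init = Node("S", [Node("NP", [Node("who")]), inner])

# Auxiliary tree (roughly Figure 2): [S [NP you] [VP think S*]]
aux = Node("S", [Node("NP", [Node("you")]),
                 Node("VP", [Node("think"), Node("S", foot=True)])])

result = adjoin(init, inner, aux)
print(" ".join(result.leaves()))   # -> who you think hit John
```

The output lacks the AUX "did"; as the text notes, that element is inserted by a separate mechanism described in (Yang, 1991).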
[Figure 7: Flow of Information in Processing Model]

4 Passing Features

So far we have established that any TAG-based generator, once an elementary tree has been chosen, would need to realize the arguments of the predicate by recursively calling the same procedure. The resulting trees chosen would be combined with the original elementary tree at the appropriate place by substitution and adjunction. In this recursive process, we have indicated the need for only functional information to be passed down from the mother region to the subregions (at the very least, in the form of the functional input associated with the piece being realized in the region). We now consider an example where syntactic information must be passed down as well.

Consider the generation of a sentence such as "John tried to win". The standard structure for this sentence is given in Figure 6. The problem is that in TAG this tree must be derived from the combination of two separate sentential trees: one headed by the verb "tried" and the other by the verb "win". However we must capture the constraint that the subject of the "win" tree is John (which is the same as the subject of the "tried"
The choosing of this tree has ramifications on both the functional network traversal (since the agent of "win" should not be expanded) and the TAG network traversal. In addition, any syntactic constraint that is placed on the arguments (perhaps by the choice of the head) must be passed down to the subregion to influence the realization of the arguments. In general, the passed down features may influence either the functional or the TAG network traver- sal (see Figure 7). Such passing of syntactic and functional features must occur in any TAG gener- ator where the realization of the head is done prior to the realization of its arguments. 5 Conclusions In this paper we started with considering the principles underlying the design of any TAG- based generator. We have shown how these princi- ples have been incorporated in our generation sys- tem and have compared it with other TAG-based generators. The architecture of our generation system incorporates both functional aspects of generation and syntactic aspects. Each of these aspects is handled separately, by two different formalisms which are uniquely combined in our architecture. The result is a sentence generation system which has the advantage of incorporating two bodies of knowledge into one system. Our system has sev- eral advantages over Mumble-86. In addition to the use of systemic grammar as a theory for real- ization and a function (rather than syntactic) di- rected generation process, we have shown that our methodology does not place any special require- ments on the input logical form. Our methodology can proceed in a head-driven manner using notions such as the mapping table and the functional root to decide how trees should be combined. These notions allow fine distinctions in form which are not possible in Mumble-86. In addition, our sys- tem separates functional from syntactic decisions thus allowing these two bodies to be expanded in- dependently. 
A prototype of our system has been implemented in Lucid Common Lisp on a Sun Workstation. Details of the implementation can be found in (Yang, 1991).

References

Conklin, E. & McDonald, D. (1982). Salience: The key to the selection problem in natural language generation. In Proceedings of the 20th Annual Meeting (pp. 129-135), Toronto, Canada. Association for Computational Linguistics.

Davey, A. (1978). Discourse Production. Edinburgh: Edinburgh University Press.

Fawcett, R. (1980). Cognitive linguistics and social interaction. Heidelberg: Julius Groos Verlag Heidelberg and Exeter University.

Fawcett, R. P. (1990). The communal project: two years old and going well. Network, (13).

Halliday, M. A. K. (1970). Language structure and language function. In J. Lyons (Ed.), New Horizons in Linguistics. Harmondsworth, England: Penguin Books.

Halliday, M. A. K. (1985). An introduction to functional grammar. London, England: Edward Arnold.

Hudson, R. A. (1981). Systemic generative grammar. In M. A. K. Halliday & J. R. Martin (Eds.), Readings in Systemic Linguistics. North Pomfret, Vermont: Batsford.

Joshi, A. K. (1985). How much context-sensitivity is necessary for characterizing structural descriptions: Tree adjoining grammar. In D. Dowty, L. Karttunen, & A. Zwicky (Eds.), Natural Language Processing: Theoretical, Computational and Psychological Perspectives. New York: Cambridge University Press.

Joshi, A. K. (1987). The relevance of tree adjoining grammar to generation. In G. Kempen (Ed.), Natural Language Generation: New Results in Artificial Intelligence, Psychology, and Linguistics (pp. 233-252). Dordrecht/Boston: Martinus Nijhoff Publishers (Kluwer Academic Publishers).

Mann, W. & Matthiessen, C. (1985). Nigel: A systemic grammar for text generation. In O. Freedle (Ed.), Systemic Perspectives on Discourse. Norwood, NJ.

McCoy, K. F. (1989). Generating context sensitive responses to object-related misconceptions.
Artificial Intelligence, 41, 157-195.

McCoy, K. F., Vijay-Shanker, K., & Yang, G. (1990). Using tree adjoining grammars in the systemic framework. In Proceedings of the 5th International Workshop on Natural Language Generation, Dawson, PA.

McDonald, D. (1983). Dependency directed control: Its implications for natural language generation. In N. Cercone (Ed.), Computational Linguistics (pp. 111-130). Pergamon Press.

McDonald, D. & Pustejovsky, J. D. (1985). TAGs as a formalism for generation. In Proceedings of the 23rd Annual Meeting, Chicago, IL. Association for Computational Linguistics.

McKeown, K. R. & Elhadad, M. (1991). A contrastive evaluation of functional unification grammar for surface language generation: A case study in choice of connectives. In C. Paris, W. Swartout, & W. Mann (Eds.), Natural Language Generation in Artificial Intelligence and Linguistics (pp. 351-396). Boston/Dordrecht/London: Kluwer Academic Publishers.

Meteer, M. (1991). Bridging the 'generation gap'. Computational Intelligence, 7(4).

Meteer et al., M. (1987). Mumble-86: Design and implementation. COINS Tech Report 87-87a, University of Massachusetts.

Patten, T. (1988). Systemic Text Generation as Problem Solving. Cambridge: Cambridge University Press.

Rubinoff, R. (1986). Adapting Mumble: Experience with natural language generation. In Proceedings of the 1986 National Conference on Artificial Intelligence (pp. 1063-1068), Philadelphia, PA. AAAI.

Schabes, Y., Abeille, A., & Joshi, A. (1988). Parsing strategies with 'lexicalized' grammars: Application to tree adjoining grammars. In Proceedings of COLING '88, Budapest, Hungary.

Shieber, S. M. & Schabes, Y. (1991). Generation and synchronous tree-adjoining grammars. Computational Intelligence, 7(4).

Shieber, S. M., Van Noord, G., Pereira, F., & Moore, R. C. (1990). Semantic-head-driven generation. Computational Linguistics, 16(1).

Woolf, B. & McDonald, D. (1984).
Context-dependent transitions in tutoring discourse. In Proceedings of the 1984 National Conference on Artificial Intelligence, Washington, D.C. AAAI.

Yang, G. (1991). An Integrated Approach to Generation Using Systemic Grammars and Tree Adjoining Grammars. PhD thesis, University of Delaware.

Yang, G., McCoy, K. F., & Vijay-Shanker, K. (1991). From functional specification to syntactic structures: Systemic grammar and tree adjoining grammar. Computational Intelligence, 7(4).
INTEGRATING MULTIPLE KNOWLEDGE SOURCES FOR DETECTION AND CORRECTION OF REPAIRS IN HUMAN-COMPUTER DIALOG*

John Bear, John Dowding, Elizabeth Shriberg†
SRI International
Menlo Park, California 94025

ABSTRACT

We have analyzed 607 sentences of spontaneous human-computer speech data containing repairs, drawn from a total corpus of 10,718 sentences. We present here criteria and techniques for automatically detecting the presence of a repair, its location, and making the appropriate correction. The criteria involve integration of knowledge from several sources: pattern matching, syntactic and semantic analysis, and acoustics.

INTRODUCTION

Spontaneous spoken language often includes speech that is not intended by the speaker to be part of the content of the utterance. This speech must be detected and deleted in order to correctly identify the intended meaning. The broad class of disfluencies encompasses a number of phenomena, including word fragments, interjections, filled pauses, restarts, and repairs. We are analyzing the repairs in a large subset (over ten thousand sentences) of spontaneous speech data collected for the DARPA Spoken Language Program.1 We have categorized these disfluencies as to type and frequency, and are investigating methods for their automatic detection and correction. Here we report promising results on detection and correction of repairs by combining pattern matching, syntactic and semantic analysis, and acoustics. This paper extends work reported in an earlier paper

*This research was supported by the Defense Advanced Research Projects Agency under Contract ONR N00014-90-C-0085 with the Office of Naval Research. It was also supported by a Grant, NSF IRI-8905249, from the National Science Foundation.
The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency of the U.S. Government, or of the National Science Foundation.

†Elizabeth Shriberg is also affiliated with the Department of Psychology at the University of California at Berkeley.

1DARPA is the Defense Advanced Research Projects Agency of the United States Government.

(Shriberg et al., 1992a). The problem of disfluent speech for language understanding systems has been noted but has received limited attention. Hindle (1983) attempts to delimit and correct repairs in spontaneous human-human dialog, based on transcripts containing an "edit signal," or external and reliable marker at the "expunction point," or point of interruption. Carbonell and Hayes (1983) briefly describe recovery strategies for broken-off and restarted utterances in textual input. Ward (1991) addresses repairs in spontaneous speech, but does not attempt to identify or correct them. Our approach is most similar to that of Hindle. It differs, however, in that we make no assumption about the existence of an explicit edit signal. As a reliable edit signal has yet to be found, we take it as our problem to find the site of the repair automatically.

It is the case, however, that cues to repair exist over a range of syllables. Research in speech production has shown that repairs tend to be marked prosodically (Levelt and Cutler, 1983), and there is perceptual evidence from work using lowpass-filtered speech that human listeners can detect the occurrence of a repair in the absence of segmental information (Lickley, 1991).

In the sections that follow, we describe in detail our corpus of spontaneous speech data and present an analysis of the repair phenomena observed. In addition, we describe ways in which pattern matching, syntactic and semantic analysis, and acoustic analysis can be helpful in detecting and correcting these repairs. We use pattern matching to determine an initial set of possible repairs; we then apply information from syntactic, semantic, and acoustic analyses to distinguish actual repairs from false positives.

THE CORPUS

The data we are analyzing were collected as part of DARPA's Spoken Language Systems project. The corpus contains digitized waveforms and transcriptions of a large number of sessions in which subjects made air travel plans using a computer.
In addition, we describe ways in which pattern matching, syntactic and semantic analy- sis, and acoustic analysis can be helpful in detect- ing and correcting these repairs. We use pattern matching to determine an initial set of possible repairs; we then apply information from syntac- tic, semantic, and acoustic analyses to distinguish actual repairs from false positives. THE CORPUS The data we are analyzing were collected as part of DARPA's Spoken Language Systems project. The corpus contains digitized waveforms and transcriptions of a large number of sessions in which subjects made air travel plans using a com- puter. In the majority of sessions, data were col- lected in a Wizard of Oz setting, in which subjects were led to believe they were talking to a com- puter, but in which a human actually interpreted and responded to queries. In a small portion of the sessions, data were collected using SRI's Spo- ken Language System (Shriberg et al., 1992b), in which no human intervention was involved. Rel- evant to the current paper is the fact that al- though the speech was spontaneous, it was some- what planned (subjects pressed a button to begin speaking to the system) and the transcribers who produced lexical transcriptions of the sessions were instructed to mark words they inferred were ver- bally deleted by the speaker with special symbols. For further description of the corpus, see MAD- COW (1992). NOTATION In order to classify these repairs, and to facil- itate communication among the authors, it was necessary to develop a notational system that would: (1) be relatively simple, (2) capture suf- ficient detail, and (3) describe the vast majority of repairs observed. Table 1 shows examples of the notation used, which is described fully in Bear et al. (1992). The basic aspects of the notation include marking the interruption point, the extent of the repair, and relevant correspondences between words in the region. 
To mark the site of a repair, corresponding to Hindle's "edit signal" (Hindle, 1983), we use a vertical bar (|). To express the notion that words on one side of the repair correspond to words on the other, we use a combination of a letter plus a numerical index. The letter M indicates that two words match exactly. R indicates that the second of the two words was intended by the speaker to replace the first. The two words must be similar: either of the same lexical category, or morphological variants of the same base form (including contraction pairs like "I/I'd"). Any other word within a repair is notated with X. A hyphen affixed to a symbol indicates a word fragment. In addition, certain cue words, such as "sorry" or "oops" (marked with CR), as well as filled pauses (CF), are also labeled if they occur immediately before the site of a repair.

    I want fl- flights to boston.      M1- | M1
    what what are the fares            M1 | M1
    show me flights daily flights      M1 | X M1
    I want a flight one way flight     M1 | X X M1
    I want to leave depart before ...  R1 | R1
    what are what are the fares        M1 M2 | M1 M2
    ... fly to boston from boston      R1 M1 | R1 M1
    ... fly from boston from denver    M1 R1 | M1 R1
    what are are there any flights     X X |

Table 1: Examples of Notation

DISTRIBUTION

Of the 10,000 sentences in our corpus, 607 contained repairs. We found that 10% of sentences longer than nine words contained repairs. In contrast, Levelt (1983) reports a repair rate of 34% for human-human dialog. While the rates in this corpus are lower, they are still high enough to be significant. And, as system developers move toward more closely modeling human-human interaction, the percentage is likely to rise.

Although only 607 sentences contained deletions, some sentences contained more than one, for a total of 646 deletions. Table 2 gives the breakdown of deletions by length, where length is defined as the number of consecutive deleted words or word fragments.
Most of the deletions were fairly short; deletions of one or two words accounted for 82% of the data.

    Deletion Length   Occurrences   Percentage
    1                 376           59%
    2                 154           24%
    3                  52            8%
    4                  25            4%
    5                  23            4%
    6+                 16            3%

Table 2: Distribution of Repairs by Length

We categorized the length 1 and length 2 repairs according to their transcriptions. The results are summarized in Table 3. For simplicity, in this table we have counted fragments (which always occurred as the second deleted word) as whole words. The overall rate of fragments for the length 2 repairs was 34%.

    Type           Pattern                   Freq.
    Length 1 Repairs
    Fragments      M1-, R1-, X-              61%
    Repeats        M1 | M1                   16%
    Insertions     M1 | X1 ... Xi M1          7%
    Replacement    R1 | R1                    9%
    Other          X | X                      5%
    Length 2 Repairs
    Repeats        M1 M2 | M1 M2             28%
    Replace 2nd    M1 R1 | M1 R1             27%
    Insertions     M1 M2 | M1 X1 ... Xi M2   19%
    Replace 1st    R1 M1 | R1 M1             10%
    Other          ... | ...                 17%

Table 3: Distribution of Repairs by Type

A major repair type involved matching strings of identical words. More than half (339 out of 436) of the nontrivial repairs (more editing necessary than deleting fragments and filled pauses) in the corpus were of this type. Table 4 shows the distributions of these repairs with respect to two parameters: the length in words of the matched string, and the number of words between the two matched strings. Numbers in parentheses indicate the number of occurrences, and probabilities represent the likelihood that the phrase was actually a repair and not a false positive.

    Match     Fill Length
    Length    0          1          2          3
    1         .82 (39)   .74 (65)   .69 (43)   .28 (39)
    2         1.0 (10)   .83 (6)    .73 (11)   .00 (1)
    3         1.0 (4)    .80 (5)    1.0 (2)    --
    4         1.0 (2)    1.0 (1)    --         --

    -- indicates no observations

Table 4: Fill Length vs. Match Length

Two trends emerge from these data. First, the longer the matched string, the more likely the phrase was a repair. Second, the more words there were intervening between the matched strings, the less likely the phrase was a repair.
SIMPLE PATTERN MATCHING

We analyzed the subset of 607 sentences containing repairs and concluded that certain simple pattern-matching techniques could successfully detect a number of them. The pattern-matching component reported on here looks for identical sequences of words, and simple syntactic anomalies, such as "a the" or "to from."

Of the 406 sentences containing nontrivial repairs, the program successfully found 309. Of these it successfully corrected 177. There were 97 sentences that contained repairs which it did not find. In addition, out of the 10,517 sentence corpus (10,718 - 201 trivial), it incorrectly hypothesized that an additional 191 contained repairs. Thus of 10,517 sentences of varying lengths, it pulled out 500 as possibly containing a repair and missed 97 sentences actually containing a repair. Of the 500 that it proposed as containing a repair, 62% actually did and 38% did not. Of the 62% that had repairs, it made the appropriate correction for 57%.

These numbers show that although pattern matching is useful in identifying possible repairs, it is less successful at making appropriate corrections. This problem stems largely from the overlap of related patterns. Many sentences contain a subsequence of words that match not one but several patterns. For example, the phrase "FLIGHT <word> FLIGHT" matches three different patterns:

    show the flight time flight date       M1 R1 | M1 R1
    show the flight earliest flight        M1 | X M1
    show the delta flight united flight    R1 M1 | R1 M1

Each of these sentences is a false positive for the other two patterns. Despite these problems of overlap, pattern matching is useful in reducing the set of candidate sentences to be processed for repairs. Rather than applying detailed and possibly time-intensive analysis techniques to 10,000 sentences, we can increase efficiency by limiting ourselves to the 500 sentences selected by the pattern matcher, which has (at least on one measure) a 75% recall rate.
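A minimal sketch of the matched-string and anomaly heuristics just described might look as follows; the word lists, window sizes, and span convention are our own illustrative choices, not the actual implementation:

```python
# Candidate repair detection by simple pattern matching: identical word
# sequences separated by a few words, plus anomalous bigrams such as
# "a the" or "to from". Thresholds and word lists are illustrative.

ANOMALOUS_BIGRAMS = {("a", "the"), ("the", "a"), ("to", "from"), ("from", "to")}

def find_candidate_repairs(words, max_match=4, max_fill=3):
    """Return (start, interruption_point) spans of possible repairs."""
    candidates = []
    n = len(words)
    for m in range(max_match, 0, -1):          # prefer longer matches
        for i in range(n - m):
            for fill in range(max_fill + 1):   # words between the two copies
                j = i + m + fill
                if j + m <= n and words[i:i+m] == words[j:j+m]:
                    if (i, j) not in candidates:
                        candidates.append((i, j))
    for i in range(n - 1):                     # simple syntactic anomalies
        if (words[i], words[i + 1]) in ANOMALOUS_BIGRAMS:
            if (i, i + 1) not in candidates:
                candidates.append((i, i + 1))
    return candidates

print(find_candidate_repairs("show me flights daily flights to boston".split()))
# → [(2, 4)]
```

Here (2, 4) marks the region from the first copy of the matched word "flights" up to the interruption point; as discussed above, deciding which words in such a region to delete is exactly where pattern matching alone falls short.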
The repair sites hypothesized by the pattern matcher constitute useful input for further processing based on other sources of information.

NATURAL LANGUAGE CONSTRAINTS

Here we describe two sets of experiments to measure the effectiveness of a natural language processing system in distinguishing repairs from false positives. One approach is based on parsing of whole sentences; the other is based on parsing localized word sequences identified as potential repairs. Both of these experiments rely on the pattern matcher to suggest potential repairs.

The syntactic and semantic components of the Gemini natural language processing system are used for both of these experiments. Gemini is an extensive reimplementation of the Core Language Engine (Alshawi et al., 1988). It includes modular syntactic and semantic components, integrated into an efficient all-paths bottom-up parser (Moore and Dowding, 1991). Gemini was trained on a 2,200-sentence subset of the full 10,718-sentence corpus. Since this subset excluded the unanswerable sentences, Gemini's coverage on the full corpus is only an estimated 70% for syntax, and 50% for semantics.²

Global Syntax and Semantics

In the first experiment, based on parsing complete sentences, Gemini was tested on a subset of the data that the pattern matcher returned as likely to contain a repair. We excluded all sentences that contained fragments, resulting in a

²Gemini's syntactic coverage of the 2,200-sentence dataset it was trained on (the set of annotated and answerable MADCOW queries) is approximately 91%, while its semantic coverage is approximately 77%. On a recent fair test, Gemini's syntactic coverage was 87% and semantic coverage was 71%.
dataset of 335 sentences, of which 179 contained repairs and 176 contained false positives. The approach was as follows: for each sentence, parsing was attempted. If parsing succeeded, the sentence was marked as a false positive. If parsing did not succeed, then pattern matching was used to detect possible repairs, and the edits associated with the repairs were made. Parsing was then reattempted. If parsing succeeded at this point, the sentence was marked as a repair. Otherwise, it was marked as no opinion.

Table 5 shows the results of these experiments. We ran them two ways: once using syntactic constraints alone and again using both syntactic and semantic constraints.

    Syntax Only
                      Marked as Repair   Marked as False Positive
    Repairs           68 (96%)            56 (30%)
    False Positives    3 (4%)            131 (70%)

    Syntax and Semantics
                      Marked as Repair   Marked as False Positive
    Repairs           64 (85%)            23 (20%)
    False Positives   11 (15%)            90 (80%)

Table 5: Syntax and Semantics Results

As can be seen, Gemini is quite accurate at detecting a repair, although somewhat less accurate at detecting a false positive. Furthermore, in cases where Gemini detected a repair, it produced the intended correction in 62 out of 68 cases for syntax alone, and in 60 out of 64 cases using combined syntax and semantics. In both cases, a large number of sentences (29% for syntax, 50% for semantics) received a no opinion evaluation. The no opinion cases were evenly split between repairs and false positives in both tests.

The main points to be noted from Table 5 are that with syntax alone, the system is quite accurate in detecting repairs, and with syntax and semantics working together, it is accurate at detecting false positives. However, since the coverage of syntax and semantics will always be lower than the coverage of syntax alone, we cannot compare these rates directly.
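The parse / edit / re-parse procedure can be sketched as follows, assuming only a boolean parse(sentence) interface; Gemini itself is not reproduced here:

```python
# Sketch of the whole-sentence test: parse, and if parsing fails, apply
# the pattern matcher's candidate edits and re-parse. The parser and
# edit generator are stand-ins, not the actual system components.

def classify(sentence, candidate_edits, parse):
    """Label a pattern-matcher hit as 'repair', 'false positive',
    or 'no opinion' by the parse / edit / re-parse method."""
    if parse(sentence):
        return "false positive", sentence
    for edited in candidate_edits(sentence):
        if parse(edited):
            return "repair", edited   # the edit that parses is the correction
    return "no opinion", sentence

# Toy demonstration with a stand-in "parser" that accepts a fixed set.
grammatical = {"show me the flights", "show me the fares"}
hits = lambda s: [s.replace("flights flights", "flights"),
                  s.replace("the flights the fares", "the fares")]
print(classify("show me the flights the fares", hits, grammatical.__contains__))
# → ('repair', 'show me the fares')
```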
Since multiple repairs and false positives can occur in the same sentence, the pattern matching process is constrained to prefer fewer repairs to more repairs, and shorter repairs to longer repairs. This is done to favor an analysis that deletes the fewest words from a sentence. It is often the case that more drastic repairs would result in a syntactically and semantically well-formed sentence, but not the sentence that the speaker intended. For instance, the sentence "show me <flights> daily flights to boston" could be repaired by deleting the words "flights daily," and would then yield a grammatical sentence, but in this case the speaker intended to delete only "flights."

Local Syntax and Semantics

In the second experiment we attempted to improve robustness by applying the parser to small substrings of the sentence. When analyzing long word strings, the parser is more likely to fail due to factors unrelated to the repair. For this experiment, the parser was using both syntax and semantics.

The phrases used for this experiment were the phrases found by the pattern matcher to contain matching strings of length one, with up to three intervening words. This set was selected because, as can be seen from Table 4, it constitutes a large subset of the data (186 such phrases). Furthermore, pattern matching alone contains insufficient information for reliably correcting these sentences.

The relevant substring is taken to be the phrase constituting the matched string plus intervening material plus the immediately preceding word. So far we have used only phrases where the grammatical category of the matched word was either noun or name (proper noun). For this test we specified a list of possible phrase types (NP, VP, PP, N, Name) that count as a successful parse.
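The local test can be sketched as follows, assuming a hypothetical parse_as(string, category) -> bool chart-parser interface over the phrase types just listed:

```python
# Sketch of the local test over a matched-word substring. The parser
# interface and the edit-preference ordering are illustrative stand-ins.

ACCEPTABLE = ("NP", "VP", "PP", "N", "Name")

def local_repair_test(substring, edits, parse_as):
    """Classify a substring (preceding word + matched string + intervening
    material) by parsing its edited variants as acceptable phrase types."""
    def parses(s):
        return any(parse_as(s, cat) for cat in ACCEPTABLE)

    if parses(substring):
        return "false positive", substring
    for edited in edits:            # edits are pre-ordered by preference
        if parses(edited):
            return "repair", edited
    return "no opinion", substring

# Toy demonstration: "the flight earliest flight" (an M1 | X M1 hit).
ok = {("the earliest flight", "NP")}
result = local_repair_test("the flight earliest flight",
                           ["the earliest flight", "the flight flight"],
                           lambda s, c: (s, c) in ok)
print(result)  # → ('repair', 'the earliest flight')
```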
We intend to run other tests with other grammatical categories, but expect that these other categories could need a different heuristic for deciding which substring to parse, as well as a different set of acceptable phrase types.

Four candidate strings were derived from the original by making the three different possible edits, and also including the original string unchanged. Each of these strings was analyzed by the parser. When the original sequence did not parse, but one of the edits resulted in a sequence that parsed, the original sequence was very unlikely to be a false positive (right for 34 of 35 cases). Furthermore, the edit that parsed was chosen to be the repaired string. When more than one of the edited strings parsed, the edit was chosen by preferring them in the following order: (1) M1 | X M1, (2) R1 M1 | R1 M1, (3) M1 R1 | M1 R1. Of the 37 cases of repairs, the correct edit was found in 27 cases, while in 7 more an incorrect edit was found; in 3 cases no opinion was registered. While these numbers are quite promising, they may improve even more when information from syntax and semantics is combined with that from acoustics.

ACOUSTICS

A third source of information that can be helpful in detecting repairs is acoustics. In this section we describe first how prosodic information can help in distinguishing repairs from false positives for patterns involving matched words. Second, we report promising results from a preliminary study of cue words such as "no" and "well." And third, we discuss how acoustic information can aid in the detection of word fragments, which occur frequently and which pose difficulty for automatic speech recognition systems.

Acoustic features reported in the following analyses were obtained by listening to the sound files associated with each transcription, and by inspecting waveforms, pitch tracks, and spectrograms produced by the Entropic Waves software package.
Simple Patterns

While acoustics alone cannot tackle the problem of locating repairs, since any prosodic patterns found in repairs are likely to be found in fluent speech, acoustic information can be quite effective when combined with other sources of information, in particular with pattern matching.

In studying the ways in which acoustics might help distinguish repairs from false positives, we began by examining two patterns conducive to acoustic measurement and comparison. First, we focused on patterns in which there was only one matched word, and in which the two occurrences of that word were either adjacent or separated by only one word. Matched words allow for comparison of word duration; proximity helps avoid variability due to global intonation contours not associated with the patterns themselves. We present here analyses for the M1 | M1 ("flights for <one> one person") and M1 | X M1 ("<flight> earliest flight") repairs, and their associated false positives ("u s air five one one," "a flight on flight number five one one," respectively).

In examining the M1 | M1 repair pattern, we found that the strongest distinguishing cue between the repairs (N = 20) and the false positives (N = 20) was the interval between the offset of the first word and the onset of the second. False positives had a mean gap of 42 msec (s.d. = 55.8), as opposed to 380 msec (s.d. = 200.4) for repairs. A second difference found between the two groups was that, in the case of repairs, there was a statistically reliable reduction in duration for the second occurrence of M1, with a mean difference of 53.4 msec. However, because false positives showed no reliable difference for word duration, this was a much less useful predictor than gap duration. F0 of the matched words was not helpful in separating repairs from false positives; both groups showed a highly significant correlation for, and no significant difference between, the mean F0 of the matched words.
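As a toy illustration, the gap cue alone already separates the reported means cleanly; the 200 msec decision threshold below is our own assumption, not a value from the study:

```python
# Illustrative gap-duration cue for the M1 | M1 pattern. The reported
# means are 42 msec (false positives) vs. 380 msec (repairs); the
# threshold here is an assumed midpoint, chosen only for illustration.

GAP_THRESHOLD_MS = 200

def looks_like_repair(gap_ms):
    """Guess whether a matched-word pair is a repair from the interval
    between the offset of the first word and the onset of the second."""
    return gap_ms >= GAP_THRESHOLD_MS

print(looks_like_repair(42), looks_like_repair(380))  # → False True
```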
A different set of features was found to be useful in distinguishing repairs from false positives for the M1 | X M1 pattern. A set of 12 repairs and 24 false positives was examined; the set of false positives for this analysis included only fluent cases (i.e., it did not include other types of repairs matching the pattern). Despite the small data set, some suggestive trends emerge. For example, for cases in which there was a pause (200 msec or greater) on only one side of the inserted word, the pause was never after the insertion (X) for the repairs, and rarely before the X in the false positives. A second distinguishing characteristic was the peak F0 value of X. For repairs, the inserted word was nearly always higher in F0 than the preceding M1; for false positives, this increase in F0 was rarely observed. Table 6 shows the results of combining the acoustic constraints just described. As can be seen, such features in combination can be quite helpful in distinguishing repairs from false positives of this pattern. Future work will investigate the use of prosody in distinguishing the M1 | X M1 repair not only from false positives, but also from other possible repairs having this pattern, i.e., M1 R1 | M1 R1 and R1 M1 | R1 M1.

                                          Repairs   False Positives
    Pause after X (only) and
    F0 of X less than F0 of 1st M1        .00       .58
    Pause before X (only) and
    F0 of X greater than F0 of 1st M1     .92       .00

Table 6: Combining Acoustic Characteristics of M1 | X M1 Repairs

Cue Words

A second way in which acoustics can be helpful given the output of a pattern matcher is in determining whether or not potential cue words such as "no" are used as an editing expression (Hockett, 1967), as in "...flights <between> <boston> <and> <dallas> <no> between oakland and boston." False positives for these cases are instances in which the cue word functions in some other sense ("I want to leave boston no later than one p m.").
Hirschberg and Litman (1987) have shown that cue words that function differently can be distinguished perceptually by listeners on the basis of prosody. Thus, we sought to determine whether acoustic analysis could help in deciding, when such words were present, whether or not they marked the interruption point of a repair.

In a preliminary study of the cue words "no" and "well," we compared 9 examples of these words at the site of a repair to 15 examples of the same words occurring in fluent speech. We found that these groups were quite distinguishable on the basis of simple prosodic features. Table 7 shows the percentage of repairs versus false positives characterized by a clear rise or fall in F0 (greater than 15 Hz), lexical stress (determined perceptually), and continuity of the speech immediately preceding and following the editing expression ("continuous" means there was no silent pause on either side of the cue word). As can be seen, at least for this limited data set, cue words marking repairs were quite distinguishable from those same words found in fluent strings on the basis of simple prosodic features.

                      F0 rise   F0 fall   Lexical stress   Cont. speech
    Repairs           .00       1.00      .00              .00
    False Positives   .87       .00       .87              .73

Table 7: Acoustic Characteristics of Cue Words

[Figure 1: A glottalized fragment. Spectrogram of the utterance "I would like to <fra-> fly."]

Fragments

A third way in which acoustic knowledge can assist in detecting and correcting repairs is in the recognition of word fragments. As shown earlier, fragments are exceedingly common; they occurred in 366 of our 607 repairs.
Fragments pose difficulty for state-of-the-art recognition systems because most recognizers are constrained to produce strings of actual words, rather than allowing partial words as output. Because so many repairs involve fragments, if fragments are not represented in the recognizer output, then information relevant to the processing of repairs is lost.

We found that often when a fragment had sufficient acoustic energy, one of two recognition errors occurred. Either the fragment was misrecognized as a complete word, or it caused a recognition error on a neighboring word. Therefore, if recognizers were able to flag potential word fragments, this information could aid subsequent processing by indicating the higher likelihood that words in the region might require deletion. Fragments can also be useful in the detection of repairs requiring deletion of more than just the fragment. In approximately 40% of the sentences containing fragments in our data, the fragment occurred at the right edge of a longer repair. In a portion of these cases, for example,

    "leaving at <seven> <fif-> eight thirty,"

the presence of the fragment is an especially important cue because there is nothing (e.g., no matched words) to cause the pattern matcher to hypothesize the presence of a repair.

We studied 50 fragments drawn at random from our total corpus of 366. The most reliable acoustic cue over the set was the presence of a silence following the fragment. In 49 out of 50 cases, there was a silence of greater than 60 msec; the average silence was 282 msec. Of the 50 fragments, 25 ended in a vowel, 13 contained a vowel and ended in a consonant, and 12 contained no vocalic portion.

It is likely that recognition of fragments of the first type, in which there is abrupt cessation of speech during a vowel, can be aided by looking for heavy glottalization at the end of the fragment.
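A crude fragment-flagging heuristic based on the cues just reported (a following silence, plus glottalization for vowel-final candidates) might look like this; the feature-extraction interface is hypothetical:

```python
# Sketch of a fragment cue: 49 of 50 fragments were followed by a
# silence of more than 60 msec, and abrupt cessation during a vowel
# tends to leave glottalization. How these features would be extracted
# from the signal is assumed, not specified here.

MIN_FOLLOWING_SILENCE_MS = 60

def fragment_cue(following_silence_ms, glottalized, ends_in_vowel):
    """Flag a word hypothesis as a possible fragment."""
    if following_silence_ms <= MIN_FOLLOWING_SILENCE_MS:
        return False
    if ends_in_vowel:
        # Vowel-final fragments are expected to show glottalization.
        return glottalized
    return True

print(fragment_cue(282, True, True))  # → True
print(fragment_cue(30, True, True))   # → False
```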
We coded fragments as glottalized if they showed irregular pitch pulses in their associated waveform, spectrogram, and pitch tracks. We found glottalization in 24 of the 25 vowel-final fragments in our data. An example of a glottalized fragment is shown in Figure 1.

Although it is true that glottalization occurs in fluent speech as well, it normally appears on unstressed, low-F0 portions of a signal. The 24 glottalized fragments we examined, however, were not at the bottom of the speaker's range, and most had considerable energy. Thus, when combined with the feature of a following silence of at least 60 msec, glottalization on syllables with sufficient energy and not at the bottom of the speaker's range may prove a useful feature in recognizing fragments.

CONCLUSION

In summary, disfluencies occur at high enough rates in human-computer dialog to merit consideration. In contrast to earlier approaches, we have made it our goal to detect and correct repairs automatically, without assuming an explicit edit signal. Without such an edit signal, however, repairs are easily confused both with false positives and with other repairs. Preliminary results show that pattern matching is effective at detecting repairs without excessive overgeneration. Our syntactic/semantic approaches are quite accurate at detecting repairs and correcting them. Acoustics is a third source of information that can be tapped to provide evidence about the existence of a repair.

While none of these knowledge sources by itself is sufficient, we propose that by combining them, and possibly others, we can greatly enhance our ability to detect and correct repairs. As a next step, we intend to explore additional aspects of the syntax and semantics of repairs, analyze further acoustic patterns, and pursue the question of how best to integrate information from these multiple knowledge sources.
ACKNOWLEDGMENTS

We would like to thank Patti Price for her helpful comments on earlier drafts, as well as for her participation in the development of the notational system used. We would also like to thank Robin Lickley for his feedback on the acoustics section, Elizabeth Wade for assistance with the statistics, and Mark Gawron for work on the Gemini grammar.

REFERENCES

1. Alshawi, H., Carter, D., van Eijck, J., Moore, R. C., Moran, D. B., Pereira, F., Pulman, S., and A. Smith (1988) Research Programme in Natural Language Processing: July 1988 Annual Report, SRI International Tech Note, Cambridge, England.
2. Bear, J., Dowding, J., Price, P., and E. E. Shriberg (1992) "Labeling Conventions for Notating Grammatical Repairs in Speech," unpublished manuscript, to appear as an SRI Tech Note.
3. Hirschberg, J. and D. Litman (1987) "Now Let's Talk About Now: Identifying Cue Phrases Intonationally," Proceedings of the ACL, pp. 163-171.
4. Carbonell, J. and P. Hayes (1983) "Recovery Strategies for Parsing Extragrammatical Language," American Journal of Computational Linguistics, Vol. 9, Numbers 3-4, pp. 123-146.
5. Hindle, D. (1983) "Deterministic Parsing of Syntactic Non-fluencies," Proceedings of the ACL, pp. 123-128.
6. Hockett, C. (1967) "Where the Tongue Slips, There Slip I," in To Honor Roman Jakobson: Vol. 2, The Hague: Mouton.
7. Levelt, W. (1983) "Monitoring and self-repair in speech," Cognition, Vol. 14, pp. 41-104.
8. Levelt, W., and A. Cutler (1983) "Prosodic Marking in Speech Repair," Journal of Semantics, Vol. 2, pp. 205-217.
9. Lickley, R., R. Shillcock, and E. Bard (1991) "Processing Disfluent Speech: How and when are disfluencies found?" Proceedings of the Second European Conference on Speech Communication and Technology, Vol. 3, pp. 1499-1502.
10. MADCOW (1992) "Multi-site Data Collection for a Spoken Language Corpus," Proceedings of the DARPA Speech and Natural Language Workshop, February 23-26, 1992.
11. Moore, R. and J.
Dowding (1991) "Efficient Bottom-up Parsing," Proceedings of the DARPA Speech and Natural Language Workshop, February 19-22, 1991, pp. 200-203.
12. Shriberg, E., Bear, J., and Dowding, J. (1992a) "Automatic Detection and Correction of Repairs in Human-Computer Dialog," Proceedings of the DARPA Speech and Natural Language Workshop, February 23-26, 1992.
13. Shriberg, E., Wade, E., and P. Price (1992b) "Human-Machine Problem Solving Using Spoken Language Systems (SLS): Factors Affecting Performance and User Satisfaction," Proceedings of the DARPA Speech and Natural Language Workshop, February 23-26, 1992.
14. Ward, W. (1991) "Evaluation of the CMU ATIS System," Proceedings of the DARPA Speech and Natural Language Workshop, February 19-22, 1991, pp. 101-105.
CONVERSATIONAL IMPLICATURES IN INDIRECT REPLIES

Nancy Green    Sandra Carberry

Department of Computer and Information Sciences
University of Delaware
Newark, Delaware 19716, USA
email: [email protected], carberry@cis.udel.edu

Abstract¹

In this paper we present algorithms for the interpretation and generation of a kind of particularized conversational implicature occurring in certain indirect replies. Our algorithms make use of discourse expectations, discourse plans, and discourse relations. The algorithms calculate implicatures of discourse units of one or more sentences. Our approach has several advantages. First, by taking discourse relations into account, it can capture a variety of implicatures not handled before. Second, by treating implicatures of discourse units which may consist of more than one sentence, it avoids the limitations of a sentence-at-a-time approach. Third, by making use of properties of discourse which have been used in models of other discourse phenomena, our approach can be integrated with those models. Also, our model permits the same information to be used both in interpretation and generation.

1 Introduction

In this paper we present algorithms for the interpretation and generation of a certain kind of conversational implicature occurring in the following type of conversational exchange. One participant (Q) makes an illocutionary-level request² to be informed if p; the addressee (A), whose reply may consist of more than one sentence, conversationally implicates one of these replies: p, ¬p, that there is support for p, or that there is support for ¬p. For example, in (1), assuming Q's utterance has been interpreted as a request to be informed if A went shopping, and given certain mutual beliefs (e.g., that A's car breaking down would normally be sufficient to prevent A from going shopping, and

¹We wish to thank Kathy McCoy for her comments on this paper.
²I.e., using Austin's (Austin, 1962) distinction between locutionary and illocutionary force, Q's utterance is intended to function as a request (although it need not have the grammatical form of a question).

that A's reply is coherent and cooperative), A's reply is intended to convey, in part, a 'no'.

(1) Q: Did you go shopping?
    A: a. My car's not running.
       b. The timing belt broke.

Such indirect replies satisfy the conditions proposed by Grice and others (Grice, 1975; Hirschberg, 1985; Sadock, 1978) for being classified as particularized conversational implicatures. First, A's reply does not entail (in virtue of its conventional meaning) that A did not go shopping. Second, the putative implicature can be cancelled; for example, it can be denied without the result sounding inconsistent, as can be seen by considering the addition of (2) to the end of A's reply in (1).

(2) A: So I took the bus to the mall.

Third, it is reinforceable; A's reply in (1) could have been preceded by an explicit "no" without destroying coherency or sounding redundant. Fourth, the putative implicature is nondetachable; the same reply would have been conveyed by an alternative realization of (1a) and (1b) (assuming that the alternative did not convey a Manner-based implicature). Fifth, Q and A must mutually believe that, given the assumption that A's reply is cooperative, and given certain shared background information, Q can and will infer that by A's reply, A meant 'no'. This paper presents algorithms for calculating such an inference from an indirect response and for generating an indirect response intended to carry such an inference.

2 Solution

2.1 Overview

Our algorithms are based upon three notions from discourse research: discourse expectations, discourse plans, and implicit relational propositions in discourse.
At certain points in a coherent conversation, the participants share certain expectations (Reichman, 1984; Carberry, 1990) about what kind of utterance is appropriate. In the type of exchange we are studying, at the point after Q's contribution, the participants share the beliefs that Q has requested to be informed if p and that the request was appropriate; hence, they share the discourse expectation that for A to be cooperative, he must now say as much as he can truthfully say in regard to the truth of p. (For convenience, we shall refer to this expectation as Answer-YNQ(p).)

A discourse plan operator³ (Lambert & Carberry, 1991) is a representation of a normal or conventional way of accomplishing certain communicative goals. Alternatively, a discourse plan operator could be considered as a defeasible rule expressing the typical (intended) effect(s) of a sequence of illocutionary acts in a context in which certain applicability conditions hold. These discourse plan operators are mutually known by the conversational participants, and can be used by a speaker to construct a plan for achieving his communicative goals. We provide a set of discourse plan operators which can be used by A as part of a plan for fulfilling Answer-YNQ(p).

Mann and Thompson (Mann & Thompson, 1983; Mann & Thompson, 1987) have described how the structure of a written text can be analyzed in terms of certain implicit relational propositions that may plausibly be attributed to the writer to preserve the assumption of textual coherency.⁴ The role of discourse relations in our approach is motivated by the observation that direct replies may occur as part of a discourse unit conveying a relational proposition. For example, in (3), (b) is provided as the (most salient) obstacle to the action (going shopping) denied by (a);

(3) Q: Did you go shopping?
    A: a. No,
       b. my car's not running.

in (4), as an elaboration of the action (going shopping) conveyed by (a);

(4) Q: Did you go shopping?
    A: a. Yes,
       b. I bought some shoes.

and in (5), as a concession for failing to do the action (washing the dishes) denied by (a).

(5) Q: Did you wash the dishes?
    A: a. No,
       b. (but) I scraped them.
in (4), as an elaboration of the action (going shop- ping) conveyed by (a); (4) Q: Did you go shopping? A:a. Yes, b. I bought some shoes. and in (5), as a concession for failing to do the action (washing the dishes) denied by (a). (S) Q: Did you wash the dishes? A:a. No, b. (but) I scraped them. 3in Pollack's terminology, a recipe-for-action (Pollack, 1988; Grosz & Sidner, 1988)~ 4Although they did not study dialogue, they suggested that it can be analyzed similarly. Also note that the rela- tional predicates which we define are similar but not neces- sarily identical to theirs. Note that given appropriate context, the (b) replies in (3) through (5)would be sufficient to conversa- tionally implicate the corresponding direct replies. This, we claim, is by virtue of the recognition of the relational proposition that would be conveyed by use of the direct reply and the (b) sentences. Our strategy, then, is to generate/interpret A's contribution using a set of discourse plan oper- ators having the following properties: (1) if the ap- plicability conditions hold, then executing the body would generate a sequence of utterances intended to implicitly convey a relational proposition R(p, q); (2) the applicability conditions include the condi- tion that R(p, q) is plausible in the discourse con- text; (3) one of the goals is that Q believe that p, where p is the content of the direct reply; and (4) the step of the body which realizes the direct re- ply can be omitted under certain conditions. Thus, whenever the direct reply is omitted, it is neverthe- less implicated as long as the intended relational proposition can be recognized. Note that prop- erty (2) requires a judgment that some relational proposition is plausible. Such judgments will be de- scribed using defeasible inference rules. The next section describes our discourse relation inference rules and discourse plan operators. 
2.2 Discourse Plan Operators and Discourse Relation Inference Rules

A typical reason for the failure of an agent's attempt to achieve a domain goal is that the agent's domain plan encountered an obstacle. Thus, we give the rule in (6) for inferring a plausible discourse relation of Obstacle. 5

(6) If (i) coherently-related(A,B), and
    (ii) A is a proposition that an agent failed to perform an action of act type T, and
    (iii) B is a proposition that
        a) a normal applicability condition of T did not hold, or
        b) a normal precondition of T failed, or
        c) a normal step of T failed, or
        d) the agent did not want to achieve a normal goal of T,
    then plausible(Obstacle(B,A)).

In (6) and in the rules to follow, 'coherently-related(A,B)' means that the propositions A and B are assumed to be coherently related in the discourse. The terminology in clause (iii) is that of the extended STRIPS planning formalism (Fikes & Nilsson, 1971; Allen, 1979; Carberry, 1990; Litman & Allen, 1987).

5 For simplicity of exposition, (6) and the discourse relation inference rules to follow are stated in terms of the past; we plan to extend their coverage of times.

Examples of A and B satisfying each of the conditions in (6.iii) are given in (7a) - (7d), respectively.

(7) [A] I didn't go shopping.
    a. [B] The stores were closed.
    b. [B] My car wasn't running.
    c. [B] My car broke down on the way.
    d. [B] I didn't want to buy anything.

The discourse plan operator given in (8) describes a standard way of performing a denial (exemplified in (3)) that uses the discourse relation of Obstacle given in (6). In (8), as in (6), A is a proposition that an action of type T was not performed.
(8) Deny (with Obstacle)
    Applicability conditions:
        1) S BMB plausible(Obstacle(B,A))
    Body (unordered):
        1) (optional) S inform H that A
        2) Tell(S,H,B)
    Goals:
        1) H believe that A
        2) H believe that Obstacle(B,A)

In (8) (and in the discourse plan operators to follow) the formalism described above is used; 'S' and 'H' denote speaker and hearer, respectively; 'BMB' is the one-sided mutual belief 6 operator (Clark & Marshall, 1981); 'inform' denotes an illocutionary act of informing; 'believe' is Hintikka's (Hintikka, 1962) belief operator; 'Tell(S,H,B)' is a subgoal that can be achieved in a number of ways (to be discussed shortly), including just by S informing H that B; and steps of the body are not ordered. (Note that to use these operators for generation of direct replies, we must provide a method to determine a suitable ordering of the steps. Also, although it is sufficient for interpretation to specify that step 1 is optional, for generation, more information is required to decide whether it can or should be omitted; e.g., it should not be omitted if S believes that H might believe that some relation besides Obstacle is plausible in the context. 7 These are areas which we are currently investigating; for related research, see section 3.) Next, consider that a speaker may wish to inform the hearer of an aspect of the plan by which she accomplished a goal, if she believes that H may not be aware of that aspect. Thus, we give the rule in (9) for inferring a plausible discourse relation of Elaboration.

6 'S BMB p' is to be read as 'S believes that it is mutually believed between S and H that p'.
7 A related question, which has been studied by others (Joshi, Webber & Weischedel, 1984a; Joshi, Webber & Weischedel, 1984b), is in what situations is a speaker required to supply step 2 to avoid misleading the hearer?
(9) If (i) coherently-related(A,B), and
    (ii) A is a proposition that an agent performed some action of act type T, and
    (iii) B is a proposition that describes information believed to be new to H about
        a) the satisfaction of a normal applicability condition of T such that its satisfaction is not believed likely by H, or
        b) the satisfaction of a normal precondition of T such that its satisfaction is not believed likely by H, or
        c) the success of a normal step of T, or
        d) the achievement of a normal goal of T,
    then plausible(Elaboration(B,A)).

Examples of A and B satisfying each of the conditions in (9.iii) are given in (10a) - (10d), respectively.

(10) [A] I went shopping today.
    a. [B] I found a store that was open.
    b. [B] I got my car fixed yesterday.
    c. [B] I went to Macy's.
    d. [B] I got running shoes.

The discourse plan operator given in (11) describes a standard way of performing an affirmation (exemplified in (4)) that uses the discourse relation of Elaboration.

(11) Affirm (with Elaboration)
    Applicability conditions:
        1) S BMB plausible(Elaboration(B,A))
    Body (unordered):
        1) (optional) S inform H that A
        2) Tell(S,H,B)
    Goals:
        1) H believe that A
        2) H believe that Elaboration(B,A)

Finally, note that a speaker may concede a failure to achieve a certain goal while seeking credit for the partial success of a plan to achieve that goal. For example, the [B] utterances in (10) can be used following (12) (or alone, in the right context) to concede failure.

(12) [A] I didn't go shopping today, but

Thus, the rule we give in (13) for inferring a plausible discourse relation of Concession is similar (but not identical) to (9).
(13) If (i) coherently-related(A,B), and
    (ii) A is a proposition that an agent failed to do an action of act type T, and
    (iii) B is a proposition that describes
        a) the satisfaction of a normal applicability condition of T, or
        b) the satisfaction of a normal precondition of T, or
        c) the success of a normal step of T, or
        d) the achievement of a normal goal of T, and
    (iv) the achievement of the plan's component in B may bring credit to the agent,
    then plausible(Concession(B,A)).

A discourse plan operator, Deny (with Concession), can be given to describe another standard way of performing a denial (exemplified in (5)). This operator is similar to the one given in (8), except with Concession in the place of Obstacle. An interesting implication of the discourse plan operators for Affirm (with Elaboration) and Deny (with Concession) is that, in cases where the speaker chooses not to perform the optional step (i.e., chooses to omit the direct reply), it requires that the intended discourse relation be inferred in order to correctly interpret the indirect reply, since either an affirmation or denial could be realized with the same utterance. (Although (9) and (13) contain some features that differentiate Elaboration and Concession, other factors, such as intonation, will be considered in future research.) The next two discourse relations (described in (14) and (16)) may be part of plan operators for conveying a 'yes' similar to Affirm (with Elaboration).

(14) If (i) coherently-related(A,B), and
    (ii) A is a proposition that an agent performed an action X, and
    (iii) B is a proposition that normally implies that the agent has a goal G, and
    (iv) X is a type of action occurring as a normal part of a plan to achieve G,
    then plausible(Motivate-Volitional-Action(B,A)).

(15) shows the use of Motivate-Volitional-Action (MVA) in an indirect (affirmative) reply.

(15) Q: Did you close the window?
     A: I was cold.
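Defeasible rules such as (6) can be read as predicates over a small library of act types in the extended STRIPS style. The sketch below is a hypothetical rendering (the act-type library, function names, and the string encoding of propositions are all our own assumptions; the paper leaves the inference mechanism to future work, cf. footnote 10):

```python
# Toy act-type library, in the style of clause (iii) of rule (6).
ACT_TYPES = {
    "go-shopping": {
        "applicability": ["stores-open"],
        "preconditions": ["car-running"],
        "steps": ["drive-to-store"],
        "goals": ["buy-things"],
    },
}

def plausible_obstacle(b: str, failed_act: str) -> bool:
    """Defeasible check for rule (6): B is plausibly an Obstacle to the
    failure of an action of type failed_act if B denies a normal
    applicability condition, precondition, step, or goal of that act type.
    A proposition of the form "not X" is taken here to deny the part X."""
    t = ACT_TYPES[failed_act]
    normal_parts = (t["applicability"] + t["preconditions"]
                    + t["steps"] + t["goals"])
    return b.startswith("not ") and b[4:] in normal_parts

# (7b): "My car wasn't running" as an obstacle to "I didn't go shopping".
```

Rules (9), (13), (14), and the rest would be further predicates over the same library, each encoding its own clauses.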
(16) If (i) coherently-related(A,B), and
    (ii) A is a proposition that an event E occurred, and
    (iii) B is a proposition that an event F occurred, and
    (iv) it is not believed that F followed E, and
    (v) F-type events normally cause E-type events,
    then plausible(Cause-Non-Volitional(B,A)).

(17) shows the use of Cause-Non-Volitional (CNV) in an indirect (affirmative) reply.

(17) Q: Did you wake up very early?
     A: The neighbor's dog was barking.

The discourse relation described in (18) may be part of a plan operator similar to Deny (with Obstacle) for conveying a 'no'.

(18) If (i) coherently-related(A,B), and
    (ii) A is a proposition that an event E did not occur, and
    (iii) B is a proposition that an action F was performed, and
    (iv) F-type actions are normally performed as a way of preventing E-type events,
    then plausible(Prevent(B,A)).

(19) shows the use of Prevent in an indirect denial.

(19) Q: Did you catch the flu?
     A: I got a flu shot.

The discourse relation described in (20) can be part of a plan operator similar to the others described above except that one of the speaker's goals is, rather than affirming or denying p, to provide support for the belief that p.

(20) If (i) coherently-related(A,B), and
    (ii) B is a proposition that describes a typical result of the situation described in proposition A,
    then plausible(Evidence(B,A)).

Assuming an appropriate context, (21) is an example of the use of this relation to convey support, i.e., to convey that it is likely that someone is home.

(21) Q: Is anyone home?
     A: The upstairs lights are on.

A similar rule could be defined for a relation used to convey support against a belief.

2.3 Implicatures of Discourse Units

Consider the similar dialogues in (22) and (23).

(22) Q: Did you go shopping?
     A: a. I had to take the bus.
        b. (because) My car's not running.
        c. (You see,) The timing belt broke.

(23) Q: Did you go shopping?
     A: a. My car's not running.
        b. The timing belt broke.
        c. (So) I had to take the bus.
First, note that although the order of the sentences realizing A's reply varies in (22) and (23), A's overall discourse purpose in both is to convey a 'yes'. Second, note that it is necessary to have a rule so that if A's reply consists solely of (22a) (=23c), an implicated 'yes' is derived; and if it consists solely of (22b) (=23a), an implicated 'no'. In existing sentence-at-a-time models of calculating implicatures (Gazdar, 1979; Hirschberg, 1985), processing (22a) would result in an implicated 'yes' being added to the context, which would successfully block the addition of an implicated 'no' on processing (22b). However, processing (23a) would result in a putatively implicated 'no' being added to the context (incorrectly attributing a fleeting intention of A to convey a 'no'); then, on processing (23c) the conflicting but intended 'yes' would be blocked by context, giving an incorrect result. Thus, a sentence-at-a-time model must predict when (23c) should override (23a). Also, in that model, processing (23) requires "extra effort", a nonmonotonic revision of belief not needed to handle (22); yet (23) seems more like (22) than a case in which a speaker actually changes her mind. In our model, since implicatures correspond to goals of inferred or constructed hierarchical plans, we avoid this problem. (22A) and (23A) both correspond to step 2 of Affirm (with Elaboration), Tell(S,H,B); several different discourse plan operators can be used to construct a plan for this Tell action. For example, one operator for Tell(S,H,B) is given below in (24); the operator represents that in telling H that B, where B describes an agent's volitional action, a speaker may provide motivation for the agent's action.
(24) Tell(S,H,p)
    Applicability conditions:
        1) S BMB plausible(Motivate-Volitional-Action(q,p))
    Body (unordered):
        1) Tell(S,H,q)
        2) S inform H that p
    Goals:
        1) H believe that p
        2) H believe that Motivate-Volitional-Action(q,p)

(We are currently investigating, in generation, when to use an operator such as (24). For example, a speaker might want to use (24) in case he thinks that the hearer might doubt the truth of B unless he knows of the motivation.) Thus, (22a)/(23c) corresponds to step 2 of (24); (22b) - (22c), as well as (23a) - (23b), correspond to step 1. Another operator for Tell(S,H,p) could represent that in telling H that p, a speaker may provide the cause of an event; i.e., the operator would be like (24) but with Cause-Non-Volitional as the discourse relation. This operator could be used to decompose (22b) - (22c)/(23a) - (23b). The structure proposed for (22A)/(23A) is illustrated in Figure 1. 8 Linear precedence in the tree does not necessarily represent narrative order; one way of ordering the two nodes directly dominated by Tell (MVA) gives (22A), another gives (23A). (Narrative order in the generation of indirect replies is an area we are currently investigating also; for related research, see section 3.) Note that Deny (with Obstacle) can not be used to generate/interpret (22A) or (23A) since its body can not be expanded to account for (22a)/(23c). Thus, the correct implicatures can be derived without attributing spurious intentions to A, and without requiring cancellation of spurious implicatures.

8 To use the terminology of (Moore & Paris, 1989; Moore & Paris, 1988), the labelled arcs represent satellites, and the unlabelled arcs nuclei. However, note that in their model, a nucleus can not be optional. This differs from our approach, in that we have shown that direct replies are optional in contexts such as those described by plan operators such as Affirm (with Elaboration).
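The recursive expansion of Tell(S,H,B) by operators like (24) can be sketched as the construction of a nucleus/satellite tree. The representation below is a hypothetical sketch (the relation names follow the text; the dictionary encoding and function names are our own), built for the plan underlying (22A)/(23A):

```python
def tell(nucleus, relation=None, satellite=None):
    """Build a Tell node: a nucleus proposition, optionally with a
    (relation, satellite) pair, where the satellite is itself a Tell node."""
    node = {"nucleus": nucleus}
    if relation is not None:
        node["relation"] = relation
        node["satellite"] = satellite
    return node

# The hierarchical plan for (22A)/(23A): Affirm (with Elaboration)'s
# optional direct reply, elaborated by a Tell expanded with the MVA
# operator (24), whose satellite is in turn expanded with the CNV operator.
plan = tell(
    "I went shopping",                       # optional direct-reply step
    relation="Elaboration",
    satellite=tell(
        "I had to take the bus",
        relation="Motivate-Volitional-Action",
        satellite=tell(
            "My car's not running",
            relation="Cause-Non-Volitional",
            satellite=tell("The timing belt broke"),
        ),
    ),
)

def linearize(node, nucleus_first=True):
    """One possible surface ordering; as noted in the text, the ordering of
    a nucleus and its satellite is open, and may be chosen per node."""
    parts = [node["nucleus"]]
    if "satellite" in node:
        sat = linearize(node["satellite"], nucleus_first)
        parts = parts + sat if nucleus_first else sat + parts
    return parts
```

Applying nucleus-first ordering to the Tell (MVA) subtree yields the sentence order of (22A); other per-node orderings yield (23A).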
9 Determining this requires that the end of the relevant discourse unit be marked/recognized by cue phrases, intonation, or shift of focus; we plan to investigate this problem.

[Figure 1. A Sample Discourse Structure:
Affirm (with Elaboration)
  I went shopping
  (Elaboration) Tell (MVA)
    I had to take the bus
    (Motivate-Volitional-Action) Tell (CNV)
      My car's not running
      (Cause-Non-Volitional) The timing belt broke]

2.4 Algorithms

Generation and interpretation algorithms are given in (25) and (26), respectively. They presuppose that the plausible discourse relation is available. 10 The generation algorithm assumes as given an illocutionary-level representation of A's communicative goals. 11

(25) Generation of indirect reply:
    1. Select discourse plan operator:
       Select from the Answer-YNQ(p) plan operators all those for which
       a) the applicability conditions hold, and
       b) the goals include S's goals.
    2. If more than one operator was selected in step 1, then choose one. Also, determine step ordering and whether it is necessary to include optional steps. (We are currently investigating how these choices are determined.)
    3. Construct a plan from the chosen operator and execute it.

10 We plan to implement an inference mechanism for the discourse relation inference rules.
11 Note that A's goals depend, in part, on the illocutionary-level representation of Q's request. We assume that an analysis, such as provided in (Perrault & Allen, 1980), is available.

(26) Interpretation of indirect reply:
    1. Infer discourse plan:
       Select from the Answer-YNQ(p) plan operators all those for which
       a) the second step of the body matches S's contribution, and
       b) the applicability conditions hold, and
       c) it is mutually believed that the goals are consistent with S's goals.
    2. If more than one operator was selected in step 1, then choose one. (We are currently investigating what factors are involved in this choice. Of course, the utterance may be ambiguous.)
    3.
Ascribe to S the goal(s) of the chosen plan operator.

3 Comparison to Past Research

Most previous work in computational or formal linguistics on particularized conversational implicature (Green, 1990; Horacek, 1991; Joshi, Webber & Weischedel, 1984a; Joshi, Webber & Weischedel, 1984b; Reiter, 1990; Wainer & Maida, 1991) has treated other kinds of implicature than we consider here. Hirschberg (Hirschberg, 1985) provided licensing rules making use of mutual beliefs about salient partial orderings of entities in the discourse context to calculate the scalar implicatures of an utterance. Our model is similar to Hirschberg's in that both rely on the representation of aspects of context to generate implicatures, and our discourse plan operators are roughly analogous in function to her licensing rules. However, her model makes no use of discourse relations. Therefore, it does not handle several kinds of indirect replies which we treat. For example, although A in (27) could be analyzed as scalar implicating a 'no' in some contexts, Hirschberg's model could not account for the use of A in other contexts as an elaboration (of how A managed to read chapter 1) intended to convey a 'yes'. 12

(27) Q: Did you read the first chapter?
     A: I took it to the beach with me.

Furthermore, Hirschberg provided no computational method for determining the salient partially ordered set in a context. Also, in her model, implicatures are calculated one sentence at a time, which has the potential problems described above. Lascarides, Asher, and Oberlander (Lascarides & Asher, 1991; Lascarides & Oberlander, 1992) described the interpretation and generation of temporal implicatures. Although that type of implicature (being Manner-based) is somewhat different from what we are studying, we have adopted their technique of providing defeasible inference rules for inferring discourse relations.
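Returning briefly to the algorithms in (25) and (26): the operator-selection step they share admits a direct rendering as a filter. The sketch below uses a hypothetical dictionary stand-in for the plan operators (all names and the matching scheme are our own assumptions):

```python
def select_operators(operators, utterance=None, speaker_goals=()):
    """Step 1 of (25)/(26): filter the Answer-YNQ(p) plan operators.
    For interpretation (26), utterance must match the second body step;
    for generation (25), utterance is None and the operator's goals must
    include S's goals."""
    selected = []
    for op in operators:
        if not op["applicable"]():
            continue  # (25.1a)/(26.1b): applicability conditions must hold
        if utterance is not None and op["body"][1] != utterance:
            continue  # (26.1a): second body step must match S's contribution
        if utterance is None and not set(speaker_goals) <= set(op["goals"]):
            continue  # (25.1b): goals must include S's goals
        selected.append(op)
    return selected

ops = [
    {"name": "Deny (with Obstacle)",
     "applicable": lambda: True,       # stands in for evaluating S BMB plausible(...)
     "body": ["inform(not p)", "Tell(B)"],
     "goals": ["H believe not p"]},
    {"name": "Affirm (with Elaboration)",
     "applicable": lambda: False,      # its applicability condition fails here
     "body": ["inform(p)", "Tell(B)"],
     "goals": ["H believe p"]},
]
```

Step 2 of each algorithm, choosing among multiple surviving operators, is left open here just as it is in the text.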
In philosophy, Thomason (Thomason, 1990) suggested that discourse expectations play a role in some implicatures. McCafferty (McCafferty, 1987) argued that interpreting certain implicated replies requires domain plan reconstruction. However, he did not provide a computational method for interpreting implicatures. Also, his proposed technique can not handle many types of indirect replies. For example, it can not account for the implicated negative replies in (1) and (5), since their interpretation involves reconstructing domain plans that were not executed successfully; it can not account for the implicated affirmative reply in (17), in which no reasoning about domain plans is involved; and it can not account for implicated replies conveying support for or against a belief, as in (21). Lastly, his approach cannot handle implicatures conveyed by discourse units containing more than one sentence. Finally, note that our approach of including rhetorical goals in discourse plans is modelled on the work of Hovy (Hovy, 1988) and Moore and Paris (Moore & Paris, 1989; Moore & Paris, 1988), who used rhetorical plans to generate coherent text.

12 The two intended interpretations are marked by different intonations.
Third, by making use of properties of discourse which have been used in models of other discourse phenomena, our approach can be integrated with those models. Also, our model permits the same information to be used both in interpretation and in generation. Our current and anticipated research includes: refining and implementing our algorithms (including developing an inference mechanism for the discourse relation rules); extending our model to other types of implicatures; and investigating the integration of our model into general interpretation and generation frameworks.

References

Allen, James F. (1979). A Plan-Based Approach to Speech Act Recognition. PhD thesis, University of Toronto, Toronto, Ontario, Canada.

Austin, J. L. (1962). How To Do Things With Words. Cambridge, Massachusetts: Harvard University Press.

Carberry, Sandra (1990). Plan Recognition in Natural Language Dialogue. Cambridge, Massachusetts: MIT Press.

Clark, H. & Marshall, C. (1981). Definite reference and mutual knowledge. In A. K. Joshi, B. Webber, & I. Sag (Eds.), Elements of discourse understanding. Cambridge: Cambridge University Press.

Fikes, R. E. & Nilsson, N. J. (1971). STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2, 189-208.

Gazdar, G. (1979). Pragmatics: Implicature, Presupposition, and Logical Form. New York: Academic Press.

Green, Nancy L. (1990). Normal state implicature. In Proceedings of the 28th Annual Meeting, Pittsburgh. Association for Computational Linguistics.

Grice, H. Paul (1975). Logic and conversation. In Cole, P. & Morgan, J. L. (Eds.), Syntax and Semantics III: Speech Acts, (pp. 41-58), New York. Academic Press.

Grosz, Barbara & Sidner, Candace (1988). Plans for discourse. In P. Cohen, J. Morgan, & M. Pollack (Eds.), Intentions in Communication. MIT Press.

Hintikka, J. (1962). Knowledge and Belief. Ithaca: Cornell University Press.

Hirschberg, Julia B. (1985).
A Theory of Scalar Implicature. PhD thesis, University of Pennsylvania.

Horacek, Helmut (1991). Exploiting conversational implicature for generating concise explanations. In Proceedings. European Association for Computational Linguistics.

Hovy, Eduard H. (1988). Planning coherent multisentential text. In Proceedings of the 26th Annual Meeting, (pp. 163-169). Association for Computational Linguistics.

Joshi, Aravind, Webber, Bonnie, & Weischedel, Ralph (1984a). Living up to expectations: Computing expert responses. In Proceedings of the Fourth National Conference on Artificial Intelligence, (pp. 169-175), Austin, Texas.

Joshi, Aravind, Webber, Bonnie, & Weischedel, Ralph (1984b). Preventing false inferences. In Proceedings of Coling84, (pp. 134-138), Stanford University, California. Association for Computational Linguistics.

Lambert, Lynn & Carberry, Sandra (1991). A tripartite plan-based model of dialogue. In Proceedings of the 29th Annual Meeting, (pp. 47-54). Association for Computational Linguistics.

Lascarides, Alex & Asher, Nicholas (1991). Discourse relations and defeasible knowledge. In Proceedings of the 29th Annual Meeting, (pp. 55-62). Association for Computational Linguistics.

Lascarides, Alex & Oberlander, Jon (1992). Temporal coherence and defeasible knowledge. Theoretical Linguistics, 18.

Litman, Diane & Allen, James (1987). A plan recognition model for subdialogues in conversation. Cognitive Science, 11, 163-200.

Mann, William C. & Thompson, Sandra A. (1983). Relational propositions in discourse. Technical Report ISI/RR-83-115, ISI/USC.

Mann, William C. & Thompson, Sandra A. (1987). Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3), 167-182.

McCafferty, Andrew S. (1987). Reasoning about Implicature: a Plan-Based Approach. PhD thesis, University of Pittsburgh.

Moore, Johanna D. & Paris, Cecile (1989). Planning text for advisory dialogues.
In Proceedings of the 27th Annual Meeting, University of British Columbia, Vancouver. Association of Computational Linguistics.

Moore, Johanna D. & Paris, Cecile L. (1988). Constructing coherent text using rhetorical relations. In Proc. 10th Annual Conference. Cognitive Science Society.

Perrault, Raymond & Allen, James (1980). A plan-based analysis of indirect speech acts. American Journal of Computational Linguistics, 6(3-4), 167-182.

Pollack, Martha (1988). Plans as complex mental attitudes. In P. Cohen, J. Morgan, & M. Pollack (Eds.), Intentions in Communication. MIT Press.

Reichman, Rachel (1984). Extended person-machine interface. Artificial Intelligence, 22, 157-218.

Reiter, Ehud (1990). The computational complexity of avoiding conversational implicatures. In Proceedings of the 28th Annual Meeting, (pp. 97-104), Pittsburgh. Association for Computational Linguistics.

Sadock, Jerrold M. (1978). On testing for conversational implicature. In Cole, P. & Morgan, J. L. (Eds.), Syntax and Semantics, (pp. 281-297), N.Y. Academic Press.

Thomason, Richmond H. (1990). Accommodation, meaning, and implicature: Interdisciplinary foundations for pragmatics. In P. Cohen, J. Morgan, & M. Pollack (Eds.), Intentions in Communication. Cambridge, Massachusetts: MIT Press.

Wainer, Jacques & Maida, Anthony (1991). Good and bad news in formalizing generalized implicatures. In Proceedings of the Sixteenth Annual Meeting of the Berkeley Linguistics Society, (pp. 66-71), Berkeley, California.
Char_align: A Program for Aligning Parallel Texts at the Character Level

Kenneth Ward Church
AT&T Bell Laboratories
600 Mountain Avenue
Murray Hill NJ, 07974-0636
kwc@research.att.com

Abstract

There have been a number of recent papers on aligning parallel texts at the sentence level, e.g., Brown et al (1991), Gale and Church (to appear), Isabelle (1992), Kay and Röscheisen (to appear), Simard et al (1992), Warwick-Armstrong and Russell (1990). On clean inputs, such as the Canadian Hansards, these methods have been very successful (at least 96% correct by sentence). Unfortunately, if the input is noisy (due to OCR and/or unknown markup conventions), then these methods tend to break down because the noise can make it difficult to find paragraph boundaries, let alone sentences. This paper describes a new program, char_align, that aligns texts at the character level rather than at the sentence/paragraph level, based on the cognate approach proposed by Simard et al.

1. Introduction

Parallel texts have recently received considerable attention in machine translation (e.g., Brown et al, 1990), bilingual lexicography (e.g., Klavans and Tzoukermann, 1990), and terminology research for human translators (e.g., Isabelle, 1992). We have been most interested in the terminology application. Translators find it extremely embarrassing when "store" (in the computer sense) is translated as "grocery," or when "magnetic fields" is translated as "magnetic meadows." Terminology errors of this kind are all too common because the translator is generally not as familiar with the subject domain as the author of the source text or the readers of the target text. Parallel texts could be used to help translators overcome their lack of domain expertise by providing them with the ability to search previously translated documents for examples of potentially difficult expressions and see how they were translated in the past.
While pursuing this possibility with a commercial translation organization, AT&T Language Line Services, we discovered that we needed to completely redesign our alignment programs in order to deal more effectively with texts supplied by AT&T Language Line's customers in whatever format they happen to be available in. All too often these texts are not available in electronic form. And even if they are available in electronic form, it may not be worth the effort to clean them up by hand.

2. Real Texts are Noisy

Most previous work depends on being able to identify paragraph and sentence boundaries with fairly high reliability. We have found it so difficult to find paragraph boundaries in texts that have been OCRed that we have decided to abandon the paragraph/sentence approach. Figure 1, for example, shows some parallel text (selected from the official record of the European Parliament) that has been processed with the Xerox ScanWorX OCR program. The OCR output is remarkably good, but nevertheless, the paragraphs are more elusive than it might appear at first. The first problem we encountered was the missing blank line between the second and third paragraphs in the French (Figure 1b). Although this missing line might obscure the boundary between the two paragraphs, one could imagine methods that could overcome missing blank lines. A more serious problem is illustrated by two phrases highlighted in italics in Figure 1, "Petitions Documents received...," and its French equivalent, "Prtittons - Drprt de documents...." When we first read the OCR output, we found these two expressions somewhat confusing, and didn't understand why they ended up in such different places in the OCR output. After inspecting the original hardcopy, we realized that they were footnotes, and that their location in the OCR output depends on the location of the page breaks. Page breaks are extremely complicated.
Most alignment programs don't attempt to deal with issues such as footnotes, headers, footers, tables, figures and other types of floating displays. One might believe that these layout problems could be avoided if only we could obtain the texts in electronic format. Perhaps so. But ironically, electronic formats are also problematic, though for different reasons.

Figure 1a: An Example of OCRed English

4. Agenda
PRESIDENT. - We now come to the agenda for this week.
SEAL (5). - Mr President, I should like to protest most strongly against the fact that there is no debate on topical and urgent subjects on the agenda for this part-session. I know that this decision was taken by the enlarged Bureau because this is an extraordinary meeting. None the less, how can we be taken seriously as a Parliament if we are going to consider only internal matters while the world goes on outside? I would like to ask you to ask the enlarged Bureau to look at how we might have extra sittings in which urgencies would be included. Having said that to the Chair and bearing in mind that there are no urgencies, I should like to ask the Commission to make statements on two items. First of all, what action is the Community taking to help the people of Nicaragua, who have suffered a most enormous natural disaster which has left one-third of the population homeless? Secondly, would Commissioner Sutherland make a statement on the situation that has aftsen in the United Kingdom, where the British Government has subsidized Aerospace to the tune of UKL 1 billion by selling them the Royal Ordnance factories at a knockdown price and allowing them to asset-strip in order to get this kind of cash? (Protests from the right)
Petitions Documents received - Texts of treaties forwarded by the Council: see minutes. [italics added]
No 2-370/6 Debates of the European [...]
PRESIDENT. - I think you have just raised about four urgencies in one them. We cannot allow this.
The enlarged Bureau made a decision. This decision came to this House and the House has confirmed it. This is a special part-session. We have an enormous amount of work to do and I suggest we get on with it.

There are a large number of different markup languages, conventions, implementations, platforms, etc., many of which are obscure and some of which are proprietary. In more than one instance, we have decided that the electronic format was more trouble than it was worth, and have resorted to OCR. Even when we did end up using the electronic format, much of the markup had to be treated as noise since we haven't been able to build interpreters to handle all of the world's markup languages, or even a large percentage of them.

Figure 1b: An Example of OCRed French

4. Ordre du jour
Le Pr6sident. - Nous passons maintenant h l'ordre du jour de cette semaine.
Seal (s). - (EN> Monsieur le Pr6sident, je proteste 6nergiquement contre le fait que l'ordm du jour de cette session ne pr6voit pas de d6bat d'actualit6 et d'urgence. Je sais que cette d6cision a 6t6 prise par le Bureau 61argi parce qu'il s'agit d'une session extraordinaire. N6anmoins, comment pourrions-nous, en tant que Parlement, &re pris au s6rieux si nous ne nous occupons que de nos petits probl~mes internes sans nous soucier de ce qui se passe dans le monde? Je vous serais reconnaissant de bien vouloir demander au Bureau 61argi de voir comment nous pourrions avoir des s6ances suppl6mentaims pour aborder les questions urgentes. Cela dit, et puisqu'il n'y a pas de probl~mes urgents, je voudrais demander ~t la Commission de faire des d6clarations sur deux points. Premieremerit: quelles actions la Communaut6 envisage-t-elle pour venir en aide au peuple du Nicaragua,
Pdtittons - DdpSt de documents Transmission par le Conseil de textes d'accords: CE. proc~s-verbai. [italics added]
qui vient de subir une immense catastrophe naturelle laissant sans abri le tiers de la population?
Deuxi~mement: le commissaire Sutherland pourrait-il faire une d6claration au sujet de la situation cr66e au Royaume-Uni par la d6cision du gouvernement britannique d'accorder ~t la soci~t6 Aerospace une subvention s'61evant hun milliard de livres sterling en lui vendant les Royal Ordinance Factories ~t un prix cadeau et en lui permettant de brader des 616ments d'actif afin de r6unir des liquidit6s de cet ordre? (Protestations ~t droite>
Le Pr6sident. - Je pense que vous venez de parler de quatre urgences en une seule. Nous ne pouvons le permettre. Le Bureau 61argi a pris une d6cision. Cette d6cision a 6t6 transmise ~ l'Assembl6e et l'Assembl6e l'a ent6rin6e. La pr~sente p~riode de session est une p6riode de session sp~ciale. Nous avons beaucoup de pain sur la planche et j e vous propose d'avancer.

3. Aligning at the Character Level

Because of the noise issues, we decided to look for an alternative to paragraph-based alignment methods. The resulting program, char_align, works at the character level using an approach inspired by the cognate method proposed in Simard et al (1992). Figure 2 shows the results of char_align on a sample of Canadian Hansard data, kindly provided by Simard et al, along with alignments as determined by their panel of 8 judges. Simard et al (1992) refer to this dataset as the "hard" dataset and their other dataset as the "easy" dataset, so-named to reflect the fact that the former dataset was relatively more difficult than the latter for the class of alignment methods that they were evaluating. Figure 2 plots f(x) as a function of x, where x is a byte position in the English text and f(x) is the corresponding byte position in the French text, as determined by char_align. For comparison's sake, the plot also shows a straight line connecting the two endpoints of the file. Note that f(x) follows the straight line fairly closely, though there are small but important residuals, which may be easier to see in Figure 3.
Figure 3 plots the residuals from the straight line. The residuals can be computed as f(x) - cx, where c is the ratio of the lengths of the two files (0.91). The residuals usually have fairly small magnitudes, rarely more than a few percent of the length of the file. In Figure 3, for example, residuals have magnitudes less than 2% of the length of the target file. If the residuals are large, or if they show a sharp discontinuity, then it is very likely that the two texts don't match up in some way (e.g., a page/figure is missing or misplaced). We have used the residuals in this way to help translators catch potentially embarrassing errors of this kind. Figure 4 illustrates this use of the residuals for the European Parliamentary text presented in Figure 1. Note that the residuals have relatively large magnitudes, e.g., 10% of the length of the file, compared with the 2% magnitudes in Figure 3. Moreover, the residuals in Figure 4 have two very sharp discontinuities. The location of these sharp discontinuities is an important diagnostic clue for identifying the location of the problem. In this case, the discontinuities were caused by the two troublesome footnotes discussed in section 2.

[Figure 2: char_align output on the "Hard" Dataset (x = position in English file)]
[Figure 3: rotated version of Figure 2]
[Figure 4: Residuals for text in Figure 1 (large discontinuities correspond to footnotes)]
[Figure 5: Figure 3 with judges' alignments]
[Figure 6: histogram of errors, "Hard" Dataset (error in characters)]
[Figure 7: histogram of errors, "Easy" Dataset (error in characters)]

Figure 5 shows the correct alignments, as determined by Simard et al.'s panel of 8 judges (sampled at sentence boundaries), superimposed over char_align's output.
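The residual computation is simple enough to sketch directly. The following Python fragment is illustrative only (it is not the actual char_align code), and the toy alignment function stands in for real alignment output:

```python
def residuals(f_values, source_len, target_len):
    """Residuals of an alignment from the straight line f(x) = c*x,
    where c is the ratio of the two file lengths."""
    c = target_len / source_len
    return [fx - c * x for x, fx in enumerate(f_values)]

# Toy alignment: tracks the straight line, with a small bump in the middle.
source_len, target_len = 100, 91          # c = 0.91, as in Figure 3
f_values = [round(0.91 * x) + (1 if 40 <= x < 60 else 0)
            for x in range(source_len)]
res = residuals(f_values, source_len, target_len)
assert max(abs(r) for r in res) < 2       # small relative to file length
```

A large residual magnitude, or a sharp jump in the residual sequence, would flag a mismatch such as a missing or misplaced page.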
Char_align's results are so close to the judges' alignments that it is hard to see the differences between the two. Char_align's errors may be easier to see in Figure 6, which shows a histogram of char_align's errors. (Errors with an absolute value greater than 200 have been omitted; less than 1% of the data fall into this category.) The errors (2±46 bytes) are much smaller than the length of a sentence (129±84 bytes). Half of the errors are less than 18 characters. In general, performance is slightly better on shorter files than on longer files because char_align doesn't use paragraph boundaries to break up long files into short chunks. Figure 7 shows the errors for the "easy" dataset (-1±57 bytes), which, ironically, happens to be somewhat harder for char_align because the "easy" set is 2.75 times longer than the "hard" dataset. (As in Figure 6, errors with an absolute value greater than 200 have been omitted; less than 1% of the data fall into this category.)

4. Cognates

How does char_align work? The program assumes that there will often be quite a number of words near x that will be the same as, or nearly the same as, some word near f(x). This is especially true for historically related language pairs such as English and French, which share quite a number of cognates, e.g., government and gouvernement, though it also holds fairly well for almost any language pair that makes use of the Roman alphabet since there will usually be a fair number of proper nouns (e.g., surnames, company names, place names) and numbers (e.g., dates, times) that will be nearly the same in the two texts. We have found that it can even work on some texts in English and Japanese such as the AWK manual, because many of the technical terms (e.g., awk, BEGIN, END, getline, print, printf) are the same in both texts.
We have also found that it can work on electronic texts in the same markup language, but different alphabets (e.g., English and Russian versions of 5ESS® telephone switch manuals, formatted in troff). Figures 8 and 9 below demonstrate the cognate property using a scatter plot technique which we call dotplots (Church and Helfman, to appear). The source text (Nx bytes) is concatenated to the target text (Ny bytes) to form a single input sequence of Nx+Ny bytes. A dot is placed in position (i, j) whenever the input token at position i is the same as the input token at position j. (The origin is placed in the upper left corner for reasons that need not concern us here.) Various signal processing techniques are used to compress dotplots for large Nx+Ny. The implementation of dotplots is discussed in more detail in section 7. The dotplots in Figures 8 and 9 look very similar, with diagonal lines superimposed over squares, though the features are somewhat sharper in Figure 8 because the input is much larger. Figure 8 shows a dotplot of 3 years of Canadian Hansards (37 million words) in English and French, tokenized by words. Figure 9 shows a dotplot of a short article (25 kbytes) that appeared in a Christian Science magazine in both English and German, tokenized into 4-grams of characters. The diagonals and squares are commonly found in dotplots of parallel text. The squares have a very simple explanation. The upper-left quadrant and the lower-right quadrant are darker than the other two quadrants because the source text and the target text are more like themselves than either is like the other. This fact, of course, is not very surprising, and is not particularly useful for our purposes here. However, the diagonal line running through the upper-right quadrant is very important. This line indicates how the two texts should be aligned.
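As a concrete illustration of this construction, here is a minimal Python sketch (not the actual implementation; the word-level tokens and the example sentences are invented for illustration):

```python
def dotplot(tokens):
    """Dot at (i, j) whenever token i equals token j in the
    concatenated source+target sequence."""
    return {(i, j)
            for i, ti in enumerate(tokens)
            for j, tj in enumerate(tokens)
            if ti == tj}

source = "the government of canada".split()
target = "le gouvernement du canada".split()
dots = dotplot(source + target)      # concatenate source and target

# The main diagonal is always present (every token matches itself) ...
assert all((i, i) in dots for i in range(8))
# ... and the shared word "canada" (positions 3 and 7) contributes an
# off-diagonal dot in the quadrant whose diagonal aligns the two texts.
assert (3, 7) in dots
```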
Figure 10 shows the upper-right quadrant of Figure 9, enhanced by standard signal processing techniques (e.g., low-pass filtering and thresholding). The diagonal line in Figure 10 is almost straight, but not quite. The minor deviations in this line are crucial for determining the alignment of the two texts. Figures 11 and 12 make it easier to see these deviations by first rotating the image and increasing the vertical resolution by an order of magnitude. The alignment program makes use of both of these transformations in order to track the alignment path with as much precision as possible.

[Figure 8: A dotplot demonstrating the cognate property (37 million words of Canadian Hansards)]
[Figure 9: A dotplot demonstrating the cognate property (25 kbytes of Christian Science material)]
[Figure 10: Upper-right quadrant of Figure 9 (enhanced by signal processing)]
[Figure 11: Rotated version of Figure 10]
[Figure 12: Figure 11 with 10x gain on vertical axis]

5. Bounds Estimation

It is difficult to know in advance how much dynamic range to set aside for the vertical axis. Setting the range too high wastes memory, and setting it too low causes the signal to be clipped. We use an iterative solution to find the optimal range. On the first iteration, we set the bounds on the search space, Bmin and Bmax, very wide and see where the signal goes. The search will consider matching any byte x in the source file with some byte in the target file between f(x) - Bmin and f(x) + Bmax, where f(x) is the current best estimate of the position in the target file that corresponds to position x in the source file. On subsequent iterations, the bounds are reduced as the algorithm obtains tighter estimates on the dynamic range of the signal. The memory that was saved by shrinking the bounds in this way can now be used to enhance the horizontal resolution. We keep iterating in this fashion as long as it is possible to improve the resolution by tightening the bounds on the signal.

    while (making_progress) {
        Estimate_Bounds: Bmin, Bmax
        Estimate_Resolution_Factor: r
        Compute_Dotplot
        Compute_Alignment_Path
    }

Figure 13 shows the four iterations that were required for the Christian Science text. For expository convenience, the last three iterations were enhanced with a low-pass filter to make it easier to see the signal.

[Figure 13: Four iterations]

6. Resolution Factor Estimation

We need to allocate an array to hold the dots. Ideally, we would like to have enough memory so that no two points in the search space corresponded to the same cell in the array. That is, we would like to allocate the dotplot array with a width of w = Nx + Ny and a height of h = Bmax + Bmin. (The array is stored in rotated coordinates.) Unfortunately, this is generally not possible.
Therefore, we compute a "resolution" factor, r, which indicates how much we have to compromise from this ideal. The resolution factor, r, which depends on the available amount of memory M, indicates the resolution of the dotplot array in units of bytes per cell:

    r = sqrt((Nx + Ny)(Bmax + Bmin) / M)

The dotplot array is then allocated to have a width of w = (Nx + Ny)/r and a height of h = (Bmax + Bmin)/r. The dots are then computed, followed by the path, which is used to compute tighter bounds, if possible. As can be seen in Figure 13, this iteration has a tendency to start with a fairly square dotplot and generate ever wider and wider dotplots, until the signal extends to both the top and bottom of the dotplot. In practice, the resolution places a lower bound on the error rate. For example, the alignments of the "easy" and "hard" datasets mentioned above had resolutions of 45 and 84 bytes per cell on the final iterations. It should not be surprising that the error rates are roughly comparable, ±46 and ±57 bytes, respectively. Increasing the resolution would probably reduce the error rate. This could be accomplished by adding memory (M) or by splitting the input into smaller chunks (e.g., parsing into paragraphs).

7. Dotplot Calculation

In principle, the dotplot could be computed by simply iterating through all pairs of positions in the two input files, x and y, and testing whether the 4-gram of characters in text x starting at position i is the same as the 4-gram of characters in text y starting at position j.

    float dotplot[Nx][Ny];
    for(i=0; i<Nx; i++)
        for(j=0; j<Ny; j++)
            if(chars4(x, i) == chars4(y, j))
                dotplot[i][j] = 1;
            else
                dotplot[i][j] = 0;

In fact, the dotplot calculation is actually somewhat more complicated. First, as suggested above, the dotplot is actually stored in rotated coordinates, with a limited resolution, r, and band limited between Bmin and Bmax. These heuristics are necessary for space considerations.
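To make the space bookkeeping concrete, here is a small Python check of the resolution-factor arithmetic from section 6 (the square-root form is the one that makes w·h = M by construction; the file and bound sizes below are invented):

```python
import math

def resolution(Nx, Ny, Bmax, Bmin, M):
    """Resolution factor r (bytes per cell) and the resulting dotplot
    array dimensions, in rotated coordinates."""
    r = math.sqrt((Nx + Ny) * (Bmax + Bmin) / M)
    w = (Nx + Ny) / r
    h = (Bmax + Bmin) / r
    return r, w, h

r, w, h = resolution(Nx=250_000, Ny=230_000, Bmax=5_000, Bmin=5_000,
                     M=1_000_000)
assert abs(w * h - 1_000_000) < 1e-3   # the array fills the memory budget
```

Tightening Bmax + Bmin on later iterations lowers r, i.e., buys finer resolution for the same memory M.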
In addition, another set of heuristics is used to save time. The dots are weighted to adjust for the fact that some matches are much more interesting than others. Matches are weighted inversely by the frequency of the token. Thus, low frequency tokens (e.g., content words) contribute more to the dotplot than high frequency tokens (e.g., function words). This weighting improves the quality of the results, but more importantly, it makes it possible to save time by ignoring the less important dots (e.g., those corresponding to tokens with a frequency greater than 100). This heuristic is extremely important, especially for large input files. See Church and Helfman (to appear) for more details and fragments of C code.

8. Alignment Path Calculation

The final step is to find the best path of dots. A suboptimal heuristic search (with forward pruning) is used to find the path with the largest average weight. That is, each candidate path is scored by the sum of the weights along the path, divided by the length of the path, and the candidate path with the best score is returned. Admittedly, this criterion may seem a bit ad hoc, but it seems to work well in practice. It has the desirable property that it favors paths with more matches over paths with fewer matches. It also favors shorter paths over longer paths. It might be possible to justify the optimization criterion using a model where the weights are interpreted as variances.

9. Conclusion

The performance of char_align is encouraging. The error rates are often very small, usually well within the length of a sentence or the length of a concordance line. The program is currently being used by translators to produce bilingual concordances for terminology research. For this application, it is necessary that the alignment program accept noisy (realistic) input, e.g., raw OCR output, with little or no manual cleanup.
It is also highly desirable that the program produce constructive diagnostics when confronted with texts that don't align very well because of various snafus such as missing and/or misplaced pages. Char_align has succeeded in meeting many of these goals because it works at the character level and does not depend on finding sentence and/or paragraph boundaries, which are surprisingly elusive in realistic applications.

References

Brown, P., J. Cocke, S. Della Pietra, V. Della Pietra, F. Jelinek, J. Lafferty, R. Mercer, and P. Roossin (1990) "A Statistical Approach to Machine Translation," Computational Linguistics, vol. 16, pp. 79-85.
Brown, P., Lai, J., and Mercer, R. (1991) "Aligning Sentences in Parallel Corpora," ACL-91.
Church, K. and Helfman, J. (to appear) "Dotplot: A Program for Exploring Self-Similarity in Millions of Lines of Text and Code," The Journal of Computational and Graphical Statistics, also presented at Interface-92.
Gale, W., and Church, K. (to appear) "A Program for Aligning Sentences in Bilingual Corpora," Computational Linguistics, also presented at ACL-91.
Isabelle, P. (1992) "Bi-Textual Aids for Translators," in Proceedings of the Eighth Annual Conference of the UW Centre for the New OED and Text Research, available from the UW Centre for the New OED and Text Research, University of Waterloo, Waterloo, Ontario, Canada.
Kay, M. and Röscheisen, M. (to appear) "Text-Translation Alignment," Computational Linguistics.
Klavans, J., and Tzoukermann, E. (1990) "The BICORD System," COLING-90, pp. 174-179.
Simard, M., Foster, G., and Isabelle, P. (1992) "Using Cognates to Align Sentences in Bilingual Corpora," Fourth International Conference on Theoretical and Methodological Issues in Machine Translation (TMI-92), Montreal, Canada.
Warwick-Armstrong, S. and G. Russell (1990) "Bilingual Concordancing and Bilingual Lexicography," Euralex.
TEMPORAL CENTERING

Megumi Kameyama, SRI International AI Center, 333 Ravenswood Ave., Menlo Park, CA 94025 (megumi@ai.sri.com)
Rebecca Passonneau, Dept. of Computer Science, Columbia University, New York, NY 10027 (becky@cs.columbia.edu)
Massimo Poesio, Dept. of Computer Science, University of Rochester, Rochester, NY 14627-0226 (poesio@cs.rochester.edu)

Abstract

We present a semantic and pragmatic account of the anaphoric properties of past and perfect that improves on previous work by integrating discourse structure, aspectual type, surface structure and commonsense knowledge. A novel aspect of our account is that we distinguish between two kinds of temporal intervals in the interpretation of temporal operators -- discourse reference intervals and event intervals. This distinction makes it possible to develop an analogy between centering and temporal centering, which operates on discourse reference intervals. Our temporal property-sharing principle is a defeasible inference rule on the logical form. Along with lexical and causal reasoning, it plays a role in incrementally resolving underspecified aspects of the event structure representation of an utterance against the current context.

The Problem

The past tense has been compared with anaphoric definite pronouns ([20] [22]) and definite noun phrases ([27]). The supporting observation is that in two consecutive past tense descriptions of events, as in (1) below (from [18]), the second sentence refers to a time (t') whose identity depends on the time (t) of the event described in the first sentence.

(1) a. The Lone Ranger got on his horse. (t)
    b. He rode off into the sunset. (t')

Tense interpretation also involves commonsense inferences in that the specific relation between the two event times may vary. In (1), the relation inferred to hold is temporal progression (t ≺ t'), but other ordering relations are also possible (see [6] [27]).
Any temporal relations are in fact possible for two consecutively described events in discourse. A number of factors affect the interpretation as to whether successive past tenses are anaphorically related, and if they are, what the relative order of the associated events is. The determinant factors have been argued to be discourse structure ([27] [14]), aspectual type ([6] [12] [17]), surface structure ([7] [14]), and commonsense knowledge ([19] [27] [13]). However, no account has adequately addressed all four factors. The problem in tense interpretation that we address is illustrated with Example (2) (from [27]).

(2) a. John went over (t1) to Mary's house.
    b. On the way, he had (t2) stopped (t3) by the flower shop for some roses. (t3 ≺ t2 (=t1))
    c. Unfortunately, they failed (t4) to cheer her up. (t3 ≺ t1 ≺ t4)
    c'. He picked out (t4') 5 red ones, 3 white ones and 1 pale pink. (t3 ≺ t4' ≺ t1)

(2c) and (2c') are alternative third sentences. Although both are in the simple past, and both evoke events of the same aspectual type (transition event [23]), they are interpreted differently. We refer to the contextually established time that a past tense is resolved against as the "discourse reference time." A discourse reference time (t1) is introduced in (2a) with the event of John going to Mary's house at t1.¹ The past perfect in (2b) introduces two times: John's stopping at the flower shop (t3) precedes the time t2 (t3 ≺ t2), and t2 is typically inferred to be equal to the time of going over to Mary's house (t1); hence t3 ≺ t1. In (2c), the time of failing to cheer Mary (t4) is inferred to occur just after t1, whereas in the parallel version (2c'), the time of picking out roses (t4') is inferred to occur during or just after t3, thus before t1. Thus, as noted by Webber [27], a past perfect evokes two temporal referents (e.g., t2 and t3) and either of them can provide a discourse reference time for a subsequent past tense.
¹ Later we will see that although the first tense generates a discourse reference time for the subsequent one, its temporal referent (here t1) is not precisely equivalent to it.

Another aspect of the problem in tense interpretation is illustrated with so-called extended flashbacks such as Example (3), similar to the one discussed in [14].

(3) John and Mary went to buy a lawnmower. Somebody had stolen theirs the day before. They had seen the thief go away. John had run after him to no avail. All the lawnmowers were too expensive. They decided they couldn't afford a new one.

There are two narrative threads in (3), one describing John and Mary's visit to a store, the other John's chase of the thief. These threads correspond to the repeated use of the past and of the past perfect, respectively. The return from the past perfect to the past also coincides with the return to the earlier thread. Note (i) that not only the simple past but also the past perfect can achieve an effect of continuity, and (ii) that more than one discourse reference time needs to be maintained in order to account for the return to an earlier one. The general problem in tense interpretation that we address is how to keep track of all the potential discourse reference times in a context, and how to select the right one for a given anaphoric past tense. We argue that the choice of the discourse antecedent of a temporal operator is subject to centering effects. We assume that each temporal operator in a sentence introduces a discourse reference time into the discourse context. We claim that this set of times constitutes a list of potential discourse reference times for the next sentence, which we'll later refer to as the temporal forward-looking center (TCf), and that the position of a temporal operator in the logical form of the sentence affects the choice of the antecedent through structural parallelism (as a case of the property-sharing effect in centering [16]).
We formalize the effect of surface structure on the choice of temporal antecedent by means of defeasible axioms. These axioms must be less specific than axioms encoding causal reasoning. We argue that the choice of discourse reference time is an instance of a general principle in defeasible reasoning, namely, the Penguin Principle [19] that chooses the most specific axiom applicable. We support our claims with data from the Brown corpus. In the next section, we review the three existing proposals most related to ours -- Webber [27], Lascarides and Oberlander [19], and Hwang and Schubert [14]. The next two sections present a review of centering followed by a discussion of the analogous effects of temporal centering. To account for temporal centering effects, we then present our tense rules and our account of how commonsense reasoning interacts with the default preferences generated by the surface structure.

Related Work

Webber [27] argues that temporal entities are subject to focusing processes analogous to those constraining the interpretation of definite noun phrases. She explicitly rejects, however, a more direct analogy to Sidner's [26] potential local foci, and assumes only one temporal referent in the temporal focus (TF). Lascarides and Oberlander [19] present defeasible reasoning rules for narrative understanding that partly address the reasoning and control needs of Webber's model. For example, they argue that in the case of conflicting inferences regarding the temporal order of two times (e.g., whether one precedes, follows, or overlaps the other), the most specific interpretation should be preferred. However, they do not address cases such as (2), where there are multiple possible discourse reference times and the choice of one interpretation over the other needs to be accounted for. Hwang and Schubert [14] intend to account for the role played by structural factors in the choice of anaphoric relations among distinct past tenses.
They propose the contextual structures called tense trees, built as a narrative gets interpreted. Tense trees reflect the structural dependencies among the tense and aspect operators in the interpretation of the sentences. The events evoked by a sentence are "appended" to the tree nodes whose positions reflect the structural positions of the corresponding temporal operators in the sentence. Webber [27] and Hwang and Schubert [14] differ markedly regarding the relative weight they assign to commonsense inferences versus surface structure, but both have problems with examples like (2). Neither can readily account for the interpretation assigned to (2c'). Webber assumes that the TF of (2b) is initially t1, but shifts to t3, thus initiating an embedded segment, after (2c') has been processed sufficiently to recognize that TF=t3 yields a more sensible interpretation than TF=t2(=t1). She does not discuss how t3 comes to be considered as a potential TF in this case, much less how to reject t2(=t1). Like Webber, Hwang and Schubert assume that (2c') requires recognition of an embedded segment, but admittedly they have not resolved the conflict between the need for discourse segment recognition, a task for commonsense reasoning, and the clear separation of tense resolution from commonsense reasoning that they advocate. They also fail to distinguish between the hierarchical structures of sentences and discourses in the tense tree. The tense tree thus overcommits in places where the rule should belong to defeasible preferences. Our approach is to formulate the structural preferences for tense resolution as defeasible rules analogous to centering preferences for pronoun resolution. These structural preferences, which are overlooked in Webber's and Lascarides and Oberlander's accounts, interact with the commonsense inferences used in tense interpretation.

Centering

Centering [9] is a refinement of Sidner's [26] local focusing model. It consists of a set of principles and rules for dynamically updating the local attentional state [11] in discourse processing. We assume the following general picture of discourse processing. A discourse consists of a sequence of utterances utt1, ..., uttn. The sentence grammar translates the content of each utterance utti into a (set of) surface logical form(s) containing unresolved anaphoric expressions and operators. We call it here a "surface" formula φi. This logical form is similar, in spirit, to Hwang and Schubert's [14] indexical formula and Alshawi's [2] quasi logical form, whose main motivations are to represent that part of the sentence meaning independent from the particular discourse context. This "baseline" meaning representation acts as a clean interface to the pragmatic processing needed to resolve context-dependent expressions. Utterance interpretation takes place in a context, and outputs an updated context. Part of this dynamic context is the attentional state that represents the currently salient entities partially ordered by relative salience. We say that each formula φi defines a transition relation between the input attentional state ASi-1 and the output attentional state ASi. An attentional state ASi contains the focus of attention Foci whose most salient subpart is the center of attention. After resolving anaphoric expressions in φi, the center of attention contains a partially ordered set of forward-looking centers Cfi comprising the entities realized in φi. A member of Cfi might (but need not) be the backward-looking center Cbi, the currently most salient entity. Centering has mainly been used to constrain how discourse anaphoric pronouns are processed; e.g., the centering rule [9] predicts that Cbi will be realized with a pronoun if Cbi=Cbi-1.²
Also, when Cbi=Cbi-1 and both are realized by definite pronouns, it is predicted that both will be realized in a common grammatical role, with subject preferred over non-subject [16] [24].

² Here we avoid the complication acknowledged in [11] that the two relevant utterances need not literally be adjacent.

a. John went to the store. Cf1=[John',store'] Cb1=NULL
b. He saw Bill. Cf2=[John',Bill'] Cb2=John' (Cb-establishment)
c. He walked towards him. Cf3=[John',Bill'] Cb3=John' (Cb-retention)
c'. He appeared pale to him. Cf3'=[Bill',John'] Cb3'=Bill' (Cb-establishment)
Figure 1: Illustration of Centering

A number of transition relation types have been distinguished in centering. In this paper, we will use the following four types: Cb-retention, Cb-establishment, Cb-resumption, and NULL-transition.³ In Cb-retention, the same entity is retained as the Cb: Cbi-1 = Cbi ≠ NULL. In Cb-establishment, another member of Cf becomes the Cb: Cbi-1 ≠ Cbi, Cbi ∈ Cfi-1 (Cbi-1 may be NULL). These two are the most relevant to the present paper. In Cb-resumption, an old Cb not in Cf but in the current Foc is resumed. In NULL-transition, the output state has no Cb (Cbi=NULL). Centering posits a default preference for retention over establishment. We provisionally assume that establishment is preferred over resumption or NULL-transition. We illustrate centering with Fig. 1, where c and c' are alternative continuations of b. After a., Cf1 contains two entities, John' and store'. In b., John' is referred to with a subject pronoun, and is established as Cb2. In c., because John' is the current Cb, and because retention is preferred over establishment, centering predicts that a subject pronoun will refer to John' rather than to Bill'. The default is overridden in c', and instead the subject pronoun is inferred to refer to Bill' because it is likely that the perceiver in the first perceptual state, see', remains the perceiver in the subsequent perceptual state, appear'.
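The four transition types can be captured in a few lines of code. The following Python sketch is illustrative only: Cb-resumption and the internal ordering of Cf are simplified, and the entity names are invented.

```python
def transition(cb_prev, cb_curr, cf_prev):
    """Classify a centering transition given the previous Cb, the
    current Cb, and the previous forward-looking center list."""
    if cb_curr is None:
        return "NULL-transition"
    if cb_curr == cb_prev:
        return "Cb-retention"
    if cb_curr in cf_prev:
        return "Cb-establishment"
    return "Cb-resumption"     # an old Cb resumed from the wider focus

# The transitions of Figure 1:
assert transition(None, "John", ["John", "store"]) == "Cb-establishment"  # b.
assert transition("John", "John", ["John", "Bill"]) == "Cb-retention"     # c.
assert transition("John", "Bill", ["John", "Bill"]) == "Cb-establishment" # c'.
```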
" 3Cb-retention and Cb-establishment are due to Kameyama [15] [16]. These two roughly correspond to the three [10] and four [5] transition types pro- posed elsewhere. Cb-resumption captures Sidner's [26] use of a discourse focus stack in the potential focus list, and can be analogously formalized as a Cb stack within the Cf. NULL-transition has been implicit in Kameyama's work but has not been made an explicit transition type. 72 a. John went over (tl) to Mary's house. TCfl=[rl] TCbl=NULL b. On the way, he had (t2) stopped (t3) by the flower shop for some roses. TC f2=[r2=rl,r3] TCb2=rl TCb-establishment c. Unfortunately, they failed (t4) to cheer her up TC fz=[r4=rl] TCbz=rl TCb-retention c'. He picked out (t4') 5 red ones, 3 white ones, and 1 pale pink. TCf3,=[r5=r3] TCbz,=r3 TCb-establishment Figure 2: Illustration of Temporal Centering Centering effects in tense interpretation Our proposal is motivated by the parallel between example (2) above and the centering example in Fig. I. As illustrated in Figure 2, we would like to say that utterances (2a) and (2b) share the same discourse reference time rl that links t2 to tl. The shared discourse reference time rl is thereby established as a backward-looking tempo- ral center (TCb). (2c) retains the TCb, linking t4 to t2(=tl), whereas (2c') establishes a new TCb, linking t4' to t3. In order to establish a direct analogy between centering and temporal centering, however, we need to first clarify the nature of the temporal entities in the attentional state. Note that if (2c) retains the "same" TCb, this TCb cannot be iden- tified with either t2 of (2b) or t4 of (2c), since t2 and t4 are distinct (tl=t2, tl -< t4). The TCb remains fixed while the event times within it may move forward or backward. The TCb is then quite different from the reference time as used in propos- als inspired by Reichenbach [25] such as Dowty's [6] and ttinrichs [12]). 4 Recall the extended flash- back example (3) above. 
There are two simultaneous narrative threads, in the simple past and in the past perfect, and the former seems to remain somehow in the background while the latter is in the foreground. These examples seem to indicate that when processing a text a reader maintains a list of "potential foci" ordered by their relative salience, instead of only one temporal focus, as in Webber's account. Further evidence in favor of the analogy was obtained by examining a random sample of discourse sequences from the Brown corpus [8], a heterogeneous corpus that should yield unbiased data.

⁴ A similar proposal is made by Kamp and Reyle [17], where they argue for the separation of the temporal perspective point (TPpt) that is relatively fixed and the reference point (Rpt) that moves with narrative progression.

Each multi-sentence sequence contained one of two types of trigger sentences with multiple temporal operators -- the past perfect matrix clause or the past matrix verb taking an embedded past or past perfect complement clause (e.g., John remembered that Mary had asked for roses). We observed that this trigger's output TCb was almost always the time associated with the superordinate operator. Following each trigger was a target sentence with a simple past matrix clause. This creates a sequence analogous to (2b-c) or (2b-c'). We analyzed each sequence to determine whether the discourse reference time for the past in the target sentence was introduced by the trigger's superordinate operator (as a case of TCb-retention) or by a subordinate operator (as a case of TCb-establishment). In the total of 80 examples, we found 52 retentions (65%) and 12 establishments (15%). Others included 9 cases of "neither" (11%) (some combination of resumption and NULL-transition) and 7 undecidable cases (9%).
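The reported percentages can be verified mechanically; this small Python fragment simply checks the arithmetic on the counts quoted above:

```python
counts = {"retention": 52, "establishment": 12,
          "neither": 9, "undecidable": 7}
total = sum(counts.values())
assert total == 80

pct = {k: 100 * v / total for k, v in counts.items()}
assert round(pct["retention"]) == 65
assert round(pct["establishment"]) == 15
assert round(pct["neither"]) == 11      # 11.25%, reported as 11%
assert round(pct["undecidable"]) == 9   # 8.75%, reported as 9%
```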
These data are consistent with our hypothesis that in a sentence with multiple temporal operators, the relative likelihood that a given temporal operator provides the discourse reference time for a subsequent past depends on its syntactic prominence. The question is, how do temporal centering and commonsense reasoning interact to yield the observed results?

Two levels of logical representation

Our explanation for the interaction between temporal centering and commonsense reasoning rests on assuming two distinct levels of representation in sentence interpretation. One is the logical form independent from the context and obtained by a direct translation of the surface syntactic structure of the utterance. The other is a fully resolved logical expression that results from incrementally resolving context-dependent expressions in the logical form. For simplicity, our discussion ignores anything but temporal operators.

Much as in Hwang and Schubert's proposal [14], the logical form contains unresolved temporal operators -- e.g., tense operators, PRES(ent) and PAST, and aspectual operators, PERF(ect) and PROG(ressive). It also represents the structural position of the temporal operators in a sentence. The crucial difference is that we take each tense and aspect operator to also give rise to a discourse reference interval (see below) that is contextually resolved. Our logical forms for (2a) and (2b) are shown in (4).

(4) a. (PASTr1 'John goes over to Mary's house')
    b. (PASTr2 (PERFr3 'John stops by the florist for some roses'))

τ(PASTr φ) = (∃e (e ⊆ r) ∧ (e starts_before SuperNow) ∧ (LastIn(r) orients e) [e ⊨ τ(φ)])
τ(PERFr φ) = (∃e (e ⊆ r) ∧ (e starts_before SuperRef) ∧ (LastIn(r) orients e) [e ⊨ τ(φ)])

Figure 3: Mapping rules for PAST and PERF

The temporal operators in our logical forms are translated into the language of what we call the event structure representation (ESR).
In essence, ESR represents the temporal and causal relations among the eventualities described in discourse. We will use Allen's [1] interval representation scheme for this representation level. Described eventualities correspond to event intervals and what we have been calling the "discourse reference times" correspond to discourse reference intervals. To represent relations among these temporal intervals, we use disjunctive subsets of Allen's thirteen relations. With Allen's representation, we can directly represent vague or ambiguous temporal relations expressed in natural language and incrementally resolve them to more specific relations using constraint propagation. Our discourse reference intervals coincide exactly with the computational role of Allen's reference intervals.

Tense mapping rules

Now we define the recursive mapping τ between the logical form and the event structure representation.5 These tense mapping rules "unpack" the relations among relevant event intervals and discourse reference intervals encoded by temporal operators in the logical form, and generate the initial event structure representation. Although these rules look similar to Hwang and Schubert's tense deindexing rules, they play a very different role. Rather than performing sentence interpretation, as their deindexing rules do, our mapping rules specify the semantics of the logical form in terms of the event structure representation. The τ rules for PAST and PERF are shown in Fig. 3.

We assume that formulas denote sets of time intervals. For every clause associated with a formula OPφ, where OP is a temporal operator (PAST or PERF) and φ is a formula, there is an event interval e corresponding to the specific time for which φ is inferred to hold. We express this relation as e ⊨ φ (e supports φ).

5 The use of τ was inspired by the τ translation in temporal logic [4].
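Disjunctive Allen relations and their incremental specialization can be sketched with plain sets (a minimal illustration; full constraint propagation would also require Allen's transitivity table, which we omit):

```python
# The thirteen basic Allen relations: '<' precedes, 'm' meets, 'o' overlaps,
# 's' starts, 'd' during, 'f' finishes, '=' equals, plus the inverses.
ALLEN = frozenset(["<", ">", "m", "mi", "o", "oi",
                   "s", "si", "d", "di", "f", "fi", "="])

# A vague relation is a disjunctive subset, e.g. the paper's starts_before
# = {meets, precedes, inverse-of-finishes}.
STARTS_BEFORE = frozenset(["m", "<", "fi"])

def specialize(current, evidence):
    """Narrow a disjunctive relation by intersecting it with new evidence;
    an empty intersection signals temporally inconsistent constraints."""
    narrowed = current & evidence
    if not narrowed:
        raise ValueError("inconsistent temporal constraints")
    return narrowed
```

For instance, `specialize(STARTS_BEFORE, frozenset(["<"]))` resolves the vague starts_before relation to precedes, as in the specialization of t3's constraint below.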
A statement of this form evaluates to true if the denotation of e is an element of the denotation of φ.

Each event interval e is temporally included in a discourse reference interval r, semantically treated as an open variable -- the value of r is constrained by contextual factors such as temporal centering and commonsense reasoning, as described in a later section.

SuperNow and SuperRef have values determined by the position of the subformula OPφ in the logical form. SuperNow evaluates to the current utterance interval -- the time interval in which the current utterance takes place. We assume that there is an updating mechanism for SuperNow defined for each utterance initiation including the onset of a quoted speech. SuperRef evaluates to the event interval evoked by the temporal operator that immediately dominates the current one in the logical form.

Starts_before, a disjunction of meets, precedes and the inverse of finishes (m ≺ fi), is the possible relation between e and SuperNow for any PASTφ (or between e and SuperRef for any PERFφ), irrespective of the aspectual type of φ. Orients, borrowed from Hwang and Schubert, is the disjunctive set of all of Allen's thirteen relations. Both starts_before and orients may be further specialized after computing φ's aspectual type, which depends partly on the aspectual types of its constituents and partly on commonsense reasoning.6 We can state certain default specifications of orients. For example, involving two transition event intervals, orients defaults to precedes (e.g., see (1) where t ≺ t'). Stative intervals by default overlap other non-stative or stative intervals (cf. [6] [12]).

LastIn(r) is a function defined over discourse reference intervals, and evaluates to the most recent non-stative event interval in the discourse reference interval r, where applicable. This event interval orients the new event interval e.
LastIn(r) corresponds to the "reference time" that moves with narrative progression in approaches like Dowty [6] and Hinrichs [12].

How the mapping rules work

Table 1 shows the logical form and the event structure determined by the mapping rules for each sentence of (2) (repeated here).

(5) a.  John went over (t1) to Mary's house.
    b.  On the way, he had (t2) stopped (t3) by the flower shop for some roses.
    c.  Unfortunately, they failed (t4) to cheer her up.
    c'. He picked out (t4') 5 red ones, 3 white ones and 1 pale pink.

        φi             e ⊆ r        Constraint
    a   PAST α         t1 ⊆ r1      t1 (≺) ua
    b   PAST (PERF β)  t2 ⊆ r2      t2 (m fi) ub
        PERF β         t3 ⊆ r3      t3 (≺) t2
    c   PAST γ         t4 ⊆ r4      t4 (≺) uc
    c'  PAST γ'        t4' ⊆ r4'    t4' (≺) uc'

Table 1: Application of Rules to (2)

The symbols α-γ' correspond to the tenseless propositions of sentences a-c', and ua-uc' represent the respective utterance event intervals. We explain the interpretation of sentence (2b) ((5b) above) in detail. Applying the rule for PAST, we obtain t2 ⊨ τ(PERF β), with discourse reference interval r2 such that t2 ⊆ r2. SuperNow evaluates to ub, yielding t2 starts_before ub. PERF β is necessarily stative, so by default its event interval is assumed to persist up through the present. Thus t2 starts_before ub is specialized to t2 (m fi) ub. Applying the PERF rule, we obtain t3 ⊨ β, with discourse reference interval r3 such that t3 ⊆ r3. SuperRef evaluates to t2, the event interval of the dominating PAST operator, yielding t3 starts_before t2, which is then specialized to t3 ≺ t2 due to the aspectual types of PERF β and β.

The interaction between structural factors and causal knowledge

We propose that defeasible rules producing hypotheses about the interpretation of an utterance operate on both the levels of representation used in tense interpretation.

6 See [23] for a partial model of computing aspectual type; see [21] for a partial model of aspectual coercion, one type of commonsense reasoning.
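The derivation just given for (2b) can be mechanized as a small recursive unpacking of the logical form. This is a sketch under our own encoding (formulas as nested tuples, intervals as fresh symbols, constraints as triples), not the paper's implementation:

```python
import itertools

_fresh = itertools.count(1)

def tau(formula, super_now, super_ref=None, out=None):
    """Unpack PAST/PERF operators into an event interval e, an open
    discourse reference interval r, and relational constraints."""
    if out is None:
        out = []
    if isinstance(formula, str):        # tenseless proposition: nothing to add
        return None, out
    op, sub = formula
    e, r = f"e{next(_fresh)}", f"r{next(_fresh)}"
    anchor = super_now if op == "PAST" else super_ref
    out.append((e, "during", r))               # e is included in r
    out.append((e, "starts_before", anchor))   # per the rules of Fig. 3
    tau(sub, super_now, super_ref=e, out=out)  # e anchors any embedded PERF
    return e, out

# Sentence (2b): (PAST (PERF beta)), uttered at u_b.
e_past, constraints = tau(("PAST", ("PERF", "beta")), "u_b")
```

The four constraints produced mirror the b-rows of Table 1: the PAST event sits inside its reference interval and starts_before the utterance interval, and the PERF event starts_before the PAST event (later specialized to precedes).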
On the event structure representation level, we assume the existence of axioms formalizing lexical semantics and other aspects of commonsense knowledge (e.g., [13] [19]), and call them causal axioms. Simplified examples are given in (6) and (7) below. We use the default conditional > introduced by Asher and Morreau [3] and used by Lascarides and Oberlander [19]. (6) encodes the inference that people usually pick up flowers at the florist's. (7) says that people are usually at certain places as a result of their going there.

(6) PICK-UP(X, flowers, t) > (AT(X, florist, t') ∧ t ⊆ t')
(7) AT(X, place, t) > (GO-TO(X, place, t') ∧ t' ≺ t)

The other, more novel, assumption we make is that there are defeasible rules that operate on the logical form, on the basis of structural information only. We formalize the effects of structural information in terms of what we call logical form axioms. Much as in the case of axioms formalizing commonsense knowledge, logical form axioms can be defeasible. Although the effects of structural factors on anaphora interpretation are widely acknowledged in linguistic theories (e.g., binding theory), the interest is more on indefeasible grammatical principles. Our logical form axioms encode grammatically derived defeasible preferences, the kind of structural effects that centering and temporal centering uncover.

We assume the following architecture for sentence interpretation. Once the logical form has been obtained, logical form reasoning takes place. There are two kinds of rules in logical form reasoning -- monotonic (or indefeasible) and defeasible. The former rewrite the logical form, and the latter generate logical form hypotheses, that are additional logical form expressions. All of the hypotheses resulting from logical form reasoning, as well as the original interpretation, are then mapped into event structure representations, and causal axioms are used to complete the interpretation.
All complete interpretations of a sentence are compared to find the best hypothesis.

We propose to encode temporal centering as defeasible logical form axioms that generate logical form hypotheses about the interpretations of unresolved discourse reference intervals. To be more precise, we claim that the following principle holds:

Temporal Property-sharing Principle (TPP)
Resolve the current discourse reference interval ri against a discourse reference interval rj in the attentional state. By default, ri and rj should share the same property reflecting the temporal position in the logical form.

This general preference can be encoded by means of logical form axioms of the type shown in (8), whose intended interpretation is as follows -- Let uttn-1 and uttn be two consecutive utterances in discourse, lf(utt) be a function from utterances to their logical forms, (PASTr1 φ) be a subformula of lf(uttn-1), (PASTr2 ψ) be a subformula of lf(uttn), and pos(lf1, lf2) be a function from pairs of a formula lf1 and a subformula lf2 into the "temporal position" of lf2 in lf1. The temporal position of a subformula in a formula is the sequence of higher temporal operators of the subformula. For example, in the formula (PAST (PERF φ)), the temporal position of (PAST (PERF φ)) is the empty sequence (), that of (PERF φ) is the sequence (PAST), and that of φ is the sequence (PAST, PERF). In a case where pos(uttn-1, (PASTr1 φ)) = pos(uttn, (PASTr2 ψ)), the TPP generates the hypothesis that the two discourse reference intervals r1 and r2 are the same.

(8) consecutive(uttn-1, uttn),
    pos(lf(uttn-1), (PASTr1 φ)) = pos(lf(uttn), (PASTr2 ψ))
    > uttn : (PASTr2=r1 ψ)

A similar rule generates hypotheses about the identity of two discourse reference intervals both introduced by PERF operators in the same temporal positions.

Lascarides and Oberlander [19] propose the Penguin Principle as a way of choosing among conflicts generated by the application of defeasible rules.
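The pos function and the hypothesis generated by rule (8) can be sketched as follows (formulas again as nested tuples; the function names are ours):

```python
def pos(formula, target, path=()):
    """Temporal position of subformula `target` inside `formula`:
    the sequence of temporal operators dominating it."""
    if formula == target:
        return path
    if isinstance(formula, tuple):
        op, sub = formula
        return pos(sub, target, path + (op,))
    return None  # target does not occur in formula

def tpp_identifies(lf_prev, sub_prev, lf_curr, sub_curr):
    """TPP (rule (8)): hypothesize that the discourse reference intervals of
    two PAST subformulas in consecutive utterances are identical whenever
    the subformulas occupy the same temporal position."""
    p1, p2 = pos(lf_prev, sub_prev), pos(lf_curr, sub_curr)
    return p1 is not None and p1 == p2

lf_b = ("PAST", ("PERF", "beta"))   # (2b)
lf_c = ("PAST", "gamma")            # (2c)
```

The matrix PASTs of (2b) and (2c) both sit at position (), so the TPP identifies their discourse reference intervals; the embedded PERF of (2b), at position (PAST,), does not qualify.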
The principle says that whenever a conflict between rules arises, the hypothesis obtained by applying the most specific rule should be chosen. The logical form axioms encode general knowledge of linguistic structures independent from particular events or situations being described. Thus the information they encode is necessarily less specific than rules encoding lexical inferences and causal relations. Hence the following consequences on the TPP:

The TPP applies in the absence of information about the causal relation between the events.

Any hypothesis that results from causal reasoning is preferred over the hypothesis suggested by the TPP.

This general pattern of interaction between causal and logical form axioms predicts the following Temporal Centering Principle:

Temporal Centering Principle (TCP)
When φi evokes a set of unresolved discourse reference intervals r1, ..., rn, each rk is resolved against the current attentional state as follows --
(i) IF causal axioms generate a hypothesis, take it,
(ii) ELSE IF the TPP generates a hypothesis, take it,
(iii) ELSE r1, ..., rn are new to the discourse.
After resolution, r1, ..., rn are in the TCf in the output context of φi.

Case (i) of the TCP allows any of the four transition types, retention, establishment, resumption, or NULL. A NULL-transition results only from negative conclusions obtained from causal axioms. Under the TPP in the form of rule (8), which applies only on consecutive utterances, case (ii) would result in either TCb-retention or TCb-establishment, but not TCb-resumption.

How does this accord with our corpus data discussed earlier? In the most frequent (65%) TCb-retention, the discourse reference interval evoked by the matrix past tense is identified with another evoked by the matrix past tense in the previous utterance. In the next frequent (15%) TCb-establishment, it is identified with the one evoked by the subordinate temporal operator.
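The TCP's ordered preferences amount to a three-way fallback, which can be sketched directly (the hypothesis arguments stand in for the outputs of causal and logical form reasoning):

```python
def resolve_interval(r, causal_hyp=None, tpp_hyp=None):
    """Resolve discourse reference interval r per the TCP: causal reasoning
    wins (by the Penguin Principle, causal axioms are more specific),
    the TPP is the fallback, otherwise r is new to the discourse."""
    if causal_hyp is not None:      # case (i)
        return causal_hyp, "causal"
    if tpp_hyp is not None:         # case (ii)
        return tpp_hyp, "TPP"
    return r, "new"                 # case (iii)
```

For (2c) both sources agree on the same interval; for (2c') the causal hypothesis overrides the TPP's.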
This sharp difference in frequency would be explained by the fact that the TPP supports the former but not the latter.

Webber's example, revisited

We now go back to Webber's example (2) and explain how the interactions of defeasible rules result in the temporal centering transitions shown in Fig. 2. The input consists of the logical forms from Table 1.

(2a) For the initial sentence of discourse, only the NULL-transition can apply. r1 is not identified, and remains as an open variable. AS1 contains TCf1=[r1].

(2b) The TPP can apply for PAST, resulting in TCb-establishment, with r1=r2. As a result, TCf2=[r1,r3], TCb2=r1, and the value of LastIn(r2)=LastIn(r1) is t1. This yields t1 orients t2, where t1 is the transition event interval for α and t2, for a state (PERF β). When a non-stative event interval orients a stative event interval, the former is partly covered by (overlaps, starts, finishes, equals, during, or their inverses) the latter by default.

(2c) The TPP, in the form of rule (8), generates the hypothesis expressed by the following logical form: (PASTr4=r1 'The roses fail to cheer her'). The causal reasoning generates the same conclusion, namely, in order to give somebody flowers, one has to be at somebody's place, so we conclude that the 'failing-to-cheer event' continues the "story" of John's going over to Mary's house. Either way, we have a TCb-retention, yielding TCf3=[r1(=r4)] and TCb3=r1. The value of LastIn(r4) is now t1, yielding t1 orients t4. Since t1 and t4 are event intervals for transition events (α and γ), orients defaults to t1 (≺) t4.

(2c') There is a conflict. While the TPP would suggest a continuation of the story of John's visit at Mary's house, the causal rules such as (6) suggest that the event of picking up flowers takes place at the florist's, and the rules such as (7) suggest that being at the florist's results from the action of "stopping by" the florist's.
The Penguin Principle now applies, resulting in TCb-establishment, and t3 (≺) t4.

Conclusions

We have presented a semantic and pragmatic account of the past tense that uniformly accounts for its discourse anaphoric properties. We distinguish between two kinds of intervals, discourse reference intervals and event intervals, which allows us to provide a direct analogy between centering and temporal centering.

We introduce the notion of logical form reasoning, which obviates the need for tense trees as the source for structural reasoning, and, at the same time, enables us to account for the interaction between the structural preferences and causal reasoning. By including in both logical form expressions and event structure representation an explicit indication of the contextual parameters to be resolved, rules can be formulated that exploit the kind of information available at each level. We can therefore factor out different sources of defeasible preferences in tense interpretation. Rules operating on the logical form formalize the effects of structural factors, and rules operating on the event structure formalize the effects of commonsense knowledge.

The ordering preferences in our model are consistent with data from a large, heterogeneous corpus. In future work, we need to develop an evaluation method for our model, to specify more fully the knowledge representation and reasoning needs, and finally to clarify consequences that our model might have on a more unified model of attentional state to simultaneously account for nominal and temporal anaphora.

Acknowledgement

Each author would like to thank the other two authors for stimulating discussions and encouragement. We would also like to thank Johan van Benthem, David Carter, Janet Hitzeman, Len Schubert, and Henriëtte de Swart for helpful discussions.

References

[1] James F. Allen. Maintaining knowledge about temporal intervals. Communications of the ACM, 26:832-843, 1983.
[2] Hiyan Alshawi. Resolving quasi logical forms. Computational Linguistics, 16(3):133-144, 1990.
[3] Nicholas Asher and Michael Morreau. Commonsense entailment: a modal theory of nonmonotonic reasoning. In Proceedings of IJCAI 1991.
[4] Johan van Benthem. Temporal logic. In D. Gabbay, Chr. Hogger and J. Robinson, editors, Handbook of Logic in Artificial Intelligence and Logic Programming, Volume III (second revised version). Oxford University Press, forthcoming.
[5] S. Brennan, L. Friedman, and C. Pollard. A centering approach to pronouns. In Proceedings of the 25th ACL Meeting, pages 155-162, 1987.
[6] David Dowty. The effects of aspectual class on the temporal structure of discourse: Semantics or pragmatics. Linguistics and Philosophy, 9:37-61, 1986.
[7] Mürvet Enç. Anchoring conditions for tense. Linguistic Inquiry, 18.4:633-657, 1987.
[8] W. Francis and H. Kucera. Frequency Analysis of English Usage: Lexicon and Grammar. Houghton Mifflin, Boston, MA, 1982.
[9] Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. Providing a unified account of definite noun phrases in discourse. In Proceedings of the 21st ACL, pages 44-50, 1983.
[10] B. Grosz, A. Joshi, and S. Weinstein. Towards a computational theory of discourse interpretation. Unpublished manuscript, 1983.
[11] Barbara J. Grosz and Candace L. Sidner. Attention, intentions and the structure of discourse. Computational Linguistics, 12:175-204, 1986.
[12] Erhard Hinrichs. Temporal anaphora in discourses of English. Linguistics and Philosophy, 9:63-82, 1986.
[13] Jerry R. Hobbs, Mark Stickel, Paul Martin, and Douglas Edwards. Interpretation as abduction. In Proceedings of the 26th ACL, pages 95-103, 1988.
[14] Chung Hee Hwang and Lenhart K. Schubert. Tense trees as the 'fine structure' of discourse. In Proceedings of the 30th ACL, pages 232-240, 1992.
[15] Megumi Kameyama. Zero Anaphora: The Case of Japanese. PhD thesis, Stanford University, 1985.
[16] Megumi Kameyama.
A property-sharing constraint in centering. In Proceedings of the 24th Annual Meeting of the ACL, pages 200-206, New York, 1986.
[17] Hans Kamp and Uwe Reyle. From Discourse to Logic, Vol. I. Kluwer.
[18] Lauri Karttunen. Discourse referents. In J. McCawley, editor, Syntax and Semantics, Vol. 7: Notes from the Linguistic Underground. Academic Press, New York, 1976.
[19] Alex Lascarides and Jon Oberlander. Temporal coherence and defeasible knowledge. Theoretical Linguistics, 1992. To appear.
[20] James D. McCawley. Tense and time reference in English. In Charles J. Fillmore and D. Terence Langendoen, editors, Studies in Linguistic Semantics, pages 97-114. Holt, Rinehart and Winston, Inc., New York, 1971.
[21] Mark Moens and Mark J. Steedman. Temporal ontology and temporal reference. Computational Linguistics, 14:15-28, 1988.
[22] Barbara Hall Partee. Some structural analogies between tenses and pronouns in English. Journal of Philosophy, 70:601-609, 1973.
[23] Rebecca J. Passonneau. A computational model of the semantics of tense and aspect. Computational Linguistics, 14:44-60, 1988.
[24] Rebecca J. Passonneau. Getting and keeping the center of attention. In R. Weischedel and M. Bates, editors, Challenges in Natural Language Processing. Cambridge University Press, to appear. Also available as Tech. Report CUCS-060-90, Dept. of Computer Science, Columbia University.
[25] Hans Reichenbach. Elements of Symbolic Logic. Macmillan Company, 1947. Reprinted, 1980, Dover Publications, New York.
[26] Candace L. Sidner. Focusing in the comprehension of definite anaphora. In M. Brady and R. C. Berwick, editors, Computational Models of Discourse, pages 267-330. The MIT Press, Cambridge, Massachusetts, 1983.
[27] Bonnie Lynn Webber. Tense as discourse anaphor. Computational Linguistics, pages 113-122, 1988.
ASSIGNING A SEMANTIC SCOPE TO OPERATORS

Massimo Poesio
University of Rochester, Department of Computer Science
Rochester, NY 14627-0226, USA
poesio@cs.rochester.edu

Abstract

I propose that the characteristics of the scope disambiguation process observed in the literature can be explained in terms of the way in which the model of the situation described by a sentence is built. The model construction procedure I present builds an event structure by identifying the situations associated with the operators in the sentence and their mutual dependency relations, as well as the relations between these situations and other situations in the context. The procedure takes into account lexical semantics and the result of various discourse interpretation procedures such as definite description interpretation, and does not require a complete disambiguation to take place.

THE PROBLEM

Because new ways of obtaining semantically distinct interpretations for sentences are continuously discovered, coming to grips with ambiguity is becoming more and more of a necessity for developers of natural language processing systems, linguists and psychologists alike [9, 31, 7, 2]. In this paper, I am concerned with the scopal ambiguity of operators1 [31, 33].

The attention of both psycholinguists and computational linguists interested in ambiguity has concentrated on the problem of combinatorial explosion. If the number of readings of an utterance were to actually grow with the factorial of the number of operators, even a simple sentence like (1), with 4 operators (the modal 'should', tense, an indefinite and a definite), would have 4! = 24 scopally different readings. Two distinct questions thus must be answered: how can listeners (and how should machines) deal with the combinatorial explosion of readings? Do we really use the brute-force strategy of considering all of the available readings, and then choose among them?
And, if we do choose among several readings, how is that done?

(1) We should hook up an engine to the boxcar.

To my knowledge, three positions on the problem of combinatorial explosion have been taken in the literature. Some have argued that there is no problem: our brains contain more than enough machinery to process in parallel 4! interpretations. It's unclear, however, whether this strategy is feasible when larger numbers of readings are concerned. A classical demonstration of the number of readings one may have to consider is (2), which has 11! interpretations if the standard treatment of quantification and modality is assumed.

(2) You can fool most people on most of the issues most of the time, but you can't fool everybody on every single issue all of the time. [15]

Another position is that sentences like (1) are not semantically ambiguous, but vague. Consider for example (3):

(3) Every kid climbed a tree.

Here, one of the readings (the one in which the indefinite takes narrow scope) is entailed by the other (in which the indefinite takes wide scope). The claim is that (3) is interpreted in the vaguest possible way, and the strongest reading, if at all, is derived by pragmatic 'strengthening' [25]. A difficulty with this approach is that a vaguest reading doesn't always exist. The two readings of (4), for example, are distinct.

(4) Few people speak many languages. [27]

Finally, it has been proposed that the reason why listeners do not seem to have problems in processing utterances like (1) is because they do not disambiguate. They build a non-disambiguated representation of the sentence and leave the interpretation open. This strategy might be advantageous for some kinds of applications,2 and it has been argued that a complete disambiguation never takes place [7].

1 I use here the term operator as it is used by Heim [13], i.e., to mean either quantifier or modal/tense operator.
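The counts quoted above are just factorials of the number of operators; for (2) the brute-force space is already in the tens of millions:

```python
from math import factorial

readings_1 = factorial(4)   # sentence (1): 4 operators  -> 24 readings
readings_2 = factorial(11)  # sentence (2): 11 operators -> 39,916,800 readings
```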
No matter what processing strategy is chosen, the question of how listeners choose one particular interpretation cannot be ignored. All experimental work done on the subject of scopal ambiguity [20, 35, 26] indicates that subjects do have preferred interpretations when confronted with tasks which require understanding. In addition, sentences like (1), (5) and (6) clearly have preferred interpretations. However, the only answers to this question that I have seen are based on heuristics.3

(5) A girl took every chemistry course. [20]
(6) Each daughter of an admiral married a captain.

I present in this paper an hypothesis about interpretation that accounts for facts about scope disambiguation that were previously explained in the literature by stipulating a number of unmotivated principles. The proposal developed here is being applied to develop the module of the TRAINS-93 system [1] that handles scope disambiguation and reference interpretation. The goal of the TRAINS project is to develop a conversationally proficient planning assistant. More details about the project and the work presented here can be found in [29].

SCOPE DISAMBIGUATION FACTORS

Most proposals on scope disambiguation were developed to account for the general preference of the leftmost quantified phrase for taking wide scope in simple active sentences like (7):

(7) Every kid climbed a tree.

Lakoff [27] proposed that this preference is due to the fact that sentences are parsed from left to right; "every kid" takes scope over "a tree" because it is processed first. (Kurtzman and MacDonald called this the Left to Right principle.) Ioup [20] argued instead that "...in natural language, order has little to do with the determination of quantifier scope." ([20], p.37). The preferred reading of (8), for example, is the one in which the NP "each child" takes wide scope.

(8) I saw a picture of each child.

2 E.g., machine translation [2].
3 See [17] for an example of state-of-the-art techniques.
[20]

According to Ioup, the relative scope of quantifiers is determined by the interaction of two factors. First of all, quantifiers such as "each" or "the" have the inherent property of taking wide scope over indefinites, which, in turn, are lexically marked to take scope over plural quantifiers like "all." This hypothesis is motivated by contrasts such as those in (9), and accounts for cases such as (8).4

(9) a. I saw a picture of each child.
    b. I saw a picture of all the children.

Secondly, Ioup proposed that a hierarchy exists among grammatical functions, such that listeners tend to attribute to NPs in subject position wide scope over NPs in indirect object position, which in turn tend to take wide scope over NPs in object position. The hierarchy between grammatical functions accounts for the preferred reading of (7).

Ioup also observed that NPs in topic position tend to take wide scope. This is especially obvious in languages that have a specific grammatical category for topic, like Japanese or Korean. The Japanese sentence (10b) is ambiguous, but the reading in which the NP in subject position, "most students," takes scope over the NP in object position, "every language," is preferred. This preference is maintained if the NP in object position is scrambled in sentence-initial position, as in (10c) (another counterexample to Lakoff's left-to-right principle). If, however, the NP is marked with the topic-marking suffix "wa," as in (10d), suddenly the preferred reading of the sentence becomes the one in which "every language" takes wide scope.5

4 Van Lehn [35] and Hendrix [14] also studied the effect of lexical preferences, or 'strengths' as they are also called.

(10) a. Most students speak every language.
     b. Hotondo-no gakusei-ga subete-no gengo-o hanasu
        most-gen student-nom every language-acc speak
     c. Subete-no gengo-o hotondo-no gakusei-ga hanasu
        every language-acc most-gen student-nom speak
     d.
Subete-no gengo-wa hotondo-no gakusei-ga hanasu
        every language-TOP most-gen student-nom speak

Several proposals attribute an important role to structural factors in assigning a scope to operators. Jackendoff [21] and Reinhart ([32], ch. 3 and 9) propose to account for the preferred reading of (7) by means of a C-command principle according to which a quantified expression is allowed to take scope over another quantified expression only if the latter is c-commanded by the former at surface structure.

Structural explanations (in the form of constraints on syntactic movement) have also been proposed to explain the constraint that prevents a quantifier from taking scope outside the clause in which it appears, first observed by May [28] and called Scope Constraint by Heim [13]. This constraint is exemplified by the contrast in (11): whereas (11a) has a reading in which "every department" is allowed to take wide scope over "a student," this reading is not available for (11b).

(11) a. A student from every department was at the party.
     b. A student who was from every department was at the party.

Lexical semantics and commonsense knowledge also play an important role in determining the scope of operators. The contrast between the preferred readings of (12a) and (12b) can only be explained in terms of lexical semantics:

(12) a. A workstation serves many users.
     b. A workstation can be found in many offices.

Kurtzman and MacDonald [26] set out to verify the empirical validity of several of these principles. The most crucial result is that none of the principles they set out to verify can account for all the observed effects, and actually counterexamples to all of them -- including the quantifier hierarchy -- can be found. No evidence for a Left-to-Right processing principle was found.
Kurtzman and MacDonald hypothesize that "...processes that are not strictly dedicated to the interpretation of scope relations may nonetheless influence the interpretation of quantifier scope ambiguities." ([26], p.22). They conclude that "...the results leave open the question of whether the building and selection of representations of scope are mandatory processes" ([26], p.45).6

OVERVIEW OF THE PROPOSAL

Scope Disambiguation as Construction of an Event Structure

It is commonly assumed in the psycholinguistic literature on sentence interpretation that hearers interpret sentences by constructing a model of the situation described by the sentence [10, 22]. I propose that the scope assigned to the operators contained in a sentence is determined by the characteristics of the model construction procedure. The model being constructed, which I call event structure, consists of a set of situation descriptions, one for each operator, together with dependency relations between them. The task of the model construction procedure is to identify these situations and to establish dependency relations. The scope assigned by a hearer to an operator depends on the position of the situation associated with that operator in the event structure.

For example, I propose that the scope assigned to quantifiers depends on how their resource situation [3, 8] is identified. It is well-known that a sentence like (13):

(13) Everybody is asleep.

is not interpreted as meaning that every single human being is asleep, but only that a certain contextually relevant subset is. The process of identifying the set of individuals over which an operator quantifies is usually called domain restriction.

5 Arguably, the closest thing to an explicit topic marker in English are certain uses of definite descriptions and the topicalization construction; in both cases, the topically marked NP tends to take wide scope.
6 Their experiments are discussed in more detail in [29].
In the case of, say, (7), whether "every kid" or "a tree" takes wide scope depends on how the listener builds a model of the sentence. If she starts by first identifying a situation containing the group of kids that "every" is quantifying over, and then proceeds to 'build' for each of these kids a situation which contains a tree the kid is climbing, then "every kid" will take wide scope. In other words, I propose that a listener has a preferred reading for a sentence if she's able to identify the resource situation of one or more of the operators in that sentence ('to picture some objects in her mind'), and to hypothesize dependency relations between these situations. If this process cannot take place, the sentence is perceived as 'ambiguous' or 'hard to understand.'

The less context is available, the more the establishment of dependency relations between situations depends on the order in which the model is built, i.e., on the order in which the situations associated with the different operators and events are identified. This order depends in part on which NPs are perceived to be 'in topic,' and in part on general principles for building the conceptual representation of events (see below). In addition, some operators (e.g., definite descriptions) impose constraints on their resource situation.

A Model Construction Procedure: The DRT Algorithm

In order to make the intuition more concrete we need the details of the model construction procedure. Ideally, one would want to adopt an existing procedure and show that the desired results fall out automatically. Unfortunately, the model construction procedures presented in the psycholinguistic literature are not very detailed; often it's not even clear what these researchers intend as a model.
There is, however, a discourse interpretation procedure that is specified in detail and has some of the characteristics of the model construction procedure I have in mind; I'm thinking of the DRS construction algorithm [23, 24].

The DRS construction algorithm consists of a set of rules that map discourses belonging to the language into certain "interpretive structures". The output structures are called "Discourse Representation Structures" or "DRSs." A DRS is a pair consisting of a set of discourse referents and a set of conditions (= predicates on the discourse referents). The construction algorithm works by first adding the syntactic structure of the sentence to the 'root' DRS representing the discourse up to that point, then applying the rules to the syntactic structure, thus adding discourse referents and conditions to the DRS. Consider how the algorithm is applied to obtain an interpretation for (7):

(14) Every kid climbed the tree.

The initial interpretation of (14) is the tree shown in (15).

(15) [S [NP [Det Every] [N' kid]] [VP [V climbed] [NP [Det the] [N' tree]]]]

The DRS construction rules for definites and universal quantification are as follows:

(Definite Descriptions) When a syntactic configuration containing a definite NP is met in a DRS K,
1. Add a new discourse referent x to the root DRS,
2. Add a new condition to the root DRS representing the restriction on the definite NP,
3. Replace the NP with x in the syntactic configuration.

(Universal Quantification) When a syntactic configuration containing an NP with determiner "every" is met in a DRS K,
1. Add a complex condition K1 => K2 to K,
2. Add a new discourse referent x to K1,
3. Add a new condition to K1 representing the restriction on the quantified NP,
4. Replace the NP with the discourse referent in the syntactic configuration,
5. Move the syntactic configuration inside K2.

Both the rule for definites and the rule for universal quantification are triggered by (15).
Two hypotheses are obtained; the one obtained by applying the rule for definite descriptions first is shown in (16). Both of these hypotheses contain operators whose DRS construction rules haven't been applied yet: the algorithm comes with a built-in notion of partial hypothesis--a partial hypothesis is a DRS some of whose operators still have to be 'interpreted' in the sense just mentioned.

(16) [x | TREE(x), [S [NP [Det Every] [N' kid]] [VP [V climbed] x]]]

The two partial hypotheses are made into complete hypotheses by applying the remaining rules; the complete hypothesis with the definite taking wide scope is shown in (17).

(17) [x | TREE(x), [y | KID(y)] => [ | CLIMBED(y,x)]]

Modifying the DRS Construction Algorithm

Because the DRS construction rules depend on syntactic patterns, the role of structural factors in disambiguation can be taken into account--and a lot of data about disambiguation preferences can be explained without any further machinery. The Scope Constraint, for example, is embedded in the very semantics of DRT; and one can 'build into' the construction rules principles such as the c-command principle. (Kamp and Reyle do just that in [24].) The limitations of this approach are shown by examples in which the choice of an interpretation does not depend on the structure, like (12). Also, the rule for definites as just formulated is too restrictive: in cases like (18), for example, it predicts the correct reading for the definite NP "the meeting," but the wrong one for "the principal," which, intuitively, takes narrow scope with respect to "every school:"

(18) Every school sent the principal to the meeting.

I propose that the role of lexical semantics, as well as the data accounted for in the literature by introducing principles such as the grammatical function hierarchy, the topic principle, and the quantifier hierarchy, can be accounted for by making the activation of the DRS construction rules depend on factors other than the syntactic structure of the sentence.
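The two construction rules can be sketched operationally; the following toy implementation (mine, not the paper's) applies the definite and universal rules to "Every kid climbed the tree", representing a DRS as a dictionary of referents and conditions:

```python
# Minimal sketch of DRS construction for "Every kid climbed the tree".
# A DRS is a dict of discourse referents and conditions; a complex
# condition K1 => K2 is a ("=>", K1, K2) triple.  Illustrative only.

def new_drs():
    return {"refs": [], "conds": []}

def apply_definite(root, noun, var):
    """Definite rule: add a referent and its restriction to the ROOT DRS."""
    root["refs"].append(var)
    root["conds"].append(f"{noun.upper()}({var})")
    return var

def apply_universal(drs, noun, var):
    """Universal rule: add a complex condition K1 => K2 to the current DRS."""
    k1, k2 = new_drs(), new_drs()
    k1["refs"].append(var)
    k1["conds"].append(f"{noun.upper()}({var})")
    drs["conds"].append(("=>", k1, k2))
    return var, k2

root = new_drs()
x = apply_definite(root, "tree", "x")        # "the tree": x, TREE(x) in root
y, k2 = apply_universal(root, "kid", "y")    # "every kid": [y|KID(y)] => K2
k2["conds"].append(f"CLIMBED({y},{x})")      # remaining material goes to K2

print(root["refs"])       # ['x']
print(root["conds"][0])   # TREE(x)
```

Applying the definite rule first, as here, yields the wide-scope-definite hypothesis (17); applying the universal rule first (so that the definite rule later fires inside K2) would yield the other reading.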
The factors I propose to incorporate are (i) the semantics of lexical items, (ii) the results of the interpretation of operators in context, and (iii) the way the representation of events is built in memory.

In order to achieve this goal, I propose two main modifications to the standard DRS construction algorithm. First of all, I propose that the input to the algorithm is a logical form--a structure isomorphic to the s-structure that, however, carries information about the semantic interpretation of lexical items. In this way, the role of semantic factors in interpretation can be taken into account; in addition, a semantic value can be assigned to a representation containing unresolved conditions or partial hypotheses. Secondly, I propose to make the application of the DRS construction rules depend on the identification of certain contextually dependent elements of the interpretation. The ingredients of the account thus include: a proposal about the input to the model construction procedure; a notion of what an event structure is; and an account of discourse interpretation. I discuss these issues in turn in the next sections.

THE LOGICAL FORM

As said above, the first difference between the interpretation procedure proposed here and the DRS construction algorithm illustrated above is that the rules I propose rely on semantic and contextual factors. I propose to do this by adding to standard DRT a new class of conditions, that I call 'logical forms.' Logical forms include semantic information about the lexical items occurring in the sentence. The logical form representation is the interface between the parser and the model construction algorithm, and can be compositionally obtained by a GPSG parser [11, 18] that couples a context-free grammar with rules of semantic interpretation. I first describe the language used to characterize the semantics of lexical items, SEL (for Simple Episodic Logic), then the syntax and interpretation of logical forms.
Lexical Semantics in Simple Episodic Logic

I introduce SEL by presenting the truth conditions I propose to assign to (18), repeated here for convenience:

(18) Every school sent the principal to the meeting.

The truth conditions usually assigned to (18) in a language with restricted quantification, and ignoring tense, are shown in (19); I propose instead to assign to (18) the interpretation specified by (20).

(19) (the m MEETING(m)
       (forall s SCHOOL(s)
         (the p PRINCIPAL(p,s)
           SENT(s,p,m))))

(20) (the m [s^1 |= MEETING(m)] & SHARED(spkr,hearer,s^1)
       (forall s [s^2 |= SCHOOL(s)]
         (the p [s^3 |= PRINCIPAL(p,s)] & SHARED(spkr,hearer,s^3)
           SENT(s,p,m))))

(20) reads: there exists a unique m that is a meeting in a contextually specified resource situation s^1, and for all s's that are schools in a contextually specified resource situation s^2, the unique p such that p is the principal of s participates in m. The intent of the expression used for the quantifier restrictions in (20) is to make it explicit that the situations from which the quantified elements are 'picked up' need not be the complete set of objects and relations at which the truth of (20) is evaluated. This is accomplished by introducing into the language an explicit relation |= ('supports') to represent 'truth at a situation' [8]. A statement of the form [s1 |= MEETING(x)] evaluates to true in a situation s if the object--say, m--assigned to the variable x is a meeting in the situation s1.

A situation is a set of objects and facts about these objects [8, 18]. I assume a language which allows us to make statements about situations, and an ontology in which situations are objects in the universe. Episodic Logic provides such a language and such an ontology [19, 18]; where not otherwise noted, the reader should assume that an expression of SEL has the semantics of the identical expression in Episodic Logic. The restriction of the existential quantifier in (20) contains a parameter s^.
Parameters are used in SEL to translate anaphoric expressions of English. A parameter behaves semantically as an open variable, a value for which has to be provided by context. 7

7 See [29] for details. The idea is to add to the parameters of evaluation an anchoring function a that provides the values for parameters, and thus plays the role of 'context' in Heim's proposal. The reader should be aware that while the notation and terminology I have adopted is borrowed from Situation Theory, parameters have a different semantic interpretation there [8].

I have assumed the following translations for the lexical items "every," "meeting," and "sent" (I have again ignored tense):

"every"   ~>  lambda P lambda Q (forall x [s^ |= P(x)] Q(x))
"meeting" ~>  MEETING
"sent"    ~>  SENT

The semantics assigned to definite descriptions needs a bit of an explanation. According to the location theory [12, 4], the major uses of definite NPs, as well as the contrast between definites, indefinites, and demonstratives, can be accounted for by stipulating that a speaker, when using a definite article,
1. instructs the hearer to locate the referent in some shared set of objects, and
2. refers to the totality of the objects/mass within this set that satisfy the restriction.

I formalize this idea in [30] by associating with definite descriptions the translation below. A situation is 'shared' between x and y if every fact supported by that situation is mutually believed by x and y (see [30] for details).

"the meeting" ~> lambda P (the x: ([s^ |= MEETING(x)] & SHARED(spkr,hearer,s^)) P(x))

Syntax and Interpretation of the Logical Form

The translations seen above, together with the obvious context-free rules, result in the following LF for (18) (I have
used here, and elsewhere in the paper, a linear notation to save space):

(21) [CP [IP [NP 'lambda Q (forall s [s^2 |= SCHOOL(s)] Q(s))] [VP [VP [V' 'SENT [NP 'lambda Q (the p [s^3 |= PRINCIPAL(p,s)] & SHARED(spkr,hearer,s^3) Q(p))]]] [PP 'TO [NP 'lambda Q (the m [s^1 |= MEETING(m)] & SHARED(spkr,hearer,s^1) Q(m))]]]]]

I propose that expressions like (21) can appear as conditions of DRSs. The syntax of LFs is as follows. Each internal node of (21) is labeled with a phrase category; the leaves are labeled with expressions of the form 'a, where a is an expression of SEL (and has therefore a 'standard' model-theoretic denotation). I use the phrase structure system largely adopted in the Government and Binding literature, according to which the sentence is the maximal projection of an Infl node and is therefore labeled IP [34]. I also assume the existence of a maximal projection of a complementizer CP above IP. Because I don't discuss relatives here, I use the following simplified notation for NPs with determiners, such as "every school":

[NP 'lambda Q (forall x [s^ |= SCHOOL(x)] Q(x))]

LFs like (21) are usually treated in the natural language processing literature as uninterpreted data structures from which to 'extract' the readings [16, 17]. However, it has been recently proposed [31, 2, 33] that it is possible (and indeed desirable) to assign a denotation to expressions like (21). The reason is that in this way one can define a notion of sound inference--that is, one can specify what can and cannot properly be inferred from an expression like (21) prior to disambiguation; and therefore, a notion of 'monotone disambiguation.' I do not assume disambiguation to work monotonically, but I want to be able to treat expressions like (21) as full-fledged conditions so that a DRS containing a condition of this kind can be interpreted, and I need to be able to characterize a disambiguation step as compatible in the sense that it does not introduce any new readings.
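The compositional behavior of the lexical translations above can be sketched with Python closures standing in for the lambda terms; this is an illustration of mine, with "s^" simply carried along as an uninterpreted string marking the resource-situation parameter:

```python
# Closures standing in for the lambda-term translations of determiners.
# 's^' marks the resource-situation parameter left open by context.
# Purely illustrative; not the paper's implementation.

def every(restriction):
    """'every N' ~> lambda Q (forall x [s^ |= N(x)] Q(x))"""
    return lambda scope: f"(forall x [s^ |= {restriction}(x)] {scope('x')})"

def the(restriction):
    """'the N' ~> lambda Q (the x ([s^ |= N(x)] & SHARED) Q(x))"""
    return (lambda scope:
            f"(the x ([s^ |= {restriction}(x)] & SHARED(spkr,hearer,s^)) {scope('x')})")

np = every("SCHOOL")
print(np(lambda v: f"SENT({v},p,m)"))
# (forall x [s^ |= SCHOOL(x)] SENT(x,p,m))
```

Applying the NP meaning to a scope yields the restricted-quantification formulas of (20), with the parameter still unresolved, mirroring the claim that a semantic value can be assigned before context fixes the resource situation.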
To do this I need LFs to have an interpretation. Were it not for the problem that more than one interpretation can be associated with a single LF, one could easily define a recursive mapping EXT from logical forms to truth-theoretical denotations (functions from situations to truth values) in terms of the usual || || function, as follows:

EXT('a)       = ||a||
EXT([V' a])   = EXT(a)
EXT([VP a])   = EXT(a)
EXT([N' a])   = EXT(a)
EXT([NP a b]) = EXT(a)(EXT(b))
EXT([IP a b]) = EXT(a)(EXT(b)) if TYPE(EXT(a)) = <t1,t2> and TYPE(EXT(b)) = t1;
                EXT(b)(EXT(a)) otherwise.

Once this is done, one can reformulate the semantics of DRSs in terms of situations and situation extensions instead of embeddings and embedding extensions, and interpret all conditions as functions from situations to truth values. (See [29] for details.)

Matters get more complicated when expressions with more than one reading, like (21), are considered. Different ways of assigning a denotation to expressions with more than one interpretation have been proposed [2, 31]; my proposal derives from [31]. I use a Cooper storage mechanism [5] to define EXT in such a way as to allow an LF to have more than one 'indirect interpretation.' Briefly, Cooper's idea is to have a syntactic tree denote a set of sequences, each sequence representing a distinct 'order of application' in computing the interpretation of the sentence. For example, because in interpreting (22) one can either apply the translation of tense immediately or wait, EXT maps (22) into a set of two sequences, shown in (23).

(22) [V' 'P [NP 'lambda Q (det x R(x)) Q(x)]]

(23) EXT((22)) = { (lambda x (det x R(x)) P(x)),
                   (P, lambda Q (det x R(x)) Q(x)) }

I omit here the definition of the EXT function implementing Cooper storage, which is rather complex. For the current purposes, it is enough to understand that EXT associates with (21) a set of functions from situations to truth values, as in (24).
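The idea that an LF denotes one reading per order of application can be illustrated with a toy Cooper-storage enumerator (my own simplification; like the paper, it omits the full EXT definition):

```python
# Toy Cooper storage: each stored quantifier may be retrieved in any
# order, and each retrieval order yields one scoping of the sentence.
# Illustrative only.
from itertools import permutations

def readings(quantifiers, body):
    """Enumerate scopings of `body` under every retrieval order of the
    stored quantifiers; order[0] ends up with widest scope."""
    out = []
    for order in permutations(quantifiers):
        formula = body
        for q in reversed(order):   # wrap innermost quantifier first
            formula = f"({q} {formula})"
        out.append(formula)
    return out

store = ["every s SCHOOL(s)", "the m MEETING(m)"]
for r in readings(store, "SENT(s,m)"):
    print(r)
```

With two stored quantifiers this prints the two scopings of (18), corresponding to the two functions listed in (24).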
(24) EXT((21)) = {
  the function denoted by
  || (the m [s^1 |= MEETING(m)] & SHARED(spkr,hearer,s^1)
       (forall s [s^2 |= SCHOOL(s)]
         (the p [s^3 |= PRINCIPAL(p,s)] & SHARED(spkr,hearer,s^3)
           SENT(s,p,m)))) ||,
  the function denoted by
  || (forall s [s^2 |= SCHOOL(s)]
       (the m [s^1 |= MEETING(m)] & SHARED(spkr,hearer,s^1)
         (the p [s^3 |= PRINCIPAL(p,s)] & SHARED(spkr,hearer,s^3)
           SENT(s,p,m)))) ||,
  etc. }

Having done this, we can say that a DRS condition like (21) verifies the current situation s if one of the functions denoted by (21) maps s into 1.

BUILDING EVENT STRUCTURES

Not all assertions in a narrative or conversation are going to be about the same situation. In the conversations with the TRAINS system, for example, the participants can discuss both the state of the world and the state of the plan being developed. Maintaining this separation is crucial for the proper interpretation of definite descriptions, for example. The separation between the situations that are the topic of different sentences is achieved by translating sentences as situation descriptions. A situation description is a condition of the form:

(25) s : K

whose intuitive interpretation is that K provides a partial characterization of the situation s. The semantics of situation descriptions is defined as follows, using a semantics of DRSs in terms of situation extensions, as discussed in the previous section, and interpreting discourse markers as constituents of situations:

The condition s:K is satisfied wrt the situation s' iff K is satisfied wrt the value assigned to s in s'.

I also propose the following constraint on the model construction rules:

Constraint on Interpretation: with the exception of the discourse markers interpreted over situations and of the situation descriptions, every discourse marker and condition has to be part of a situation description.

Situation descriptions are added to the model by rules triggered by an LF whose root is a CP node.
The rules (not shown for lack of space) delete the complementizer and its whole projection, and introduce a situation structure. The result is shown in (26).

(26) s : [ S ]

The constraint on discourse interpretation proposed above is implemented by forcing the rules that build situation structures to be triggered before any other rule; this is done by having every other rule be triggered by LFs whose root node is an IP. The result of this constraint is that a discourse model consists of a set of situation descriptions:

(27) s1 : K1, ..., sn : Kn

The DRSs produced by the standard DRT algorithm are semantically equivalent to the special case of a set of situation descriptions all describing the same situation s. Models like the one in (27) enable the formalization of processes of resource situation identification like that described in [30]. I illustrate how my rules for interpreting operators differ from those of standard DRT, and how the interaction between model construction rules and discourse interpretation works, by means of the model construction rule for definites. The rule MCR-DD is triggered by the configuration in (28), and results in the configuration in (29). The notation used for the pattern indicates that this rule applies to a definite NP in any position within a syntactic tree whose maximal projection is an IP node, without any intervening IP node.

(28) s : [ [IP XX [NP 'lambda Q (the y [s^ |= P(y)] Q(y))] YY] ]   ANCHOR(s^, s')

(29) s : [ y | s' : [ P(y) ], [IP XX y YY] ]   ANCHOR(s^, s')

The key observation is that the application of this rule, as well as of any other NP rule, depends on the hearer's previous identification of a resource situation for the definite description. The statement ANCHOR(s^, s') constraining the interpretation of s^ is added to the situation structure by the processes that identify the referent of the definite description; I describe these processes in detail in [30].8
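The way a rule like MCR-DD is gated on anchoring can be sketched as follows; the function and field names are mine, invented for illustration:

```python
# Sketch of a model construction rule for definites.  The rule is
# blocked until discourse interpretation has produced an ANCHOR(s^, s')
# statement for the description's resource-situation parameter.
# Names are illustrative, not the paper's.

def mcr_dd(event_structure, np, anchors):
    s_hat = np["resource_param"]          # the parameter, e.g. 's^'
    if s_hat not in anchors:              # no ANCHOR(s^, s') yet
        return False                      # rule cannot fire
    s_prime = anchors[s_hat]
    ref = np["var"]
    event_structure.setdefault(s_prime, []).append(np["restriction"].format(ref))
    event_structure.setdefault("markers", []).append(ref)
    return True

es = {}
np = {"resource_param": "s^", "var": "y", "restriction": "PRINCIPAL({})"}
print(mcr_dd(es, np, anchors={}))            # False: resource situation unknown
print(mcr_dd(es, np, anchors={"s^": "s2"}))  # True: fires, updates s2
print(es["s2"])                              # ['PRINCIPAL(y)']
```

The first call fails because no resource situation has been identified; once an anchoring is available, the rule adds the discourse marker and places the restriction inside the anchored situation's description.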
Finally, I propose that, when context is missing, a default model construction procedure operates. It has been suggested [6] that the conceptualization of events follows an order reflected in the thematic hierarchy AGENT < LOCATION, SOURCE, GOAL < THEME proposed to account for phenomena like passivization [21]. Briefly, the idea is that 'the normal procedure for building an event description' is to follow the order in the hierarchy: first identify the agent, then the location, then the theme. This proposal can be formalized in the current framework by having rules that operate in case no other rule has, and that modify the model by introducing a resource situation for an operator and establishing anchoring connections. These rules depend both on the semantics of the verb and on the syntactic configuration. The rule that identifies the AGENT, for example, is triggered by the configuration in (30), and results in the configuration in (31), which allows the rule for the NP to operate in that the resource situation of the operator has been anchored:

8 A more conventional situation-theoretic framework is used there, but the analysis carries over to the framework in this paper.

(30) [IP [NP 'lambda P (det x [s^ |= Q] P(x))] [VP [V 'lambda x lambda y V(x)(y) & AGENT(x)] YP]]

(31) [IP [NP 'lambda P (det x [s^ |= Q] P(x))] [VP [V 'lambda x lambda y V(x)(y) & AGENT(x)] YP]]   ANCHOR(s^, s')

These rules can of course produce conflicts with the results of other discourse interpretation processes. I assume the following conflict resolution rule: when two rules produce conflicting hypotheses, assume the result of the more specific rule. In general, the discourse interpretation rules are more specific than the default rules for constructing event representations, so they will be preferred.

Although lack of space prevents me from giving examples, rules relating the construction of the model to lexical semantics, such as those accounting for data like (12), can also be formulated.
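The conflict resolution rule ("prefer the more specific rule") can be sketched as follows; the rule names and specificity scores are invented for illustration:

```python
# Conflict resolution sketch: when two rules propose conflicting
# anchorings for the same parameter, keep the hypothesis of the more
# specific rule.  Specificity values are illustrative, not the paper's.

RULE_SPECIFICITY = {"discourse_interpretation": 2, "default_thematic": 1}

def resolve(hypotheses):
    """hypotheses: list of (rule_name, parameter, situation) triples."""
    best = {}
    for rule, param, sit in hypotheses:
        if (param not in best
                or RULE_SPECIFICITY[rule] > RULE_SPECIFICITY[best[param][0]]):
            best[param] = (rule, sit)
    return {p: sit for p, (rule, sit) in best.items()}

hyps = [("default_thematic", "s^2", "s_default"),
        ("discourse_interpretation", "s^2", "s")]
print(resolve(hyps))   # {'s^2': 's'} -- the contextual anchoring wins
```

Because discourse interpretation rules outrank the default thematic-hierarchy rules, the contextual anchoring survives whenever both fire, exactly the preference stated in the text.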
AN EXAMPLE

We can now discuss in more detail the process of disambiguation of (18). I have presented the logical form for (18) above, as (21).

(18) Every school sent the principal to the meeting.

After identifying the situation descriptions, various interpretation processes take place, like those performing definite description interpretation described in [30]. These processes generate hypotheses about the anchoring of resource situations. Without entering into details, I assume that the context for (18) is provided by (32), which introduces into the model the situation description in (33), containing a group of schools and a meeting.

(32) There was a meeting of the schools in the district.

(33) s : [ x, S | MEETING(x), SCHOOL*(S), PARTICIPATE(S,x) ]

Given this context, the discourse interpretation processes identify s as the resource situation for the NPs "every school" and "the meeting." However, no unique principal can be identified in s. The activation of the model construction rules for universal quantification and definite descriptions results in the partial model in (34), in which s^1 and s^2 have been identified:

(34) s1 : [ y | MEETING(y) ]   ANCHOR(y, x)
     s2 : [ z | z ∈ S ]        s2 ⊆ THIS_SITUATION
     s3 : [ [IP z [VP [VP 'SENT "the principal"] [PP 'TO y]]] ]   s2 ⊑ s3

The model construction rule applied to the universal "every school" introduces a complex condition K1 => K2 as usual, but both the restriction and the nuclear scope include situation descriptions. The situation description in the restriction, s2, is a subsituation of the situation at which the restriction is evaluated (denoted by the indexical constant THIS_SITUATION). The situation description in the nuclear scope, s3, is an extension of s2.
Now that a situation description for the resource situation of the universal and a discourse marker for the school have been introduced (s2 and z, respectively), the rules for resolving the parametric component x^ of the interpretation of "the principal" can apply. The result is that z is chosen as the antecedent of x^, and s2 is chosen as the resource situation for "the principal." The model construction rule updates s3 accordingly; the resulting event structure is equivalent to the interpretation of (21) specified by (20).

ACCOUNTING FOR THE DISAMBIGUATION DATA

I briefly return here to the disambiguation principles, to show how the proposal just presented accounts for them. First of all, I'll note that, under simple assumptions about the mapping between grammatical functions and theta-roles, there is a striking resemblance between the grammatical function hierarchy proposed by Ioup and the thematic hierarchy proposed by Jackendoff to account for facts about passives and reflexives. The facts accounted for by the grammatical function hierarchy principle can also be explained if we assume that the description of an event is constructed by identifying the filler of each thematic role in the order specified by Jackendoff's thematic hierarchy.

Consider now the case of the other disambiguation factor proposed by Ioup, the lexically encoded preference for certain operators to take wide scope. Definite descriptions are the paradigmatic case of an operator that tends to take wide scope. This preference can be explained in terms of the model construction hypothesis as follows. The choice of a resource situation for definite descriptions is restricted by the constraint that this resource situation be either shared among the conversational participants, or related to shared knowledge by shared relations [12, 4].
In our dialogues, for example, definite descriptions are usually interpreted with respect to the 'situation' corresponding to the current visual scene, which is independent from other situations. It follows that a definite description will be assigned narrow scope relative to another operator only if (i) the resource situation of the definite is perceived to depend on this other resource situation, and (ii) this dependency relation is known to be shared.

As for the tendency for NPs in topic to take wide scope, an element of a sentence is said to be in topic if it is considered to be part of the background information on which the new information in the sentence depends. As the interpretation of the 'new' information in the sentence depends on the background information, it is plausible to assume that, in constructing a model for the sentence, the listener begins by applying the model construction rules for the operators perceived to be in topic (or explicitly marked as being in topic, in the case of Japanese). The interpretation of the operators not in topic, when determined at all, will depend on the interpretation of the operators in topic, resulting in the dependency relations between the related situations that I have assumed to be the way scope is represented.

Finally, I'll note that, in the absence of contextual clues, whether a completely disambiguated event structure is actually constructed depends on how strong the model construction rules are supposed to be; it's perfectly possible that the activation of these rules is controlled by additional factors, such as the specific needs of a task to be performed.

ACKNOWLEDGMENTS

I wish to thank my advisor Len Schubert and James Allen, Howard Kurtzman, Peter Lasersohn, and Uwe Reyle for several suggestions, technical help, and constructive criticism. This work was supported by the US Air Force - Rome Laboratory Research Contract no. F30602-91-C-0010.

References

[1] J.F. Allen and L.K. Schubert.
The TRAINS project. TRAINS Technical Note 91-1, University of Rochester, Department of Computer Science, 1991.
[2] H. Alshawi and R. Crouch. Monotonic semantic interpretation. In Proc. 30th ACL, pages 32-39, University of Delaware, 1992.
[3] J. Barwise and J. Perry. Situations and Attitudes. The MIT Press, 1983.
[4] H. H. Clark and C. R. Marshall. Definite reference and mutual knowledge. In Elements of Discourse Understanding. Cambridge University Press, 1981.
[5] Robin Cooper. Quantification and Syntactic Theory. D. Reidel, 1983.
[6] W. Croft. Syntactic Categories and Grammatical Relations: The cognitive organization of information. University of Chicago Press, 1991.
[7] Kees van Deemter. On the Composition of Meaning. PhD thesis, University of Amsterdam, 1991.
[8] K. Devlin. Logic and Information. Cambridge University Press, 1991.
[9] J.E. Fenstad, P.K. Halvorsen, T. Langholm, and J. van Benthem. Situations, Language and Logic. D. Reidel, 1987.
[10] A. Garnham. On-line construction of representations of the content of texts. Reproduced by Indiana University Linguistics Club, 1982.
[11] G. Gazdar, E. Klein, G. Pullum, and I. Sag. Generalized Phrase Structure Grammar. Blackwell, 1985.
[12] J.A. Hawkins. Definiteness and Indefiniteness. Croom Helm, 1978.
[13] I. Heim. The Semantics of Definite and Indefinite Noun Phrases. PhD thesis, University of Massachusetts at Amherst, 1982.
[14] G.G. Hendrix. Semantic aspects of translation. In D. Walker, editor, Understanding Spoken Language, pages 193-226. Elsevier, 1978.
[15] J. Hobbs. An improper treatment of quantification in ordinary English. In Proc. ACL-83, pages 57-63, Cambridge, MA, June 1983.
[16] J. R. Hobbs and S. M. Shieber. An algorithm for generating quantifier scopings. Computational Linguistics, 13(1-2):47-63, January-June 1987.
[17] Sven Hurum. Handling scope ambiguities using domain-independent heuristics. Technical Report TR 88-12, University of Alberta, June 1988.
[18] C. H. Hwang.
A Logical Approach to Narrative Understanding. PhD thesis, University of Alberta, 1992.
[19] C. H. Hwang and L. K. Schubert. Episodic logic: A situational logic for natural language processing. In P. Aczel, D. Israel, Y. Katagiri, and S. Peters, editors, Situation Theory and its Applications, v.3. CSLI, 1993. To appear.
[20] Georgette Ioup. Some universals for quantifier scope. In J. Kimball, editor, Syntax and Semantics 4, pages 37-58. Academic Press, New York, 1975.
[21] R. Jackendoff. Semantic Interpretation in Generative Grammar. MIT Press, 1972.
[22] P. Johnson-Laird. Mental Models. Harvard University Press, 1983.
[23] H. Kamp. A theory of truth and semantic representation. In J. Groenendijk, T. Janssen, and M. Stokhof, editors, Formal Methods in the Study of Language. Mathematical Centre, Amsterdam, 1981.
[24] H. Kamp and U. Reyle. From Discourse to Logic. To appear, 1993.
[25] R. Kempson and A. Cormack. Ambiguity and quantification. Linguistics and Philosophy, 4(2):259-310, 1981.
[26] H. S. Kurtzman and M. C. MacDonald. Resolution of quantifier scope ambiguities. To appear, April 1992.
[27] G. Lakoff. Semantic interpretation in generative grammar. In D. A. Steinberg and L. A. Jakobovits, editors, Semantics: An interdisciplinary reader in philosophy, linguistics, anthropology, and psychology. Cambridge University Press, 1971.
[28] R. May. The Grammar of Quantification. PhD thesis, MIT, 1977.
[29] M. Poesio. Assigning a Scope to Operators in Dialogues. PhD thesis, University of Rochester, Department of Computer Science, 1993.
[30] M. Poesio. A situation-theoretic formalization of definite description interpretation in plan elaboration dialogues. In P. Aczel, D. Israel, Y. Katagiri, and S. Peters, editors, Situation Theory and its Applications, vol.3, chapter 12, pages 343-378. CSLI, 1993. To appear.
[31] Massimo Poesio. Relational semantics and scope ambiguity. In J. Barwise, J. M. Gawron, G. Plotkin, and S.
Tutiya, editors, Situation Semantics and its Applications, vol.2, chapter 20, pages 469-497. CSLI, 1991.
[32] T. Reinhart. Anaphora and semantic interpretation. Croom Helm, 1983.
[33] U. Reyle. Dealing with ambiguities by underspecification: Construction, representation and deduction. Journal of Semantics, 3, 1993.
[34] T. Stowell. Origins of Phrase Structure. PhD thesis, MIT, 1981.
[35] Kurt A. VanLehn. Determining the scope of English quantifiers. Technical Report AI-TR-483, Artificial Intelligence Laboratory, MIT, Cambridge, MA, 1978.
1993
TWO KINDS OF METONYMY

David Stallard
BBN Systems and Technologies
70 Fawcett Street
Cambridge, MA 02138, USA
Internet: [email protected]

ABSTRACT

We propose a distinction between two kinds of metonymy: "referential" metonymy, in which the referent of an NP is shifted, and "predicative" metonymy, in which the referent of the NP is unchanged and the argument place of the predicate is shifted instead. Examples are, respectively, "The hamburger is waiting for his check" and "Which airlines fly from Boston to Denver". We also show that complications arise for both types of metonymy when multiple coercing predicates are considered. Finally, we present implemented algorithms handling these complexities that generate both types of metonymic reading, as well as criteria for choosing one type of metonymic reading over another.

1 INTRODUCTION

The phenomenon of semantic coercion, or "metonymy", is quite a common one in natural language. In metonymy, the actual argument of a predicate is not the literal argument, but is instead implicit and related to the literal argument through an implicit binary relation. For example, in the following utterances, taken from Lakoff and Johnson (1980):

(1) The ham sandwich is waiting for his check
(2) Nixon bombed Hanoi

it is not literally the ham sandwich which is doing the waiting, but rather the person who ordered it, and not literally Nixon who is doing the bombing, but rather the pilots under his command. The noun phrase - "The ham sandwich", "Nixon" - is said to be "coerced" through an implicit binary relation to a related object which is the actual argument of the predicate.

Perhaps the most familiar definition of metonymy from the literature is that it is a figure of speech in which the speaker is "using one entity to refer to another that is related to it" (Lakoff and Johnson, 1980). This definition is quite commonly held in one form or another.
(For example, see (Fass, 1991), where it is directly quoted; also similar definitions in (Pustejovsky, 1991), (Hobbs, 1988)). But what does it really mean? Does it mean that the coerced noun phrase is actually an indirect reference to an object different from its literal referent? If so, then we might expect other linguistic data to support this. For example, we might expect subsequent anaphora to agree with the "real" referent. And indeed, in the following dialogue the intra-sentential pronoun "his" and the extra-sentential "he" both agree with the indirect reference to the customer, not the literal sandwich:

(3) The ham sandwich is waiting for his check
    He is getting a little impatient

But compare the dialogues

(4) Nixon bombed Hanoi.
    He wanted to force the Communists to negotiate
(4') Nixon bombed Hanoi
    *They sang all the way back to Saigon

The dialogue (4) is quite natural, while in (4'), the use of "they" to refer to the bomber crews seems ruled out - the reverse of what the indirect reference view would predict.

A second problem with the indirect reference view is found in certain performative contexts, such as wh-questions and imperatives, in which the referent of a particular NP is sought by the speaker. If this NP is metonymically coerced, we could expect the correct response to the utterance to be the indirect reference. Consider the following examples, which are actual utterances collected for the DARPA ATIS domain (MADCOW, 1992), a database question-answering domain about commercial air flights:

(5) Which wide-body jets serve dinner?
(6) Which airlines fly from Boston to Denver?

In ATIS, only flights "fly" or "serve meals" and thus both sentences can only be understood metonymically.
In (5), it is not the jets which serve dinner but the flights on the jets, and one plausible construal is indeed that "wide-body jets" is really a reference to flights on wide-body jets, and the interpretation of the sentence is a request to display the set of these flights. This would agree with the indirect reference view. In (6), however, the only possible construal seems to be that a set of airlines - the airlines offering flights from Boston to Denver - is being sought. To respond to this request with the set of flights from Boston to Denver would clearly be absurd.

We propose a distinction, motivated by such examples, between two kinds of metonymy, which we term referential and predicative. In referential metonymy, the metonymic noun phrase does indeed have an intended referent related to but different from its literal meaning. An example is the noun phrase "the ham sandwich" in (1) above, where the actual and intended referent is a related object - the person who ordered the sandwich. In predicative metonymy, however, the actual and intended referent of the noun phrase is just the literal one, and it is more accurate to say that the predicate is coerced (though as we show later, this is itself a simplification). An example of predicative metonymy is (6) above.

We also show how both types of metonymy are complicated by the presence of multiple predicates that require the same coercion of an NP. We present algorithms for generating the two types of metonymic reading that cope with these complexities. Finally, we present criteria for determining a preference for one type of metonymic reading over another. (We do not, however, deal in this paper with the question of how to determine which relations to use for coercion, viewing this as a separate problem.)

The examples throughout are taken from the ATIS domain, a domain with a pre-established formal conceptual system of categories and relations that utterances must be mapped onto.
The algorithms presented are implemented in the DELPHI system (Bobrow et al., 1991), which has been ported to that domain and formally evaluated in it.

The remainder of the paper is organized into the following sections. Section 2, the next section, formalizes the distinction between referential and predicative metonymy by giving logical form readings for each, and shows how both types of metonymy are globally complicated when multiple coercing predicates are considered. Section 3 gives an algorithm for generating both types of metonymic readings in semantic interpretation that handles these global complications. Section 4 gives criteria for picking one type of reading over another. Finally, section 5 compares our work to previous work on metonymy.

2 METONYMY AND LOGICAL FORM

In this section we sharpen and formalize our notion of referential and predicative metonymy by giving logical form readings for the different cases. The logical language we use has sortal quantifiers, with a special quantifier "WH". A wh question is then treated as:

(7) (wh x S (and (P1 x) (P2 x)))

which is interpreted as a request to display all members of S (the semantic class of the wh-np) which satisfy both P1 (the modifiers of the wh-np) and P2 (the predicate of the clause). A labeled-argument notation is used for clause semantics.

Now, let us return to the examples of the previous section. In (5), the referential metonymic reading of the sentence, in which flights are sought that serve dinner and are on wide-body jets, is expressed as:

(8) Which wide-body jets serve dinner?
    (wh x flights
      (and (exists y jets (and (aircraft-of x y) (wide-body y)))
           (serve flight-of x meal-of dinner)))

where the coercion relation is AIRCRAFT-OF, mapping between flights and the aircraft they are on. Compare this with the reading for (6), in which airlines and not flights are sought:

(9) Which airlines fly from Boston to Denver?
    (wh x airlines
      (exists y flights
        (and (airline-of y x)
             (fly flight-of y orig-of Boston dest-of Denver))))

The readings of the referential (8) and the predicative (9) are in a sense inside-out versions of each other. Both have an interpolated quantifier for FLIGHTS that is not explicitly present in the utterance, but in (8) the interpolated quantifier is on the outside and is the WH-thing displayed, whereas in (9) the interpolated quantifier is on the inside, and is merely part of the description of what is to be displayed. This, in logical terms, is the crux of the referential/predicative distinction.

Predicative metonymy can be loosely thought of as coercion of a predicate argument place, rather than of the argument NP itself. It may therefore seem attractive to try to formalize this in a directly compositional way through some device such as lambda-abstraction. If P is the predicate, R the binary relation of coercion, and i the argument-number of P to be coerced, the coerced version of P might then be defined in such a view as:

(lambda (x1 ... xi ... xn)
  (exists y (domain R)
    (and (R y xi) (P x1 ... y ... xn))))

This is a predicate just like P, except extended by the relation R in its i'th argument place to take an object in the range of R. Metonymic extension of the predicate would then be an essentially compositional, local process, taking place at the juncture of predicate and argument and not affecting interpretation elsewhere.

Unfortunately, such a treatment turns out to give the wrong interpretation when multiple predicates requiring the same coercion are present. Consider:

(10a) Which airlines flying from Boston to Denver leave at 3 pm?
(10b) Show airlines flying from Boston to Denver leaving at 3 pm

Both examples are predicative metonymic utterances. Airlines neither "fly" nor "leave"; flights do these, so both the main verb and the relative clause modifier predicates require airline-to-flight coercions.
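To make the locality of this scheme concrete, here is a minimal extensional sketch of it (a hypothetical Python rendering; the toy predicate and relation data are ours, not part of any described system):

```python
def coerce(pred, rel, i):
    """Extend predicate `pred` in its i-th argument place through the
    binary coercion relation `rel`, following the schema
      (lambda (x1 .. xi .. xn)
        (exists y (domain R) (and (R y xi) (P x1 .. y .. xn)))).
    `rel` is given extensionally as a set of (domain, range) pairs."""
    def coerced(*args):
        return any(pred(*args[:i], y, *args[i + 1:])
                   for (y, x) in rel if x == args[i])
    return coerced

# Toy model: only flights literally "fly".
flights_flying = {("f1", "Boston", "Denver")}
airline_of = {("f1", "UA")}              # AIRLINE-OF: flight -> airline

def fly(x, orig, dest):
    return (x, orig, dest) in flights_flying

fly_c = coerce(fly, airline_of, 0)       # coerce the subject argument place
print(fly("f1", "Boston", "Denver"))     # True: literal flight subject
print(fly_c("UA", "Boston", "Denver"))   # True: coerced airline subject
```

As the text goes on to show, this purely local treatment gives too-weak truth-conditions once two predicates require the same coercion of one NP.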
If the lambda-abstraction scheme is right, each predicate-application could be dealt with separately. Yet the following reading for (10a), which would result from the application of the lambda-abstraction scheme to the two predicates, is emphatically not the correct reading:

(11) (wh x airlines
       (and (exists y flights
              (and (airline-of y x)
                   (fly flight-of y orig-of Boston dest-of Denver)))
            (exists y' flights
              (and (airline-of y' x)
                   (leave flight-of y' time-of (3 pm))))))

These truth-conditions are too weak, as they allow airlines that have a Boston to Denver flight at any time, so long as they have another (possibly different) flight at 3 pm to any place. The proper reading is instead:

(12) (wh x airlines
       (exists y flights
         (and (airline-of y x)
              (fly flight-of y orig-of Boston dest-of Denver)
              (leave flight-of y time-of (3 pm)))))

in which the airline is related to a single flight description that has all the desired properties.

Note that the issue here is not that one predicate is internal to the NP and the other external to it. The same problem arises with any combination of internal and external predicates. In (10b), for example, both predicates are internal to the NP, but if the two coercions are carried out separately the same erroneous truth-conditions will result, in which the AIRLINE is related to two different FLIGHT descriptions instead of one. Nor is the "single-interpolation" requirement related specifically to the referential/predicative distinction. If we modify one of our referential examples to include multiple coercing predicates, as below:

The ham sandwich at table 12 is impatient

we see that a correct reading would still require that the ham sandwich be related to one and only one interpolated description of a person that ordered the sandwich, is seated at table 12, and is impatient.
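The weakness of (11) relative to (12) can be checked mechanically against a toy model (invented data: UA has a Boston-Denver flight and a different 3 pm flight, while DL has one flight with all three properties):

```python
# (airline, orig, dest, time) tuples -- an invented mini-ATIS.
FLIGHTS = [
    ("UA", "BOS", "DEN", "10am"),  # UA's Boston-Denver flight is at 10am
    ("UA", "BOS", "SFO", "3pm"),   # ...and its 3pm flight goes elsewhere
    ("DL", "BOS", "DEN", "3pm"),   # DL has one flight satisfying both
]
AIRLINES = {a for (a, _, _, _) in FLIGHTS}

def weak_reading():
    """(11): two separately interpolated flight quantifiers."""
    return {a for a in AIRLINES
            if any(o == "BOS" and d == "DEN"
                   for (a2, o, d, _t) in FLIGHTS if a2 == a)
            and any(t == "3pm"
                    for (a2, _o, _d, t) in FLIGHTS if a2 == a)}

def correct_reading():
    """(12): one interpolated flight carrying all three properties."""
    return {a for (a, o, d, t) in FLIGHTS
            if o == "BOS" and d == "DEN" and t == "3pm"}

print(sorted(weak_reading()))     # ['DL', 'UA'] -- too weak: UA slips in
print(sorted(correct_reading()))  # ['DL']
```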
The fact that multiple coercions of the same NP, whether internal or external to it, cannot be carried out separately means that the phenomenon of metonymy takes on a decisively global character, one which is as much akin to quantifier scoping as it is to compositional semantic interpretation. As we shall see in the next section, the quantifier scoping stage of processing is exactly where we locate the solution to these problems.

3 GENERATING THE READINGS

We now show how referential and predicative metonymic readings are generated, and how the requirement of a single interpolate for multiple coercions is enforced.

3.1 Input Representation

A two-stage mechanism of semantic interpretation is used in the DELPHI system. In the first stage, an initial predicate-argument level of semantic representation is produced, with quantifiers in place. In the second, a fully quantified logical form is generated, in which quantifiers are pulled out of the predicate-argument representation and placed in their proper relative scope. It is in this second stage that the referential/predicative distinction is made.

The first stage of semantic interpretation has been described elsewhere in (Bobrow et al., 1991), and we do not discuss it here except to describe its output, which forms the input to the quantification stage. This output is a tree whose nodes are phrasal representation objects. Each of these phrasal representation objects has a head and a set of bindings. The head includes semantic type information (as well as other information such as subcategorization etc.), while the bindings represent the semantic effects of modifiers on the head. Each binding has four parts:

1. the modifier grammatical relation
2. the modifier semantic relation
3. the filler of this semantic relation
4. a binary coercion relation

The following is the top-level phrasal representation for "Which airlines fly from Boston to Denver?":

CLAUSE:
  head: fly
  subject: flight-of, (wh airlines), airline-of
  pp: orig-of, Boston, identity
  pp: dest-of, Denver, identity

This representation has three bindings: a SUBJECT and two PP-complements. In the two PP bindings, the ranges of the modifier semantic relations ORIG-OF and DEST-OF are both CITY, which agrees with the explicit fillers BOSTON and DENVER. Thus, in these bindings no coercion is needed and the coercion relation is just IDENTITY. But in the SUBJECT binding, the range of the modifier relation FLIGHT-OF is FLIGHT and the explicit filler is an AIRLINE. Here, the coercion relation AIRLINE-OF is required to bridge the gap between FLIGHT and AIRLINE.

NP semantic representations have the same structure, plus a quantifier. Here is the representation for the "which airline" NP:

  head: AIRLINE
  quant: wh

We refer to the constituent modifier bindings of the NP itself as its "internal" bindings. In this particular example, there are no internal modifiers and thus no internal bindings. When an NP is a constituent of a clause (or is the object of a PP which is), we call the binding in which the NP occurs its "external" binding.

Semantic representations of this kind are neutral not only with respect to quantifier scoping, but to the distinction between predicative and referential metonymy as well. From the standpoint of the predicate, one can think of the coercion relation as extending the given argument place of the predicate to take an argument of a different type. From the standpoint of the NP argument, on the other hand, the coercion can be viewed as mapping the NP in the "reverse" direction of the relation, from range AIRLINE to domain FLIGHT instead of from domain to range.

3.2 Algorithm

The alternative metonymic readings are generated from these semantic representations as part of the quantifier scoping pass.
There are two steps.

Step 1, carried out before quantification begins, is to walk the phrasal representation tree and build a "coercion table" relating each nominal head N to the set of coercion relations on it:

  Re - the coercion relation of N's external binding
  Ri - the coercion relations of N's internal bindings

As a technical convenience, IDENTITY relations in the Re, Ri are subscripted with the semantic type restriction T of the binding in which they occur. This type restriction is simply the range of the semantic modifier relation in the binding.

Step 2 is to pull the quantifiers out of this structure and into their proper places in a complete formula. For an NP with a non-IDENTITY entry in the coercion table, alternative WFF-generating schemas are used to generate the alternative referential and predicative metonymic readings.

In what follows, let N be the noun phrase under consideration and let Q be its quantifier and S its sort. Let R be any relation which is not IDENTITY and which is one of the coercion relations associated with N in the table - whether Re or one of the Ri. Now, consider all the internal bindings of N which have R as their coercion relation. Let the MR and FR be, respectively, the semantic modifier relations and corresponding arguments of these bindings. Let the Mo, Fo and Ro be, respectively, the semantic modifier relations, arguments and coercion relations of the bindings which do not have R as their coercion relation.

The operation of reading generation is to pick a non-IDENTITY R from N's table and apply the two schemas. To generate the predicative reading, the following schema is used:

(Q x S {(Mo x Fo)}
  (exists y (domain R)
    (and (R y x) {(MR y FR)})
    ***))

The token "***" indicates the open slot for the matrix predicate of the clause, and the brackets "{", "}" are shorthand for conjoined iteration over the subscripted items within.
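A minimal sketch of the predicative schema, instantiated over the Section 3.1 bindings as nested-list S-expressions (a hypothetical Python rendering, not the DELPHI implementation):

```python
# Bindings: (grammatical relation, semantic relation, filler, coercion relation),
# as in Section 3.1, for "Which airlines fly from Boston to Denver?".
CLAUSE = {
    "head": "fly",
    "bindings": [
        ("subject", "flight-of", "(wh airlines)", "airline-of"),
        ("pp",      "orig-of",   "Boston",        "identity"),
        ("pp",      "dest-of",   "Denver",        "identity"),
    ],
}

def predicative(clause, quant, sort, rel, rel_domain):
    """Instantiate (Q x S (exists y (domain R) (and (R y x) ...) ***)):
    bindings coerced through R attach to the interpolated variable y;
    the remaining bindings keep their explicit fillers."""
    matrix = [clause["head"]]
    for (_gr, sem_rel, filler, coercion) in clause["bindings"]:
        matrix += [sem_rel, "y" if coercion == rel else filler]
    return [quant, "x", sort,
            ["exists", "y", rel_domain,
             ["and", [rel, "y", "x"], matrix]]]

lf = predicative(CLAUSE, "wh", "airlines", "airline-of", "flights")
# -> ['wh', 'x', 'airlines',
#     ['exists', 'y', 'flights',
#      ['and', ['airline-of', 'y', 'x'],
#       ['fly', 'flight-of', 'y', 'orig-of', 'Boston', 'dest-of', 'Denver']]]]
```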
Note that the interpolated EXISTS quantifier has scope over the matrix formula of the clause, so it will govern any external R coercion for N. Furthermore, because all the R coercions are gathered together in this scheme, the same quantifier will govern any R coercions which are internal to the NP. This fulfills the requirement of the previous section: that there be one and only one quantifier for a given coercion, even when that coercion is needed both by internal modifier relations and by the external clause in which the noun phrase is contained. Use of the schema for our example above generates the interpretation:

Which airlines fly from Boston to Denver?
(wh x AIRLINE
  (exists y FLIGHT
    (and (AIRLINE-OF y x)
         (FLY flight-of y orig-of Boston dest-of Denver))))

as desired.

In order to enforce the restriction that subsequent anaphora resolve to the literal AIRLINE and not the interpolated FLIGHT (and, similarly, to "Nixon" instead of the pilots in our earlier example) we add a diacritic to the interpolated quantifier '(exists y FLIGHT ...)' that forbids the discourse component from resolving an anaphor to this quantified description.

The referential metonymic reading is generated by a different schema. In order to use this schema, the following condition must hold:

(Re = R) ∨ (Re = IDENTITY_T ∧ (domain R) ⊆ T)

This condition ensures that a semantically ill-formed expression will not result, and simply requires that the type requirement of the external binding of the NP to be referentially coerced agrees with the coerced version. Either the coercion must be dictated by the external binding itself, or the external binding's type requirement must be loose enough to accept the coerced version (as in the case of a loosely-typed predicate like "show").
If these conditions hold, then the following schema can be used to produce the referential reading:

(Q x (domain R)
  (and (exists y S (and {(MR y FR)} (R x y)))
       {(Mo x Fo)})
  ***)

Use of the schema generates the following reading for our example:

Which wide-body jets serve dinner?
(wh x FLIGHT
  (exists y JET
    (and (WIDE-BODY y)
         (AIRCRAFT-OF x y)
         (SERVE flight-of x meal-of DINNER))))

In principle, of course, a given NP's entry in the coercion table can have more than one distinct non-IDENTITY coercion relation. Obviously in such a case there can be at most one referential coercion of the NP. All other coercions to different semantic types must then be predicative. In the case of multiple predicative coercions, the predicative schema is simply iterated. We arbitrarily disallow chains of coercions ("double-shifting"), though these in principle could be accommodated.

4 DETERMINING THE CORRECT READING

Thus far we have argued for different types of metonymic reading and shown how to generate them, but have not given any indication of when a given type of reading is to be preferred. How do we know, for example, that the predicative reading and not the referential is correct in (6) "Which airlines fly from Boston to Denver"?

A few criteria are fairly obvious. One we have already seen in the previous section: the external-binding agreement condition on applying the referential metonymy schema. If an NP's external semantic context agrees with its literal referent, but not its referentially coerced version, then referential metonymy is ruled out for that NP. A somewhat broader notion of external semantic context is found in intra-sentential anaphora:

The ham sandwich is waiting for HIS check
Which airline flies to ITS headquarters city?

Clearly, we would prefer any intra-sentential anaphora to agree with the "real" referent of the NP.
In the first sentence above, the pronoun "his" cannot agree with the literal referent, but can agree with the metonymically interpolated PERSON, and so provides evidence for the referential reading. In the second sentence, the pronoun "its" cannot agree in number with the interpolated set of FLIGHTs, but can agree with the singular "airline", and so provides evidence for the predicative reading.

Neither of these two criteria addresses example (6), however. Our hypothesis is that the real distinction being made here is pragmatic. An important principle of language use (essentially part of the Gricean Maxim of Quantity (Grice, 1975)) is that a cooperative speaker will avoid adding a part of a description which self-evidently adds no constraint to the set of things being described. This is the reason why such pleonasms as "female woman" sound odd to us, and are not normally uttered. In this light, the referential reading of the sentence above:

(wh x FLIGHT
  (exists y AIRLINE (AIRLINE-OF x y))
  (FLY flight-of x orig-of Boston dest-of Denver))

has a completely redundant component, since every flight is on some airline. Yet this redundant component is precisely the one introduced to handle the coercion! Encoding the reference in this way has no utility: one might as well have said "which flights" to begin with.

We can formalize this principle as follows. Let R be the coercion relation and let S be the literal NP referent-set. Then the referential coercion of the NP can be written as the pairing (R, S), which describes a property on the domain of R that picks out just the subset of the domain of R that is obtained by mapping S back into the domain in the "reverse" direction of R. Such a property is considered vacuous if it provides no constraint on the domain, or in other words if:

R is a total relation and S = (RANGE R)

holds. A total, or "into", relation is one which maps every element of its domain to at least one element of its range.
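The vacuity test can be stated directly over extensional relations (a sketch with invented mini-ATIS data, mirroring the AIRLINE-OF and AIRCRAFT-OF cases):

```python
def vacuous_referential_coercion(rel, domain, s):
    """(R, S) is vacuous iff R is total over `domain` and S = (RANGE R)."""
    total = all(any(d == x for (d, _r) in rel) for x in domain)
    return total and s == {r for (_d, r) in rel}

FLIGHTS = {"f1", "f2"}
AIRLINE_OF  = {("f1", "UA"), ("f2", "DL")}      # every flight has an airline
AIRCRAFT_OF = {("f1", "747"), ("f2", "DC10")}   # ...and an aircraft type

# "Which airlines ...": S = AIRLINE = range(AIRLINE-OF), so the referential
# coercion is vacuous and that reading is ruled out.
print(vacuous_referential_coercion(AIRLINE_OF, FLIGHTS, {"UA", "DL"}))  # True

# "Which wide-body jets ...": S is a proper subclass of range(AIRCRAFT-OF),
# so referential metonymy remains available.
print(vacuous_referential_coercion(AIRCRAFT_OF, FLIGHTS, {"747"}))      # False
```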
Since every flight in ATIS is on an airline, AIRLINE-OF is a total relation, and AIRLINE is its range, so a referential metonymy is clearly vacuous in this case. In contrast, the relation AIRCRAFT-OF is total, but "wide-body jet" is a proper subclass of its range (AIRCRAFT), so this condition does not hold for "Which wide-body jets serve dinner?" and referential metonymy is allowed for it.

Similar pragmatic considerations can be applied to rule out predicative metonymy in some cases. If a metonymically extended predicate provides no constraint on the NP, then predicative metonymy is the less likely reading. Consider again our referential example, "Which wide-body jets serve dinner". If this is taken predicatively, it would have as its logical form:

(wh x jet
  (and (wide-body x)
       (exists y flight
         (and (aircraft y x)
              (serve flight-of y meal-of dinner)))))

The class AIRCRAFT in ATIS is really the set of aircraft-types, and the same aircraft-type is typically used by a large number of flights with nothing particularly in common. It therefore seems unlikely that the property "(used on flights) that serve dinner" offers any constraint on the class AIRCRAFT: in other words, that being a particular type of aircraft and being used by a flight that serves dinner are correlated in any way. This particular judgment, however, is based on human knowledge and plausibility, and is difficult to formalize given the current state of the art in knowledge representation.

We have proposed a number of possible theoretical criteria for choosing between predicative and referential metonymy. It is of some interest, therefore, to compare the relative occurrences of predicative and referential metonymy in actual data. Our study of a large (> 5000 sentence) corpus of naturally collected ATIS data shows that predicative metonymy is very common. Noun phrases headed by "fare", "airline" and "ticket" frequently appear in positions that require a flight argument. Yet it is clear, both from the meaning of the utterance, and from the judgements of independent annotators who pair these sentences with "correct" responses for NL system evaluation, that fares and airlines are being talked about in such cases, and not flights.

Indeed, our experiments have shown that allowing predicative metonymic coercion when evaluating DELPHI against this corpus leads to a 27% decrease in weighted error over not allowing it. This is a very substantial difference indeed, and testifies to the importance of the metonymy phenomenon in actual data.

As for the referential type of metonymy, we have found only a few cases of it in this corpus. We hypothesize that the reason for this is that referential metonymy, involving as it does an encoding of a reference in terms of a categorially different thing, is a more marked and unusual event in psychological terms. Predicative metonymy, on the other hand, involves no such operation, merely the convenient making-way of a predicate for a non-standard but related argument. For this reason, our work prefers predicative metonymy as the default choice in processing when no other evidence is present.

5 COMPARISON WITH PREVIOUS WORK, CONCLUSIONS

We have argued for a distinction between two types of metonymic reading, and have given evidence that metonymy must ultimately be treated as a global phenomenon over the sentence, part of which belongs with quantificational considerations and part with local compositional interpretation. We have shown how pragmatic considerations of language use can influence which reading is preferred.

The referential/predicative distinction is not observed in most of the writing on metonymy, which is either not formal and computational in nature (Lakoff and Johnson, 1980), or is oriented towards different types of systems and computational concerns.
Hobbs (1987, 1988), for instance, discusses metonymy along with a number of other "local pragmatic" issues (nominal compounds, etc.), but this work is done in the context of a message-processing and not a question-answering system, so many of the issues we have discussed (wh-questions, etc.) simply do not arise there.

Something like the referential/predicative distinction does seem to be present, however, in the work of a few other authors. For example, Fass (1991) speaks of what he calls the "source" or the "target" of a metonymy being alternatively substituted for. His sentence representations are not done in a formal logical framework, however, so it is difficult to tell if the ambiguity has a referential or truth-conditional consequence.

Closer to our work is that of Pustejovsky (1991). He defines a notion he calls "logical metonymy" which seems quite close to our notion of predicative metonymy. In a sentence like "Mary enjoyed the book", logical metonymy changes the type of the verb "enjoy" to take an object like "book" which is not an event but which is related to one (the reading of the book). As we have shown in Section 3, however, the single-interpolation requirement for multiple coercing predicates poses a technical problem for a verb type-changing view which only looks at the given verb and argument by themselves. Our work has demonstrated that a correct account of metonymic coercion must, in the most general case, involve considerations that are global over the whole utterance interpretation.

Our work has also demonstrated an important interaction between appropriateness of metonymic readings and the Gricean Maxim of Quantity. To our knowledge, no other work has done this. Finally, our work differs from previous work in the area by having been carried out in an environment of objective evaluation, an environment whose rigors have pushed us towards many of the insights presented here.
6 Acknowledgments

The work reported here was supported by the Advanced Research Projects Agency and was monitored by the Office of Naval Research under Contract No. N00014-92-C-0035. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States Government.

I would like to thank James Pustejovsky and Rusty Bobrow for valuable comments and discussion.

References

Lakoff, George and Johnson, Mark. Metaphors We Live By. University of Chicago Press, 1980.

MADCOW Committee (Hirschman, Lynette et al.). Multi-Site Data Collection for a Spoken Language Corpus. In Proceedings Speech and Natural Language Workshop, February 1992.

Grice, H. P. Logic and Conversation. In P. Cole and J. L. Morgan, Speech Acts, New York: Academic Press, 1975.

Pustejovsky, James. The Generative Lexicon. Computational Linguistics, Volume 17, Number 4, December 1991.

Hobbs, Jerry R. and Stickel, Mark. Interpretation as Abduction. In Proceedings of the 26th ACL, June 1988.

Bobrow, R., Ingria, R. and Stallard, D. The Mapping Unit Approach to Subcategorization. In Proceedings Speech and Natural Language Workshop, February 1991.

Hobbs, Jerry R., and Martin, Paul. Local Pragmatics. In Proceedings, 10th International Joint Conference on Artificial Intelligence (IJCAI-87), Milan, Italy.

Fass, Dan. met*: A Method for Discriminating Metonymy and Metaphor by Computer. Computational Linguistics, Volume 17, Number 1, March 1991.
PLANNING MULTIMODAL DISCOURSE

Wolfgang Wahlster
German Research Center for Artificial Intelligence (DFKI)
Stuhlsatzenhausweg 3
D-6600 Saarbrücken 11, Germany
Internet: [email protected]

Abstract

In this talk, we will show how techniques for planning text and discourse can be generalized to plan the structure and content of multimodal communications that integrate natural language, pointing, graphics, and animations. The central claim of this talk is that the generation of multimodal discourse can be considered as an incremental planning process that aims to achieve a given communicative goal.

One of the surprises from our research is that it is actually possible to extend and adapt many of the fundamental concepts developed to date in computational linguistics in such a way that they become useful for multimodal discourse as well. This means that an interesting methodological transfer from the area of natural language processing to a much broader computational model of multimodal communication is possible. In particular, semantic and pragmatic concepts like speech acts, coherence, focus, communicative act, discourse model, reference, implicature, anaphora, rhetorical relations and scope ambiguity take an extended meaning in the context of multimodal discourse.

It is an important goal of this research not simply to merge the verbalization and visualization results of mode-specific generators, but to carefully coordinate them in such a way that they generate a multiplicative improvement in communication capabilities. Allowing all of the modalities to refer to and depend upon each other is a key to the richness of multimodal communication.

A basic principle underlying our model is that the various constituents of a multimodal communication should be generated from a common representation of what is to be conveyed.
This raises the question of how to decompose a given communicative goal into subgoals to be realized by the mode-specific generators, so that they complement each other. To address this problem, we explore computational models of the cognitive decision process, coping with questions such as what should go into text, what should go into graphics, and which kinds of links between the verbal and non-verbal fragments are necessary. In addition, we deal with layout as a rhetorical force, influencing the intentional and attentional state of the discourse participants.

We have been engaged in work in the area of multimodal communication for several years now, starting with the HAM-ANS (Wahlster et al. 1983) and VITRA systems (Wahlster 1989), which automatically create natural language descriptions of pictures and image sequences shown on the screen. These projects resulted in a better understanding of how perception interacts with language production. Since then, we have been investigating ways of integrating tactile pointing and graphics with natural language understanding and generation in the XTRA (Wahlster 1991) and WIP projects (Wahlster et al. 1991).

The task of the knowledge-based presentation system WIP is the context-sensitive generation of a variety of multimodal communications from an input including a presentation goal (Wahlster et al. 1993a). The presentation goal is a formal representation of the communicative intent specified by a back-end application system. WIP is currently able to generate simple multimodal explanations in German and English on using an espresso machine, assembling a lawn-mower, or installing a modem, demonstrating our claim of language and application independence. WIP is a highly adaptive multimodal presentation system, since all of its output is generated on the fly and customized for the intended discourse situation.
The quest for adaptation is based on the fact that it is impossible to anticipate the needs and requirements of each potential dialog partner in an infinite number of discourse situations. Thus all presentation decisions are postponed until runtime. In contrast to hypermedia-based approaches, WIP does not use any preplanned texts or graphics. That is, each presentation is designed from scratch by reasoning
A sequential architecture in which data only flow from the "what to present" to the "how to present" part has proven inappropriate because the components responsible for selecting the contents would have to anticipate all decisions of the realization components. This problem is compounded if content realization is done by separate components (e.g. for language, pointing, graphics and animations) of which the content planner has only limited knowledge.

It seems even inappropriate to sequentialize content planning and mode selection. Selecting a mode of presentation depends to a large extent on the nature of the information to be conveyed. On the other hand, content planning is strongly influenced by previously selected mode combinations. E.g., to graphically refer to a physical object (Rist and André 1992), we need visual information that may be irrelevant to textual references. In the WIP system, we interleave content and mode selection. In contrast to this, presentation planning and content realization are performed by separate components to enable parallel processing (Wahlster et al. 1993b).

In a follow-up project to WIP called PPP (Personalized Plan-Based Presenter), we are currently addressing the additional problem of planning presentation acts such as pointing and coordinated speech output during the display of the multimodal material synthesized by WIP.

The insights and experience we gained from the design and implementation of the multimodal systems HAM-ANS, VITRA, XTRA and WIP provide a good starting point for a deeper understanding of the interdependencies of language, graphics, pointing, and animations in coordinated multimodal discourse.

REFERENCES

André, Elisabeth; and Rist, Thomas. 1993. The Design of Illustrated Documents as a Planning Task. Maybury, Mark (ed.). Intelligent Multimedia Interfaces, AAAI Press (to appear).

Rist, Thomas; and André, Elisabeth. 1992.
From Presentation Tasks to Pictures: Towards an Approach to Automatic Graphics Design. Proceedings European Conference on AI (ECAI-92), Vienna, Austria (1992): 764-768.

Wahlster, Wolfgang. 1989. One Word Says more than a Thousand Pictures. On the Automatic Verbalization of the Results of Image Sequence Analysis Systems. Computers and Artificial Intelligence, 8, 5: 479-492.

Wahlster, Wolfgang. 1991. User and Discourse Models for Multimodal Communication. in: Sullivan, J.W.; and Tyler, S.W. (eds.). Intelligent User Interfaces, Reading: Addison-Wesley (1991): 45-67.

Wahlster, Wolfgang; Marburger, Heinz; Jameson, Anthony; Busemann, Stephan. 1983. Over-answering Yes-No Questions: Extended Responses in a NL Interface to a Vision System. Proceedings of IJCAI-83, Karlsruhe: 643-646.

Wahlster, Wolfgang; André, Elisabeth; Graf, Winfried; and Rist, Thomas. 1991. Designing Illustrated Texts: How Language Production is Influenced by Graphics Generation. Proceedings European ACL Conference, Berlin, Germany: 8-14.

Wahlster, Wolfgang; André, Elisabeth; Bandyopadhyay, Som; Graf, Winfried; and Rist, Thomas. 1993a. WIP: The Coordinated Generation of Multimodal Presentations from a Common Representation. in: Ortony, A.; Slack, J.; and Stock, O. (eds.). Communication from an Artificial Intelligence Perspective: Theoretical and Applied Issues, Springer: Heidelberg: 121-144.

Wahlster, Wolfgang; André, Elisabeth; Finkler, Wolfgang; Profitlich, Hans-Jürgen; and Rist, Thomas. 1993b. Plan-Based Integration of Natural Language and Graphics Generation. Artificial Intelligence Journal 26(3), (to appear).
1993
A UNIFICATION-BASED PARSER FOR RELATIONAL GRAMMAR

David E. Johnson
IBM Research Division
P.O. Box 218
Yorktown Heights, NY 10598
[email protected]

Adam Meyers
Linguistics Department
New York University
New York, NY 10003
[email protected]

Lawrence S. Moss
Mathematics Department
Indiana University
Bloomington, IN 47401
[email protected]

Abstract

We present an implemented unification-based parser for relational grammars developed within the stratified feature grammar (SFG) framework, which generalizes Kasper-Rounds logic to handle relational grammar analyses. We first introduce the key aspects of SFG and a lexicalized, graph-based variant of the framework suitable for implementing relational grammars. We then describe a head-driven chart parser for lexicalized SFG. The basic parsing operation is essentially ordinary feature-structure unification augmented with an operation of label unification to build the stratified features characteristic of SFG.

INTRODUCTION

Although the impact of relational grammar (RG) on theoretical linguistics has been substantial, it has never previously been put in a form suitable for computational use. RG's multiple syntactic strata would seem to preclude its use in the kind of monotonic, unification-based parsing system many now consider standard ([1], [11]). However, recent work by Johnson and Moss [2] on a Kasper-Rounds (KR) style logic-based formalism [5] for RG, called Stratified Feature Grammar (SFG), has demonstrated that even RG's multiple strata are amenable to a feature-structure treatment. Based on this work, we have developed a unification-based chart parser for a lexical version of SFG suitable for building computational relational grammars. A lexicalized SFG is simply a collection of stratified feature graphs (S-graphs), each of which is anchored to a lexical item, analogous to lexicalized TAGs [10].
The basic parsing operation of the system is S-graph unification (S-unification): this is essentially ordinary feature-structure unification augmented with an operation of label unification to build the stratified features characteristic of SFG.

RELATED WORK

Rounds and Manaster-Ramer [9] suggested encoding multiple strata in terms of a "level" attribute, using path equations to state correspondences across strata. Unfortunately, "unchanged" relations in a stratum must be explicitly "carried over" via path equations to the next stratum. Even worse, these "carry over" equations vary from case to case. SFG avoids this problem.

STRATIFIED FEATURE GRAMMAR

SFG's key innovation is the generalization of the concept feature to a sequence of so-called relational signs (R-signs). The interpretation of a stratified feature is that each R-sign in a sequence denotes a primitive relation in different strata.¹ For instance, in Joe gave Mary tea there are, at the clause level, four sister arcs (arcs with the same source node), as shown in Figure 1: one arc labeled [H] with target gave, indicating gave is the head of the clause; one with label [1] and target Joe, indicating Joe is both the predicate-argument, and surface subject, of the clause; one with label [3,2] and target Mary, indicating that Mary is the predicate-argument indirect object, but the surface direct object, of the clause; and one with label [2,8] and target tea, indicating tea is the predicate-argument direct object, but surface chômeur, of the clause. Such a structure is called a stratified feature graph (S-graph).

¹ We use the following R-signs: 1 (subject), 2 (direct object), 3 (indirect object), 8 (chômeur), Cat (Category), C (comp), F (flag), H (head), LOC (locative), M (marked), as well as the special Null R-signs 0 and /, explained below.

    [Cat] S
    [1] Joe
    [H] gave
    [3,2] Mary
    [2,8] tea

Figure 1: S-graph for Joe gave Mary tea.
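As an illustration (this is an encoding sketch, not the authors' implementation), the S-graph of Figure 1 can be represented with each stratified label as a tuple of R-signs; reading a label's first R-sign gives the relation in the deepest stratum and its last R-sign gives the surface relation:

```python
# Illustrative encoding of the S-graph for "Joe gave Mary tea" (Figure 1).
# Labels are tuples of R-signs; targets are the dependents.
s_graph = {
    ("Cat",): "S",
    ("H",): "gave",
    ("1",): "Joe",
    ("3", "2"): "Mary",  # initial 3 (indirect object), final 2 (direct object)
    ("2", "8"): "tea",   # initial 2 (direct object), final 8 (chomeur)
}

def initial_relation(label):
    """The 'deepest' stratum: the first R-sign of the label."""
    return label[0]

def final_relation(label):
    """The surface stratum: the last R-sign of the label."""
    return label[-1]

mary_label = next(l for l, t in s_graph.items() if t == "Mary")
print(initial_relation(mary_label), final_relation(mary_label))  # 3 2
```

This makes the "carry over" point concrete: a relation unchanged across strata simply stays a one-element label, with no extra equations needed.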
This situation could be described in SFG logic with the following formula (the significance of the different label delimiters (, ), [, ] is explained below):

R1 := [H]:gave ∧ [1):Joe ∧ [3,2):Mary ∧ [2,8):tea.

In RG, the clause-level syntactic information captured in R1 combines two statements: one characterizing gave as taking an initial 1, initial 2 and initial 3 (Ditransitive); and one characterizing the concomitant "advancement" of the 3 to 2 and the "demotion" of the 2 to 8 (Dative). In SFG, these two statements would be:

Ditransitive := [H]:gave ∧ [1):⊤ ∧ [2):⊤ ∧ [3):⊤ ;
Dative := (3,2):⊤ & (2,8):⊤.

Ditransitive involves standard Boolean conjunction (∧). Dative, however, involves an operator, &, unique to SFG. Formulas involving & are called extension formulas and they have a more complicated semantics. For example, Dative has the following informal interpretation: two distinct arcs with labels 3 and 2 may be "extended" to (3,2) and (2,8) respectively. Extension formulas are, in a sense, the heart of the SFG description language, for without them RG analyses could not be properly represented.²

² We gloss over many technicalities, e.g., the SFG notion data justification and the formal semantics of stratified features; cf. [2].

RG-style analyses can be captured in terms of rules such as those above. Moreover, since the above formulas state positive constraints, they can be represented as S-graphs corresponding to the minimal satisfying models of the respective formulas. We compile the various rules and their combinations into Rule Graphs and associate sets of these with appropriate lexical anchors, resulting in a lexicalized grammar.³

S-graphs are formally feature structures: given a collection of sister arcs, the stratified labels are required to be functional. However, as shown in the example, the individual R-signs are not.
Moreover, the lengths of the labels can vary, and this crucial property is how SFG avoids the "carry over" problem. S-graphs also include a strict partial order on arcs to represent linear precedence (cf. [3], [9]). The SFG description language includes a class of linear precedence statements, e.g., (1] ≺ (H] means that in a constituent "the final subject precedes the head".

Given a set RS of R-signs, a (stratified) feature (or label) is a sequence of R-signs which may be closed on the left or right or both. Closed sides are indicated with square brackets and open sides with parentheses. For example, [2,1) denotes a label that is closed on the left and open on the right, and [3,2,1,0] denotes a label that is closed on both sides. Labels of the form [...] are called (totally) closed; of the form (...), (totally) open; and the others partially closed (open) or closed (open) on the right (left), as appropriate.

Let BL denote the set of features over RS*. BL is partially ordered by the smallest relation ⊑ permitting extension along open sides. For example, (3) ⊑ (3,2) ⊑ [3,2,1) ⊑ [3,2,1,0]. Each feature l subsuming (⊑) a feature f provides a partial description of f. The left-closed bracket [ allows reference to the "deepest" (initial) R-sign of a left-closed feature; the right-closed bracket ] to the "most surfacy" (final) R-sign of a right-closed feature. The totally closed features are maximal (completely defined) and, with respect to label unification, defined below, act like ordinary (atomic) features.

Formal definitions of S-graph and other definitions implicit in our work are provided in [2].

³ We ignore negative constraints here.

AN EXAMPLE

Figure 2 depicts the essential aspects of the S-graph for John seemed ill. Focus on the features [0,1] and [2,1,0], both of which have the NP John as target (indicated by the shared boxed tags).
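Before continuing the example, the extension ordering just defined can be sketched in code. This is an illustrative encoding (not the paper's): a label is a triple (signs, left_closed, right_closed), and `subsumes` checks that the larger feature extends the smaller one only along its open sides, with closed sides fixed in place.

```python
def subsumes(l, f):
    """l subsumes f: f extends l along l's open sides only.
    A label is (signs, left_closed, right_closed); this encoding is a sketch."""
    ls, l_left_closed, l_right_closed = l
    fs, f_left_closed, f_right_closed = f
    n = len(ls)
    for i in range(len(fs) - n + 1):
        if fs[i:i + n] != ls:
            continue
        # A closed left side admits no new material to its left.
        if l_left_closed and not (i == 0 and f_left_closed):
            continue
        # Likewise on the right.
        if l_right_closed and not (i + n == len(fs) and f_right_closed):
            continue
        return True
    return False

# The chain (3) <= (3,2) <= [3,2,1) <= [3,2,1,0] from the text:
chain = [
    (("3",), False, False),             # (3)
    (("3", "2"), False, False),         # (3,2)
    (("3", "2", "1"), True, False),     # [3,2,1)
    (("3", "2", "1", "0"), True, True), # [3,2,1,0]
]
print(all(subsumes(a, b) for a, b in zip(chain, chain[1:])))  # True
```

Closing an open side is itself an extension step, which is why an open label can subsume a closed one but not vice versa.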
The R-sign 0 is a member of Null, a distinguished set of R-signs, members of which can only occur next to brackets [ or ]. The prefix [2,1) of the label [2,1,0] is the SFG representation of RG's unaccusative analysis of adjectives. The suffix (1,0] of [2,1,0]; the prefix [0,1) of the label [0,1] in the matrix clause; and the structure-sharing collectively represent the raising of the embedded subject (cf. Figure 3).

Given an S-graph G, Null R-signs permit the definitions of the predicate-argument graph, and the surface graph, of G. The predicate-argument graph corresponds to all arcs whose labels do not begin with a Null R-sign; the relevant R-signs are the first ones. The surface graph corresponds to all arcs whose labels do not end with a Null R-sign; the relevant R-signs are the final ones. In the example, the arc labeled [0,1] is not a predicate-argument arc, indicating that John bears no predicate-argument relation to the top clause. And the arc labeled [2,1,0] is not a surface arc, indicating that John bears no surface relation to the embedded phrase headed by ill.

The surface graph is shown in Figure 4 and the predicate-argument graph in Figure 5. Notice that the surface graph is a tree. The treehood of surface graphs is part of the definition of S-graph and provides the foundation for our parsing algorithm; it is the SFG analog to the "context-free backbone" typical of unification-based systems [11].

LEXICALIZED SFG

Given a finite collection of rule graphs, we could construct the finite set of S-graphs reflecting all consistent combinations of rule graphs and then associate each word with the collection of derived graphs it anchors. However, we actually only construct all the derived graphs not involving extractions. Since extractions can affect almost any arc, compiling them into lexicalized S-graphs would be impractical. Instead, extractions are handled by a novel mechanism involving multi-rooted graphs (cf. Concluding Remarks).
We assume that all lexically governed rules such as Passive, Dative Advancement and Raising are compiled into the lexical entries governing them.

Figure 2: S-graph for John seemed ill

Figure 3: Raising Rule Graph

Figure 4: Surface Graph for John seemed ill

Figure 5: Predicate-Argument Graph for John seemed ill

Thus, given has four entries (Ditransitive, Ditransitive + Dative, Passive, Dative + Passive). This aspect of our framework is reminiscent of LFG [4] and HPSG [7], except that in SFG, relational structure is transparently recorded in the stratified features. Moreover, SFG relies neither on LFG-style annotated CFG rules and equation solving nor on HPSG-style SUBCAT lists.

We illustrate below the process of constructing a lexical entry for given from rule graphs (ignoring morphology). The rule graphs used are for Ditransitive, Dative and (Agentless) Passive constructions. Combined, they yield a ditransitive-dative-passive S-graph for the use of given occurring in Joe was given tea (cf. Figure 6).

    Ditransitive:        [H] given  [3)  [2)  [1)
    Dative:              (3,2)  (2,8)
    DIT ⊔ DAT:           [H] given  [3,2)  [2,8)  [1)
    Passive:             (2,1)  [1,8,0]
    DIT ⊔ DAT ⊔ PAS:     [H] given  [3,2,1)  [2,8)  [1,8,0]

Figure 6: S-graph for Joe was given tea.

The idea behind label unification is that two compatible labels combine to yield a label with maximal nonempty overlap. Left (right) closed labels unify with left (right) open labels to yield left (right) closed labels. There are ten types of label unification, determined by the four types of bracket pairs: totally closed (open), closed only on the left (right).
However, in parsing (as opposed to building a lexicalized grammar), we stipulate that successful label unification must result in a totally closed label. Additionally, we assume that all labels in well-formed lexicalized graphs (the input graphs to the parsing algorithm) are at least partially closed. This leaves only four cases:

Case 1. [β] ⊔ [β] = [β]
Case 2. [β) ⊔ [βγ] = [βγ]
Case 3. (β] ⊔ [αβ] = [αβ]
Case 4. [αβ) ⊔ (βγ] = [αβγ]

Note: α, β, γ ∈ RS⁺ and β is the longest common, nonempty string.

The following list provides examples of each.

1. [1,0] ⊔ [1,0] = [1,0]
2. [1) ⊔ [1,0] = [1,0]
3. (1,0] ⊔ [2,1,0] = [2,1,0]
4. [2,1) ⊔ (1,0] = [2,1,0]

Case 1 is the same as ordinary label unification under identity. Besides their roles in unifying rule graphs, Cases 2, 3 and 4 are typically used in parsing bounded control constructions (e.g., "equi" and "raising") and extractions by means of "splicing" Null R-signs onto the open ends of labels and closing off the labels in the process. We note in passing that cases involving totally open labels may not result in unique unifications, e.g., (1,2) ⊔ (2,1) can be either (2,1,2) or (1,2,1). In practice, such aberrant cases seem not to arise. Label unification thus plays a central role in building a lexicalized grammar and in parsing.

THE PARSING ALGORITHM

S-unification is like normal feature structure unification ([1], [11]), except that in certain cases two arcs with distinct labels l and l' are replaced by a single arc whose label is obtained by unifying l and l'. S-unification is implemented via the procedures Unify-Nodes, Unify-Arcs, and Unify-Sets-of-Arcs:

1. Unify-Nodes(n, n') consists of the steps:
   a. Unify label(n) and label(n'), where node labels unify under identity
   b. Unify-Sets-of-Arcs(Out-Arcs(n), Out-Arcs(n'))

2. Unify-Arcs(A, A') consists of the steps:
   a. Unify label(A) and label(A')
   b. Unify-Nodes(target(A), target(A'))

3.
Unify-Sets-of-Arcs(Set1, Set2), where Set1 and Set2 are sets of arcs, returns a set of arcs Set3, derived as follows:

   a. For each arc Ai ∈ Set1, attempt to find some arc Aj ∈ Set2, such that Step 2a of Unify-Arcs(Ai, Aj) succeeds. If Step 2a succeeds, proceed to Step 2b and remove Aj from Set2. There are three possibilities:
      i. If no Aj can be found, Ai ∈ Set3.
      ii. If Steps 2a and 2b both succeed, then Unify-Arcs(Ai, Aj) ∈ Set3.
      iii. If Step 2a succeeds, but Step 2b fails, then the procedure fails.
   b. Add each remaining arc in Set2 to Set3.

We note that the result of S-unification can be a set of S-graphs. In our experience, the unification of linguistically well-formed lexical S-graphs has never returned more than one S-graph. Hence, S-unification is stipulated to fail if the result is not unique. Also note that due to the nature of label unification, the unification procedure does not guarantee that the unification of two S-graphs will be functional and thus well-formed. To insure functionality, we filter the output.

We distinguish several classes of arc: (i) Surface Arc vs. Non-Surface Arc, determined by absence or presence of a Null R-sign in a label's last position; (ii) Structural Arc vs. Constraint Arc (stipulated by the grammar writer); and (iii) Relational Arc vs. Category Arc, determined by the kind of label (category arcs are atomic and have R-signs like Case, Number, Gender, etc.). The parser looks for arcs to complete that are Surface, Structural and Relational (SSR).

A simplified version of the parsing algorithm is sketched below. It uses the predicates Left-Precedence, Right-Precedence and Complete:

1. Precedence: Let Qi = [ni, Li, Ri], F ∈ SSR-Out-Arcs(ni) such that Target(F) = Anchor(Graph(ni)), and A ∈ SSR-Out-Arcs(ni) be an incomplete terminal arc. Then:

   A. Left-Precedence(A, ni) is true iff:
      a. All surface arcs which must follow F are incomplete.
      b. A can precede F.
      c.
All surface arcs which must both precede F and follow A are complete.

   B. Right-Precedence(A, ni) is true iff:
      a. All surface arcs which must precede F are complete.
      b. A can follow F.
      c. All surface arcs which must both follow F and precede A are complete.

2. Complete: A node is complete if it is either a lexical anchor or else has (obligatory) outgoing SSR arcs, all of which are complete. An arc is complete if its target is complete.

The algorithm is head-driven [8] and was inspired by parsing algorithms for lexicalized TAGs ([6], [10]).

Simplified Parsing Algorithm:

Input: A string of words w1, ..., wn.
Output: A chart containing all possible parses.
Method:

A. Initialization:
   1. Create a list of k state-sets S1, ..., Sk, each empty.
   2. For c = 1, ..., k, for each Graph(ni) of wc, add [ni, c-1, c] to Sc.

B. Completions: For c = 1, ..., k, do repeatedly until no more states can be added to Sc:
   1. Leftward Completion: For all Qi = [ni, Li, c] ∈ Sc, Qj = [nj, Lj, Li] ∈ S_Li, such that Complete(nj) and A ∈ SSR-Out-Arcs(ni), such that Left-Precedence(A, ni):
      IF ni' = Unify-at-end-of-Path(ni, nj, A)
      THEN add [ni', Lj, c] to Sc.
   2. Rightward Completion: For all Qi = [ni, Li, Ri] ∈ S_Ri, Qj = [nj, Ri, c] ∈ Sc, such that Complete(nj) and A ∈ SSR-Out-Arcs(ni), such that Right-Precedence(A, ni):
      IF ni' = Unify-at-end-of-Path(ni, nj, A)
      THEN add [ni', Li, c] to Sc.

To illustrate, we step through the chart for John seemed ill (cf. Figure 7). In the string 0 John 1 seemed 2 ill 3, where the integers represent string positions, each word w is associated via the lexicalized grammar with a finite set of anchored S-graphs. For expository convenience, we will assume counterfactually that for each w there is only one S-graph Gw with root rw and anchor w.
For each Gw, rw has one or more outgoing SSR arcs, the set of which we denote SSR-Out-Arcs(rw). For each w between integers x and y in the string, the Initialization step (step A of the algorithm) adds [nw, x, y] to state-set y. We denote state Q in state-set Si as state i:Q. For an input string w = w1, ..., wn, initialization creates n state-sets and, for 1 ≤ i ≤ n, adds states i:Qj, 1 ≤ j ≤ k, to Si, one for each of the k S-graphs Gwi associated with wi. After initialization, the example chart consists of states 1:1, 2:1, 3:1.

Then the parser traverses the chart from left to right starting with state-set 1 (step B of the algorithm), using left and right completions, according to whether left or right precedence conditions are used. Each completion looks in a state-set to the left of Sc for a state meeting a set of conditions. In the example, for c = 1, step B of the algorithm does not find any states in any state-set preceding S1 to test, so the parser advances c to 2. A left completion succeeds with Qi = state 2:1 = [ni, 1, 2] and Qj = state 1:1 = [nj, 0, 1]. State 2:2 = [ni', 0, 2] is added to state-set S2, where ni' = Unify-at-end-of-Path(ni, nj, [0,1)). Label [0,1) is closed off to yield [0,1] in the output graph, since no further R-signs may be added to the label once the arc bearing the label is complete.

The precedence constraints are interpreted as strict partial orders on the sets of outgoing SSR arcs of each node (in contrast to the totally ordered lexicalized TAGs). Arc [0,1) satisfies left-precedence because: (i) [0,1) is an incomplete terminal arc, where a terminal arc is an SSR arc, the target of which has no incomplete outgoing surface arcs; (ii) all surface arcs (here, only [C]) which must follow the [H] arc are incomplete; (iii) [0,1) can precede [H]; and (iv) there are no (incomplete) surface arcs which must occur between [0,1) and [H]. (We say can in (iii) because the parser accommodates variable word order.)
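The four label-unification cases given earlier, including the closing-off behavior just used for [0,1), can be sketched in code. This is an illustrative reimplementation (the `Label` encoding and function name are assumptions, not the paper's): unification returns None on failure and, per the parsing restriction, produces totally closed results.

```python
from typing import NamedTuple, Optional, Tuple

class Label(NamedTuple):
    signs: Tuple[str, ...]
    left_closed: bool
    right_closed: bool

def unify(a: Label, b: Label) -> Optional[Label]:
    """The four label-unification cases used in parsing (illustrative)."""
    # Case 1: two totally closed labels unify only under identity.
    if all((a.left_closed, a.right_closed, b.left_closed, b.right_closed)):
        return a if a == b else None
    for x, y in ((a, b), (b, a)):  # try both argument orders
        # Case 2: [beta) U [beta gamma] = [beta gamma]
        if (x.left_closed and not x.right_closed
                and y.left_closed and y.right_closed
                and y.signs[:len(x.signs)] == x.signs):
            return y
        # Case 3: (beta] U [alpha beta] = [alpha beta]
        if (not x.left_closed and x.right_closed
                and y.left_closed and y.right_closed
                and y.signs[-len(x.signs):] == x.signs):
            return y
        # Case 4: [alpha beta) U (beta gamma] = [alpha beta gamma],
        # with beta the longest common, nonempty string.
        if (x.left_closed and not x.right_closed
                and not y.left_closed and y.right_closed):
            for k in range(min(len(x.signs), len(y.signs)), 0, -1):
                if x.signs[-k:] == y.signs[:k]:
                    return Label(x.signs + y.signs[k:], True, True)
    return None

closed = lambda *s: Label(tuple(s), True, True)
print(unify(Label(("2", "1"), True, False), Label(("1", "0"), False, True)))
# Label(signs=('2', '1', '0'), left_closed=True, right_closed=True)
```

Case 4 is the one exercised by raising: [2,1) from the embedded clause overlaps (1,0] on the shared 1 to yield the closed label [2,1,0].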
The parser proceeds to state-set S3. A right completion succeeds with Qi = state 2:2 = [ni, 0, 2] and Qj = state 3:1 = [nj, 2, 3]. State 3:2 = [ni', 0, 3] is added to state-set S3, where ni' = Unify-at-end-of-Path(ni, nj, [C]). State 3:2 is a successful parse because ni' is complete and spans the entire input string.

Figure 7: Chart for John seemed ill.

To sum up: a completion finds a state Qi = [ni, Li, Ri] and a state Qj = [nj, Lj, Rj] in adjacent state-sets (Li = Rj or Ri = Lj) such that ni is incomplete and nj is complete. Each successful completion completes an arc A ∈ SSR-Out-Arcs(ni) by unifying nj with the target of A. Left completion operates on a state Qi = [ni, Li, c] in the current state-set Sc looking for a state Qj = [nj, Lj, Li] in state-set S_Li to complete some arc A ∈ SSR-Out-Arcs(ni). Right completion is the same as left completion except that the roles of the two states are reversed: in both cases, success adds a new state to the current state-set Sc. The parser completes arcs first leftward from the anchor and then rightward from the anchor.

CONCLUDING REMARKS

The algorithm described above is simpler than the one we have implemented in a number of ways. We end by briefly mentioning some aspects of the general algorithm.

Figure 8: Example: in

Figure 9: Example: what

Optional Arcs: On encountering an optional arc, the parser considers two paths, skipping the optional arc on one and attempting to complete it on the other.

Constraint Arcs: These are reminiscent of LFG constraint equations. For a parse to be good, each constraint arc must unify with a structural arc.

Multi-tiered S-graphs: These are S-graphs having a non-terminal incomplete arc I (e.g., the [LOC] arc in Figure 8).
Essentially, the parser searches I depth-first for incomplete terminal arcs to complete.

Pseudo-R-signs: These are names of sets of R-signs. For a parse to be good, each pseudo-R-sign must unify with a member of the set it names.

Extractions: Our approach is novel: it uses pseudo-R-signs and multi-rooted S-graphs, illustrated in Figure 9, where p is the primary root and d, the dangling root, is the source of a "slashed arc" with label of the form (b,/] (b a pseudo-R-sign). Since well-formed final parses must be single-rooted, slashed arcs must eventually unify with another arc.

To sum up: We have developed a unification-based chart parser for relational grammars based on the SFG formalism presented by Johnson and Moss [2]. The system involves compiling (combinations of) rule graphs and their associated lexical anchors into a lexicalized grammar, which can then be parsed in the same spirit as lexicalized TAGs. Note, though, that SFG does not use an adjunction (or substitution) operation.

References

[1] Bob Carpenter. The Logic of Typed Feature Structures. Cambridge UP, Cambridge, 1992.

[2] David E. Johnson and Lawrence S. Moss. Some formal properties of stratified feature grammars. To appear in Annals of Mathematics and Artificial Intelligence, 1993.

[3] David E. Johnson and Paul M. Postal. Arc Pair Grammar. Princeton University Press, 1980.

[4] Ronald Kaplan and Joan Bresnan. Lexical-functional grammar, a formal system for grammatical representation. In J. Bresnan, editor, The Mental Representation of Grammatical Relations. MIT Press, 1982.

[5] Robert Kasper and William C. Rounds. The logic of unification in grammar. Linguistics and Philosophy, 13:35-58, 1990.

[10] Yves Schabes. Mathematical and Computational Properties of Lexicalized Grammars. PhD thesis, University of Pennsylvania, 1990.

[11] Stuart Shieber. Constraint-Based Grammar Formalisms. MIT Press, 1992.

[6] Alberto Lavelli and Giorgio Satta.
Bidirectional parsing of lexicalized tree adjoining grammars. In Proceedings of the 5th Conference of the European Chapter of the Association for Computational Linguistics, 1991.

[7] Carl Pollard and Ivan Sag. Information-Based Syntax and Semantics. CSLI Lecture Notes. University of Chicago Press, Chicago, 1987.

[8] Derek Proudian and Carl Pollard. Parsing head-driven phrase structure grammar. In Proceedings of the 23rd Annual Meeting of the ACL, 1985.

[9] William C. Rounds and Alexis Manaster-Ramer. A logical version of functional grammar. In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, 1987.
Parsing Free Word Order Languages in the Paninian Framework

Akshar Bharati
Rajeev Sangal
Department of Computer Science and Engineering
Indian Institute of Technology Kanpur
Kanpur 208016 India
Internet: [email protected]

Abstract

There is a need to develop a suitable computational grammar formalism for free word order languages for two reasons: First, a suitably designed formalism is likely to be more efficient. Second, such a formalism is also likely to be linguistically more elegant and satisfying. In this paper, we describe such a formalism, called the Paninian framework, that has been successfully applied to Indian languages.

This paper shows that the Paninian framework applied to modern Indian languages gives an elegant account of the relation between surface form (vibhakti) and semantic (karaka) roles. The mapping is elegant and compact. The same basic account also explains active-passives and complex sentences. This suggests that the solution is not just ad hoc but has a deeper underlying unity.

A constraint based parser is described for the framework. The constraints problem reduces to a bipartite graph matching problem because of the nature of the constraints. Efficient solutions are known for these problems.

It is interesting to observe that such a parser (designed for free word order languages) compares well in asymptotic time complexity with the parser for context free grammars (CFGs), which are basically designed for positional languages.

1 Introduction

A majority of human languages, including Indian and other languages, have relatively free word order. In free word order languages, order of words contains only secondary information such as emphasis etc. Primary information relating to 'gross' meaning (e.g., one that includes semantic relationships) is contained elsewhere. Most existing computational grammars are based on context free grammars, which are basically positional grammars.
It is important to develop a suitable computational grammar formalism for free word order languages for two reasons:

1. A suitably designed formalism will be more efficient because it will be able to make use of primary sources of information directly.

2. Such a formalism is also likely to be linguistically more elegant and satisfying. Since it will be able to relate to primary sources of information, the grammar is likely to be more economical and easier to write.

In this paper, we describe such a formalism, called the Paninian framework, that has been successfully applied to Indian languages.¹ It uses the notion of karaka relations between verbs and nouns in a sentence. The notion of karaka relations is central to the Paninian model. The karaka relations are syntactico-semantic (or semantico-syntactic) relations between the verbals and other related constituents in a sentence. They by themselves do not give the semantics. Instead they specify relations which mediate between vibhakti of nominals and verb forms on one hand and semantic relations on the other (Kiparsky, 1982) (Cardona (1976), (1988)). See Fig. 1. Two of the important karakas are karta karaka and karma karaka. Frequently, the karta karaka maps to agent theta role, and the karma to theme or goal theta role. Here we will not argue for the linguistic significance of karaka relations and differences with theta relations, as that has been done elsewhere (Bharati et al. (1990) and (1992)). In summary, karta karaka is that participant in the action that is most independent. At times, it turns out to be the agent. But that need not be so. Thus, 'boy' and 'key' are respectively the karta karakas in the following sentences

¹ The Paninian framework was originally designed more than two millennia ago for writing a grammar of Sanskrit; it has been adapted by us to deal with modern Indian languages.
    semantic level (what the speaker has in mind)
          |
    karaka level
          |
    vibhakti level
          |
    surface level (uttered sentence)

Fig. 1: Levels in the Paninian model

The boy opened the lock.
The key opened the lock.

Note that in the first sentence, the karta (boy) maps to agent theta role, while in the second, karta (key) maps to instrument theta role.

As part of this framework, a mapping is specified between karaka relations and vibhakti (which covers collectively case endings, post-positional markers, etc.). This mapping between karakas and vibhakti depends on the verb and its tense aspect modality (TAM) label. The mapping is represented by two structures: default karaka charts and karaka chart transformations. The default karaka chart for a verb or a class of verbs gives the mapping for the TAM label tA_hE called basic. It specifies the vibhakti permitted for the applicable karaka relations for a verb when the verb has the basic TAM label. This basic TAM label roughly corresponds to present indefinite tense and is purely syntactic in nature. For other TAM labels there are karaka chart transformation rules. Thus, for a given verb with some TAM label, the appropriate karaka chart can be obtained using its basic karaka chart and the transformation rule depending on its TAM label.²

In Hindi for instance, the basic TAM label is tA_hE (which roughly stands for the present indefinite). The default karaka chart for three of the karakas is given in Fig. 2. This explains the vibhaktis in sentences A.1 to A.2. In A.1 and A.2, 'Ram' is karta and 'Mohan' is karma, because of their vibhakti markers ¢ and ko, respectively.³ (Note that 'rAma' is followed by ¢ or empty postposition, and 'mohana' by 'ko' postposition.)

A.1 rAma mohana ko pItatA hE.

² The transformation rules are a device to represent the karaka charts more compactly. However, as is obvious, they affect the karaka charts and not the parse structure. Therefore, they are different from transformational grammars. Formally, these rules can be eliminated by having separate karaka charts for each TAM label. But one would miss the linguistic generalization of relating the karaka charts based on TAM labels in a systematic manner.

³ In the present examples karta and karma turn out to be agent and theme, respectively.
Therefore, they are different from transformational grammars. Formally, these rules can be eliminated by having separate karaka charts for each TAM label. But one would miss the linguistic generalization of relating the karaka charts based on TAM labels in a systematic manner.

3In the present examples karta and karma turn out to be agent and theme, respectively.

KARAKA    VIBHAKTI       PRESENCE
Karta     ¢              mandatory
Karma     ko or ¢        mandatory
Karana    se or dvArA    optional

Fig. 2: A default karaka chart

TAM LABEL   TRANSFORMED VIBHAKTI FOR KARTA
yA          ne
nA_padA     ko
yA_gayA     se or dvArA (and karta is optional)

Fig. 3: Transformation rules

     Ram   Mohan -ko  beats   is
     (Ram beats Mohan.)

A.2  mohana ko  rAma  pItatA  hE.
     Mohan -ko  Ram   beats   is
     (Ram beats Mohan.)

Fig. 3 gives some transformation rules for the default mapping for Hindi. It explains the vibhakti in sentences B.1 to B.4, where Ram is the karta but has the different vibhaktis ¢, ne, ko, and se, respectively. In each of the sentences, if we transform the karaka chart of Fig. 2 by the transformation rules of Fig. 3, we get the desired vibhakti for the karta Ram.

B.1  rAma  Pala  ko   KAtA  hE.
     Ram   fruit -ko  eats  is
     (Ram eats the fruit.)

B.2  rAma  ne   Pala   KAyA.
     Ram   -ne  fruit  ate
     (Ram ate the fruit.)

B.3  rAma  ko   Pala   KAnA  padA.
     Ram   -ko  fruit  eat   had-to
     (Ram had to eat the fruit.)

B.4  rAma  se   Pala   nahI  KAyA  gayA.
     Ram   -se  fruit  not   eat   could
     (Ram could not eat the fruit.)

In general, the transformations affect not only the vibhakti of the karta but also that of other karakas. They also 'delete' karaka roles at times, that is, the 'deleted' karaka roles must not occur in the sentence.

The Paninian framework is similar to the broad class of case based grammars. What distinguishes the Paninian framework is the use of karaka relations rather than theta roles, and the neat dependence of the karaka-vibhakti mapping on TAMs and the transformation rules, in the case of Indian languages.
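The default chart and the TAM-based transformations described above can be sketched in a few lines of code. This is only an illustration of the mechanism; the names, the encoding of the empty postposition as "phi", and the omission of the optionality change for yA_gayA are our assumptions, not the paper's implementation.

```python
# Sketch of the karaka-chart machinery (Figs. 2 and 3). Illustrative only:
# "phi" stands for the empty postposition; the yA_gayA rule's effect on
# karta optionality is omitted for brevity.

# Default karaka chart for the basic TAM label tA_hE:
# karaka -> (permitted vibhaktis, mandatory?)
DEFAULT_CHART = {
    "karta":  ({"phi"}, True),
    "karma":  ({"ko", "phi"}, True),
    "karana": ({"se", "dvArA"}, False),
}

# Transformation rules keyed by TAM label: new vibhaktis per karaka.
TRANSFORMATIONS = {
    "yA":      {"karta": {"ne"}},
    "nA_padA": {"karta": {"ko"}},
    "yA_gayA": {"karta": {"se", "dvArA"}},
}

def karaka_chart(tam_label):
    """Return the karaka chart for a verb with the given TAM label."""
    chart = {k: (set(v), m) for k, (v, m) in DEFAULT_CHART.items()}
    for karaka, vibhaktis in TRANSFORMATIONS.get(tam_label, {}).items():
        mandatory = chart[karaka][1]
        chart[karaka] = (set(vibhaktis), mandatory)
    return chart

# B.2: with TAM label yA, the karta 'rAma' must carry the vibhakti 'ne'.
print(karaka_chart("yA")["karta"][0])  # {'ne'}
```

This reproduces the behavior discussed for B.1 to B.4: the basic chart licenses ¢ for the karta, and each TAM label rewrites the permitted vibhakti accordingly.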
The same principle also solves the problem of karaka assignment for complex sentences (discussed later in Sec. 3).

2 Constraint Based Parsing

The Paninian theory outlined above can be used for building a parser. The first stage of the parser takes care of morphology. For each word in the input sentence, a dictionary or lexicon is looked up, and the associated grammatical information is retrieved. In the next stage local word grouping takes place, in which, based on local information, certain words are grouped together yielding noun groups and verb groups. These are the word groups at the vibhakti level (i.e., typically each word group is a noun or verb with its vibhakti, TAM label, etc.). This involves grouping post-positional markers with nouns, auxiliaries with main verbs, etc. Rules for local word grouping are given by finite state machines. Finally, the karaka relations among the elements are identified in the last stage, called the core parser.

The morphological analyzer and local word grouper have been described elsewhere (Bharati et al., 1991). Here we discuss the core parser. Given the local word groups in a sentence, the task of the core parser is two-fold:

1. To identify karaka relations among word groups, and
2. To identify senses of words.

The first task requires karaka charts and transformation rules. The second task requires lakshan charts for nouns and verbs (explained at the end of the section).

A data structure corresponding to the karaka chart stores information about the karaka-vibhakti mapping, including optionality of karakas. Initially, the default karaka chart is loaded into it for a given verb group in the sentence. Transformations are performed based on the TAM label. There is a separate data structure for the karaka chart of each verb group in the sentence being processed. Each row is called a karaka restriction in a karaka chart.
For a given sentence, after the word groups have been formed, karaka charts for the verb groups are created and each of the noun groups is tested against the karaka restrictions in each karaka chart. When testing a noun group against a karaka restriction of a verb group, vibhakti information is checked, and if found satisfactory, the noun group becomes a candidate for the karaka of the verb group.

The above can be shown in the form of a constraint graph. Nodes of the graph are the word groups, and there is an arc labeled by a karaka from a verb group to a noun group if the noun group satisfies the karaka restriction in the karaka chart of the verb group. (There is an arc from one verb group to another if the karaka chart of the former shows that it takes a sentential or verbal karaka.) The verb groups are called demand groups as they make demands about their karakas, and the noun groups are called source groups because they satisfy demands.

As an example, consider a sentence containing the verb KA (eat):

baccA hATa se kelA KAtA hE.
child hand -se banana eats
(The child eats the banana with his hand.)

Its word groups are marked, and KA (eat) has the same karaka chart as in Fig. 2. Its constraint graph is shown in Fig. 4.

Fig. 4: Constraint graph (for "baccA hATa se kelA KAtA hE")

A parse is a sub-graph of the constraint graph satisfying the following conditions:

1. For each of the mandatory karakas in the karaka chart of each demand group, there should be exactly one outgoing edge from the demand group labeled by the karaka.

2. For each of the optional karakas in the karaka chart of each demand group, there should be at most one outgoing edge from the demand group labeled by the karaka.

3. There should be exactly one incoming arc into each source group.

If several sub-graphs of a constraint graph satisfy the above conditions, it means that there are multiple parses and the sentence is ambiguous.
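The three conditions above can be checked directly on a candidate sub-graph. A minimal sketch (the edge and chart encodings are our assumptions, not the paper's data structures):

```python
# Check whether a candidate sub-graph of the constraint graph is a valid
# parse. Edges are (demand, karaka, source) triples; charts maps each
# demand group to {karaka: mandatory?}. Illustrative encoding only.

def is_valid_parse(edges, charts, sources):
    # Conditions 1 and 2: outgoing-edge counts per (demand, karaka).
    for demand, chart in charts.items():
        for karaka, mandatory in chart.items():
            n = sum(1 for (d, k, s) in edges if d == demand and k == karaka)
            if mandatory and n != 1:       # exactly one edge for a mandatory karaka
                return False
            if not mandatory and n > 1:    # at most one edge for an optional karaka
                return False
    # Condition 3: exactly one incoming arc into each source group.
    return all(sum(1 for (d, k, s) in edges if s == src) == 1
               for src in sources)

# "baccA hATa se kelA KAtA hE": KA (eat) demands karta and karma
# (mandatory) and karana (optional), as in Fig. 2.
charts = {"KAtA": {"karta": True, "karma": True, "karana": False}}
sub = [("KAtA", "karta", "baccA"),
       ("KAtA", "karana", "hATa_se"),
       ("KAtA", "karma", "kelA")]
print(is_valid_parse(sub, charts, ["baccA", "hATa_se", "kelA"]))  # True
```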
If no sub-graph satisfies the above constraints, the sentence does not have a parse, and is probably ill-formed.

There are similarities with dependency grammars here, because such constraint graphs are also produced by dependency grammars (Covington, 1990; Kashket, 1986). It differs from them in two ways. First, the Paninian framework uses the linguistic insight regarding karaka relations to identify relations between constituents in a sentence. Second, the constraints are sufficiently restricted that they reduce to well known bipartite graph matching problems for which efficient solutions are known. We discuss the latter aspect next.

If karaka charts contain only mandatory karakas, the constraint solver can be reduced to finding a matching in a bipartite graph. 4 Here is what needs to be done for a given sentence (Perraju, 1992). For every source word group create a node belonging to a set U; for every karaka in the karaka chart of every verb group, create a node belonging to a set V; and for every edge in the constraint graph, create an edge in E from a node in V to a node in U as follows: if there is an edge labeled with karaka k in the constraint graph from a demand node d to a source node s, create an edge in E in the bipartite graph from the node corresponding to (d, k) in V to the node corresponding to s in U. The original problem of finding a solution parse in the constraint graph now reduces to finding a complete matching in the bipartite graph {U, V, E} that covers all the nodes in U and V. 5 It has several known efficient algorithms. The time complexity of the augmenting path algorithm is O(min(|V|, |U|) . |E|), which in the worst case is O(n^3), where n is the number of word groups in the sentence being parsed. (See Papadimitriou et al. (1982), Ahuja et al. (1993).) The fastest known algorithm has asymptotic complexity O(|V|^(1/2) . |E|) and is based on the max flow problem (Hopcroft and Karp (1973)).
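The reduction just described can be exercised with a standard augmenting-path matching routine. The sketch below is our illustration of that textbook algorithm, not the parser's actual code; for simplicity the karana of the example is treated as mandatory here.

```python
# Augmenting-path bipartite matching for the mandatory-karaka case.
# adj maps each (demand, karaka) node in V to the source groups in U it
# may match (the edge set E of the reduction). Illustrative sketch.

def bipartite_match(adj, sources):
    match = {}  # source group in U -> (demand, karaka) node in V

    def augment(v, seen):
        for u in adj[v]:
            if u in seen:
                continue
            seen.add(u)
            # u is free, or its current partner can be re-matched elsewhere
            if u not in match or augment(match[u], seen):
                match[u] = v
                return True
        return False

    # A valid parse needs a complete matching covering all of U and V.
    if all(augment(v, set()) for v in adj) and len(match) == len(sources):
        return {v: u for u, v in match.items()}
    return None

# "baccA hATa se kelA KAtA hE" (karana treated as mandatory here):
adj = {("KA", "karta"):  ["baccA"],
       ("KA", "karma"):  ["kelA"],
       ("KA", "karana"): ["hATa_se"]}
parse = bipartite_match(adj, ["baccA", "hATa_se", "kelA"])
print(parse[("KA", "karma")])  # kelA
```

Each call to `augment` is one augmenting-path search, giving the O(min(|V|, |U|) . |E|) bound quoted above; Hopcroft and Karp's algorithm improves this by growing many shortest augmenting paths per phase.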
If we permit optional karakas, the problem still has an efficient solution. It now reduces to finding a matching which has the maximal weight in the weighted matching problem. To perform the reduction, we need to form a weighted bipartite graph. We first form a bipartite graph exactly as before. Next the edges are weighted, by assigning a weight of 1 if the edge is from a node in V representing a mandatory karaka, and 0 if an optional karaka. The problem now is to find the largest maximal matching (or assignment) that has the maximum weight (called the maximum bipartite matching problem or assignment problem). The resulting matching represents a valid parse if the matching covers all nodes in U and covers those nodes in V that are for mandatory karakas. (The maximal weight condition ensures that all edges from nodes in V representing mandatory karakas are selected first, if possible.) This problem has a known solution by the Hungarian method, of time complexity O(n^3) arithmetic operations (Kuhn, 1955).

Note that in the above theory we have made the following assumptions: (a) Each word group is uniquely identifiable before the core parser executes, (b) Each demand word has only one karaka chart, and (c) There are no ambiguities between source word and demand word. Empirical data for Indian languages shows that conditions (a) and (b) hold. Condition (c), however, does not always hold for certain Indian languages, as shown by a corpus. Even though there are many exceptions to this condition, they still produce only a small number of such ambiguities or clashes.

4We are indebted to Somnath Biswas for suggesting the connection.

5A matching in a bipartite graph {U, V, E} is a subgraph with a subset of E such that no two edges are adjacent. A complete matching is also a largest maximal matching (Deo, 1974).
Therefore, for each possible demand group and source group clash, a new constraint graph can be produced and solved, leaving the polynomial time complexity unchanged.

The core parser also disambiguates word senses. This requires the preparation of lakshan charts (or discrimination nets) for nouns and verbs. A lakshan chart for a verb allows us to identify the sense of the verb in a sentence given its parse. Lakshan charts make use of the karakas of the verb in the sentence for determining the verb sense. Similarly for the nouns. It should be noted (without discussion) that (a) disambiguation of senses is done only after karaka assignment is over, and (b) only those senses are disambiguated which are necessary for translation.

The key point here is that since sense disambiguation is done separately, after the karaka assignment is over, it leads to an efficient system. If this were not done, the parsing problem would be NP-complete (as shown by Barton et al. (1987): if agreement and sense ambiguity interact, they make the problem NP-complete).

3 Active-Passives and Complex Sentences

This theory captures the linguistic intuition that in free word order languages, vibhakti (case endings or post-positions, etc.) plays a key role in determining karaka roles. To show that the above, though neat, is not just an adhoc mechanism that explains the isolated phenomenon of semantic roles mapping to vibhaktis, we discuss two other phenomena: active-passives and control.

No separate theory is needed to explain active-passives. Active and passive turn out to be special cases of certain TAM labels, namely those used to mark active and passive. Again consider, for example, Hindi.

F.1 rAma mohana ko pItatA hE. (active)
    Ram  Mohan -ko beat  pres.
    (Ram beats Mohan.)

F.2 rAma dvArA mohana ko pItA gayA. (passive)
    Ram  by    Mohan -ko beaten was
    (Mohan was beaten by Ram.)

The verb in F.2 has the TAM label yA_gayA.
Consequently, the vibhakti 'dvArA' for the karta (Ram) follows from the transformation already given earlier in Fig. 3.

A major support for the theory comes from complex sentences, that is, sentences containing more than one verb group. We first introduce the problem and then describe how the theory provides an answer. Consider the Hindi sentences G.1, G.2 and G.3. In G.1, Ram is the karta of both the verbs: KA (eat) and bulA (call). However, it occurs only once. The problem is to identify which verb will control its vibhakti. In G.2, the karta Ram and the karma Pala (fruit) are both shared by the two verbs kAta (cut) and KA (eat). In G.3, the karta 'usa' (he) is shared between the two verbs, and 'cAkU' (knife), the karma karaka of 'le' (take), is the karana (instrumental) karaka of 'kAta' (cut).

G.1 rAma Pala KAkara mohana ko bulAtA hE.
    Ram fruit having-eaten Mohan -ko calls
    (Having eaten fruit, Ram calls Mohan.)

G.2 rAma ne Pala kAtakara KAyA.
    Ram -ne fruit having-cut ate
    (Ram ate having cut the fruit.)

G.3 Pala kAtane ke liye usane cAkU liyA.
    fruit to-cut for he-ne knife took
    (To cut fruit, he took a knife.)

The observation that the matrix verb, i.e., the main verb rather than the intermediate verb, controls the vibhakti of the shared nominal is true in the above sentences, as explained below. The theory we will outline to elaborate on this theme has two parts. The first part gives the karaka to vibhakti mapping as usual; the second part identifies shared karakas.

The first part is in terms of the karaka vibhakti mapping described earlier. Because the intermediate verbs have their own TAM labels, they are handled by exactly the same mechanism. For example, kara is the TAM label 6 of the intermediate verb groups in G.1 and G.2 (KA (eat) in G.1 and kAta (cut) in G.2), and nA 7 is the TAM label

6'kara' as a TAM label roughly means 'having completed the activity'. But note that TAM labels are purely syntactic, hence the meaning is not required by the system.
7This is the verbal noun.

TAM LABEL   TRANSFORMATION
kara        Karta must not be present. Karma is optional.
nA          Karta and karma are optional.
tA_huA      Karta must not be present. Karma is optional.

Fig. 5: More transformation rules

of the intermediate verb (kAta (cut)) in G.3. As usual, these TAM labels have transformation rules that operate on and modify the default karaka chart. In particular, the transformation rules for the two TAM labels (kara and nA) are given in Fig. 5. The transformation rule for kara in Fig. 5 says that the karta of a verb with TAM label kara must not be present in the sentence, and the karma is optionally present.

By these rules, the intermediate verbs KA (eat) in G.1 and kAta (cut) in G.2 do not have an (independent) karta karaka present in the sentence. Ram is the karta of the main verb. Pala (fruit) is the karma of the intermediate verb (KA) in G.1 but not in G.2 (kAta). In the latter, Pala is the karma of the main verb. All these are accommodated by the above transformation rule for 'kara'. The tree structures produced are shown in Fig. 6 (ignore the dotted lines for now), where a child node of a parent expresses a karaka relation or a verb-verb relation.

In the second part, there are rules for obtaining the shared karakas. The karta of the intermediate verb KA in G.1 can be obtained by a sharing rule of the kind given by S1.

Rule S1: The karta of a verb with TAM label 'kara' is the same as the karta of the verb it modifies. 8

The sharing rule(s) are applied after the tentative karaka assignment (using the karaka to vibhakti mapping) is over. The shared karakas are shown by dotted lines in Fig. 6.

4 Conclusion and future work

In summary, this paper makes several contributions:

• It shows that the Paninian framework applied to modern Indian languages gives an elegant account of the relation between vibhakti and karaka roles. The mapping is elegant and compact.

8The modified verb in the present sentences is the main verb.
Fig. 6: Modifier-modified relations for sentences G.1, G.2 and G.3, respectively. (Shared karakas shown by dotted lines.) [The original shows three dependency trees, e.g. for G.1, bulA (call) with daughters for its karta (rAma), karma (mohana), and the preceding verb KA (eat).]

• The same basic account also explains active-passives and complex sentences in these languages. This suggests that the solution is not just adhoc but has a deeper underlying unity.

• It shows how a constraint based parser can be built using the framework. The constraints problem reduces to the bipartite graph matching problem because of the nature of the constraints. Efficient solutions are known for these problems. It is interesting to observe that such a parser (designed for free word order languages) compares well in asymptotic time complexity with parsers for context free grammars (CFGs), which are basically designed for positional languages.

A parser for Indian languages based on the Paninian theory is operational as part of a machine translation system. As part of our future work, we plan to apply this framework to other free word order languages (i.e., other than the Indian languages). This theory can also be attempted on positional languages such as English. What is needed is the concept of generalized vibhakti, in which the position of a word gets incorporated into vibhakti. Thus, for a pure free word order language, the generalized vibhakti contains pre- or post-positional markers, whereas for a pure positional language it contains position information of a word (group). Clearly, for most natural languages, generalized vibhakti would contain information pertaining to both markers and position.

Acknowledgement

Vineet Chaitanya is the principal source of ideas in this paper, who really should be a co-author. We gratefully acknowledge the help received from K.V.
Ramakrishnamacharyulu of Rashtriya Sanskrit Sansthan, Tirupati, in the development of the theory. For complexity results, we acknowledge the contributions of B. Perraju, Somnath Biswas and Ravindra K. Ahuja.

Support for this and related work comes from the following agencies of the Government of India: Ministry of Human Resource Development, Department of Electronics, and Department of Science and Technology.

References

Ahuja, R.K., Thomas L. Magnanti, and James B. Orlin, Network Flows: Theory, Algorithms and Applications, Prentice-Hall, 1993 (forthcoming).

Barton, G. Edward, Robert C. Berwick, and Eric S. Ristad, Computational Complexity and Natural Language, MIT Press, Cambridge, MA, 1987.

Bharati, Akshar, Vineet Chaitanya, and Rajeev Sangal, A Computational Grammar for Indian Languages Processing, Journal of Indian Linguistics, IL-51. (Also available as TRCS-90-96, Dept. of CSE, IIT Kanpur, 1990.)

Bharati, Akshar, Vineet Chaitanya, and Rajeev Sangal, Local Word Grouping and Its Relevance to Indian Languages, in Frontiers in Knowledge Based Computing (KBCS90), V.P. Bhatkar and K.M. Rege (eds.), Narosa Publishing House, New Delhi, 1991, pp. 277-296.

Bharati, Akshar, Vineet Chaitanya, and Rajeev Sangal, LFG, GB, and Paninian Frameworks: An NLP Viewpoint, Part of NLP tutorial for CPAL-2: UNESCO 2nd Regional Workshop on Computer Processing of Asian Languages, 12-16 March 1992, I.I.T. Kanpur. (Also available as TRCS-92-140, Dept. of CSE, IIT Kanpur.)

Cardona, George, Panini: A Survey of Research, Mouton, Hague-Paris, 1976.

Cardona, George, Panini: His Work and Its Tradition (Vol. 1: Background and Introduction), Motilal Banarsidas, Delhi, 1988.

Covington, Michael A., Parsing Discontinuous Constituents in Dependency Grammar (Technical Correspondence), Computational Linguistics, 16,4 (Dec. 1990), p. 234.

Deo, Narsingh, Graph Theory, Prentice-Hall, 1974.

Hopcroft, J.E. and R.M. Karp, "An n^(5/2) Algorithm for Maximum Matching in Bipartite Graphs," J.
SIAM Comp. 2 (1973), pp. 225-231.

Kashket, Michael B., Parsing a free-word-order language: Warlpiri, Proc. of 24th Annual Meeting of ACL, pp. 60-66.

Kiparsky, P., Some Theoretical Problems in Panini's Grammar, Bhandarkar Oriental Research Institute, Poona, India, 1982.

Kuhn, H.W., "The Hungarian Method for the Assignment Problem", Naval Research Logistics Quarterly, 2 (1955), pp. 83-97.

Papadimitriou, Christos H., and K. Steiglitz, Combinatorial Optimization, Prentice-Hall, Englewood Cliffs, 1982.

Perraju, Bendapudi V.S., Algorithmic Aspects of Natural Language Parsing using Paninian Framework, M.Tech. thesis, Dept. of Computer Science and Engineering, I.I.T. Kanpur, Dec. 1992.
PRINCIPLE-BASED PARSING WITHOUT OVERGENERATION 1

Dekang Lin
Department of Computing Science, University of Manitoba
Winnipeg, Manitoba, Canada, R3T 2N2
E-mail: [email protected]

Abstract

Overgeneration is the main source of computational complexity in previous principle-based parsers. This paper presents a message passing algorithm for principle-based parsing that avoids the overgeneration problem. This algorithm has been implemented in C++ and successfully tested with example sentences from (van Riemsdijk and Williams, 1986).

1. Introduction

Unlike rule-based grammars that use a large number of rules to describe patterns in a language, Government-Binding (GB) Theory (Chomsky, 1981; Haegeman, 1991; van Riemsdijk and Williams, 1986) explains these patterns in terms of more fundamental and universal principles.

A key issue in building a principle-based parser is how to procedurally interpret the principles. Since GB principles are constraints over syntactic structures, one way to implement the principles is to

1. generate candidate structures of the sentence that satisfy X-bar theory and the subcategorization frames of the words in the sentence.

2. filter out structures that violate any one of the principles.

3. accept the remaining structures as parse trees of the sentence.

This implementation of GB theory is very inefficient, since there are a large number of structures being generated and then filtered out. The problem of producing too many illicit structures is called overgeneration and has been recognized as the culprit of computational difficulties in principle-based parsing (Berwick, 1991). Many methods have been proposed to alleviate the overgeneration problem by detecting illicit structures as early as possible, such as optimal ordering of principles (Fong, 1991) and coroutining (Dorr, 1991; Johnson, 1991).

1The author wishes to thank the anonymous referees for their helpful comments and suggestions.
This research was supported by Natural Sciences and Engineering Research Council of Canada grant OGP121338.

This paper presents a principle-based parser that avoids the overgeneration problem by applying principles to descriptions of the structures, instead of the structures themselves. A structure for the input sentence is only constructed after its description has been found to satisfy all the principles. The structure can then be retrieved in time linear in its size and is guaranteed to be consistent with the principles.

Since the descriptions of structures are constant-sized attribute vectors, checking whether a structural description satisfies a principle takes a constant amount of time. This compares favorably to approaches where constraint satisfaction involves tree traversal.

The next section presents a general framework for parsing by message passing. Section 3 shows how linguistic notions, such as dominance and government, can be translated into relationships between descriptions of structures. Section 4 describes the interpretation of GB principles. Familiarity with GB theory is assumed in the presentation. Section 5 sketches an object-oriented implementation of the parser. Section 6 discusses complexity issues and related work.

2. Parsing by Message Passing

The message passing algorithm presented here is an extension of a message passing algorithm for context-free grammars (Lin and Goebel, 1993). We encode the grammar, as well as the parser, in a network (Figure 1). The nodes in the network represent syntactic categories. The links in the network represent dominance and subsumption relationships between the categories:

• There is a dominance link from node A to B if B can be immediately dominated by A. The dominance links can be further classified according to the type of dominance relationship.

• There is a specialization link from A to B if A subsumes B.

The network is also a parser. The nodes in the network are computing agents.
They communicate with each other by passing messages in the reverse direction of the links in the network.

[Figure 1 in the original depicts the grammar network: category nodes connected by links, with a legend distinguishing barrier, adjunct-dominance, specialization, head-dominance, specifier-dominance, and complement-dominance links. Figure 1: A Network Representation of Grammar]

The messages contain items. An item is a triplet that describes a structure:

<surface-string, attribute-values, sources>,

where

surface-string is an integer interval [i, j] denoting the i'th to j'th words in the input sentence.

attribute-values specify syntactic features, such as cat, plu, case, of the root node of the structure described by the item.

the sources component is the set of items that describe the immediate sub-structures. Therefore, by tracing the sources of an item, a complete structure can be retrieved.

The location of the item in the network determines the syntactic category of the structure. For example, [NP the ice-cream] in the sentence "the ice-cream was eaten" is represented by an item i4 at the NP node (see Figure 2):

<[0,1], ((cat n) -plu (nform norm) -cm +theta), {i1, i3}>

An item represents the root node of a structure and contains enough information such that the internal nodes of the structure are irrelevant.

The message passing process is initiated by sending initial items externally to lexical nodes (e.g., N, P, ...). The initial items represent the words in the sentence. The attribute values of these items are obtained from the lexicon. In case of lexical ambiguity, each possibility is represented by an item.
For example, suppose the input sentence is "I saw a man"; then the word "saw" is represented by the following two items sent to nodes N and V:NP 2 respectively:

<[1,1], ((cat n) -plu (nform norm)), {}>
<[1,1], ((cat v) (cform fin) -pas (tense past)), {}>

When a node receives an item, it attempts to combine the item with items from other nodes to form new items. Two items <[i1,j1], A1, S1> and <[i2,j2], A2, S2> can be combined if

1. their surface strings are adjacent to each other: i2 = j1 + 1.

2. their attribute values A1 and A2 are unifiable.

3. their sources are disjoint: S1 ∩ S2 = ∅.

The result of the combination is a new item:

<[i1,j2], unify(A1, A2), S1 ∪ S2>.

The new items represent larger parse trees resulting from combining smaller ones. They are then propagated further to other nodes.

The principles in GB theory are implemented as a set of constraints that must be satisfied during the propagation and combination of items. The constraints are attached to nodes and links in the network. Different nodes and links may have different constraints. The items received or created by a node must satisfy the constraints at the node.

The constraints attached to the links serve as filters. A link only allows items that satisfy its constraints to pass through. For example, the link from V:NP to NP in Figure 1 has a constraint that any item passing through it must be unifiable with (case acc). Thus items representing NPs with nominative case, such as "he", will not be able to pass through the link.

By default, the attributes of an item percolate with the item as it is sent across a link. However, the links in the network may block the percolation of certain attributes.

The sentence is successfully parsed if an item is found at the IP or CP node whose surface string is the input sentence. A parse tree of the sentence can be retrieved by tracing the sources of the item.
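The item triplet and the three combination conditions can be sketched concretely. This is a minimal illustration, not the paper's C++ implementation; the dict-based attribute encoding and the example attribute names are our assumptions (in particular, attribute percolation and blocking are omitted).

```python
# Sketch of items and the three combination conditions of Section 2.
# An item is ((i, j), attribute_dict, source_set). Illustrative only.

def unifiable(a1, a2):
    """Two attribute vectors unify if no shared attribute conflicts."""
    return all(a1[k] == a2[k] for k in a1.keys() & a2.keys())

def combine(item1, item2):
    """Return the combined item, or None if any condition fails."""
    (i1, j1), a1, s1 = item1
    (i2, j2), a2, s2 = item2
    if (i2 == j1 + 1                 # 1. adjacent surface strings
            and unifiable(a1, a2)    # 2. unifiable attribute values
            and not (s1 & s2)):      # 3. disjoint sources
        return ((i1, j2), {**a1, **a2}, s1 | s2)
    return None

# A determiner item and a noun-bar item (attribute values illustrative):
det = ((0, 0), {"spec": "+"}, frozenset({"det"}))
nbar = ((1, 1), {"cat": "n", "plu": "-", "nform": "norm"}, frozenset({"n"}))
np = combine(det, nbar)
print(np[0])  # (0, 1)
```

Combining in the wrong order fails the adjacency test, so `combine(nbar, det)` returns None; in the real parser the per-node and per-link constraints would further filter the results.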
An example

The message passing process for analyzing the sentence

(1) The ice-cream was eaten

is illustrated in Figure 2.a. In order not to convolute the figure, we have only shown the items that are involved in the parse tree of the sentence and their propagation paths.

2V:NP denotes verbs taking an NP complement. Similarly, V:IP denotes verbs taking a CP complement, and N:CP represents nouns taking a CP complement.

[Figure 2 in the original has two parts: (a) the message passing process over the grammar network, and (b) the parse tree retrieved; the tree structure follows from the source components of the items listed below.]

i1  = <[0,0], ((cat d)), {}>
i2  = <[1,1], ((cat n) -plu (nform norm) +theta), {}>
i3  = <[1,1], ((cat n) -plu (nform norm) +theta), {i2}>
i4  = <[0,1], ((cat n) -plu (nform norm) -cm +theta), {i1, i3}>
i5  = <[2,2], ((cat i) -plu (per 1 3) (cform fin) +be +ca +govern (tense past)), {}>
i6  = <[2,2], ((cat i) -plu (per 1 3) (cform fin) +be +ca +govern (tense past)), {i5}>
i7  = <[3,3], ((cat v) +pas), {}>
i8  = <[3,3], ((cat v) +pas +nppg -npbarrier (np-atts NNORM)), {i7}>
i9  = <[3,3], ((cat v) +pas +nppg -npbarrier (np-atts NNORM)), {i8}>
i10 = <[3,3], ((cat v) +pas +nppg -npbarrier (np-atts NNORM)), {i9}>
i11 = <[2,3], ((cat i) +pas +nppg -npbarrier (np-atts NNORM) (per 1 3) (cform fin) +ca +govern (tense past)), {i6, i10}>
i12 = <[0,3], ((cat i) +pas (per 1 3) (cform fin) +ca +govern (tense past)), {i4, i11}>

Figure 2: Parsing the sentence "The ice-cream was eaten"

The parsing process is described as follows:

1. The item i1 is created by looking up the lexicon for the word "the" and is sent to the node Det, which sends a copy of i1 to NP.

2. The item i2 is sent to N, which propagates it to Nbar. The attribute values of i2 are percolated to i3. The source component of i3 is {i2}. Item i3 is then sent to the NP node.

3. When NP receives i3 from Nbar, i3 is combined with i1 from Det to form a new item i4.
One of the constraints at the NP node is:

if (nform norm) then -cm,

which means that normal NPs need to be case-marked. Therefore, i4 acquires -cm. Item i4 is then sent to nodes that have links to NP.

4. The word "was" is represented by item i5, which is sent to Ibar via I.

5. The word "eaten" can be either the past participle or the passive voice of "eat". The second possibility is represented by the item i7. The word belongs to the subcategory V:NP, which takes an NP as the complement. Therefore, the item i7 is sent to node V:NP.

6. Since i7 has the attribute +pas (passive voice), an np-movement is generated at V:NP. The movement is represented by the attributes nppg, npbarrier, and np-atts. The first two attributes are used to make sure that the movement is consistent with GB principles. The value of np-atts is an attribute vector, which must be unifiable with the antecedent of this np-movement. NNORM is a shorthand for (cat n) (nform norm).

7. When Ibar receives i10, which is propagated to VP from V:NP, the item is combined with i6 from I to form i11.

8. When IP receives i11, it is combined with i4 from NP to form i12. Since i11 contains an np-movement whose np-atts attribute is unifiable with i4, i4 is identified as the antecedent of the np-movement. The np-movement attributes in i12 are cleared.

The sources of i12 are i4 from NP and i11 from Ibar. Therefore, the top level of the parse tree consists of an NP and an Ibar node dominated by the IP node. The complete parse tree (Figure 2.b) is obtained by recursively tracing the origins of i4 and i11 from NP and Ibar respectively. The trace after "eaten" is indicated by the np-movement attributes of i7, even though the tree does not include a node representing the trace.

3. Modeling Linguistics Devices

GB principles are stated in terms of linguistic concepts such as barrier, government and movement, which are relationships between nodes in syntactic structures.
Since we interpret the principles with descriptions of the structures, instead of the structures themselves, we must be able to model these notions with the descriptions.

Dominance and m-command:

Dominance and m-command are relationships between nodes in syntactic structures. Since an item represents a node in a syntactic structure, relationships between the nodes can be represented by relationships between items:

dominance: An item dominates its direct and indirect sources. For example, in Figure 2, i4 dominates i1, i2, and i3.

m-command: The head daughter of an item representing a maximal category m-commands the non-head daughters of the item and their sources.

Barrier

Chomsky (1986) proposed the notion of barrier to unify the treatment of government and subjacency. In Chomsky's proposal, barrierhood is a property of maximal nodes (nodes representing maximal categories). However, not every maximal node is a barrier. The barrierhood of a node also depends on its context, in terms of L-marking and inheritance.

Instead of making barrierhood a property of the nodes in syntactic structures, we define it to be a property of links in the grammar network. That is, certain links in the grammar network are classified as barriers. In Figure 1, barrier links have a black ink-spot on them. Barrierhood is a property of these links, independent of the context. This definition of barrier is simpler than Chomsky's since it is context-free. In our experiments so far, this simpler definition has been found to be adequate.

Government

Once the notion of barrier has been defined, the government relationship between two nodes in a structure can be defined as follows:

government: A governs B if A is the minimal governor that m-commands B via a sequence of non-barrier links, where governors are N, V, P, A, and tensed I.

Items representing governors are assigned the +govern attribute. This attribute percolates across head dominance links. If an item has the +govern attribute, then the non-head sources of the item and their sources are governed by the head of the item if there are paths between them and the item satisfying the conditions:

1. there is no barrier on the path.

2. there is no other item with the +govern attribute on the path (minimality condition (Chomsky, 1986, p.10)).

Movement 3

Movement is a major source of complexity in principle-based parsing. Directly modeling Move-α would obviously generate a large number of invalid movements. Fortunately, movements must also satisfy the

c-command condition: A moved element must c-command its trace (Radford, 1988, p.564), where A c-commands B if A does not dominate B but the parent of A dominates B.
If an item has the +govern attribute, then the non-head sources of the item and their sources are governed by the head of the item if there are paths between them and the item satisfying two conditions:
1. there is no barrier on the path;
2. there is no other item with the +govern attribute on the path (the minimality condition (Chomsky, 1986, p.10)).

Movement³

Movement is a major source of complexity in principle-based parsing. Directly modeling Move-α would obviously generate a large number of invalid movements. Fortunately, movements must also satisfy the:

c-command condition: A moved element must c-command its trace (Radford, 1988, p.564), where A c-commands B if A does not dominate B but the parent of A dominates B.

The c-command condition implies that a movement consists of a sequence of moves in the reverse direction of dominance links, except the last one. Therefore, we can model a movement with a set of attribute values. If an item contains these attribute values, it means that there is a movement out of the structure represented by the item. For example, in Figure 2.b, item i10 contains the movement attributes nppg, npbarrier and np-atts. This indicates that there is an np-movement out of the VP whose root node is i10.

³We limit the discussion to np-movements and wh-movements whose initial traces are in argument positions.

The movement attributes are generated at the parent node of the initial trace. For example, V:NP is a node representing normal transitive verbs which take an NP as complement. When V:NP receives an item representing the passive sense of the word eaten, V:NP creates another item
<[i,i], ((cat v) -npbarrier +nppg (np-atts (cat n))), {}>
This item will not be combined with any item from the NP node, because the NP complement is assumed to be an np-trace. The item is then sent to nodes dominating V:NP. As the item propagates further, the attributes are carried with it, simulating the effect of movement.
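The movement-as-attributes idea can be illustrated with a small sketch. This is not the paper's C++ code; attribute names (nppg, npbarrier, np-atts) follow the text, and items are simplified to flat dictionaries.

```python
# Illustrative sketch (not the paper's implementation): movements are
# modeled as attribute values that items carry as they propagate.

def generate_np_movement(item):
    """At the parent node of an initial np-trace (e.g. V:NP receiving a
    passive verb), create a copy of the item carrying movement attributes."""
    moved = dict(item)
    moved.update({"nppg": "+", "npbarrier": "-", "np-atts": {"cat": "n"}})
    return moved

def cross_barrier(item):
    """A barrier link blocks a movement that has already crossed one
    barrier; otherwise it records the crossing (subjacency)."""
    if item.get("npbarrier") == "+":
        return None          # blocked: would cross a second barrier
    out = dict(item)
    if "np-atts" in out:
        out["npbarrier"] = "+"
    return out

def land_np_movement(subject, clause):
    """At IP, the movement lands if the subject unifies with np-atts;
    the movement attributes are then cleared."""
    target = clause.get("np-atts", {})
    if all(subject.get(k) == v for k, v in target.items()):
        return {k: v for k, v in clause.items()
                if k not in ("nppg", "npbarrier", "np-atts")}
    return None

item = generate_np_movement({"cat": "v", "pas": "+"})
ip = land_np_movement({"cat": "n", "nform": "norm"}, item)
print(ip)   # movement attributes cleared: {'cat': 'v', 'pas': '+'}
```

The attribute vector stays flat, which is what keeps each propagation step cheap.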
The np-movement lands at the IP node when the IP node combines an item from the subject NP and another item from Ibar with np-movement attributes. A precondition on the landing is that the attributes of the former can be unified with the value of np-atts of the latter. Wh-movements are dealt with by the attributes whpg, whbarrier, and wh-atts.

This treatment of movement requires that the parent node of an initial trace be able to determine the type of movement. When a movement is generated, the type of the movement depends on the ca (case assigner) attribute of the item:

  ca    movement    examples
  +     wh          active V, P, finite IP
  -     np          A, passive V, non-finite IP

For example, when the IP node receives an item from Ibar, IP attempts to combine it with another item from the subject NP. If the subject is not found, then the IP node generates a movement. If the item represents a finite clause, then it has the attributes +ca (cform fin) and the movement is of type wh. Otherwise, the movement is of type np.

4. Interpretation of Principles

We now describe how the principles of GB theory are implemented.

X-bar Theory:
• Every syntactic category is a projection of a lexical head.
• There are two levels of projection of lexical heads. Only the bar-2 projections can be complements and adjuncts.

The first condition requires that every non-lexical category have a head. This is guaranteed by a constraint on item combination: one of the sources of the two items being combined must be from the head daughter. The second condition is implemented by the structure of the grammar network. The combinations of items represent constructions of larger parse trees from smaller ones. Since the structure of the grammar network satisfies the constraint, the parse trees constructed by item combination also satisfy X-bar theory.

Case Filter: Every lexical NP must be case-marked, where A case-marks B iff A is a case assigner and A governs B (Haegeman, 1991, p.156).

The case filter is implemented as follows: 1.
Case assigners (P, active V, tensed I) have the +ca attribute. Governors that are not case assigners (N, A, passive V) have the -ca attribute.
2. Every item at the NP node is assigned an attribute value -cm, which means that the item needs to be case-marked. The -cm attribute then propagates with the item. This item is said to be the origin of the -cm attribute.
3. Barrier links do not allow any item with -cm to pass through, because, once the item goes beyond the barrier, the origin of -cm will not be governed, let alone case-marked.
4. Since each node has at most one governor, if the governor is not a case assigner, the node will not be case-marked. Therefore, a case-filter violation is detected if +govern -cm -ca co-occur in an item.
5. If +govern +ca -cm co-occur in an item, then the head daughter of the item governs and case-marks the origin of -cm. The case-filter condition on the origin of -cm is met. The -cm attribute is cleared.

For example, consider the following sentences:
(2) a. I believe John to have left.
    b. *It was believed John to have left.
    c. I would hope for John to leave.
    d. *I would hope John to leave.

The word "believe" belongs to a subcategory of verb (V:IP) that takes an IP as the complement. Since there is no barrier between V:IP and the subject of IP, words like "believe" can govern into the IP complement and case-mark its subject (known as exceptional case-marking in the literature). In (2a), the -cm attribute assigned to the item representing [NP John] percolates to the V:IP node without being blocked by any barrier. Since +govern +ca -cm co-occur in the item at the V:IP node, the case filter is satisfied (Figure 3.a). On the other hand, in (2b) the passive "believed" is not a case assigner. The case-filter violation is detected at the V:IP node (Figure 3.b). The word "hope" takes a CP complement. It does not govern the subject of CP, because there is a barrier between them. The subject of an infinitive CP can only be governed by the complementizer "for" (Figure 3.c and 3.d).

[Figure 3: Case Filter Examples. (a) Case filter satisfied at V:IP; (b) case-filter violation at V:IP; (c) case filter satisfied at Cbar, -cm cleared; (d) the -cm attribute is blocked by a barrier.]

θ-criterion: Every chain must receive one and only one θ-role, where a chain consists of an NP and the traces (if any) coindexed with it (van Riemsdijk and Williams, 1986, p.245).

We first consider chains consisting of one element. The θ-criterion is implemented as the following constraints:
1. An item at the NP node is assigned +theta if its nform attribute is norm. Otherwise, if the value of nform is there or it, then the item is assigned -theta.
2. Lexical nodes assign +theta or -theta to items depending on whether they are θ-assigners (V, A, P) or not (N, C).
3. Verbs and adjectives also have a subj-theta attribute:

  value          θ-role*   examples
  +subj-theta    yes       "take", "sleep"
  -subj-theta    no        "seem", passive verbs
  (*assigning a θ-role to the subject)

This attribute percolates with the item from V to IP. The IP node then checks the values of theta and subj-theta to make sure that the verb assigns a θ-role to the subject if it requires one, and vice versa.

Figure 4 shows an example of the θ-criterion in action when parsing:
(3) *It loves Mary

[Figure 4: θ-criterion in action.]

The subject NP, "it", has the attribute -theta, which is percolated to the IP node. The verb "love" has the attributes +theta +subj-theta.
The NP "Mary" has the attribute +theta. When the items representing "love" and "Mary" are combined, their theta attributes are unifiable, thus satisfying the θ-criterion. The +subj-theta attribute of "love" percolates with the item representing "love Mary", which is propagated to the IP node. When the items from NP and Ibar are combined at the IP node, the new item has both the -theta and +subj-theta attributes, resulting in a θ-criterion violation.

The above constraints guarantee that chains with only one element satisfy the θ-criterion. We now consider chains with more than one element. The base position of a wh-movement is case-marked and assigned a θ-role. The base position of an np-movement is assigned a θ-role, but not case-marked. To ensure that the movement chains satisfy the θ-criterion, we need only to make sure that the items representing the parents of intermediate traces and the landing sites of the movements satisfy these conditions:
• None of +ca, +theta and +subj-theta is present in the items representing the parents of intermediate traces of (wh- and np-) movements, as well as the landing sites of wh-movements; thus these positions are not case-marked and not assigned a θ-role.
• Both +ca and +subj-theta are present in the items representing parents of landing sites of np-movements.

Subjacency: Movement cannot cross more than one barrier (Haegeman, 1991, p.494).

A wh-movement carries a whbarrier attribute. The value -whbarrier means that the movement has not crossed any barrier, and +whbarrier means that the movement has already crossed one barrier. Barrier links allow items with -whbarrier to pass through, but change the value to +whbarrier. Items with +whbarrier are blocked by barrier links. When a wh-movement leaves an intermediate trace at a position, the corresponding whbarrier becomes -. The subjacency of np-movements is similarly handled with an npbarrier attribute.

Empty Category Principle (ECP): A trace must be properly governed.
In the literature, proper government is not, as the term suggests, subsumed by government. For example, in
(4) Who do you think [CP e' [IP e came]]
the tensed I in [IP e came] governs but does not properly govern the trace e. On the other hand, e' properly governs but does not govern e (Haegeman, 1991, p.456). Here, we define proper government to be a subclass of government:

Proper government: A properly governs B iff A governs B and A is a θ-role assigner (A does not have to assign a θ-role to B).

Therefore, if an item has both +govern and one of +theta or +subj-theta, then the head of the item properly governs the non-head source items and their sources that are reachable via a sequence of non-barrier links. This definition unifies the notions of government and proper government. In (4), e is properly governed by the tensed I, and e' is properly governed by "think". This definition will not be able to account for the difference between (4) and (5) (the That-Trace Effect (Haegeman, 1991, p.456)):
(5) *Who do you think [CP e' that [IP e came]]
However, the That-Trace Effect can be explained by a separate principle.

The proper government of wh-traces is handled by an attribute whpg (np-movements are similarly dealt with by an nppg attribute):

  value    meaning
  -whpg    the most recent trace has yet to be properly governed
  +whpg    the most recent trace has already been properly governed

1. If an item has the attributes -whpg, -theta, +govern, then the item is an ECP violation, because the governor of the trace is not a θ-role assigner. If an item has the attributes -whpg, +theta, +govern, then the trace is properly governed. The value of whpg is changed to +.
2. Whenever a wh-movement leaves an intermediate trace, whpg becomes -.
3. Barrier links block items with -whpg.

[Figure 5: An example of ECP violation.]

For example, the word claim takes a CP complement.
In the sentence:
(6) *Who_i did you make the claim e'_i that Reagan met e_i
there is a wh-movement out of the complement CP of claim. When the movement left an intermediate trace at CSpec, the value of whpg became -. When the item with -whpg is combined with the item representing claim, their unification has the attributes (+govern -theta -whpg), which is an ECP violation. The item is recognized as invalid and discarded.

PRO Theorem: PRO must be ungoverned (Haegeman, 1991, p.263).

When the IP node receives an item from Ibar with cform not being fin, the node makes a copy of the item, assigns +pro and -ppro to the copy, and then sends it further without combining it with any item from the (subject) NP node. The attribute +pro represents the hypothesis that the subject of the clause is PRO. The meaning of -ppro is that the subject PRO has not yet been protected (from being governed). When an item containing -ppro passes through a barrier link, -ppro becomes +ppro, which means that the PRO subject has now been protected. A PRO-theorem violation is detected if +govern and -ppro co-occur in an item.

5. Object-oriented Implementation

The parser has been implemented in C++, an object-oriented extension of C. The object-oriented paradigm makes the relationships between nodes and links in the grammar network and their software counterparts explicit and direct. Communication via message passing is reflected in the message passing metaphor used in object-oriented languages.

[Figure 6: The class hierarchy for nodes.]

Nodes and links are implemented as objects. Figure 6 shows the class hierarchy for nodes. The constraints that implement the principles are distributed over the nodes and links in the network. The implementation of the constraints is modular, because they are defined in class definitions and all the instances of the class and its subclasses inherit these constraints.
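The object-oriented organization can be sketched as follows. This is a minimal illustration in Python rather than the paper's C++, with invented class and method names; it shows constraints living in class definitions, subclasses inheriting or refining them, and links mediating message passing.

```python
# Illustrative sketch (not the paper's C++ code) of nodes and links as
# objects: constraints are methods, inherited and refined by subclasses.

class Node:
    def __init__(self, name):
        self.name, self.out_links, self.received = name, [], []

    def constraints_ok(self, item):      # default: no constraint
        return True

    def receive(self, item):             # message passing
        if self.constraints_ok(item):
            self.received.append(item)
            for link in self.out_links:
                link.send(item)

class NPNode(Node):
    # the subclass refines the inherited constraint, e.g.
    # "if (nform norm) then -cm": normal NPs need case-marking
    def constraints_ok(self, item):
        if item.get("nform") == "norm":
            item["cm"] = "-"
        return True

class Link:
    def __init__(self, target, barrier=False):
        self.target, self.barrier = target, barrier

    def send(self, item):
        # a barrier link blocks items still needing case-marking
        if self.barrier and item.get("cm") == "-":
            return
        self.target.receive(item)

ip, np = Node("IP"), NPNode("NP")
np.out_links.append(Link(ip, barrier=True))
np.receive({"cat": "n", "nform": "norm"})
print(len(ip.received))   # 0: the barrier blocked the -cm item
```

Adding a new principle then amounts to overriding `constraints_ok` in the relevant class, which is the modularity the text describes.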
The object-oriented paradigm allows the subclasses to modify the constraints.

The implementation of the parser has been tested with example sentences from Chapters 4-10 and 15-18 of (van Riemsdijk and Williams, 1986). The chapters left out are mostly about logical form and Binding Theory, which have not yet been implemented in the parser. The average parsing time for sentences with 5 to 20 words is below half a second on a SPARCstation ELC.

6. Discussion and Related Work

Complexity of unification

The attribute vectors used here are similar to those in unification-based grammars/parsers. An important difference, however, is that the attribute vectors used here satisfy the unit closure condition (Barton, Jr. et al., 1987, p.257). That is, non-atomic attribute values are vectors that consist only of atomic attribute values. For example:
(7) a. ((cat v) +pas +whpg (wh-atts (cat p)))
    b. * ((cat v) +pas +whpg (wh-atts (cat v) (np-atts (cat n))))
(7a) satisfies the unit closure condition, whereas (7b) does not, because wh-atts in (7b) contains a non-atomic attribute np-atts. Barton, Jr. et al. (1987) argued that the unification of recursive attribute structures is a major source of computational complexity. On the other hand, let a be the number of atomic attributes and n be the number of non-atomic attributes. The time it takes to unify two attribute vectors is a + na if they satisfy the unit closure condition. Since both n and a can be regarded as constants, the unification takes only a constant amount of time. In our current implementation, n = 2, a = 59.

Attribute grammar interpretation

Correa (1991) proposed an interpretation of GB principles based on attribute grammars. An attribute grammar consists of a phrase structure grammar and a set of attribution rules to compute the attribute values of the non-terminal symbols. The attributes are evaluated after a parse tree has been constructed by the phrase structure grammar.
The original objective of attribute grammars is to derive the semantics of programs from parse trees. Since programming languages are designed to be unambiguous, the attribution rules need to be evaluated on only one parse tree. In the attribute grammar interpretation of GB theory, the principles are encoded in the attribution rules, and the phrase structure grammar is replaced by X-bar theory and Move-α. Therefore, a large number of structures will be constructed and evaluated by the attribution rules, thus leading to a serious overgeneration problem. For this reason, Correa pointed out that the attribute grammar interpretation should be used as a specification of an implementation, rather than an implementation itself.

Actor-based GB parsing

Abney and Cole (1986) presented a GB parser that uses actors (Agha, 1986). Actors are similar to objects in having internal states and responding to messages. In our model, each syntactic category is represented by an object. In (Abney and Cole, 1986), each instance of a category is represented by an actor. The actors build structures by creating other actors and their relationships according to θ-assignment, predication, and functional selection. Other principles, such as subjacency and the case filter, are then used to filter out illicit structures. This generate-and-test nature of the algorithm makes it susceptible to the overgeneration problem.

7. Conclusion

We have presented an efficient message passing algorithm for principle-based parsing, where
• overgeneration is avoided by interpreting principles in terms of descriptions of structures;
• constraint checking involves only a constant-sized attribute vector;
• principles are checked in different orders at different places, so that stricter principles are applied earlier.

We have also proposed simplifications of GB theory with regard to barrier and proper government, which have been found to be adequate in our experiments so far.

References

Abney, S.
and Cole, J. (1986). A government-binding parser. In Proceedings of NELS.
Agha, G. A. (1986). Actors: A Model of Concurrent Computation in Distributed Systems. MIT Press, Cambridge, MA.
Barton, Jr., G. E., Berwick, R. C., and Ristad, E. S. (1987). Computational Complexity and Natural Language. The MIT Press, Cambridge, Massachusetts.
Berwick, R. C. (1991). Principles of principle-based parsing. In Berwick, R. C., Abney, S. P., and Tenny, C., editors, Principle-Based Parsing: Computation and Psycholinguistics, pages 1-38. Kluwer Academic Publishers.
Chomsky, N. (1981). Lectures on Government and Binding. Foris Publications, Cinnaminson, USA.
Chomsky, N. (1986). Barriers. Linguistic Inquiry Monographs. The MIT Press, Cambridge, MA.
Correa, N. (1991). Empty categories, chains, and parsing. In Berwick, R. C., Abney, S. P., and Tenny, C., editors, Principle-Based Parsing: Computation and Psycholinguistics, pages 83-121. Kluwer Academic Publishers.
Dorr, B. J. (1991). Principle-based parsing for machine translation. In Berwick, R. C., Abney, S. P., and Tenny, C., editors, Principle-Based Parsing: Computation and Psycholinguistics, pages 153-184. Kluwer Academic Publishers.
Fong, S. (1991). The computational implementation of principle-based parsers. In Berwick, R. C., Abney, S. P., and Tenny, C., editors, Principle-Based Parsing: Computation and Psycholinguistics, pages 65-82. Kluwer Academic Publishers.
Haegeman, L. (1991). Introduction to Government and Binding Theory. Basil Blackwell Ltd.
Johnson, M. (1991). Deductive parsing: The use of knowledge of language. In Berwick, R. C., Abney, S. P., and Tenny, C., editors, Principle-Based Parsing: Computation and Psycholinguistics, pages 39-64. Kluwer Academic Publishers.
Lin, D. and Goebel, R. (1993). Context-free grammar parsing by message passing. In Proceedings of PACLING-93, Vancouver, BC.
Radford, A. (1988). Transformational Grammar. Cambridge Textbooks in Linguistics.
Cambridge University Press, Cambridge, England.
van Riemsdijk, H. and Williams, E. (1986). Introduction to the Theory of Grammar. Current Studies in Linguistics. The MIT Press, Cambridge, Massachusetts.
Lexicalized Context-Free Grammars

Yves Schabes and Richard C. Waters
Mitsubishi Electric Research Laboratories
201 Broadway, Cambridge, MA 02139
e-mail: [email protected] and [email protected]

Lexicalized context-free grammar (LCFG) is an attractive compromise between the parsing efficiency of context-free grammar (CFG) and the elegance and lexical sensitivity of lexicalized tree-adjoining grammar (LTAG). LCFG is a restricted form of LTAG that can only generate context-free languages and can be parsed in cubic time. However, LCFG supports much of the elegance of LTAG's analysis of English and shares with LTAG the ability to lexicalize CFGs without changing the trees generated.

Motivation

Context-free grammar (CFG) has been a well-accepted framework for computational linguistics for a long time. While it has drawbacks, including the inability to express some linguistic constructions, it has the virtue of being computationally efficient, O(n³)-time in the worst case.

Recently there has been a gain in interest in the so-called 'mildly' context-sensitive formalisms (Vijay-Shanker, 1987; Weir, 1988; Joshi, Vijay-Shanker, and Weir, 1991; Vijay-Shanker and Weir, 1993a) that generate only a small superset of context-free languages. One such formalism is lexicalized tree-adjoining grammar (LTAG) (Schabes, Abeillé, and Joshi, 1988; Abeillé et al., 1990; Joshi and Schabes, 1992), which provides a number of attractive properties at the cost of decreased efficiency, O(n⁶)-time in the worst case (Vijay-Shanker, 1987; Schabes, 1991; Lang, 1990; Vijay-Shanker and Weir, 1993b).

An LTAG lexicon consists of a set of trees, each of which contains one or more lexical items. These elementary trees can be viewed as the elementary clauses (including their transformational variants) in which the lexical items participate. The trees are combined by substitution and adjunction.
LTAG supports context-sensitive features that can capture some language constructs not captured by CFG. However, the greatest virtue of LTAG is that it is lexicalized and supports extended domains of locality. The lexical nature of LTAG is of linguistic interest, since it is believed that the descriptions of many linguistic phenomena are dependent upon lexical data. The extended domains allow for the localization of most syntactic and semantic dependencies (e.g., filler-gap and predicate-argument relationships). A further interesting aspect of LTAG is its ability to lexicalize CFGs. One can convert a CFG into an LTAG that preserves the original trees (Joshi and Schabes, 1992).

Lexicalized context-free grammar (LCFG) is an attractive compromise between LTAG and CFG that combines many of the virtues of LTAG with the efficiency of CFG. LCFG is a restricted form of LTAG that places further limits on the elementary trees that are possible and on the way adjunction can be performed. These restrictions limit LCFG to producing only context-free languages and allow LCFG to be parsed in O(n³)-time in the worst case. However, LCFG retains most of the key features of LTAG enumerated above.

In particular, most of the current LTAG grammar for English (Abeillé et al., 1990) follows the restrictions of LCFG. This is of significant practical interest, because it means that the processing of these analyses does not require more computational resources than CFGs. In addition, any CFG can be transformed into an equivalent LCFG that generates the same trees (and therefore the same strings). This result breaks new ground, because heretofore every method of lexicalizing CFGs required context-sensitive operations (Joshi and Schabes, 1992).

The following sections briefly define LCFG, discuss its relationship to the current LTAG grammar for English, prove that LCFG can be used to lexicalize CFGs, and present a simple cubic-time parser for LCFG.
These topics are discussed in greater detail in Schabes and Waters (1993).

Lexicalized Context-Free Grammars

Like an LTAG, an LCFG consists of two sets of trees: initial trees, which are combined by substitution, and auxiliary trees, which are combined by adjunction. An LCFG is lexicalized in the sense that every initial and auxiliary tree is required to contain at least one terminal symbol on its frontier. More precisely, an LCFG is a five-tuple (Σ, NT, I, A, S), where Σ is a set of terminal symbols, NT is a set of non-terminal symbols, I and A are sets of trees labeled by terminal and non-terminal symbols, and S is a distinguished non-terminal start symbol.

Each initial tree in the set I satisfies the following requirements.
(i) Interior nodes are labeled by non-terminal symbols.
(ii) The nodes on the frontier of the tree consist of zero or more non-terminal symbols and one or more terminal symbols.
(iii) The non-terminal symbols on the frontier are marked for substitution. By convention, this is annotated in diagrams using a down arrow (↓).

Each auxiliary tree in the set A satisfies the following requirements.
(i) Interior nodes are labeled by non-terminal symbols.
(ii) The nodes on the frontier consist of zero or more non-terminal symbols and one or more terminal symbols.
(iii) All but one of the non-terminal symbols on the frontier are marked for substitution.
(iv) The remaining non-terminal on the frontier of the tree is called the foot. The label on the foot must be identical to the label on the root node of the tree. By convention, the foot is indicated in diagrams using an asterisk (*).
(v) The foot must be in either the leftmost or the rightmost position on the frontier.

Figure 1 shows seven elementary trees that might appear in an LCFG for English. The trees containing 'boy', 'saw', and 'left' are initial trees. The remainder are auxiliary trees. Auxiliary trees whose feet are leftmost are called left recursive.
Similarly, auxiliary trees whose feet are rightmost are called right recursive auxiliary trees. The path from the root of an auxiliary tree to the foot is called the spine.

[Figure 1: Sample trees.]

In LCFG, trees can be combined with substitution and adjunction. As illustrated in Figure 2, substitution replaces a node marked for substitution with a copy of an initial tree. Adjunction inserts a copy of an auxiliary tree into another tree in place of an interior node that has the same label as the foot of the auxiliary tree. The subtree that was previously connected to the interior node is reconnected to the foot of the copy of the auxiliary tree. If the auxiliary tree is left recursive, this is referred to as left recursive adjunction (see Figure 3). If the auxiliary tree is right recursive, this is referred to as right recursive adjunction (see Figure 4).

[Figure 2: Substitution.]
[Figure 3: Left recursive adjunction.]
[Figure 4: Right recursive adjunction.]

Crucially, adjunction is constrained by requiring that a left recursive auxiliary tree cannot be adjoined on any node that is on the spine of a right recursive auxiliary tree, and a right recursive auxiliary tree cannot be adjoined on the spine of a left recursive auxiliary tree.

An LCFG derivation must start with an initial tree rooted in S. After that, this tree can be repeatedly extended using substitution and adjunction. A derivation is complete when every frontier node is labeled with a terminal symbol.

The difference between LCFG and LTAG is that LTAG allows the foot of an auxiliary tree to appear anywhere on the frontier and places no limitations on the interaction of auxiliary trees.
In this unlimited situation, adjunction encodes string wrapping and is therefore more powerful than concatenation (see Figure 5). However, the restrictions imposed by LCFG guarantee that no context-sensitive operations can be achieved. They limit the languages that can be generated by LCFGs to those that can be generated by CFGs.

[Figure 5: Adjunction in LTAG.]

Coverage of LCFG and LTAG

The power of LCFG is significantly less than that of LTAG. Surprisingly, it turns out that there are only two situations where the current LTAG grammar for English (Abeillé et al., 1990) fails to satisfy the restrictions imposed by LCFG.

The first situation concerns certain verbs that take more than one sentential complement. An example of such a verb is deduce, which is associated with an auxiliary tree for "deduce ... from ..." whose foot (S1*) is its first sentential complement. Since this tree contains a foot node in the center of its frontier, it is not part of an LCFG. Having the foot on the first sentential complement is convenient, because it allows one to use the standard LTAG wh-analyses, which depend on the existence of an initial tree where the filler and gap are local. This accounts nicely for the pair of sentences below. However, other analyses of wh questions may not require the use of the auxiliary tree above.
(1) John deduced that Mary watered the grass from seeing the hose.
(2) What did John deduce that Mary watered from seeing the hose.

The second situation concerns the way the current LTAG explains the ambiguous attachments of adverbial modifiers. For example, in the sentence:
(3) John said Bill left yesterday.
the attachment of yesterday is ambiguous. The two different LTAG derivations indicated in Figure 6 represent this conveniently. Unfortunately, in LCFG the high attachment of yesterday is forbidden, since a right auxiliary tree (corresponding to yesterday) would be adjoined on the spine of a left auxiliary tree (corresponding to John said).
However, one could avoid this problem by designing a mechanism to recover the high attachment reading from the low one.

[Figure 6: Two LTAG derivations for John said Bill left yesterday.]

Besides the two cases presented above, the current LTAG for English uses only left and right recursive auxiliary trees and does not allow any interaction along the spine of these two kinds of trees. This agrees with the intuition that most English analyses do not require a context-sensitive operation.
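The adjunction constraint that rules out the high attachment can be stated as a tiny predicate. This is a simplified sketch, not code from the paper: an auxiliary tree is summarized only by the side its foot is on, and an adjunction site only by the kind of spine (if any) it lies on.

```python
# Illustrative sketch of the LCFG adjunction constraint: left and right
# recursive auxiliary trees may not adjoin on each other's spines.

def can_adjoin(aux_side, site_spine):
    """aux_side: 'left' or 'right' (which side the aux tree's foot is on).
    site_spine: None, 'left', or 'right' -- the kind of auxiliary-tree
    spine the target node lies on, if any."""
    if site_spine is None:
        return True                     # ordinary node: no restriction
    return aux_side == site_spine       # no mixing of left/right spines

# 'yesterday' (right recursive) on the spine of 'John said' (left recursive):
print(can_adjoin("right", "left"))      # False: the high attachment is out
```

This is exactly the check that blocks the high attachment of *yesterday* while leaving same-direction stacking of auxiliary trees available.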
In contrast, the well- known Creibach Normal Form (CNF) for CFCs is lexicalized, because every production rule is re- quired to be of the form A --+ ac~ (where a is a terminal symbol, A a non-terminal symbol and a a possibly empty string of non-terminal symbols) and therefore locally introduces a lexical item a. It can be shown that for any CFG (.7 (that does not derive the empty string), there is a CNF gram- mar (.7 ~ that derives the same language. However, it may be impossible for the set of trees produced by (7 ~ to be the same as the set of trees produced by G. Therefore, CNF achieves a kind of lexicaliza- tion of CFGs. However, it is only a weak lexical- ization, because the set of trees is not necessarily preserved. As discussed in the motivation section, strong lexicalization that preserves tree sets is pos- sible using LTAG. However, this is achieved at the cost of significant additional parsing complexity. Heretofore, several attempts have been made to lexicalize CFC with formalisms weaker than LTAG, but without success. In particular, it is not sufficient to merely extend substitution so that it applies to trees. Neither is it sutficient to rely solely on the kind restricted adjunction used by Theorem If G = (~,NT, P,S) is a finitely ambiguous CFG which does not generate the empty .string (¢), then there is an LCFG (7 ~ = (~, NT, I, A, S) generating the same language and tree set as (7. Furthermore (7' can be chosen .so that it utilizes only lefl-recursive auxiliary trees. As usual in the above, a CFG (.7 is a four- tuple, (E, NT, P, S), where N is a set of terminal symbols, NT is a set of non-terminal symbols, P is a set of production rules that rewrite non-terminal symbols to strings of terminal and non-terminal symbols, and S is a distinguished non-terminal symbol that is the start symbol of any derivation. To prove the theorem we first prove a some- what weaker theorem and then extend the proof to the flfll theorem. 
In particular, we assume for the moment that the set of rules for G does not contain any empty rules of the form A → ε.

Step 1 We begin the construction of G′ by constructing a directed graph LCG that we call the left corner derivation graph. Paths in LCG correspond to leftmost paths from root to frontier in (partial) derivation trees rooted at non-terminal symbols in G. LCG contains a node for every symbol in Σ ∪ NT and an arc for every rule in P as follows.

For each terminal and non-terminal symbol X in G create a node in LCG labeled with X. For each rule X → Yα in G create a directed arc labeled with X → Yα from the node labeled with X to the node labeled Y.

As an example, consider the example CFG in Figure 7 and the corresponding LCG shown in Figure 8.

The significance of LCG is that there is a one-to-one correspondence between paths in LCG ending on a non-terminal and left corner derivations in G. A left corner derivation in a CFG is a partial derivation starting from any non-terminal where every expanded node (other than the root) is the leftmost child of its parent and the left corner is a non-terminal. A left corner derivation is uniquely identified by the list of rules applied. Since G does not have any empty rules, every rule in G is represented in LCG. Therefore, every path in LCG ending on a terminal corresponds to a left corner derivation in G and vice versa.

S → A A    S → B A    A → B B    B → A S    B → b

Figure 7: An example grammar.

Figure 8: The LCG created by Step 1.

Step 2 The set of initial trees I for G′ is constructed with reference to LCG. In particular, an initial tree is created corresponding to each non-cyclic path in LCG that starts at a non-terminal symbol X and ends on a terminal symbol y. (A non-cyclic path is a path that does not touch any node twice.)

For each non-cyclic path in LCG from X to y, construct an initial tree T as follows.
Start with a root labeled X. Apply the rules in the path one after another, always expanding the left corner node of T. While doing this, leave all the non-left corner non-terminal symbols in T unexpanded, and label them as substitution nodes.

Given the previous example grammar, this step produces the initial trees shown in Figure 9. Each initial tree created is lexicalized, because each one has a terminal symbol as the left corner element of its frontier. There are a finite number of initial trees, because the number of non-cyclic paths in LCG must be finite. Each initial tree is finite in size, because each non-cyclic path in LCG is finite in length. Most importantly, the set of initial trees is the set of non-recursive left corner derivations in G.

Figure 9: Initial trees created by Step 2.

Step 3 This step constructs a set of left-recursive auxiliary trees corresponding to the cyclic path segments in LCG that were ignored in the previous step. In particular, an auxiliary tree is created corresponding to each minimal cyclic path in LCG that starts at a non-terminal symbol.

For each minimal cycle in LCG from X to itself, construct an auxiliary tree T by starting with a root labeled X and repeatedly expanding left corner frontier nodes using the rules in the path as in Step 2. When all the rules in the path have been used, the left corner frontier node in T will be labeled X. Mark this as the foot node of T. While doing the above, leave all the other non-terminal symbols in T unexpanded, and label them all substitution nodes.

The LCG in Figure 8 has two minimal cyclic paths (one from A to A via B and one from B to B via A). This leads to the two auxiliary trees shown in Figure 10, one for A and one for B. The auxiliary trees generated in this step are not necessarily lexicalized. There are a finite number of auxiliary trees, since the number of minimal cyclic paths in G must be finite.
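The graph construction in Steps 1–3 can be sketched directly. The following fragment builds the left corner derivation graph for the example grammar of Figure 7 (S → A A, S → B A, A → B B, B → A S, B → b) and enumerates its non-cyclic paths to terminals and its minimal cycles; the data-structure and function names are invented for illustration:

```python
# Left corner derivation graph (LCG) for the example grammar, plus
# enumeration of non-cyclic paths (Step 2) and minimal cycles (Step 3).
# One arc per rule: from the left-hand side to the leftmost right-hand symbol.

RULES = [("S", "AA"), ("S", "BA"), ("A", "BB"), ("B", "AS"), ("B", "b")]
NONTERMINALS = {"S", "A", "B"}

arcs = {}
for lhs, rhs in RULES:
    arcs.setdefault(lhs, []).append(rhs[0])

def simple_paths_to_terminals(start):
    """Non-cyclic paths from a non-terminal to a terminal (initial trees)."""
    found = []
    def walk(node, path):
        if node not in NONTERMINALS:          # reached a terminal symbol
            found.append(path)
            return
        for nxt in arcs.get(node, []):
            if nxt not in path:               # non-cyclic: no node twice
                walk(nxt, path + [nxt])
    walk(start, [start])
    return found

def minimal_cycles(start):
    """Minimal cyclic paths from a non-terminal back to itself (auxiliary trees)."""
    found = []
    def walk(node, path):
        for nxt in arcs.get(node, []):
            if nxt == start:
                found.append(path + [nxt])
            elif nxt in NONTERMINALS and nxt not in path:
                walk(nxt, path + [nxt])
    walk(start, [start])
    return found

initial = [p for x in NONTERMINALS for p in simple_paths_to_terminals(x)]
cycles = [c for x in NONTERMINALS for c in minimal_cycles(x)]
```

For this grammar the enumeration yields four non-cyclic paths ending on the terminal b, and exactly the two minimal cycles noted in the text (A to A via B, and B to B via A).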
Each auxiliary tree is finite in size, because each minimal cycle in LCG is finite in length.

The set of trees that can be created by combining the initial trees from Step 2 with the auxiliary trees from Step 3 by adjoining auxiliary trees along the left edge is the set of every left corner derivation in G. To see this, consider that every path in LCG can be represented as an initial non-cyclic path with zero or more minimal cycles inserted into it.

The set of trees that can be created by combining the initial trees from Step 2 with the auxiliary trees from Step 3 using both substitution and adjunction is the set of every derivation in G. To see this, consider that every derivation in G can be decomposed into a set of left corner derivations in G that are combined with substitution. In particular, whenever a non-terminal node is not the leftmost child of its parent, it is the head of a separate left corner derivation.

Figure 10: Auxiliary trees created by Step 3.

Step 4 This step lexicalizes the set of auxiliary trees built in Step 3, without altering the trees that can be derived.

For each auxiliary tree T built in Step 3, consider the frontier node A just to the right of the foot. If this node is a terminal do nothing. Otherwise, remove T from the set of auxiliary trees and replace it with every tree that can be constructed by substituting one of the initial trees created in Step 2 for the node A in T.

In the case of our continuing example, Step 4 results in the set of auxiliary trees in Figure 11. Note that since G is finitely ambiguous, there must be a frontier node to the right of the foot of an auxiliary tree T. If not, then T would correspond to a derivation X ⇒+ X in G and G would be infinitely ambiguous. After Step 4, every auxiliary tree is lexicalized, since every tree that does not have a terminal to the right of its foot is replaced by one or more trees that do.
Since there were only a finite number of finite initial and auxiliary trees to start with, there are still only a finite number of finite auxiliary trees. The change in the auxiliary trees caused by Step 4 does not alter the set of trees that can be produced in any way, because the only change that was made was to make substitutions that could be made anyway, and when a substitutable node was eliminated, this was only done after every possible substitution at that node was performed.

Note that the initial trees are left anchored and the auxiliary trees are almost left anchored in the sense that the leftmost frontier node other than the foot is a terminal. This facilitates efficient left to right parsing.

Figure 11: Auxiliary trees created by Step 4.

The procedure above creates a lexicalized grammar that generates exactly the same trees as G and therefore the same strings. The only remaining issue is the additional assumption that G does not contain any empty rules.

If G contains an empty rule A → ε, one first uses standard methods to transform G into an equivalent grammar H that does not have any such rule. When doing this, create a table showing how each new rule added is related to the empty rules removed. Lexicalize H producing H′ using the procedure above. Derivations in H′ result in elements of the tree set of H. By means of the table recording the relationship between G and H, these trees can be converted to derivations in G. □

Additional issues

There are several places in the algorithm where greater freedom of choice is possible. For instance, when lexicalizing the auxiliary trees created in Step 3, you need not do anything if there is any frontier node that is a terminal, and you can choose to expand any frontier node you want. For instance, you might want to choose the node that corresponds to the smallest number of initial trees.
Alternatively, everywhere in the procedure, the word 'left' can be replaced by 'right' and vice versa. This results in the creation of a set of right anchored initial trees and right recursive auxiliary trees. This can be of interest when the right corner derivation graph has fewer cycles than the left corner one.

The number of trees in G′ is related to the number of non-cyclic and minimal cycle paths in LCG. In the worst case, this number rises very fast as a function of the number of arcs in LCG (i.e., in the number of rules in G). (A fully connected graph of n² arcs between n nodes has n! acyclic paths and n! minimal cycles.) However, in the typical case, this kind of an explosion of trees is unlikely.

Just as there can be many ways for a CFG to derive a given string, there can be many ways for an LCFG to derive a given tree. For maximal efficiency, it would be desirable for the grammar G′ produced by the procedure above to have no ambiguity in the way trees are derived. Unfortunately, the longer the minimal cycles in LCG, the greater the tree-generating ambiguity the procedure will introduce in G′. However, by modifying the procedure to make use of constraints on what auxiliary trees are allowed to adjoin on what nodes in which initial trees, it should be possible to reduce or even eliminate this ambiguity.

All these issues are discussed at greater length in Schabes and Waters (1993).

Parsing LCFG

Since LCFG is a restricted case of tree-adjoining grammar (TAG), standard O(n⁶)-time TAG parsers (Vijay-Shanker, 1987; Schabes, 1991; Lang, 1990) can be used for parsing LCFG. Further, they can be straightforwardly modified to require at most O(n⁴)-time when applied to LCFG. However, this still does not take full advantage of the context-freeness of LCFG.

This section describes a simple bottom-up recognizer for LCFG that is in the style of the CKY parser for CFG.
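For reference, the classic CKY recognizer for a CFG in Chomsky Normal Form, whose O(n³) chart-filling style the LCFG recognizer follows, can be sketched as below. The toy CNF grammar for {aⁿbⁿ | n ≥ 1} is chosen here only for illustration:

```python
# CKY-style recognition for a CFG in Chomsky Normal Form, the O(n^3)
# baseline that the LCFG recognizer extends.  Toy grammar:
#   S -> A S1 | A B,  S1 -> S B,  A -> a,  B -> b   (generates a^n b^n)

BINARY = {("A", "S1"): "S", ("A", "B"): "S", ("S", "B"): "S1"}
LEXICAL = {"a": "A", "b": "B"}

def cky_recognize(w):
    n = len(w)
    if n == 0:
        return False
    # chart[i][j] holds the non-terminals deriving w[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, ch in enumerate(w):
        if ch in LEXICAL:
            chart[i][i + 1].add(LEXICAL[ch])
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):              # all split points
                for x in chart[i][k]:
                    for y in chart[k][j]:
                        if (x, y) in BINARY:
                            chart[i][j].add(BINARY[(x, y)])
    return "S" in chart[0][n]
```

The LCFG recognizer keeps this cubic loop structure but stores (node, t/b) pairs instead of non-terminals and adds the two recursive-concatenation cases.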
The virtue of this algorithm is that it shows in a simple manner how the O(n³)-time worst case complexity can be achieved for LCFG. Schabes and Waters (1993) describe a more practical and more elaborate (Earley-style) recognizer for LCFG, which achieves the same bounds.

Suppose that G = (Σ, NT, I, A, S) is an LCFG and that a₁···aₙ is an input string. We can assume without loss of generality¹ that every node in I ∪ A has at most two children.

Let η be a node in an elementary tree (identified by the name of the tree and the position of the node in the tree). The central concepts of the algorithm are the concepts of spanning and covering. η spans a string a_{i+1}···a_j if and only if there is some tree derived by G for which it is the case that the fringe of the subtree rooted at η is a_{i+1}···a_j. In particular, a terminal node spans a_j if and only if the label on the node is a_j. A non-terminal node spans a_{i+1}···a_j if and only if a_{i+1}···a_j is the concatenation in left to right order of strings spanned by the children of the node.

• If η does not subsume the foot node of an auxiliary tree, then: η covers the string a_{i+1}···a_j if and only if it spans a_{i+1}···a_j.
• If η is on the spine of a right recursive auxiliary tree T, then: η covers a_{i+1}···a_j if and only if η spans some string that is the concatenation of a_{i+1}···a_j and a string spanned by the foot of T. (This situation is illustrated by the right drawing in Figure 12, in which η is labeled with B.)
• If η is on the spine of a left recursive auxiliary tree T, then: η covers a_{i+1}···a_j if and only if η spans some string that is the concatenation of a string spanned by the foot of T and a_{i+1}···a_j. (This situation is illustrated by the left drawing in Figure 12, in which η is labeled with B.)

¹It can be easily shown that by adding new nodes any LCFG can be transformed into an equivalent LCFG satisfying this condition.
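The binarization assumed in the footnote is routine. A sketch for plain context-free productions, with invented fresh non-terminal names X_1, X_2, ...:

```python
# Binarize productions so that no right-hand side has more than two
# symbols, as the recognizer assumes (footnote 1).  Rules are given as
# (lhs, [rhs symbols]); the fresh names X_1, X_2, ... are hypothetical.

def binarize(rules):
    out, fresh = [], 0
    for lhs, rhs in rules:
        while len(rhs) > 2:
            fresh += 1
            new = "X_%d" % fresh
            out.append((lhs, rhs[:1] + [new]))   # peel off leftmost symbol
            lhs, rhs = new, rhs[1:]
        out.append((lhs, rhs))
    return out
```

For example, a rule S → A B C D becomes S → A X_1, X_1 → B X_2, X_2 → C D; the same idea applied to elementary trees adds one intermediate node per peeled symbol.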
Figure 12: Coverage of nodes on the spine.

The algorithm stores pairs of the form (η, pos) in an n by n array C. In a pair, pos is either t (for top) or b (for bottom). For every node η in every elementary tree in G, the algorithm guarantees the following.

• (η, b) ∈ C[i, j] if and only if η covers a_{i+1}···a_j.
• (η, t) ∈ C[i, j] if and only if (η, b) ∈ C[i, j] or a_{i+1}···a_j is the concatenation (in either order) of a string covered by η and a string covered by an auxiliary tree that can be adjoined on η.

The algorithm fills the upper diagonal portion of the array C[i, j] (0 ≤ i ≤ j ≤ n) for increasing values of j − i. The process starts by placing each foot node in every cell C[i, i] and each terminal node η in every cell C[i, i+1] where η is labeled a_{i+1}.

The algorithm then considers all possible ways of combining covers into longer covers. In particular, it fills the cells C[i, i+k] for increasing values of k by combining elements from the cells C[i, j] and C[j, i+k] for all j such that i < j < i + k. There are three situations where combination is possible: sibling concatenation, left recursive concatenation, and right recursive concatenation.

Sibling concatenation is illustrated in Figure 13. Suppose that there is a node η₀ (labeled B) with two children η₁ (labeled A) and η₂ (labeled A′). If (η₁, t) ∈ C[i, j] and (η₂, t) ∈ C[j, i+k] then (η₀, b) ∈ C[i, i+k].

Left recursive concatenation is illustrated in Figure 14. Here, the cover of a node is combined with the cover of a left auxiliary tree that can be adjoined at the node. Right recursive concatenation, which is shown in Figure 15, is analogous.

For simplicity, the recognizer is written in two parts: a main procedure and a subprocedure Add(node, pos, i, j), which adds the pair (node, pos) into C[i, j].

Figure 13: Sibling concatenation.
Procedure recognizer
begin
  ;; foot node initialization (C[i, i])
  for i = 0 to n
    for all foot nodes η in A
      call Add(η, b, i, i)
  ;; terminal node initialization (C[i, i+1])
  for i = 0 to n − 1
    for all nodes η in A ∪ I labeled by a_{i+1}
      call Add(η, t, i, i+1)
  ;; induction (C[i, i+k] = C[i, j] + C[j, i+k])
  for k = 2 to n
    for i = 0 to n − k
      for j = i+1 to i+k−1
        ;; sibling concatenation
        if (η₁, t) ∈ C[i, j] and (η₂, t) ∈ C[j, i+k]
            and η₁ is the left sibling of η₂ with common parent η₀
          then Add(η₀, b, i, i+k)
        ;; left recursive concatenation
        if (η, b) ∈ C[i, j] and (ρ, t) ∈ C[j, i+k]
            and ρ is the root node of a left recursive auxiliary tree
            that can adjoin on η
          then Add(η, t, i, i+k)
        ;; right recursive concatenation
        if (η, b) ∈ C[j, i+k] and (ρ, t) ∈ C[i, j]
            and ρ is the root node of a right recursive auxiliary tree
            that can adjoin on η
          then Add(η, t, i, i+k)
  if (η, t) ∈ C[0, n] and η is labeled by S
      and η is the root node of an initial tree in I
    then return acceptance
    otherwise return rejection
end

Note that the sole purpose of the codes t and b is to insure that only one auxiliary tree can adjoin on a node. The procedure could easily be modified to account for other constraints on the way derivation should proceed, such as those suggested for LTAGs (Schabes and Shieber, 1992).

The procedure Add puts a pair into the array C. If the pair is already present, nothing is done. However, if it is new, it is added to C and other pairs may be added as well. These correspond to cases where the coverage is not increased: when a node is the only child of its parent, when the node is recognized without adjunction, and when substitution occurs.

Figure 14: Left recursive concatenation.

Figure 15: Right recursive concatenation.
Procedure Add(η, pos, i, j)
begin
  Put (η, pos) in C[i, j]
  if pos = t and η is the only child of a parent μ
    call Add(μ, b, i, j)
  if pos = t and η is the root node of an initial tree
    for each substitution node ρ at which η can substitute
      call Add(ρ, t, i, j)
  ;; no adjunction
  if pos = b
    if the node η does not have an OA constraint
      call Add(η, t, i, j)
end

The O(n³) complexity of the recognizer follows from the three nested induction loops on k, i and j. (Although the procedure Add is defined recursively, the number of pairs added to C is bounded by a constant that is independent of sentence length.)

By recording how each pair was introduced in each cell of the array C, one can easily extend the recognizer to produce all derivations of the input.

Conclusion

LCFG combines much of the power of LTAG with the computational efficiency of CFG. It supports most of the same linguistic analyses supported by LTAG. In particular, most of the current LTAG for English falls into LCFG. In addition, LCFG can lexicalize CFG without altering the trees produced. Finally, LCFG can be parsed in O(n³)-time.

There are many directions in which the work on LCFG described here could be extended. In particular, one could consider stochastic extensions, LR parsing, and non-deterministic LR parsing.

Acknowledgments

We thank John Coleman who, by questioning whether the context-sensitivity of stochastic LTAG was actually being used for English, triggered this work. We thank Aravind Joshi, Fernando Pereira, Stuart Shieber and B. Srinivas for valuable discussions.

REFERENCES

Abeillé, Anne, Kathleen M. Bishop, Sharon Cote, and Yves Schabes. 1990. A lexicalized tree adjoining grammar for English. Technical Report MS-CIS-90-24, Department of Computer and Information Science, University of Pennsylvania.

Joshi, Aravind K. and Yves Schabes. 1992. Tree-adjoining grammars and lexicalized grammars.
In Maurice Nivat and Andreas Podelski, editors, Tree Automata and Languages. Elsevier Science.

Joshi, Aravind K., K. Vijay-Shanker, and David Weir. 1991. The convergence of mildly context-sensitive grammatical formalisms. In Peter Sells, Stuart Shieber, and Tom Wasow, editors, Foundational Issues in Natural Language Processing. MIT Press, Cambridge MA.

Lang, Bernard. 1990. The systematic construction of Earley parsers: Application to the production of O(n⁶) Earley parsers for Tree Adjoining Grammars. In Proceedings of the 1st International Workshop on Tree Adjoining Grammars, Dagstuhl Castle, FRG, August.

Schabes, Yves, Anne Abeillé, and Aravind K. Joshi. 1988. Parsing strategies with 'lexicalized' grammars: Application to tree adjoining grammars. In Proceedings of the 12th International Conference on Computational Linguistics (COLING'88), Budapest, Hungary, August.

Schabes, Yves and Aravind K. Joshi. 1990. Parsing with lexicalized tree adjoining grammar. In Masaru Tomita, editor, Current Issues in Parsing Technologies. Kluwer Academic Publishers.

Schabes, Yves and Stuart Shieber. 1992. An alternative conception of tree-adjoining derivation. In 20th Meeting of the Association for Computational Linguistics (ACL'92).

Schabes, Yves and Richard C. Waters. 1993. Lexicalized context-free grammar: A cubic-time parsable formalism that strongly lexicalizes context-free grammar. Technical Report 93-04, Mitsubishi Electric Research Laboratories, 201 Broadway, Cambridge MA 02139.

Schabes, Yves. 1991. The valid prefix property and left to right parsing of tree-adjoining grammar. In Proceedings of the Second International Workshop on Parsing Technologies, Cancún, Mexico, February.

Vijay-Shanker, K. and David Weir. 1993a. The equivalence of four extensions of context-free grammars. To appear in Mathematical Systems Theory.

Vijay-Shanker, K. and David Weir. 1993b. Parsing some constrained grammar formalisms.
To appear in Computational Linguistics.

Vijay-Shanker, K. 1987. A Study of Tree Adjoining Grammars. Ph.D. thesis, Department of Computer and Information Science, University of Pennsylvania.

Weir, David J. 1988. Characterizing Mildly Context-Sensitive Grammar Formalisms. Ph.D. thesis, Department of Computer and Information Science, University of Pennsylvania.
1993
PARALLEL MULTIPLE CONTEXT-FREE GRAMMARS, FINITE-STATE TRANSLATION SYSTEMS, AND POLYNOMIAL-TIME RECOGNIZABLE SUBCLASSES OF LEXICAL-FUNCTIONAL GRAMMARS

Hiroyuki Seki†‡  Ryuichi Nakanishi†  Yuichi Kaji†  Sachiko Ando†  Tadao Kasami‡†

† Department of Information and Computer Sciences, Faculty of Engineering Science, Osaka University, 1-1 Machikaneyama, Toyonaka, Osaka 560, Japan
‡ Graduate School of Information Science, Advanced Institute of Science and Technology, Nara, 8916-5 Takayama, Ikoma, Nara 630-01, Japan
Internet: [email protected]

Abstract

A number of grammatical formalisms were introduced to define the syntax of natural languages. Among them are parallel multiple context-free grammars (pmcfg's) and lexical-functional grammars (lfg's). Pmcfg's and their subclass called multiple context-free grammars (mcfg's) are natural extensions of cfg's, and pmcfg's are known to be recognizable in polynomial time. Some subclasses of lfg's have been proposed, but they were shown to generate an NP-complete language. Finite state translation systems (fts') were introduced as a computational model of transformational grammars. In this paper, three subclasses of lfg's called nc-lfg's, dc-lfg's and fc-lfg's are introduced and the generative capacities of the above mentioned grammatical formalisms are investigated. First, we show that the generative capacity of fts' is equal to that of nc-lfg's. As relations among subclasses of those formalisms, it is shown that the generative capacities of deterministic fts', dc-lfg's, and pmcfg's are equal to each other, and the generative capacity of fc-lfg's is equal to that of mcfg's. It is also shown that at least one NP-complete language is generated by fts'. Consequently, deterministic fts', dc-lfg's and fc-lfg's can be recognized in polynomial time. However, fts' (and nc-lfg's) cannot, if P ≠ NP.
1 Introduction

A number of grammatical formalisms such as lexical-functional grammars (Kaplan 1982), head grammars (Pollard 1984) and tree adjoining grammars (Joshi 1975)(Vijay-Shanker 1987) were introduced to define the syntax of natural languages. On the other hand, there has been much effort to propose well-defined computational models of transformational grammars. One of these is the one to extend devices which operate on strings, such as generalized sequential machines (gsm's), to devices which operate on trees. It is fundamentally significant to clarify the generative capacities of such grammars and devices.

Parallel multiple context-free grammars (pmcfg's) and multiple context-free grammars (mcfg's) were introduced in (Kasami 1988a)(Seki 1991) as natural extensions of cfg's. The subsystem of linear context-free rewriting systems (lcfrs') (Vijay-Shanker 1987) which deals with only strings is the same formalism as mcfg's. The class of cfl's is properly included in the class of languages generated by mcfg's, which in turn is properly included in the one generated by pmcfg's. The class of languages generated by pmcfg's is properly included in that of context-sensitive languages (Kasami 1988a). Pmcfg's have been shown to be recognizable in polynomial time (Kasami 1988b)(Seki 1991).

A tree transducer (Rounds 1969) takes a tree as an input, and starts from the initial state with its head scanning the root node of the input. According to the current state and the label of the scanned node, it transforms an input tree into an output tree in a top-down way. A finite state translation system (fts) is a tree transducer with its input domain being the set of derivation trees of a cfg (Rounds 1969)(Thatcher 1967). A number of equivalence relations between the classes of yield languages generated by fts' and other computational models have been established (Engelfriet 1991)(Engelfriet 1980)(Weir 1992).
Especially, it has been shown that the class of yield languages generated by finite-copying fts' equals the class of languages generated by lcfrs' (Weir 1992), hence by mcfg's.

In lexical-functional grammars (lfg's) (Kaplan 1982), associated with each node v of a derivation tree is a finite set F of pairs of attribute names and their values. F is called the f-structure of v. An lfg G consists of a cfg G₀ called the underlying cfg of G and a finite set P_fs of equations called functional schemata which specify constraints between the f-structures of nodes in a derivation tree. Functional schemata are attached to symbols in productions of G₀. It has been shown in (Nakanishi 1992) that the class of languages generated by lfg's is equal to that of recursively enumerable languages even though the underlying cfg's are restricted to regular grammars. In (Gazdar 1985)(Kaplan 1982)(Nishino 1991), subclasses of lfg's were proposed in order to guarantee the recursiveness (and/or the efficient recognition) of languages generated by lfg's. However, these classes were shown to generate an NP-complete language (Nakanishi 1992).

In this paper, three subclasses of lfg's called nc-lfg's, dc-lfg's and fc-lfg's are proposed, two of which can be recognized in polynomial time. Moreover, this paper clarifies the relations among the generative capacities of pmcfg's, fts' and these subclasses of lfg's.

In nc-lfg's, a functional schema either specifies the value of a specific attribute, say atr, immediately (↑atr = val) or specifies that the value of a specific attribute of a node v is equal to the whole f-structure of a child node of v (↑atr = ↓). An nc-lfg is called a dc-lfg if each pair of rules p₁ : A → α₁ and p₂ : A → α₂ whose left-hand sides are the same is inconsistent in the sense that there exists no f-structure that locally satisfies both the functional schemata of p₁ and those of p₂.
Intuitively, in a dc-lfg G, for each pair (t₁, t₂) of derivation trees in G, if the f-structure and nonterminal of the root of t₁ are the same as those of t₂, then t₁ and t₂ derive the same terminal string.

Let G be an nc-lfg. A multiset M of nonterminals of G is called an SPN multiset in G if the following condition holds:

Let M = {{A₁, A₂, ..., Aₙ}} be a multiset of nonterminals where different Aᵢ's are not always distinct. There exist a derivation tree t and a subset of nodes V = {v₁, v₂, ..., vₙ} of t such that the label of vᵢ is Aᵢ (1 ≤ i ≤ n) and the f-structures of the vᵢ's are the same with each other by functional schemata of G.

If the number of SPN multisets in G is finite, then G is called an fc-lfg.

Our main result is that the generative capacity of nc-lfg's is equal to that of fts'. As relations among proper subclasses of the above mentioned formalisms, it is shown that the generative capacities of dc-lfg's, deterministic fts' and pmcfg's are equal to each other, and the generative capacity of fc-lfg's is equal to that of mcfg's. It is also shown that a (nondeterministic) fts generates an NP-complete language.

2 Parallel Multiple Context-Free Grammars

A parallel multiple context-free grammar (pmcfg) is defined to be a 5-tuple G = (N, T, F, P, S) which satisfies the following conditions (G1) through (G5) (Kasami 1988a)(Seki 1991).

(G1) N is a finite set of nonterminal symbols. A positive integer d(A) is given for each nonterminal symbol A ∈ N.

(G2) T is a finite set of terminal symbols which is disjoint with N.

(G3) F is a finite set of functions satisfying the following conditions. For a positive integer d, let (T*)^d denote the set of all the d-tuples of strings over T. For each f ∈ F with arity a(f), positive integers r(f) and d_i(f) (1 ≤ i ≤ a(f)) are given, and f is a total function from

  (T*)^{d₁(f)} × (T*)^{d₂(f)} × ··· × (T*)^{d_{a(f)}(f)}    (1)

to (T*)^{r(f)} which satisfies the following condition (f1). Let x̄ᵢ = (x_{i1}, x_{i2}, ..., x_{i d_i(f)}) denote the ith argument of f for 1 ≤ i ≤ a(f).
Let • i = (zil, zi2,..., zid,(S)) denote the ith argument of f for 1 < i < a(f). (fl) For 1 < h < r(f), the hth component of f, denoted by f[h], is defined as; f[h] [Xl, f~2,-" -, Xa(f)] = OCh,OX#(h,O)rl(h,o)Oth,1 • .'ah,nh_lXu(h,nh_l)n(h,nh_Dah,n~ (2.1) where ah,k • T* for 0 < k <_ nh, 1 < #(h,j) <_ a(f) and 1 <_ ~(h,j) <_ dt~(h,j)(f) for O ~ j ~_ nh --1. (G4) P is a finite set of productions of the form A ---* f[A1,A2,...,Aa(y)] where A, Aa,A2,...,Aa(/) • N, f • F, r(f) = d(A) and di(f) = d(Ai) (1 < i < a(f)). Ifa(f) = 0, i.e., f • (T*) r(f), the production is called a terminating production, otherwise it is called a nonterminating production. (Gh) S • N is the initial symbol, and d(S) = 1. If all the functions of a pmcfg G satisfy the following Right Linearity condition, then G is called a multiple context-free grammar (mcfg). [Right Linearity ] For each xlj, the total num- ber of occurrences of xij in the right-hand sides of (2.1) from h = 1 through r(f) is at most one. The language generated by a pmcfg G = (N, T, F, P, S) is defined as follows. For A • N, let us define LG(A) as the smallest set satisfying the following two conditions: (L1) If a terminating production A --* & is in P, then ~ • LG(A). (L2) If A --~ f[A1,A2,...,Aa(y)] • P and (~i • LG(Ai) ~1 < i < a(f)), then f[~1,~2,''', O~a(f)] • LG(A) .- 131 Define L(G) a=La(S). L(G) is called the par- allel multiple context-free language (pmcfl) gen- erated by G. If G is an mcfg, L(G) is called the multiple context-free language (mcfl) generated by G. Example 2.1(Kasami 1988a): Let GEX1 ~--- (N, T,F,P,S),N = {S}, T = {a},F = {f~,f},P = {r] : S --~ fa, ro : S --* f[S]}, where f~ = a,f[(x)] = xx. GExl is a pmcfg but is not an mcfg since the function f does not satisfy Right Linearity. The language generated by GEx~ is {a 2" In > 0}, which cannot be generated by any mcfg (see Lemma 6 of (Kasami 1988a)). The empty string is denoted by ¢. 
Example 2.2: Let GEx2 = (N, T, F, P, S) be a pmcfg, where N = {S,A), T = {a,b}, F = {g[(Xl,X2) ] ---- XlX2, fa[(Xl,X2)] -~ (xla, x2a), = y, = and, P= - --* {Po : S g[A], Pl : A ---* f~[A], Pz : A --* fb[A], P3 : A ---* f~}. Note that GEx2 is an mcfg. L (GEx2) = {ww I w E {a, b)*}. Lamina 2.1(Kasami 1988b)(Seki 1991): Let C be a pmcfg. For a given string w, it is decidable whether w E L (G) or not in time polynomial of I~1, where I~1 denotes the length of w. 3 Finite State Translation Systems A set ~ of symbols is a ranked alphabet if, for each cr E ~, a unique non-negative number p(c~) is associated, p(cr) is the rank of ~. For a set X, we define free algebra T~.(X) as the smallest set such that; * T~: (X) includes X. • If p(~) = 0 for cr E ~, then ~ E T~(X). • If p(o') = n (> 1) for a E, ~ and tl,..., E 7-~.(X), then t-= or(t1,.., tn) E T~(X). t~ is called the root symbol, or shortly, the root of t. Hereafter, a term in 7"~ (X) is also called a tree, and we use terminology of trees such as subtree, node and so on. Let G - (N, T, P, S) be a context-free gram- mar (cfg) where N, T, P and S are a set of non- terminal symbols, a set of terminal symbols, a set of productions and the initial symbol, respec- tively. A derivation tree in cfg G is a term defined as follows. (T1) Every a E T is a derivation tree in G. (T2) Assume that there are a production p : A ---* X1...X,~ (A E N, XI,...,Xn E NUT) in P and n derivation trees tl,...t,~ whose roots are labeled with Pl,..., pn, respectively, and • ifXi E N, then pl is a production Xi --~ " ", whose left-hand side is Xi, and • ifXiET, thenpi=ti=Xi. Then p(tl,..., t,~) is a derivation tree in G. (T3) There are no other derivation trees. Let T~(G) be the set of derivation trees in G, and 7¢s(G) C 7¢(G) be the set of derivation trees whose root is labeled with a production of which left-hand side is the initial symbol S. Clearly, T~s(G) C_ T~(¢) holds. 
Remark that R_S(G) is a multi-sorted algebra, where the nonterminals are sorts, and the terminals and the labels of productions are operators.

A tree transducer (Rounds 1969) defines a mapping from trees to trees. Since we are mainly interested in the string language generated by a tree transducer, a "tree-to-string" version of the transducer defined in (Engelfriet 1980) is used in this paper. For sets Q and X, let Q[X] = {q[x] | q ∈ Q, x ∈ X}.

A tree-to-string transducer (yT-transducer or simply transducer) is defined to be a 5-tuple M = (Q, Σ, Δ, q₀, R) where (1) Q is a finite set of states, (2) Σ is an input ranked alphabet, (3) Δ is an output alphabet, (4) q₀ ∈ Q is the initial state, and (5) R is a finite set of rules of the form q[σ(x₁, ..., xₙ)] → v where q ∈ Q, σ ∈ Σ, ρ(σ) = n and v ∈ (Δ ∪ Q[{x₁, ..., xₙ}])*. If any different rules in R have different left-hand sides, then M is called deterministic (Engelfriet 1980).

A configuration of a yT-transducer is an element in (Δ ∪ Q[T_Σ(∅)])*. Derivation of M is defined as follows. Let t = α₁ q[σ(t₁, ..., tₙ)] α₂ be a configuration where α₁, α₂ ∈ (Δ ∪ Q[T_Σ(∅)])*, q ∈ Q, σ ∈ Σ, ρ(σ) = n and t₁, ..., tₙ ∈ T_Σ(∅). Assume that there is a rule q[σ(x₁, ..., xₙ)] → v in R. Let t′ be obtained from v by substituting t₁, ..., tₙ for x₁, ..., xₙ, respectively; then we define t ⇒_M α₁ t′ α₂.

Let ⇒*_M be the reflexive and transitive closure of ⇒_M. If t ⇒*_M t′, then we say t′ is derived from t. If there is no w ∈ Δ* such that t ⇒*_M w, then we say no output is derived from t.

A tree-to-string finite state translation system (yT-fts or fts) is defined by a yT-transducer M and a cfg G, written as (M, G) (Rounds 1969)(Thatcher 1967). We define yL(M, G), called the yield language generated by the yT-fts (M, G), as

  yL(M, G) = {w ∈ Δ* | ∃t ∈ R_S(G), q₀[t] ⇒*_M w}

where Δ is the output alphabet and q₀ is the initial state of M. An fts is called deterministic (Engelfriet 1980) if the transducer M is deterministic.
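As a small worked example (the ranked alphabet, state names and tree encoding below are invented for illustration), the following deterministic yT-transducer uses a copying rule q0[c(x₁)] → q[x₁] q[x₁], so over input trees encoding strings w its yield language is {ww | w ∈ {a, b}*}:

```python
# A tiny deterministic yT-transducer on trees encoded as nested tuples
# (label, child, ...).  Hypothetical ranked alphabet: c (rank 1) marks
# the root, sa/sb (rank 1) prepend a letter, e (rank 0) ends the spine.

def run(state, tree):
    label, children = tree[0], tree[1:]
    return RULES[(state, label)](children)

RULES = {
    ("q0", "c"): lambda ch: run("q", ch[0]) + run("q", ch[0]),  # copying rule
    ("q", "sa"): lambda ch: "a" + run("q", ch[0]),
    ("q", "sb"): lambda ch: "b" + run("q", ch[0]),
    ("q", "e"):  lambda ch: "",
}

tree = ("c", ("sa", ("sb", ("e",))))   # encodes the string ab
```

Here run("q0", tree) derives "abab". Every subtree occurs in the derivation under at most two states, so this transducer is finite-copying in the sense defined below.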
Engelfriet introduced a subclass of fts' called finite-copying fts' as follows (Engelfriet 1980): Let (M, G) be an fts with output alphabet Δ and initial state q0, let t be a derivation tree in G and t' be a subtree of t. Assume that there is a derivation α : q0[t] ⇒* w. Now, delete from this derivation α all the derivation steps which operate on t'. This leads to the following new derivation which keeps t' untouched:
α' : q0[t] ⇒* w1 q_{i1}[t'] w2 q_{i2}[t'] ... q_{in}[t'] w_{n+1}, where wi ∈ Δ* for 1 ≤ i ≤ n + 1.
The state sequence of t' in derivation α is defined to be (q_{i1}, ..., q_{in}). Derivation α has copying-bound k if, for every subtree of t, the length of its state sequence is at most k. An fts (M, G) is finite-copying if there is a constant k such that for each w ∈ yL(M, G), there is a derivation tree t in G and a derivation q0[t] ⇒* w with copying-bound k. It is known that determinism does not weaken the generative capacity of finite-copying fts' (Engelfriet 1980).
We note that an fts (M, G) can be considered to be a model of a transformational grammar: a deep-structure of a sentence is represented by a derivation tree of G, and M can be considered to transform the deep-structure into a sentence (or its surface structure).
4 Subclasses of Lexical-Functional Grammars
A simple subclass of lfg's, called r-lfg's, is introduced in (Nishino 1992), which is shown to generate all the recursively enumerable languages (Nakanishi 1992). Here, we define a nondeterministic copying lfg (nc-lfg) as a proper subclass of r-lfg's. An nc-lfg is defined to be a 6-tuple G = (N, T, P, S, Natr, Aatm) where: (1) N is a finite set of nonterminal symbols, (2) T is a finite set of terminal symbols, and (3) P is a finite set of annotated productions. Sometimes, a nonterminal symbol, a terminal symbol and an annotated production are abbreviated as a nonterminal, a terminal and a production, respectively. (4) S ∈ N is the initial symbol, (5) Natr is a finite set of attributes, and (6) Aatm is a finite set of atoms.
An equation of the form T atr =~ (atr • Nat,) is called an S (structure synthesizing) schema, and an equation of the form T atr .-= val (atr • Natr, val • A~tm) is called a V (im- mediate value) schema. A functional schema is either an S schema or a V schema. Each production p • P has the following form: p :A -~ B1 B2 ... Bq, (4.2) Ev ESl Es2 "" Esq where A • N, B1,B2,.",Bq • NUT. Ev is a finite set of V schemata and Esj (1 _< j <_ q) is a singleton of an S schema. A --~ B1B2".. Bq in (4.2) is called the underlying production of p. Let P0 be the set of all the underlying productions of P. Cfg Go = (N, T, P0, S) is called the underlying c/g o/ C. An f-structure of G is recursively defined as a set F -=- {(atrl, call), (atr2, val2>,..., latrk, valk)} where atr], atr2,..., and atrk are distinct at- tributes, and each of vail, val2,." ", and valk is an atom or an f-structure. We say that vali (1 < i < k) is the value of atri in F and write F.atri -= vali. For a cfg G' = ( N ~, T', P~, S~), derivation re- lations in G ~, denoted by A ::~a' a and A =~* G ~ (A • N',a • (N' u T')*) are defined in the usual way. Suppose Go = i N, T, P0, S) is the underlying cfg of an nc-lfg G = (N, T, P, S, Nat,, Aa,m). Let t be a derivation tree in Go. (In 4.,7. and 8., the label of a leaf of a derivation tree is allowed to be a nonterminal.) Every internal node v in t has an f-structure, which is called the f-structure of v and written as Fv. If an underlying production P0 :A ~ BI".Bq • P0 is applied at v, then v is labeled with either P0 itself, or p (• P) of which P0 is the underlying production, if necessary. Let vi be the ith child ofv (1 < i < q). We define the values of both sides of a functional schema attached to the symbol in p (on v) as follows: * the value of T atr(atr • Nat,) is Fv.atr, • the value of + in an S schema is Fv~ if the S schema is attached to the i(1 _< i _< q)th symbol in the right-hand side of p, and • the value of atom atm in a V schema is arm itself. 
We say that v satisfies the functional schemata if for each functional schema lhs = rhs of p, the values of lhs and rhs on v are defined and equal each other. In this case, it is also said that Fv locally satisfies the functional schemata of p.
NOTE: Because the meaning of a V schema is independent of the position where it is annotated, V schemata are attached to the left-hand side in this paper.
For a nonterminal A ∈ N and a sentential form a ∈ (N ∪ T)*, let t be a derivation tree of a derivation A ⇒*_{G0} a. If all internal nodes in t satisfy the functional schemata, then a is said to be derived from A and written as A ⇒*_G a. In this case, the tree t is called a derivation tree of A ⇒*_G a. We also call t a derivation tree (of a) in G simply. The language generated by an nc-lfg G, denoted by L(G), is defined as
L(G) = {w ∈ T* | S ⇒*_G w}.
NOTE: In the definition of nc-lfg, even if "Esj (1 ≤ j ≤ q) is a singleton of an S schema" is replaced with "Esj (1 ≤ j ≤ q) is either a singleton of an S schema or an empty set", the generative capacity of nc-lfg's is not changed.
[Figure 1: A derivation tree of aabbccdd. The root S node carries the f-structure [count:[count:[count: e]]]; the A subtree (for aabb) and the B subtree (for ccdd) share the identical value [count:[count: e]].]
Example 4.1: Let GEx3 = (N, T, P, S, Natr, Aatm) be an nc-lfg where N = {S, A, B}, T = {a, b, c, d}, Natr = {count}, Aatm = {e}, and the productions in P are:
p11 : S → A{↑count=↓} B{↑count=↓},
p12 : A → a A{↑count=↓} b,
p13 : B → c B{↑count=↓} d,
p14 : A → a b, with {↑count = e},
p15 : B → c d, with {↑count = e}.
The language generated by GEx3 is L(GEx3) = {a^n b^n c^n d^n | n > 0}. Figure 1 shows a derivation tree of S ⇒*_{GEx3} aabbccdd.
Example 4.2: Let GEx4 = (N, T, P, S, Natr, Aatm) be an nc-lfg where N = {S}, T = {a}, Natr = {log}, Aatm = {e}, and the productions in P are:
p21 : S → S{↑log=↓} S{↑log=↓},
p22 : S → a, with {↑log = e}.
The language generated by GEx4 is L(GEx4) = {a^{2^n} | n ≥ 0}.
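The count mechanism of Example 4.1 can be sketched in a few lines. This is our own illustration of why the S schemata force the four blocks to have equal length, not an algorithm from the paper; the helper names f_structure and derive are assumptions.

```python
# Sketch: in G_Ex3, the schemata ↑count=↓ on both A and B in p11 force the
# A part (a^i b^i) and the B part (c^j d^j) to build IDENTICAL nested
# f-structures, so a derivation only succeeds when i == j.

def f_structure(n):
    """F-structure of the A (or B) subtree deriving a^n b^n (c^n d^n),
    n >= 1: the base production contributes {count: e}, and each further
    application of p12 (or p13) wraps one more count layer around it."""
    f = {"count": "e"}
    for _ in range(n - 1):
        f = {"count": f}
    return f

def derive(i, j):
    """Attempt S => a^i b^i c^j d^j; return the string or None."""
    if i < 1 or j < 1 or f_structure(i) != f_structure(j):
        return None  # the functional schemata of p11 cannot be satisfied
    return "a" * i + "b" * i + "c" * j + "d" * j

print(derive(2, 2))  # -> "aabbccdd"
print(derive(2, 3))  # -> None
```

Comparing the two nested structures for equality plays the role of unifying the ↓ values of the two S schemata in p11.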
Example 4.3: Let GEX5 = (N, T, P, S, Na~,, Aatm) be an nc-lfg where N = {S,S',A,B}, T = {the, woman, men, and, drinks, smoke, respec- tively}, N.t, = {hum, list}, A.tm = {sg,pl, nil}, and productions in P are; P3o : S --* S t respectively {T list =l} P31 : P32 : p33 : p34 : P35 : S~---* the woman and A drinks and B {Tnum=sg} {Tlist=l} {Tlist=l}' S~--* the men and A smoke and 13 {Tnum=pl} {Tlist=~} {Tlist=~}' A ~ the woman and A { T num= sg} { r list =.[}' A ~ the men and A { T num = pl } {T list =~}' A ~ the woman T num= sg T list = nil ) p3~ : A ---. the men T num = pl T list = nil ) pz7 : B --* drinks and B {T n~,m = ~g} {T list =~}' P38 : B --~ smoke and B { T hum = pl } iT list =l}' p39 : B ---. drinks { T num = sg T list = nil ) p310 : B ---+ smoke T hum = pl T list = nil ) G~xs generates "respectively" sentences such as "the woman and the men drinks and smoke re- spectively". For a set X of functional schemata, X is con- sistent iff neither the following (1) nor (2) holds. (1) {T atr = Call, T atr = val2 } c X for some atr E Na,, and some vall,val2 E Aatm such that call # val2. (2) iT atr = val, T atr =~} _C X for some atr E Nat~ and some val E Aatm. Productions pl,''',Pn are consistent iff Ul<i<_n E (0 is consistent where E (/) is the set of functional schemata of Pl. If productions are not consistent then they are called inconsistent. An nc-lfg G is called a deterministically copy- ing Ifg (dc-lfg), if any two productions A --+ al and A --+ a2 whoes left-hand sides are the same are inconsistent. Suppose G = (N,T, P, S, Nat,, Aatm) is an nc-lfg. Let {{el,e2,-'.,en}} denote the multi- set which consists of elements el, e2," • •, en that are not necessarily distinct. An SPN (SubPhrase Nonterminal) multiset in G is recursively defined as the following 1 through 3: 1. {{S}} is an SPN multiset. 2. Suppose that {{A1, A2,'", Ah}} (A1, A2,'" ", Ah E N) is an SPN multiset. Let A1 --~ al, 134 • .', Ah ~ O:h be consistent productions. 
For each atr E Nat,, let MS~,~ be the multi- set consisting of all the nonterminals which appear in al,''',ah and have an S schema T atr --l. If MSat~ is not empty, then MS~t~ is also an SPN multiset. 3. There is no other SPN multiset. An nc-lfg such that the number of SPN multisets in G is finite is called a finite-copying lfg (fc-lfg). Example 4.4: Consider GEX s in Example 4.1. Productions /912 and P14 are inconsistent with each other and so are P13 and Ply. SPN multisets in GEX3 are {{S}} and {{A,B)). Hence GEXS is a dc-lfg and is an fc-lfg. GEX5 is also a dc-lfg and is an fc-lfg by the similar reason. Similarly, GEX4 in Example 4.2 is a dc-lfg. SPN multisets in C~x~ are {{S}}, {{S, S}), {{S, S, S, S)}, .... Hence GEx4 is not an fc-lfg. NOTE : L (GExs) is generated by a tree adjoining grammar. Suppose that a sentence has three or more phrases which have co-occurrence relation like the one between the subject phrase and the verb phrase in the "respectively" sentence. Tree adjoining grammars can not generate such syntax while fc-lfg's or dc-lfg's can, although the authors do not know a natural language which has such syntax so far. By Lemma 2.1 and Theorem 8.1, fc-lfg's are polynomial-time recognizable. Hence, it is desir- able that whether a given lfg G is an fc-lfg or not is decidable. Fortunately, it is decidable by the following lemma. Lemma 4.1: For a given nc-lfg G, it is decid- able whether the number of SPN multisets in G is finite or infinite. Proof. The problem can be reduced to the boundedness problem of Petri nets, which is known to be decidable (Peterson 1981). 5 Overview of the Results Let ~'nc-lfg, ~'dc-lfg and ~-'fc-lfg denote the classes of languages generated by nc-lfg's, dc-lfg's and fc-lfg's, respectively, and let y~#,, Y~.d-fts and YElc-#s denote the classes of yield languages gen- erated by fts', deterministic fts' and finite-copying fts', respectively. 
Let l:vmcla and £:mcfg be the classes of languages generated by pmcfg's and mcfg's, respectively. Also let £:ta9 be the class of language generated by tree adjoining grammars. Inclusion relations among these classes of languages are summarized in Figure 2. An equiv- alence relation *1 is shown in (Weir 1992). Re- lations *2 are new results which we prove in this paper. We also note that all the inclusion rela- tions are proper; indeed, 0 l {ala2a3a41n >_ E D - E a a2 n n _ ..... a2m_la2m [ n > E C - D for m > 3, (by (Vijay-Shanker 1987).) {a 2" In > 0} e S - C, (by (Kasami 1988a)(Seki 1991).) A relation B~ A is shown in (Engelfriet 1980). By Lemma 2.1, all languages in the region enclosed with the bold line are recognizable in polynomial time. On the other hand, it is shown in this paper that Unary-3SAT, which is known to be A/P-complete (Nakanishi 1992), is in A. Hence, if ~ ~ A/~, then Unary-3SAT E A - B and the languages generated by fts' (or equivalently, nc- lfg's) are not recognizable in polynomial time in general. 6 Generative Capacity of fts' 6.1 Deterministic fts' Here, the proof of an inclusion relation yEd-#s C_ /:vmc/g is sketched. Let (M, G) be a deterministic yT-fts where M = (Q,~,A,ql,R) and G = (N,T,P,S)., We assume that Q = {ql,---,ql}, T = {al,... an} and P = {Pl,...,Pm}. Since the input for M is the set of derivation trees of G, we assume that = {Pl,.-. ,Pro, al,..., an} without loss of gen- erality. We will construct a pmcfg G I -=- ( N ~, T ~, F', P', S') such that yL(M, G) ---- L(G') N A*. Since /:pmc/g is closed under the intersection with a regular set (Kasami 1988a)(Seki 1991), it follows that yL(U, G) E £'pmclg. Let T' = A td {b} where b is a newly introduced symbol and let N' = {S',RI,...,Rm, AI,...,An} where d(Ri) = d(Aj) = t for 1 < i <_ m and 1 < j <_ n. Productions and functions of G ~ will be constructed to have the following property. -- A y~/t, *=2 ~,~c-lfg --B ~ £?~:~ .__2 y£~_:,, .__2 £~_,:~ --c oL- £.~:~ *j y£:~-:. 
*=2 :-.:o-,:, __ D (2) - - E - - £cr~ l:,a~ -- Figure 2: Inclusion relations between classes of languages. (1) : The class of language generated by lcfrs' is equal to C. (2) : The class of language generated by head grammars is equal to D. 135 Property 6.1: There is (a~,... ,a~) e LG,(Rh) (resp. LG,(Ah)) such that each of a,,,...,as~ does not contain b, and every remaining at,,..., a,~ contains b if and only if there is a derivation tree t of G such that the root is Ph (resp. ah) and { qs, [t] ==>~ c~s~ (1 < j < u) no output is derived from q,~[t] (1 _< j < v). D The basic idea is to simulate the move of tree transducer M which is scanning a symbol Ph (resp. ah) with state ql by the ith component of the nonterminal Rh (resp. Ah) of pmcfg G I. Dur- ing the move of M, it may happen that no rule is defined for a current configuration and hence no output will be derived• The symbol b is intro- duced to represent such an undefined move ex- plicitly. We define RS(X) (X E N tO T) as follows. {Rh [the left-hand side of Ph is X} RS(X) = if X E N { Ah } if X = ah E T. Productions and functions are defined as follows. Step 1: For each production Ph : Iio --'* Y~ " " Yk ( Yo ~ N , Y= E NtoT for 1 <u< k) of cfg G, construct nonterminating produc- tions Rh -+ [&,..., zk] for every Z~ E RS(Y~) (1 < u < k), where fph is defined as follows: For 1 < i < g, • if the transducer M has no rule whose left- hand side is qi~ah(Xl,..., xk)], then (6.a) h •'' • if M has a rule -+ • " ai,n,-lq~(i,,~,_D[x~4~,,,_D] a~,n,, then f p [ i ] [ • x .. , 5:k] __a ei,ox~,(i,o),7(i,o)ei,] (6.4) h ~ • • " " Ot-,ni--lglz",ni--l'rl'i,ni--l'Ogi,ni~, (1, ) [ ) where = (1 <, < k). (Since M is deterministic, there exists at most one rule whose left-hand side is qi~h('" ")] and hence the above construction is well defined•) Step 2: For each ah E T, construct a terminat- ing production Ah -"+ fah where f~h is defined as follows: For 1 < i < i, • if M has no rule whose left-hand side is qi[ah], then ~a~[i] ~--b. 
• ifM has a rule qi[ah] --+ hi, then f[~&ai. Step 3: For each Rh E RS(S), construct S' --+ /fi~st[Rh] where /fi,st[(x], ..., xl)]~x]. Intu- itively, the right-hand side of this production corresponds to the initial configuration, that is, M is in the initial state ql and scanning the root symbol Ph of a derivation tree, where the left-hand side of Ph is the initial symbol S. The pmcfg G I constructed above satisfies Property 6.1. Its proof is found in (Kaji 1992) and omitted in this paper. By Property 6.1, we obtain the following lemma. L e m m a 6.1: yl:d_f, s C ff.pmcfg. 0 The reverse inclusion relation l:p,~c/g C_ Y~.d-B, can be shown in a similar way, and the following theorem holds• Theorem 6.2: yf-.d./,s : E-pmcfg• 0 6.2 Nondeterministic fts' In this section, the generative capacity of nonde- terministic yT-fts' is investigated, from the view- point of computational complexity• We have al- ready shown that Y~.d-~s : ~.pmcfg, and hence every language in this class can be recognized in time polynomial of the length of an input string• Our result here is: there is a nondeterministic fts that generates an A/'~-complete language• In the following, a language called Unary-3SAT, which is ArT'-complete (Nakanishi 1992), is considered, and then it is shown to belong to yL:/,a. A Unary-3CNF is a (nonempty) 3CNF in which the subscripts of variables are represented in unary. A positive literal xi in a 3CNF is rep- resented by 1i$ in a Unary-3CNF. Similarly , a negative literal --xl is represented by 12#. For example, a 3CNF (xi v x2 v ~xa) A (xa V ~x] v ~x~) is represented by a Unary-3CNF 15115111# A I1151#Ii#. Unary-3SAT is the set of all satisfiable Unary- 3CNF's. Next, we construct a nondeterministic yT-fts (M, G) that generates Unary-3SAT. Define a cfg G = (N,T,P, S) where N = {S,T,F}, T = {e} and the productions in P are as follows: rss : S--+S rsT : S--+ T rsF : S-+F "rTT : T--+ T rTF : T--+ F. 
?'Te : T--+ e rFT : F-+T ?'FF : F--+ F rFe : F-+ e 136 Let M = (Q, E, A, qo, R) where q = {qo,q~,qt, qa}, ~ {rSS,... ,rFe}, z~ = {L^,$,#}. Since there are many rules in R, we will use an ab- breviated notation. For example, following four rules qaIrTelXll-~ 15, q~[rTe(X)] ~ 1# qdrF~tX)] + 15, qo[rF~(X)] -~ 1# are abbreviated as "q~[rT~(X)] = q~[rF~(X)] --* 15 or 1#'. By using this notation, the rules in R are defined as follows. q0[r~(~)] -~ qo[~] A q0[x] qo[r~s(~)] -~ q&] q0[~sr(~)] = q0[~sv(~)] = q~[r~r(~)] = q~[rSF(X)] --~ qdx]q~[x]q~[x] or q~[x]qt[x]qa[x] or qa[x]qa[x]qt[x] q,[r**(x)] = q&.~(~)] -* lq,[~] or 1~ q,[rr,(~)] ~ 1, q,[r~,(~)] = q,[r~(~)] -* lq,[x] or 1# q,[rF~(x)]---+ 1# qo[rr~(~)] = qo[rr~(~)] = qoirF~(~)] = qa[rFF(X)] lqa[X] or 15 or 1# qo[r~c(x)] = q.[r~(~)] ++ 1, or 1#. The readers can easily verify that this yT-fts generates Unary-3SAT. 7 Equivalence of f-'nc-lfg and Y£fts First, we show £,~c-lfg C_ Y£qt~. For a given nc- lfg G = (N, T, P, S, Nat,, A~m), an equivalent fts (M, G I) is constructed in the following way. Let t be a derivation tree in lfg G and the f-structure of the root node of t be F = {(atrl,F1),..., (atr,~,Fn)}. F is represented by a derivation tree r = p,p(Tl,'-., rn) in G', where ri (1 < i < n) is a derivation tree in G' which rep- resents Fi recursively. And sp is a set of produc- tions such that F locally satisfies the functional schemata of all productions in sp. M transforms r into the yield of t, i.e., the terminal string ob- talned by concatenating the labels of leaves, in a top-down way. [TRANS 7.11 Let N = {A1,'",Am}, S = A1 and Nat, = {atrl,-.., atr,~}. Define SP as the set of all consistent subsets of P. Step 1: G' = (N',{d},P',S'), where N' = {S,plsp e SP} U {S'} and P' = {p',p : S,p ---* S'-.. S't u{p;=~... : s' --+ Ss, l,p e sP} u{p~,m :~s' -+ deC:_/}. For a derivation tree r in G' and a node v ' is applied, the snbtree rooted by the where p,p ith child of v represents the value of attribute atr i. 
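The Unary-3CNF encoding described above is simple to implement; the following helper is our own (the function name to_unary is an assumption), with the conjunction symbol of the alphabet Δ written as "^" here.

```python
# Encode a 3CNF into Unary-3CNF notation: the literal x_i becomes
# "1"*i + "$", the literal ~x_i becomes "1"*i + "#", and clauses are
# joined by the conjunction symbol (rendered "^" in this sketch).

def to_unary(cnf):
    """cnf: list of clauses, each a list of signed variable indices,
    e.g. [[1, 2, -3], [3, -1, -2]] for (x1 v x2 v ~x3)(x3 v ~x1 v ~x2)."""
    def lit(l):
        return "1" * abs(l) + ("#" if l < 0 else "$")
    return "^".join("".join(lit(l) for l in clause) for clause in cnf)

print(to_unary([[1, 2, -3], [3, -1, -2]]))
# -> "1$11$111#^111$1#11#"
```

This reproduces the worked example in the text, where (x1 ∨ x2 ∨ ¬x3) ∧ (x3 ∨ ¬x1 ∨ ¬x2) is written 1$11$111# ∧ 111$1#11#.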
Step 2: M = (Q,E,T, ql,R) is defined as fol- lows. Define Q = {ql,..., qm}. A state qj (1 < j _< m) corresponds to nonterminal A t in N. Define E -- {d} where p(p'.,) = p(p .... .~) = ' = and p(d) = O. And define R by the following (i) through (iii). (i) qj~ .... .,(x)] -~ qj[x] (1 _< j < m) belongs to R for each sp • SP. (ii) Let r be a derivation tree in G '. When plsp is the production applied at the root of r and a state of M is q,o, M chooses a production p whose left-hand side is Auo , if exists, in sp. NOTE : Since productions in sp are consis- tent, there is an f-structure, which locally sat- isfies the functional schemata of all produc- tions in sp. For each production p E sp in SP p : A~o --* a0 A m al ... OtL-1 At, L aL Ev {~ atrv~ =~} ... {~ atrvL =~} where A~z E N and al E T*(0 < l < L), the following rule belongs to R: q#o~tsp(Xl, , *',xn)] -~ "0q,,[X~,]"I..-"L--lq,~[X,~]~. (7.5) (iii) No other rule belongs to R. Next, Y£~s C_ £~c-zf9 is shown. For a given fts (M, G), the following algorithm constructs an nc-lfg G' such that L(G') --- yL(M, G). [TRANS 7.2] Suppose that a given fts (M, G) is G -- (N, T, P, S) and M -- (Q, E, A, ql, R) where Q = {ql,q2,'",qm}. Let n be the maximum length of the right-hand side of a production in P. Define an nc-lfg G I = ( N', A, P', S I, N~r, Aatm) as follows. Step 1: N'={C[J]IC•N, lgj <m} u{aI~l la • T, 1 < j <_ m}, S' = S [11, Nat~ = {atri I1 < i < n} U {rule}, and Aatm = {PIP is the label of a production in P}. A derivation tree t = p(tl,'" ,th) in G is rep- resented by an f-structure {(rule, p),(atrl, El), • ".,(atrh, Fh)} of G' where Fi (1 < i < h) is an f-structure which represents the subtree ti recursively. Each pair of a symbol (either nonterminal or terminal) X of G and a state qj of M is rep- resented by a single nonterminal X[J] in G'. 
137 Step 2: A move when M at state qj reads a symbol p which is the label of a production p : C --+ ..., can be simulated by a production in G ~ whose left-hand side is C[J] {T ute = p}" Formally, the set P~ of productions of G I is con- structed as follows. (i) Let p : C --* X1 "" Xh be a production in P where CE N, Xi E NUT (1 <i < h), and let: qj[p(x],..., Xh)] --~ ajoq,7,, [z~,,, ]aj,...q,7,zj [X~,,L ' ]O~jL, be a rule in R where ~k E A* (0 < k < Lj), q'Tj~ E Q, and xvj~ e tXl,'",Xh}(1 < l < L j). Then, the following production belongs to P~: y[r/jl] V[nJLj] C[J] 7...40tjo-~vjl Otjl "'" AI~jLj OtjLj" {Trute = p) {Tatr , {Tatr j (ii) Let qj[a] ---* flj be a rule in R where a 6 T and flj 6 A*. Then the production a[J] --~ flj belongs to P'. (iii) No other production belongs to P'. By TRANS 7.1 and TRANS7.2, the fol- lowing theorem is obtained. A formal proof is found in (Nakanishi 1993). Theorem 7.1: f~nc-lfg = Y~'fts. Corollary 7.2: ~'dc-lfg ---- Y~.d-fts. Proof. In TRANS 7.1, if G is a dc-lfg, then no sp E SP contains distinct productions whose left-hand sides are the same and hence the con- structed transducer M becomes deterministic by the construction. Conversely, in TRANS 7.2, if M is deterministic, then there exist no consistent productions p~ and p~ in P~ whose left-hand sides are the same and hence the constructed nc-lfg is a dc-lfg. 8 Equivalence of ~fc-lfg and £~mcfg To prove f~fc-lfg C Lmcfg, we give an al- gorithm which translates a given fc-lfg G = (N, T, P, S, Nat,, Aatm) into an mcfg G I such that L (G') = L (G). [TRANS 8] We explain the algorithm by us- ing the fc-lfg GEX3 in Example 4.1. An mcfg G' = (N', T, F, P', S) is constructed as follows. Step 1: N' = (the set of nonterminals which has a one-to-one correspond- ence with the set of SPN multi- sets in G) = {(S), (A,B)} (for GEx3 in Example 4.1) P' = ¢, and F =¢. 
Step 2: For each SPN multiset M0 = {{A1,A2, • ".,Ak}} of G, consider every tuple (pl,P2, "",Pk) of productions in P whose left- hand sides are A1, A2,'", Ak respectively and which are consistent. (Suppose that, if we write an SPN multiset as {{A1, A2,. ", Ak}}, then Aj's are arranged according to a pre- defined total order < on N, that is, A1 < A2 <_ "'" <_ Ak hold.) For an SPN multiset {{A, B}} in GEX3, the following two pairs of productions have to be considered: b p12 : A ---* a {TcouAt,~ PI3 : B ~ c B d, {Tcount --~} p14: {Tcoun A = e} pls : {Tc°u B=e} For (Pl,P2,'",Pk), a production p' and a function f of G' are constructed and added to P' and F, respectively as follows. The multiset M of the nonterminals appearing in the right-hand side of some pj (1 < j < k) are partitioned into multisets M1, M2," -., Mh with respect to the S schemata attached to the nonterminals in pj's. That is, (11//1, M2,-", Mh) are the coarsest partition of M such that for each M,, (1 < u < h), the fol- lowing condition holds. Each nonterminal in M~, has the same S schema. By the definition, each M= (1 < u < h) is an SPN multiset in G. _Construct a production of mcfg p': hit0 --* f[M1, ffI2,'", Mh] where M= is the nonterminal of G' which corresponds to M=(1 < u < h). Addp' to P' and f to F where f is defined as follows. Suppose pj : Aj ~ ajoBjlajl "'' BjL~ajL~ (1 < j < k) where Aj E N, Bfl E N(1 < l < Lj) and ajz E T* (0 < l < Lj), and let- - = (1 < < h) where Cu, E N(1 < v < su). Then, for 1 < j < k, the jth component f[J] of f is: _ A f[J] (X-l, x2," "', Xh )=otjoYjl Otjl Yj2 "" • YjLj OtjLj where x-u = (xul,xu2,'",xus.)(1 < u < h). For j (1 <_ j < k) and l (1 _< l _< nj), if z~ Bjl = C~,, then yfl-=x~,v. Note that, since Mu's are a partition of M, f satisfies Right Linearity (see 2.) and hence G' is an mcfg. For example, consider the above (P12,P13)- The nonterminals appearing in the right-hand 138 sides are A and B, and their S schemata are the same. 
Thus, we construct the following mcfg production: (A, B) --* fl [(A, B)] where fl [(Xl, x2)] = (aXlb, cx2d). Consider the following pair of productions as another example: { ~ :~} bD{Tatr2 p~ : A --* a {TatrB ---* c D P'2 : B {TatrA=£} {TatrC=j,} {1"air2 --,L} The multiset of nonterminals in the right- hand sides are partitioned into M1 = I{ A, B}} (for arT1) and M~ = {{C, D, D}} for atr2). For (p~,p~), the following mcfg production is constructed: (A, B) --+ g [(A, B), (C, D, D)] where g [(x11, x12), (=21, X22, x23)]~--(ax12bx22, xilx .lc 3). V] Example 8.1: TRANS 8 translates fc-lfg GEx3 in Example 4.1 into an equivalent mcfg G~x 3 = (N',T, F, P', S I) where N', S' are those illustrated in TRANS 8, F = {fo[(xl,x2)] = XlX2, fl[(Xl,X2)] = (aXlb, cx2d), f2 ----- (~',~')}, and, P' = {(S) --+ fo[(A,B)], (A,B) --* fl [(A, B)], (A, B) --~ f2}. 0 Theorem 8.1: ~rncfg = Efc-lfg. Proof: £yc-tfg C £mcf9 can be proved by TRANS 8. Conversely, for a given mcfg G, an fc-lfg G' such that L (G') = L (G) can be con- structed in a similar way to TRANS 8. Details are found in (Ando 1992). [1 9 Conclusion In this paper, we introduce three subclasses of lfg's, two of which can be recognized in polyno- mial time. Also this paper clarifies the relations between the generative capacities of those sub- classes, pmcfg's and fts'. References Ando, S. et al. 1992. "Subclasses of Lexical- Functional Grammars Which Are Recognizable in Polynomial Time", IEICE Technical Report, COMP92-44. Engelfriet, J. and Heyker, L. 1991. "The String Gen- erating Power of Context-Free Hypergraph Gram- mars", J. Comput. & Syst. Sci., 43:328-360. Engelfriet, J., Rosenberg, G. and Slutzki, G. 1980. "Tree Transducers, L Systems, and Two-Way Ma- chines", J. Comput. & Syst. Sei., 20:150-202. Joshi, A.K., Levy, L. and Takahashi, M. 1975 "Tree Adjunct Grammars", J. of Comput. & Syst. Sci., 10:136-163. Gazdar, G. and Pullum, G.K. 1985. 
"Computa- tionally Relevant Properties of Natural Languages and Their Grammars", New Generation Computing, 3:273-306. Kaji, Y. et al. 1992. "Parallel Multiple Context- Free Grammars and Finite State Translation Systems ", IEICE Technical Report, COMP92-34. Kaplan, R. and Bresnan, J. 1982. "Lexical- Functional Grammar", The Mental Representation of Grammatical Relations, J.Bresnan (ed.), MIT press:173-281. Kasami, T. et al. 1988a. "Generalized Context-Free Grammars and Multiple Context-Free Grammars", Trans. IEICE, J71-D-I, 5:758-765. Kasami, T. et al. 1988b. "On the Membership Prob- lem for Head Language and Multiple Context-Free Languages, Trans. IEICE, J71-D-I, 6:935-941. Nakanishi, R. et al. 1993. "On the Generative Capacity of Tree Translation Systems and Lexi- cal Functional-Grammars", Technical Paper of FAI, Japanese Society for Artificial Intelligence, SIG- FAI-9202. Nakanishi, R. et al. 1992. "On the Generative Capac- ity of Lexical-Functional Grammars", IEICE Trans. Inf. and Syst., 75-D, 7:509-516. Nishino, T. 1991. "Mathematical Analysis of Lexical- Functional Grammars --Complexity, Parsability, and Learnability--", Language Research Institute, Seoul National University. Nishino, T. 1992. "Relating Attribute Grammars and Lexical-Functional Grammars", Information Sci- ences, 66:1-22. Peterson, J.L. 1981. "Petri Net Theory and the Mod- eling of Systems", Prentice-Hall. Pollard, C.J. 1984. "Generalized Phrase Structure Grammars, Head Grammars and Natural Language", Ph.D. dissertation, Stanford University. Rounds, W.C. 1969. "Context-Free Grammars on Trees", Proc. of ACM STOC:143-148. Seki, H. et al. 1991. "On Multiple Context- Free Grammars", Theoretical Computer Science, 88, 2:191-229. Thatcher, J.W. 1967. "Characterizing Derivation Trees of Context-Free Grammars through a Gener- alarization of Finite Automata Theory". J. Comput. & Syst. Sci., 1:317-322. Vijay-Shanker, K. 1987. "A Study of Tree Adjoining Grammars", Ph.D. 
thesis, University of Pennsylvania.
Vijay-Shanker, K., Weir, D.J. and Joshi, A.K. 1987. "Characterizing Structural Descriptions Produced by Various Grammatical Formalisms", Proc. of 25th meeting of Assoc. Comput. Ling.:104-111.
Weir, D.J. 1988. "Characterizing Mildly Context-Sensitive Grammar Formalisms", Ph.D. thesis, University of Pennsylvania.
Weir, D.J. 1992. "Linear Context-Free Rewriting Systems and Deterministic Tree-Walking Transducers", Proc. of 30th meeting of Assoc. Comput. Ling.
1993
FEATURE-BASED ALLOMORPHY*
Hans-Ulrich Krieger    Hannes Pirker
German Research Center for Artificial Intelligence (DFKI)
Stuhlsatzenhausweg 3
W-6600 Saarbrücken 11, Germany
{krieger,pirker}@dfki.uni-sb.de
John Nerbonne
Alfa Informatica, P.O. Box 716
Oude Kijk in 't Jatstraat 41
Rijksuniversiteit Groningen
NL 9700 AS Groningen, Holland
[email protected]
Abstract
Morphotactics and allomorphy are usually modeled in different components, leading to interface problems. To describe both uniformly, we define finite automata (FA) for allomorphy in the same feature description language used for morphotactics. Nonphonologically conditioned allomorphy is problematic in FA models but submits readily to treatment in a uniform formalism.
1 Background and Goals
ALLOMORPHY or MORPHOPHONEMICS describes the variation we find among the different forms of a morpheme. For instance, the German second person singular present ending -st has three different allomorphs, -st, -est, -t, determined by the stem it combines with:
(1)                'say'    'pray'    'mix'
    1sg pres ind   sag+e    bet+e     mix+e
    2sg pres ind   sag+st   bet+est   mix+t
    3sg pres ind   sag+t    bet+et    mix+t
MORPHOTACTICS describes the arrangement of morphs in words, including, e.g., the properties of -st that it is a suffix (and thus follows the stem it combines with), and that it combines with verbs. While allomorphy is normally described in finite automata (FA), morphotactics is generally described in syntax-oriented models, e.g., CFGs or feature-based grammars.
The present paper describes both allomorphy and morphotactics in a feature-based language like that of Head-Driven Phrase Structure Grammar (HPSG) (Pollard and Sag 1987).
*This work was supported by research grant ITW 9002 0 from the German Bundesministerium für Forschung und Technologie to the DFKI DISCO project. We are grateful to an anonymous ACL reviewer for helpful comments.
The technical kernel of the paper is a feature-based definition of FA.
While it is unsurprising that the languages defined by FA may also be defined by feature description languages (FDL), our reduction goes beyond this, showing how the FA themselves may be defined.¹ The significance of specifying the FA and not merely the language it generates is that it allows us to use FA technology in processing allomorphy, even while keeping the interface to other grammar components maximally transparent (i.e., there is NO interface--all linguistic information is specified via FDL).
Our motivation for exploring this application of typed feature logic is the opportunity it provides for integrating in a single descriptive formalism not only (i) allomorphic and morphotactic information but also (ii) concatenative and non-concatenative allomorphy. The latter is particularly useful when concatenative and non-concatenative allomorphy coexists in a single language, as it does, e.g., in German.
2 Finite Automata as Typed Feature Structures
An FA A is defined by a 5-tuple (Q, Σ, δ, q0, F), where Q is a finite set of STATES, Σ a finite INPUT ALPHABET, δ : Q × Σ → Q is the TRANSITION FUNCTION, q0 ∈ Q the INITIAL STATE, and F ⊆ Q the set of FINAL STATES.² For reasons of simplicity and space, we only refer to the simplest form of FA, viz., DETERMINISTIC finite automata without ε-moves which consume exactly one input symbol at a time. This is of course not a restriction w.r.t. expressivity: given an arbitrary automaton, we can always construct a deterministic, equivalent one which recognizes the same language (see Hopcroft and Ullman 1979). Fortunately, our approach is also capable of representing and processing directly non-deterministic FA with ε-moves and allows for edges which are multiple-symbol consumers.
¹ See Krieger 1993b for the details and several extensions.
² We assume a familiarity with automata theory (e.g., Hopcroft and Ullman 1979).
Specifying an automaton in our approach means introducing for every state q ∈ Q a possibly recursive feature type with the same name as q. We will call such a type a CONFIGURATION. Exactly the attributes EDGE, NEXT, and INPUT are appropriate for a configuration, where EDGE encodes disjunctively the outgoing edges of q, NEXT the successor states of q, and INPUT the symbols which remain on the input list when reaching q.³ Note that a configuration does not model just a state of the automaton, but an entire description at a point in computation.
(2) proto-config ≡ [EDGE input-symb, NEXT config, INPUT list(input-symb)]
We now define two natural subtypes of proto-config. The first one represents the non-final states Q \ F. Because we assume that exactly one input symbol is consumed every time an edge is taken, we are allowed to separate the input list into the first element and the rest list in order to structure-share the first element with EDGE (the consumed input symbol) and to pass the rest list one level deeper to the next state.
(3) non-final-config ≡ [proto-config, EDGE [1], NEXT|INPUT [2], INPUT ⟨[1] . [2]⟩]
The other subtype encodes the final states of F which possess no outgoing edges and therefore no successor states. To cope with this fact, we introduce a special subtype of ⊤, called undef, which is incompatible with every other type. In addition, successfully reaching a final state with no outgoing edge implies that the input list is empty.
(4) final-config ≡ [proto-config, EDGE undef, NEXT undef, INPUT ⟨⟩]
³ Note that EDGE is not restricted to bearing only atomic symbols, but can also be labeled with complex ones, i.e., with a possibly underspecified feature structure (for instance in the case of 2-level morphology--see below).
[Figure 1: A finite automaton A recognizing the language L(A) = (a + b)*c.]
Specifying an automaton in our approach means introducing for every state q E Q a possi- bly recursive feature type with the same name as q. We will call such a type a CONFIGURATION. Exactly the attributes EDGE, NEXT, and INPUT are appropriate for a configuration, where EDGE encodes disjunctively the outgoing edges of q, NEXT the successor states of q, and INPUT the symbols which remain on the input list when reaching q.S Note that a configuration does not model just a state of the automaton, but an en- tire description at a point in computation. [ EDGE input-symb ] (2) proto-confi9 _= | NEXT config | / INPUT list(input-symb)J We now define two natural subtypes of proto- con fig. The first one represents the non-final states Q \ F. Because we assume that exactly one input symbol is consumed every time an edge is taken, we are allowed to separate the input list into the first element and the rest list in order to structure-share the first element with EDGE (the consumed input symbol) and to pass the rest list one level deeper to the next state. (3) non-final-conflg =_ proto-config "] EDGE [] / NEXTIINPUT [] / INPUT ( [-i-]. [] )J The other subtype encodes the final states of F which possess no outgoing edges and therefore no successor states. To cope with this fact, we introduce a special subtype of T, called under, which is incompatible with every other type. In addition, successfully reaching a final state with no outgoing edge implies that the input list is empty. (4) final-config = proto- config ] EDGE undef l NEXT undef l INP ( ) J aNote that EDGE is not restricted in bearing only atomic symbols, but can also be labeled with com- plex ones, i.e., with a possibly underspecified fea- ture structure (for instance in the case of 2-1evel morphology--see below). A Figure 1: A finite automaton A recognizing the language £(A) = (a + b)*c. 
Of course, there will be final states with outgoing edges, but such states are subtypes of the following DISJUNCTIVE type specification:

(5) config ≡ non-final-config ∨ final-config

To make the idea more concrete, let us study a very small example, viz., the FA A (see Figure 1). A consists of the two states X and Y, from which we define the types X and Y, where Y (7) is only an instantiation of final-config. In order to depict the states perspicuously, we shall make use of DISTRIBUTED DISJUNCTIONS. Dörre and Eisele 1989 and Backofen et al. 1990 introduce distributed disjunctions because they (normally) allow more efficient processing of disjunctions, sometimes obviating the need to expand to disjunctive normal form. They add no expressive power to a feature formalism (assuming it has disjunction), but abbreviate some otherwise prolix disjunctions:

[ PATH1 $1{a ∨ b}, PATH2 $1{α ∨ β}, PATH3 [...] ]
  =  [ PATH1 a, PATH2 α, PATH3 [...] ] ∨ [ PATH1 b, PATH2 β, PATH3 [...] ]

The two disjunctions in the feature structure on the left bear the same name '$1', indicating that they are a single alternation. The sets of disjuncts so named covary, taken in order. This may be seen in the right-hand side of the equivalence.(4)

We employ distributed disjunctions below (6) to capture the covariation between edges and their successor states: if a is taken, we must take the type X (and vice versa); if b is used, use again type X; but if c is chosen, choose the type Y.

(4) Two of the advantages of distributed disjunctions may be seen in the artificial example above. First, co-varying but nonidentical elements can be identified as such, even if they occur remotely from one another in structure, and second, feature structures are abbreviated. The amount of abbreviation depends on the number of distributed disjunctions, the lengths of the paths PATH1 and PATH2, and, in at least some competing formalisms, on the size of the remaining structure (cf. PATH3 [...] above).
(6) X ≡ [ non-final-config
          EDGE $1{a ∨ b ∨ c}
          NEXT $1{X ∨ X ∨ Y} ]

(7) Y ≡ [ final-config ]

Whether an FA A ACCEPTS the input or not is equivalent in our approach to the question of FEATURE TERM CONSISTENCY: if we wish to know whether w (a list of input symbols) will be recognized by A, we must EXPAND the type which is associated with the initial state q0 of A and say that its INPUT is w. Using the terminology of Carpenter 1992: (8) must be a TOTALLY WELL-TYPED feature structure.

(8) [ q0, INPUT w ]

Coming back to our example (see Figure 1), we might ask whether abc belongs to L(A). We can decide this question by expanding the type X with [INPUT ⟨a, b, c⟩]. This will lead us to the following consistent feature structure, which moreover represents, for free, the complete recognition history of abc, i.e., all its solutions in the FA.

(9) [ X
      EDGE  a
      NEXT  [ X
              EDGE  b
              NEXT  [ X
                      EDGE  c
                      NEXT  [ Y, EDGE undef, NEXT undef, INPUT ⟨⟩ ]
                      INPUT ⟨c⟩ ]
              INPUT ⟨b, c⟩ ]
      INPUT ⟨a, b, c⟩ ]

Note that this special form of type expansion will always terminate, either with a unification failure (A does not accept w) or with a fully expanded feature structure, representing a successful recognition. This idea leads us to the following ACCEPTANCE CRITERION:

(10) w ∈ L(A) ⟺ (NEXT)* [ f, INPUT ⟨⟩ ], where f ∈ F

Notice too that the acceptance criterion does not need to be checked explicitly; it is only a logical specification of the conditions under which a word is accepted by an FA. Rather, the effects of (10) are encoded in the type specifications of the states (subtypes of final-config, etc.).

Now that we have demonstrated the feature-based encoding of automata, we can abbreviate them, using regular expressions as "feature templates" to stand for the initial states of the automata derived from them as above.(5) For example, we might write a feature specification [MORPH|FORM (a + b)*c] to designate words of the form accepted by our example automaton.
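The expansion-as-recognition idea can be emulated procedurally. The sketch below is only an approximation (plain dicts stand in for typed feature structures, and the names `FA`, `expand` are ours): expanding the initial state's type with INPUT = w either fails, corresponding to unification failure, or yields the fully expanded structure recording the recognition history, as in (9).

```python
# Each state is a "configuration" with EDGE / NEXT / INPUT.
# Non-final states consume the first input symbol on an outgoing edge;
# a final state with no edges requires an empty input list.

FA = {  # state -> outgoing (EDGE symbol, NEXT state) pairs; Y is final
    "X": [("a", "X"), ("b", "X"), ("c", "Y")],
    "Y": [],
}
FINALS = {"Y"}

def expand(state, inp):
    """Return a fully expanded configuration (nested dict) or None on failure."""
    if state in FINALS and not inp:
        return {"EDGE": None, "NEXT": None, "INPUT": []}   # final-config
    if not inp:
        return None                      # non-final state, input exhausted
    first, rest = inp[0], inp[1:]
    for edge, nxt in FA[state]:          # disjunction over outgoing edges
        if edge == first:                # structure-share EDGE with input head
            sub = expand(nxt, rest)
            if sub is not None:
                return {"EDGE": edge, "NEXT": sub, "INPUT": list(inp)}
    return None                          # "unification failure": w rejected

print(expand("X", ["a", "b", "c"]) is not None)  # True: abc in L(A)
print(expand("X", ["a", "b"]) is not None)       # False
```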
As a nice by-product of our encoding technique, we can show that unification, disjunction, and negation in the underlying feature logic directly correspond to the intersection, union, and complementation of FA. Note that this statement can be easily proved when assuming a classical set-theoretical semantics for feature structures (e.g., Smolka 1988). To give the flavor of how this is accomplished, consider the two regular expressions L1 = ab*c and L2 = a*bc. We model them via six types, one for each state of the automata. The initial state of L1 is A, that of L2 is X. The intersection of L1 and L2 is given by the unification of A and X. Unifying A and X leads to the following structure:

(11) A ∧ X ≡ [ EDGE a, NEXT B ] ∧ [ EDGE $1{a ∨ b}, NEXT $1{X ∨ Y} ]
           = [ EDGE a, NEXT B ∧ X ]

Now, testing whether w belongs to L1 ∩ L2 is equivalent to the satisfiability (consistency) of

(12) A ∧ X ∧ [INPUT w],

where type expansion yields a decision procedure. The same argumentation holds for the union and complementation of FA. It has to be noted that the intersection and complementation of FA via unification do not work in general for FA with ε-moves (Ritchie et al. 1992, 33-35). This restriction is due to the fact that the intersected FA must run "in sync" (Sproat 1992, 139-140).

The following closure properties are demonstrated fairly directly. Let A1 = (Q1, Σ1, δ1, q0, F1) and A2 = (Q2, Σ2, δ2, q0', F2).

• A1 ∩ A2 ~ q0 ∧ q0'
• A1 ∪ A2 ~ q0 ∨ q0'
• complement of A1 ~ ¬q0

In addition, a weak form of functional uncertainty (Kaplan and Maxwell 1988), represented through recursive type specifications, is appropriate for expressing also the concatenation and Kleene closure of FA. Krieger 1993b provides proofs, using auxiliary definitions and apparatus we lack space for here.

(5) 'Template' is a mild abuse of terminology, since we intend not only to designate the type corresponding to the initial state of the automaton, but also to suggest what other types are accessible.
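The correspondence between unification and intersection can be emulated operationally: unifying the initial-state types of two ε-free FA amounts to running both automata "in sync" on the same input, i.e., the standard product construction. The sketch below (all names ours, not the paper's) checks membership in L1 ∩ L2 for L1 = ab*c and L2 = a*bc.

```python
# Run a deterministic FA (dict-encoded delta) on a word.
def run(delta, start, finals, word):
    q = start
    for s in word:
        q = delta.get((q, s))
        if q is None:           # missing transition: reject
            return False
    return q in finals

# L1 = ab*c with states A, B, C;  L2 = a*bc with states X, Y, Z
d1 = {("A", "a"): "B", ("B", "b"): "B", ("B", "c"): "C"}
d2 = {("X", "a"): "X", ("X", "b"): "Y", ("Y", "c"): "Z"}

def intersect(d1, s1, f1, d2, s2, f2, word):
    # w is in L1 ∩ L2 iff both automata accept it in lockstep
    return run(d1, s1, f1, word) and run(d2, s2, f2, word)

print(intersect(d1, "A", {"C"}, d2, "X", {"Z"}, list("abc")))   # True
print(intersect(d1, "A", {"C"}, d2, "X", {"Z"}, list("abbc")))  # False: not in a*bc
```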
3 Allomorphy

The focus of this section lies in the illustration of the proposal above and in the demonstration of some benefits that can be drawn from the integration of allomorphy and morphotactics; we eschew here the discussion of alternative theories and concentrate on inflectional morphology. We describe inflection using a word-and-paradigm (WP) specification of morphotactics (Matthews 1972) and a two-level treatment of allomorphy (Koskenniemi 1983). We also indicate some potential advantages of mixed models of allomorphy, finite state and other.(6)

3.1 WP Morphotactics in FDL

Several WORD-GRAMMARS use FDL morphotactics (Trost 1991, Krieger and Nerbonne 1992 on derivation); alternative models are also available. Krieger and Nerbonne 1992 propose an FDL-based WP treatment of inflection. The basic idea is to characterize all the elements of a paradigm as alternative specifications of abstract lexemes. Technically, this is realized through the specification of large disjunctions which unify with lexeme specifications. The three elements of the paradigm in (1) would be described by the distributed disjunction in (13).

(13) weak-paradigm ≡ [ word
                       MORPH [ FORM   append([1],[2])
                               STEM   [1]
                               ENDING $1{⟨+,e⟩ ∨ ⟨+,s,t⟩ ∨ ⟨+,t⟩} [2] ]
                       SYN|LOC|HEAD|AGR [ NUM sg
                                          PER $1{1 ∨ 2 ∨ 3} ] ]

This treatment provides a seamless interface to syntactic/semantic information, and helps realize the goal of representing ALL linguistic knowledge in a single formalism (Pollard and Sag 1987). Nevertheless, the model lacks a treatment of allomorphy.

(6) The choice of two-level allomorphy is justified both by the simplicity of two-level descriptions and by their status as a "lingua franca" among computational morphologists. Two-level analyses in FDLs may also prove advantageous if they simplify the potential compilation into a hybrid two-level approach of the kind described in Trost 1991.
The various allomorphs of -st in (1) are not distinguished in the FDL, and Krieger and Nerbonne 1992 foresaw an interface to an external module for allomorphy. It would be possible, but scientifically poor, to distinguish all of the variants at the level of morphotactics, providing a brute-force solution and multiplying paradigms greatly.(7) The characterization in Section 2 above allows us to formulate WITHIN FDL the missing allomorphy component.

3.2 Two-Level Allomorphy

Two-level morphology has become popular because it is a declarative, bidirectional and efficient means of treating allomorphy (see Sproat 1992 for a comprehensive introduction). In general, two-level descriptions provide constraints on correspondences between underlying (lexical) and surface levels. We shall use it to state constraints between morphemic units and their allomorphic realizations. Because two-level automata characterize relations between two levels, they are often referred to (and often realized as) transducers. The individual rules then represent constraints on the relation being transduced.

The different forms of the suffix in 2nd person singular in (1) are predictable given the phonological shape of the stem, and the alternations can be described by the following (simplified) two-level rules (we have abstracted away from inessential restrictions here, e.g., that (strong) verbs with i/e-umlaut do not show epenthesis):

(14) e-epenthesis in the bet- case:
         +:e ⟺ {d,t} _ {s,t}
     s-deletion in the mix- case:
         s:0 ⟺ {s,x,z,ch} +:0 _ t

The colon ':' indicates a correspondence between lexical and surface levels. Thus the first rule states that a lexical morph boundary + must correspond to a surface e if it occurs after d or t and before s or t. The second specifies when lexical s is deleted (corresponds to surface 0).

(7) Tzoukermann and Libermann 1990 show that multiplying paradigms need not degrade performance, however.
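The effect of the first rule in (14) can be sketched as a check over sequences of lexical:surface pairs, in the spirit of the acceptors discussed below; this is an illustrative approximation of the rule semantics (the function name and pair encoding are ours), not a compiled two-level automaton.

```python
# Check the e-epenthesis correspondence  +:e <=> {d,t} _ {s,t}
# on a sequence of (lexical, surface) pairs; '+' is the morph
# boundary, '0' the empty surface symbol.

def epenthesis_ok(pairs):
    for i, (lex, surf) in enumerate(pairs):
        if lex != "+":
            continue
        left = pairs[i - 1][1] if i > 0 else ""
        right = pairs[i + 1][0] if i + 1 < len(pairs) else ""
        in_context = left in {"d", "t"} and right in {"s", "t"}
        if in_context and surf != "e":      # epenthesis obligatory here
            return False
        if not in_context and surf == "e":  # and forbidden elsewhere
            return False
    return True

# bet+st -> betest: the boundary is realized as e between t and s
print(epenthesis_ok([("b","b"),("e","e"),("t","t"),("+","e"),("s","s"),("t","t")]))  # True
# *betst with +:0 violates the rule
print(epenthesis_ok([("b","b"),("e","e"),("t","t"),("+","0"),("s","s"),("t","t")]))  # False
```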
Two-level rules of this sort are then normally compiled into transducers (Dalrymple et al. 1987, p.35-45).

3.3 FDL Specification of Two-Level Morphology

Two-level descriptions of allomorphy can be specified in FDLs straightforwardly if we model not transducers, but rather two-level acceptors (of strings of symbol pairs), following Ritchie et al. 1992. We therefore employ FA over an alphabet consisting of pairs of symbols rather than single symbols.(8) The encoding of these FA in our approach requires only replacing the alphabet of atomic symbols with an alphabet of feature structures, each of which bears the attributes LEX and SURF. A pair of segments appearing as values of these features stands in the lexical-surface correspondence relation denoted by ':' in standard two-level formalisms. The values of the attributes STEM and ENDING in (13) are then not lists of symbols but rather lists of (underspecified) feature structures. Note that the italicized t etc. found in the sequences under MORPH|ENDING in (13) denote types defined by equations such as (16) or (17). (To make formulas shorter we abbreviate 'alphabet' etymologically as 'αβ'.)

(15) αβ ≡ [ LEX  $1{"a" ∨ ... ∨ "s" ∨ "s" ∨ "+" ∨ "+"}
            SURF $1{"a" ∨ ... ∨ "s" ∨ 0 ∨ "e" ∨ 0} ]

(16) t ≡ αβ ∧ [LEX "t"] = [ LEX "t", SURF "t" ]

(17) + ≡ αβ ∧ [LEX "+"] = [ LEX "+", SURF "e" ∨ 0 ]

It is the role of the collection of FA to restrict underspecified lexical representations to those obeying allomorphic constraints. This is the substance of the allomorphy constraint (18), which, together with the Acceptance Criterion (10), guarantees that the input obeys the constraints of the associated (initial states of the) FA.

(18) allomorphy ≡ [ MORPH|FORM [1]
                    INPUT      [1] ]

Rules of the sort found in (14) can be directly compiled into FA acceptors over strings of symbol pairs (Ritchie et al. 1992, p.19).

(8) Since our formalization of FA cannot allow ε-transitions without losing important properties, we are in fact forced to this position.
Making use of the regular expression notation as templates (introduced in Section 2 above), (19-21) display a compilation of the first rule in (14). Here the composite rule is split up into three different constraints. The first indicates that epenthesis is obligatory in the environment specified, and the latter two that each half of the environment specification is necessary.(9)

(19) epenth-1 ≡ [ allomorphy
                  MORPH [FORM ¬(π* {t,d} +:0 {s,t} π*)] ]

(20) epenth-2 ≡ [ allomorphy
                  MORPH [FORM ¬(π* ¬{t,d} +:e π*)] ]

(21) epenth-3 ≡ [ allomorphy
                  MORPH [FORM ¬(π* +:e ¬{s,t} π*)] ]

3.4 Limits of Pure FA Morphology

Finite-state morphology has been criticized (i) for the strict finite-stateness of its handling of morphotactics (Sproat 1992, 43-66); (ii) for making little or no use of the notion of inflectional paradigms and inheritance relations between morphological classes (Cahill 1990); and (iii) for its strict separation of phonology from morphology, i.e., standard two-level rules can only be sensitive to phonological contexts (including word and morpheme boundaries), and apply to all forms where these contexts hold.

In fact, allomorphic variation is often "fossilized", having outlived its original phonological motivation. Therefore some allomorphic rules are restricted in nonphonological ways, applying only to certain word classes, so that some stems admit idiosyncratic exceptions with respect to the applicability of rules (see Bear 1988, Emele 1988, Trost 1991). To overcome the first difficulty, a number of researchers have suggested augmenting FA with "word grammars", expressed in terms of feature formalisms like PATR-II (Bear 1986) or HPSG (Trost 1990). Our proposal follows theirs, improving only on the degree to which morphotactics may be integrated with allomorphy. See Krieger and Nerbonne 1992 for proposals for treating morphotactics in typed feature systems.

(9) π* denotes the Kleene closure over the alphabet π, and ¬A the complement of A with respect to π.
We illustrate how the FDL approach overcomes the last two difficulties in a concrete case of nonphonologically motivated allomorphy. German epenthesizes schwa (<e>) at morph boundaries, but in a way which is sensitive to morphological environments, and which thus behaves differently in adjectives and verbs. The data in (22) demonstrate some of these differences, comparing epenthesis in phonologically very similar forms.

(22)  free, adj super     frei+st      freiest
      free, v 2s pres     be+frei+st   befreist
      woo, v 2s pres      frei+st      freist

While the rule stated in (14) (and reformulated in (19)-(21)) treats the verbal epenthesis correctly, it is not appropriate for adjectives, for it does not allow epenthesis to take place after vowels. We thus have to state different rules for different morphological categories.

The original two-level formalism could only solve this problem by introducing arbitrary diacritic markers. The most general solution is due to Trost 1991, who associated two-level rules with arbitrary filters in the form of feature structures. These feature structures are unified with the underlying morphs in order to check the context restrictions, and thus serve as an interface to information provided in the feature-based lexicon. But Trost's two-level rules are a completely different data structure from the feature structures decorating transitions in FA.

We attack the problem head on by restricting allomorphic constraints to specific classes of lexical entries, making use of the inheritance techniques available in structured lexicons. The cases of epenthesis in (22) are handled by defining not only the rule in (19-21) for the verbal cases, but also a second, quite similar rule for the more liberal epenthesis in adjectives.(10) This frees the rule from operating on a strictly phonological basis, making it subject to lexical conditioning. This is illustrated in Figure 2.

[Figure 2 diagram: a type hierarchy rooted in ⊤, with the subtypes allomorphy (above epenth-1, epenth-2, epenth-3) and word (above Adj, Verb, ...)]

Figure 2: Nonphonological conditioning of allomorphy is achieved by requiring that only some word classes obey the relevant constraints. Adjectives inherit from two of the epenthesis constraints in the text, and verbs (without i/e umlaut) satisfy all three. This very natural means of restricting allomorphic variation to selected, nonphonologically motivated classes is only made available through the expression of allomorphy in the type hierarchy of the FDL. (The types denote the initial states of FA, as explained in Section 2.)

But note that this example demonstrates not only how feature-based allomorphy can overcome the strictly phonological base of two-level morphology (criticism (iii) above), but also that it makes use of the inheritance structure in modern lexicons as well.

4 Conclusions

In this section we examine our proposal vis-à-vis others, suggest future directions, and provide a summary.

4.1 Comparison to other Work

Computational morphology is a large and active field, as recent textbooks (Sproat 1992 and Ritchie et al. 1992) testify. This impedes the identification of particularly important predecessors, among whom nonetheless three stand out. First, Trost 1991's use of two-level morphology in combination with feature-based filters was an important impetus. Second, researchers at Edinburgh (Calder 1988, Bird 1992) first suggested using FDLs in phonological and morphological description, and Bird 1992 suggests describing FA in FDL (without showing how they might be so characterized, however; in particular, providing no FDL definition of what it means for an FA to accept a string).

(10) In fact, the rules could be specified so that the verbal rule inherited from the more general adjectival rule, but pursuing this here would take us somewhat afield.
Third, Cahill 1990 posed the critical question, viz., how is one to link the work in lexical inheritance (on morphotactics) with that in finite-state morphology (on allomorphy)? This earlier work retained a separation of formalisms for allomorphy (MOLUSC) and morphotactics (DATR). Cahill 1993 goes on to experiment with assuming all of the allomorphic specification into the lexicon, in just the spirit proposed here.(11) Our work differs from this later work (i) in that we use FDL while she uses DATR, which are similar but not identical (cf. Nerbonne 1992); and (ii) in that we have been concerned with showing how the standard model of allomorphy (FA) may be assumed into the inheritance hierarchy of the lexicon, while Cahill has introduced syllable-based models.

4.2 Future Work

At present only the minimal examples in Section 2 above have actually been implemented, and we are interested in attempting more. Second, a compilation into genuine finite-state models could be useful. Third, we are concerned that, in restricting ourselves thus far to acceptors over two-level alphabets, we may incur parsing problems which a more direct approach through finite-state transducers can avoid (Sproat 1992, p.143). See Ritchie et al. 1992, 19-33, for an approach to parsing using finite-state acceptors, however.

4.3 Summary

This paper proposes a treatment of allomorphy formulated and processable in typed feature logic. There are several reasons for developing this approach to morphology. First, we prefer the GENERALITY of a system in which linguistic knowledge of all sorts may be expressed, at least as long as we do not sacrifice processing efficiency. This is an overarching goal of HPSG (Pollard and Sag 1987), in which syntax and semantics are described in a feature formalism, and in which strides toward descriptions of morphotactics (Krieger 1993a, Riehemann 1993, Gerdemann 1993) and phonology (Bird 1992) have been taken. This work is the first to show how allomorphy may be described here. The proposal here would allow one to describe segments using features as well, but we have not explored this opportunity for reasons of space.

Second, the uniform formalism allows the exact and more transparent specification of dependencies which span modules of otherwise different formalisms. Obviously interesting cases for the extension of feature-based descriptions to other areas are those involving stress and intonation, where phonological properties can determine the meaning (via focus) and even syntactic well-formedness (e.g., of deviant word orders). Similarly, allomorphic variants covary with the style register they belong to: the German dative singular in -e, dem Kinde, belongs to a formal register.

Third, and more specifically, the feature-based treatment of allomorphy overcomes the bifurcation of morphology into lexical aspects, which have mostly been treated in lexical inheritance schemes, and phonological aspects, which are normally treated in finite-state morphology. This division has long been recognized as problematic. One symptom of the problem is seen in the treatment of nonphonologically conditioned allomorphy, such as German umlaut, which Trost 1990 correctly criticizes as ad hoc in finite-state morphology, because the latter deals only in phonological (or graphemic) categories. We illustrated the benefits of the uniform formalism above, where we showed how a similar nonphonologically motivated alternation (German schwa epenthesis) is treated in a feature-based description, which may deal in several levels of linguistic description simultaneously.

(11) Cf. Reinhard and Gibbon 1991 for another sort of DATR-based allomorphy.

References

Backofen, R., L. Euler, and G. Görz. 1990. Towards the Integration of Functions, Relations and Types in an AI Programming Language. In Proc. of GWAI-90. Berlin: Springer.
Bear, J. 1986. A Morphological Recognizer with Syntactic and Phonological Rules. In Proc. of COLING, 272-276.

Bear, J. 1988. Morphology with Two-Level Rules and Negative Rule Features. In Proc. of COLING, 28-31.

Bird, S. 1992. Finite-State Phonology in HPSG. In Proc. of COLING, 74-80.

Cahill, L. J. 1990. Syllable-Based Morphology. In Proc. of COLING, 48-53.

Cahill, L. J. 1993. Morphonology in the Lexicon. In Proc. of the 7th European ACL, 87-96.

Calder, J. 1988. Paradigmatic Morphology. In Proc. of the 5th European ACL.

Carpenter, B. 1992. The Logic of Typed Feature Structures. No. 32 in Tracts in Theoretical Computer Science. Cambridge: Cambridge University Press.

Dalrymple, M., R. Kaplan, L. Karttunen, K. Koskenniemi, S. Shaio, and M. Wescoat. 1987. Tools for Morphological Analysis. Technical Report CSLI-1987-108, CSLI, Stanford University.

Dörre, J., and A. Eisele. 1989. Determining Consistency of Feature Terms with Distributed Disjunctions. In Proc. of GWAI-89 (15th German Workshop on AI), ed. D. Metzing, 270-279. Berlin: Springer-Verlag.

Emele, M. 1988. Überlegungen zu einer Two-Level Morphologie für das Deutsche. In Proc. der 4. Österreichischen Artificial-Intelligence-Tagung und des WWWS, ed. H. Trost, 156-163. Berlin: Springer. Informatik-Fachberichte 176.

Gerdemann, D. 1993. Complement Inheritance as Subcategorization Inheritance. In German Grammar in HPSG, ed. J. Nerbonne, K. Netter, and C. Pollard. Stanford: CSLI.

Hopcroft, J. E., and J. D. Ullman. 1979. Introduction to Automata Theory, Languages, and Computation. Reading, Massachusetts: Addison-Wesley.

Kaplan, R., and J. Maxwell. 1988. An Algorithm for Functional Uncertainty. In Proc. of Coling 1988, 303-305. Budapest.

Koskenniemi, K. 1983. Two-Level Model for Morphological Analysis. In Proc. of IJCAI, 683-685.

Krieger, H.-U. 1993a. Derivation Without Lexical Rules. In Constraint Propagation, Linguistic Description and Computation, ed. R. Johnson, M. Rosner, and C. Rupp. Academic Press.

Krieger, H.-U. 1993b. Representing and Processing Finite Automata Within Typed Feature Formalisms. Technical report, Deutsches Forschungsinstitut für Künstliche Intelligenz, Saarbrücken, Germany.

Krieger, H.-U., and J. Nerbonne. 1992. Feature-Based Inheritance Networks for Computational Lexicons. In Default Inheritance within Unification-Based Approaches to the Lexicon, ed. T. Briscoe, A. Copestake, and V. de Paiva. Cambridge: Cambridge University Press. Also DFKI Research Report RR-91-31.

Matthews, P. 1972. Inflectional Morphology: A Theoretical Study Based on Aspects of Latin Verb Conjugation. Cambridge, England: Cambridge University Press.

Nerbonne, J. 1992. Feature-Based Lexicons: An Example and a Comparison to DATR. In Beiträge des ASL-Lexikon-Workshops, Wandlitz (bei Berlin), ed. D. Reimann, 36-49. Also DFKI RR-92-04.

Pollard, C., and I. Sag. 1987. Information-Based Syntax and Semantics, Vol. I. Stanford: CSLI.

Reinhard, S., and D. Gibbon. 1991. Prosodic Inheritance and Morphological Generalizations. In Proc. of the 6th European ACL, 131-137.

Riehemann, S. 1993. Word Formation in Lexical Type Hierarchies. A Case Study of bar-Adjectives in German. Master's thesis, Eberhard-Karls-Universität Tübingen, Seminar für Sprachwissenschaft.

Ritchie, G. D., G. J. Russell, A. W. Black, and S. G. Pulman. 1992. Computational Morphology: Practical Mechanisms for the English Lexicon. Cambridge: MIT Press.

Smolka, G. 1988. A Feature Logic with Subsorts. Technical Report 33, WT LILOG, IBM Germany.

Sproat, R. 1992. Morphology and Computation. Cambridge: MIT Press.

Trost, H. 1990. The Application of Two-Level Morphology to Non-concatenative German Morphology. In Proc. of COLING, 371-376.

Trost, H. 1991. X2MORF: A Morphological Component Based on Augmented Two-Level Morphology. Technical Report RR-91-04, DFKI, Saarbrücken, Germany.

Tzoukermann, E., and M. Libermann. 1990. A Finite-State Morphological Processor for Spanish. In Proc. of COLING, Vol. 3.
ALIGNING SENTENCES IN BILINGUAL CORPORA USING LEXICAL INFORMATION

Stanley F. Chen*
Aiken Computation Laboratory
Division of Applied Sciences
Harvard University
Cambridge, MA 02138
Internet: [email protected]

Abstract

In this paper, we describe a fast algorithm for aligning sentences with their translations in a bilingual corpus. Existing efficient algorithms ignore word identities and only consider sentence length (Brown et al., 1991b; Gale and Church, 1991). Our algorithm constructs a simple statistical word-to-word translation model on the fly during alignment. We find the alignment that maximizes the probability of generating the corpus with this translation model. We have achieved an error rate of approximately 0.4% on Canadian Hansard data, which is a significant improvement over previous results. The algorithm is language independent.

1 Introduction

In this paper, we describe an algorithm for aligning sentences with their translations in a bilingual corpus. Aligned bilingual corpora have proved useful in many tasks, including machine translation (Brown et al., 1990; Sadler, 1989), sense disambiguation (Brown et al., 1991a; Dagan et al., 1991; Gale et al., 1992), and bilingual lexicography (Klavans and Tzoukermann, 1990; Warwick and Russell, 1990).

The task is difficult because sentences frequently do not align one-to-one. Sometimes sentences align many-to-one, and often there are deletions in one of the supposedly parallel corpora of a bilingual corpus. These deletions can be substantial; in the Canadian Hansard corpus, there are many deletions of several thousand sentences and one deletion of over 90,000 sentences.

*The author wishes to thank Peter Brown, Stephen DellaPietra, Vincent DellaPietra, and Robert Mercer for their suggestions, support, and relentless taunting. The author also wishes to thank Jan Hajic and Meredith Goldsmith, as well as the aforementioned, for checking the alignments produced by the implementation.
Previous work includes (Brown et al., 1991b) and (Gale and Church, 1991). In Brown, alignment is based solely on the number of words in each sentence; the actual identities of words are ignored. The general idea is that the closer in length two sentences are, the more likely they align. To perform the search for the best alignment, dynamic programming (Bellman, 1957) is used. Because dynamic programming requires time quadratic in the length of the text aligned, it is not practical to align a large corpus as a single unit. The computation required is drastically reduced if the bilingual corpus can be subdivided into smaller chunks. Brown uses anchors to perform this subdivision. An anchor is a piece of text likely to be present at the same location in both of the parallel corpora of a bilingual corpus. Dynamic programming is used to align anchors, and then dynamic programming is used again to align the text between anchors.

The Gale algorithm is similar to the Brown algorithm except that, instead of basing alignment on the number of words in sentences, alignment is based on the number of characters in sentences. Dynamic programming is also used to search for the best alignment. Large corpora are assumed to be already subdivided into smaller chunks.

While these algorithms have achieved remarkably good performance, there is definite room for improvement. These algorithms are not robust with respect to non-literal translations and small deletions; they can easily misalign small passages because they ignore word identities. For example, the type of passage depicted in Figure 1 occurs in the Hansard corpus. With length-based alignment algorithms, these passages may well be misaligned by an even number of sentences if one of the corpora contains a deletion.

    Mr. McInnis?     M. McInnis?
    Yes.             Oui.
    Mr. Saunders?    M. Saunders?
    No.              Non.
    Mr. Cossitt?     M. Cossitt?
    Yes.             Oui.
    ...

Figure 1: A Bilingual Corpus Fragment
In addition, with length-based algorithms it is difficult to automatically recover from large deletions. In Brown, anchors are used to deal with this issue, but the selection of anchors requires manual inspection of the corpus to be aligned. Gale does not discuss this issue.

Alignment algorithms that use lexical information offer a potential for higher accuracy. Previous work includes (Kay, 1991) and (Catizone et al., 1989). However, to date lexically-based algorithms have not proved efficient enough to be suitable for large corpora.

In this paper, we describe a fast algorithm for sentence alignment that uses lexical information. The algorithm constructs a simple statistical word-to-word translation model on the fly during sentence alignment. We find the alignment that maximizes the probability of generating the corpus with this translation model. The search strategy used is dynamic programming with thresholding. Because of thresholding, the search is linear in the length of the corpus, so that a corpus need not be subdivided into smaller chunks. The search strategy is robust with respect to large deletions; lexical information allows us to confidently identify the beginning and end of deletions.

2 The Alignment Model

2.1 The Alignment Framework

We use an example to introduce our framework for alignment. Consider the bilingual corpus (𝓔, 𝓕) displayed in Figure 2. Assume that we have constructed a model for English-to-French translation, i.e., for all E and F_p we have an estimate for P(F_p | E), the probability that the English sentence E translates to the French passage F_p. Then we can assign a probability to the English corpus 𝓔 translating to the French corpus 𝓕 with a particular alignment. For example, consider the alignment 𝓐1 where sentence E1 corresponds to sentence F1 and sentence E2 corresponds to sentences F2 and F3. We get

P(𝓕, 𝓐1 | 𝓔) = P(F1 | E1) P(F2, F3 | E2),

assuming that successive sentences translate independently of each other. This value should be relatively large, since F1 is a good translation of E1 and (F2, F3) is a good translation of E2. Another possible alignment 𝓐2 is one where E1 maps to nothing and E2 maps to F1, F2, and F3. We get

P(𝓕, 𝓐2 | 𝓔) = P(ε | E1) P(F1, F2, F3 | E2).

This value should be fairly low, since the alignment does not map the English sentences to their translations. Hence, if our translation model is accurate, we will have
We get P(-~',.4~l,f:) = P(FIIE1)P(F~., FsIE2), assuming that successive sentences translate inde- pendently of each other. This value should be rel- atively large, since F1 is a good translation of E1 and (F2, F3) is a good translation of E2. Another possible alignment .42 is one where E1 maps to nothing and E2 maps to F1, F2, and F3. We get P(.F',.42]£) = P(elE1)P(F~, F2, F3IE2) This value should be fairly low, since the align- ment does not map the English sentences to their translations. Hence, if our translation model is accurate we will have P(~',`41I,~) >> P(.r,.421,f:) In general, the more sentences that are mapped to their translations in an alignment .4, the higher the value of P(~,.AIE). We can extend this idea to produce an alignment algorithm given a trans- lation model. In particular, we take the alignment of a corpus (~, ~) to be the alignment ,4 that max- imizes P(~',`41E). The more accurate the transla- tion model, the more accurate the resulting align- ment will be. However, because the parameters are all of the form P(FplE ) where E is a sentence, the above framework is not amenable to the situation where a French sentence corresponds to no English sen- tences. Hence, we use a slightly different frame- work. We view a bilingual corpus as a sequence of sentence beads (Brown et al., 1991b), where a sentence bead corresponds to an irreducible group of sentences that align with each other. For exam- ple, the correct alignment of the bilingual corpus in Figure 2 consists of the sentence bead [El; F1] followed by the sentence bead [E2; ];'2, F3]. We can represent an alignment `4 of a corpus as a se- quence of sentence beads ([Epl; Fpl], [Ep2; F~],...), where the E~ and F~ can be zero, one, or more sentences long. Under this paradigm, instead of expressing the translation model as a conditional distribution 10 English (£) El That is what the consumers are interested in and that is what the party is interested in. E2 Hon. 
members opposite scoff at the freeze suggested by this party; to them it is laughable.

French ($\mathcal{F}$)
F1: Voilà ce qui intéresse le consommateur et voilà ce que intéresse notre parti.
F2: Les députés d'en face se moquent du gel que a proposé notre parti.
F3: Pour eux, c'est une mesure risible.

Figure 2: A Bilingual Corpus

$P(F_p \mid E)$, we express the translation model as a distribution $P([E_p; F_p])$ over sentence beads. The alignment problem becomes discovering the alignment $\mathcal{A}$ that maximizes the joint distribution $P(\mathcal{E}, \mathcal{F}, \mathcal{A})$. Assuming that successive sentence beads are generated independently, we get

$$P(\mathcal{E}, \mathcal{F}, \mathcal{A}) = p(L) \prod_{k=1}^{L} P([E_p^k; F_p^k])$$

where $\mathcal{A} = ([E_p^1; F_p^1], \ldots, [E_p^L; F_p^L])$ is consistent with $\mathcal{E}$ and $\mathcal{F}$, and where $p(L)$ is the probability that a corpus contains $L$ sentence beads.

2.2 The Basic Translation Model

For our translation model, we desire the simplest model that incorporates lexical information effectively. We describe our model in terms of a series of increasingly complex models. In this section, we only consider the generation of sentence beads containing a single English sentence $E = e_1 \cdots e_n$ and a single French sentence $F = f_1 \cdots f_m$.

As a starting point, consider a model that assumes that all individual words are independent. We take

$$P([E; F]) = p(n)\, p(m) \prod_{i=1}^{n} p(e_i) \prod_{j=1}^{m} p(f_j)$$

where $p(n)$ is the probability that an English sentence is $n$ words long, $p(m)$ is the probability that a French sentence is $m$ words long, $p(e_i)$ is the frequency of the word $e_i$ in English, and $p(f_j)$ is the frequency of the word $f_j$ in French.

To capture the dependence between individual English words and individual French words, we generate English and French words in pairs in addition to singly. For two words $e$ and $f$ that are mutual translations, instead of having the two terms $p(e)$ and $p(f)$ in the above equation we would like a single term $p(e, f)$ that is substantially larger than $p(e)p(f)$. To this end, we introduce the concept of a word bead.
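The word-independence baseline can be written down directly; the length and unigram distributions in this sketch are illustrative inputs, not trained values:

```python
import math

def indep_log_prob(e_words, f_words, p_len_e, p_len_f, p_e, p_f):
    # log P([E;F]) when every word is generated independently:
    # p(n) p(m) * prod p(e_i) * prod p(f_j), computed in log space.
    lp = math.log(p_len_e[len(e_words)]) + math.log(p_len_f[len(f_words)])
    lp += sum(math.log(p_e[w]) for w in e_words)
    lp += sum(math.log(p_f[w]) for w in f_words)
    return lp

p_len = {1: 0.2, 2: 0.3, 3: 0.5}               # illustrative length model
p_e = {"john": 0.01, "ate": 0.02}              # illustrative unigram models
p_f = {"jean": 0.01, "a": 0.2, "mange": 0.02}
lp = indep_log_prob(["john", "ate"], ["jean", "a", "mange"],
                    p_len, p_len, p_e, p_f)
```

The weakness of this baseline, as the text notes, is that nothing ties `john` to `jean`: the score is the same however the words are paired.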
A word bead is either a single English word, a single French word, or a single English word and a single French word. We refer to these as 1:0, 0:1, and 1:1 word beads, respectively. Instead of generating a pair of sentences word by word, we generate sentences bead by bead, using the 1:1 word beads to capture the dependence between English and French words. As a first cut, consider the following "model":

$$P^*(B) = p(l) \prod_{i=1}^{l} p(b_i)$$

where $B = \{b_1, \ldots, b_l\}$ is a multiset of word beads, $p(l)$ is the probability that an English sentence and a French sentence contain $l$ word beads, and $p(b_i)$ denotes the frequency of the word bead $b_i$. This simple model captures lexical dependencies between English and French sentences.

However, this "model" does not satisfy the constraint that $\sum_B P^*(B) = 1$; because beadings $B$ are unordered multisets, the sum is substantially less than one. To force this model to sum to one, we simply normalize by a constant so that we retain the qualitative aspects of the model. We take

$$P(B) = \frac{p(l)}{N_l} \prod_{i=1}^{l} p(b_i).$$

While a beading $B$ describes an unordered multiset of English and French words, sentences are in actuality ordered sequences of words. We need to model word ordering, and ideally the probability of a sentence bead should depend on the ordering of its component words. For example, the sentence John ate Fido should have a higher probability of aligning with the sentence Jean a mangé Fido than with the sentence Fido a mangé Jean. However, modeling word order under translation is notoriously difficult (Brown et al., 1993), and it is unclear how much improvement in accuracy a good model of word order would provide. Hence, we model word order using a uniform distribution; we take

$$P([E; F], B) = \frac{p(l)}{N_l\, n!\, m!} \prod_{i=1}^{l} p(b_i)$$

which gives us

$$P([E; F]) = \sum_B \frac{p(l(B))}{N_{l(B)}\, n!\, m!} \prod_{i=1}^{l(B)} p(b_i)$$

where $B$ ranges over beadings consistent with $[E; F]$ and $l(B)$ denotes the number of beads in $B$.
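The per-beading term above can be sketched directly; the bead frequencies, length model, and normalizers $N_l$ are supplied here as illustrative inputs (computing the $N_l$ exactly is a separate combinatorial problem):

```python
import math

def beading_log_score(beads, p_bead, p_len, n, m, norm):
    # log P([E;F], B) = log[ p(l) / (N_l * n! * m!) * prod p(b_i) ],
    # with the normalizers N_l supplied in `norm`.
    l = len(beads)
    lp = math.log(p_len[l]) - math.log(norm[l])
    lp -= math.log(math.factorial(n)) + math.log(math.factorial(m))
    lp += sum(math.log(p_bead[b]) for b in beads)
    return lp
```

Summing (or maximizing) this quantity over beadings $B$ consistent with a sentence pair gives the sentence-bead probability.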
Recall that $n$ is the length of the English sentence and $m$ is the length of the French sentence.

2.3 The Complete Translation Model

In this section, we extend the translation model to other types of sentence beads. For simplicity, we only consider sentence beads consisting of one English sentence, one French sentence, one English sentence and one French sentence, two English sentences and one French sentence, and one English sentence and two French sentences. We refer to these as 1:0, 0:1, 1:1, 2:1, and 1:2 sentence beads, respectively.

For 1:1 sentence beads, we take

$$P([E; F]) = p_{1:1} \sum_B \frac{p_{1:1}(l)}{N_{l,1:1}\, n!\, m!} \prod_{i=1}^{l(B)} p(b_i)$$

where $B$ ranges over beadings consistent with $[E; F]$ and where $p_{1:1}$ is the probability of generating a 1:1 sentence bead.

To model 1:0 sentence beads, we use a similar equation except that we only use 1:0 word beads, and we do not need to sum over beadings since there is only one word beading consistent with a 1:0 sentence bead. We take

$$P([E]) = p_{1:0}\, \frac{p_{1:0}(l)}{N_{l,1:0}\, n!} \prod_{i=1}^{l} p(e_i).$$

Notice that $n = l$. We use an analogous equation for 0:1 sentence beads. For 2:1 sentence beads, we take

$$P([E_1, E_2; F]) = p_{2:1} \sum_B \frac{p_{2:1}(l)}{N_{l,2:1}\, n_1!\, n_2!\, m!} \prod_{i=1}^{l(B)} p(b_i)$$

where the sum ranges over beadings $B$ consistent with the sentence bead. We use an analogous equation for 1:2 sentence beads.

3 Implementation

Due to space limitations, we cannot describe the implementation in full detail. We present its most significant characteristics in this section; for a more complete discussion please refer to (Chen, 1993).

3.1 Parameterization

We chose to model sentence length using a Poisson distribution, i.e., we took

$$p_{1:0}(l) = \frac{\lambda_{1:0}^l}{l!}\, e^{-\lambda_{1:0}}$$

for some $\lambda_{1:0}$, and analogously for the other types of sentence beads. At first, we tried to estimate each $\lambda$ parameter independently, but we found that after training one or two $\lambda$ would be unnaturally small or large in order to specifically model very short or very long sentences.
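The Poisson length model is straightforward to sketch:

```python
import math

def bead_length_prob(l, lam):
    # Poisson probability that a sentence bead contains l word beads.
    return lam ** l * math.exp(-lam) / math.factorial(l)
```

With a mean of, say, 4 word beads per bead, the probabilities decay quickly for very short and very long sentences, which is what made untied per-type estimates unstable.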
To prevent this phenomenon, we tied the $\lambda$ values for the different types of sentence beads together. We took

$$\lambda_{1:0} = \lambda_{0:1} = \frac{\lambda_{1:1}}{2} = \frac{\lambda_{2:1}}{3} = \frac{\lambda_{1:2}}{3} \qquad (1)$$

To model the parameters $p(L)$ representing the probability that the bilingual corpus is $L$ sentence beads in length, we assumed a uniform distribution.¹ This allows us to ignore this term, since length will not influence the probability of an alignment. We felt this was reasonable because it is unclear what a priori information we have on the length of a corpus.

In modeling the frequency of word beads, notice that there are five distinct distributions we need to model: the distribution of 1:0 word beads in 1:0 sentence beads, the distribution of 0:1 word beads in 0:1 sentence beads, and the distributions of all word beads in 1:1, 2:1, and 1:2 sentence beads. To reduce the number of independent parameters we need to estimate, we tied these distributions together. We assumed that the distributions of word beads in 1:1, 2:1, and 1:2 sentence beads are identical. We took the distributions of word beads in 1:0 and 0:1 sentence beads to be identical as well, except restricted to the relevant subset of word beads and normalized appropriately, i.e., we took

$$p_e(e) = \frac{p_b(e)}{\sum_{e' \in B_e} p_b(e')} \quad \text{for } e \in B_e$$

and

$$p_f(f) = \frac{p_b(f)}{\sum_{f' \in B_f} p_b(f')} \quad \text{for } f \in B_f$$

where $p_e$ refers to the distribution of word beads in 1:0 sentence beads, $p_f$ refers to the distribution of word beads in 0:1 sentence beads, $p_b$ refers to the distribution of word beads in 1:1, 2:1, and 1:2 sentence beads, and $B_e$ and $B_f$ refer to the sets of 1:0 and 0:1 word beads in the vocabulary, respectively.

¹ To be precise, we assumed a uniform distribution over some arbitrarily large finite range, as one cannot have a uniform distribution over a countably infinite set.

3.2 Evaluating the Probability of a Sentence Bead

The probability of generating a 0:1 or 1:0 sentence bead can be calculated efficiently using the equation given in Section 2.3.
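The restriction-and-renormalization step of the tying scheme in Section 3.1 can be sketched as follows; the bead distribution here is a toy example:

```python
def restrict_and_normalize(p_b, subset):
    # Restrict the shared word-bead distribution p_b to a subset of
    # beads (e.g. only the 1:0 beads) and renormalize so that the
    # restricted distribution sums to one.
    z = sum(p_b[b] for b in subset)
    return {b: p_b[b] / z for b in subset}
```

Applying it with the set of 1:0 beads yields $p_e$, and with the set of 0:1 beads yields $p_f$.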
To evaluate the probabilities of the other sentence beads requires a sum over an exponential number of word beadings. We make the gross approximation that this sum is roughly equal to the maximum term in the sum. For example, with 1:1 sentence beads we have

$$P([E; F]) = p_{1:1} \sum_B \frac{p_{1:1}(l)}{N_{l,1:1}\, n!\, m!} \prod_{i=1}^{l(B)} p(b_i) \approx p_{1:1} \max_B \left\{ \frac{p_{1:1}(l)}{N_{l,1:1}\, n!\, m!} \prod_{i=1}^{l(B)} p(b_i) \right\}$$

Even with this approximation, the calculation of $P([E; F])$ is still intractable since it requires a search for the most probable beading. We use a greedy heuristic to perform this search; we are not guaranteed to find the most probable beading. We begin with every word in its own bead. We then find the 0:1 bead and 1:0 bead that, when replaced with a 1:1 word bead, results in the greatest increase in probability. We repeat this process until we can no longer find a 0:1 and 1:0 bead pair that when replaced would increase the probability of the beading.

3.3 Parameter Estimation

We estimate parameters by using a variation of the Viterbi version of the expectation-maximization (EM) algorithm (Dempster et al., 1977). The Viterbi version is used to reduce computational complexity. We use an incremental variation of the algorithm to reduce the number of passes through the corpus required.

In the EM algorithm, an expectation phase, where counts on the corpus are taken using the current estimates of the parameters, is alternated with a maximization phase, where parameters are re-estimated based on the counts just taken. Improved parameters lead to improved counts, which lead to even more accurate parameters. In the incremental version of the EM algorithm we use, instead of re-estimating parameters after each complete pass through the corpus, we re-estimate parameters after each sentence. By re-estimating parameters continually as we take counts on the corpus, we can align later sections of the corpus more reliably based on alignments of earlier sections.
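The greedy beading search of Section 3.2 can be sketched as follows; for simplicity this version scores only the bead-frequency terms and ignores the common $p(l)/(N_l\, n!\, m!)$ factors, and the bead table is illustrative:

```python
import math

def greedy_beading(e_words, f_words, p_bead):
    free_e = list(e_words)
    free_f = list(f_words)
    # Start with every word in its own 1:0 or 0:1 bead.
    beads = [(e, None) for e in free_e] + [(None, f) for f in free_f]
    while True:
        best_gain, best_pair = 0.0, None
        for e in free_e:
            for f in free_f:
                if (e, f) not in p_bead:
                    continue
                # Log-prob gain of replacing beads (e, None) and (None, f)
                # with the single 1:1 bead (e, f).
                gain = (math.log(p_bead[(e, f)])
                        - math.log(p_bead[(e, None)])
                        - math.log(p_bead[(None, f)]))
                if gain > best_gain:
                    best_gain, best_pair = gain, (e, f)
        if best_pair is None:
            return beads          # no merge increases the probability
        e, f = best_pair
        free_e.remove(e)
        free_f.remove(f)
        beads.remove((e, None))
        beads.remove((None, f))
        beads.append((e, f))
```

As the text notes, this hill-climbing search is not guaranteed to find the most probable beading, only a locally optimal one.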
We can align a corpus with only a single pass, simultaneously producing alignments and updating the model as we proceed. More specifically, we initialize parameters by taking counts on a small body of previously aligned data. To estimate word bead frequencies, we maintain a count $c(b)$ for each word bead that records the number of times the word bead $b$ occurs in the most probable word beading of a sentence bead. We take

$$p_b(b) = \frac{c(b)}{\sum_{b'} c(b')}$$

We initialize the counts $c(b)$ to 1 for 0:1 and 1:0 word beads, so that these beads can occur in beadings with nonzero probability. To enable 1:1 word beads to occur in beadings with nonzero probability, we initialize their counts to a small value whenever we see the corresponding 0:1 and 1:0 word beads occur in the most probable word beading of a sentence bead.

To estimate the sentence length parameters $\lambda$, we divide the number of word beads in the most probable beading of the initial training data by the total number of sentences. This gives us an estimate for $\lambda_{1:0}$, and the other $\lambda$ parameters can be calculated using equation (1).

We have found that one hundred sentence pairs are sufficient to train the model to a state where it can align adequately. At this point, we can process unaligned text and use the alignments we produce to further train the model. We update parameters based on the newly aligned text in the same way that we update parameters based on the initial training data.² To align a corpus in a single pass, the model must be fairly accurate before starting or else the beginning of the corpus will be poorly aligned. Hence, after bootstrapping the model on one hundred sentence pairs, we train the algorithm on a chunk of the unaligned target bilingual corpus, typically 20,000 sentence pairs, before making one pass through the entire corpus to produce the actual alignment.
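The incremental count-based re-estimation can be sketched as a running table of bead counts that is updated after each sentence bead, so later sentences are scored with the refreshed estimates:

```python
from collections import defaultdict

class BeadModel:
    def __init__(self):
        self.counts = defaultdict(float)
        self.total = 0.0

    def p(self, bead):
        # Relative-frequency estimate p_b(b) = c(b) / sum_b' c(b').
        return self.counts[bead] / self.total if self.total else 0.0

    def observe(self, beading, weight=1.0):
        # Take counts on the most probable beading of one sentence
        # bead; the updated counts are used immediately afterwards.
        for bead in beading:
            self.counts[bead] += weight
            self.total += weight

model = BeadModel()
model.observe([("john", "jean"), ("ate", "mange")])
model.observe([("ate", "mange")])
```

Smoothing (the count-1 initialization for 1:0 and 0:1 beads and the small initial counts for 1:1 beads) is omitted here for brevity.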
3.4 Search

It is natural to use dynamic programming to search for the best alignment; one can find the most probable of an exponential number of alignments using quadratic time and memory. Alignment can be viewed as a "shortest distance" problem, where the "distance" associated with a sentence bead is the negative logarithm of its probability. The probability of an alignment is inversely related to the sum of the distances associated with its component sentence beads.

Given the size of existing bilingual corpora and the computation necessary to evaluate the probability of a sentence bead, a quadratic algorithm is still too profligate. However, most alignments are one-to-one, so we can reap great benefits through intelligent thresholding. By considering only a subset of all possible alignments, we reduce the computation to a linear one.

Dynamic programming consists of incrementally finding the best alignment of longer and longer prefixes of the bilingual corpus. We prune all alignment prefixes that have a substantially lower probability than the most probable alignment prefix of the same length.

² In theory, one cannot decide whether a particular sentence bead belongs to the best alignment of a corpus until the whole corpus has been processed. In practice, some partial alignments will have much higher probabilities than all other alignments, and it is desirable to train on these partial alignments to aid in aligning later sections of the corpus. To decide when it is reasonably safe to train on a particular sentence bead, we take advantage of the thresholding described in Section 3.4, where improbable partial alignments are discarded. At a given point in time in aligning a corpus, all undiscarded partial alignments will have some sentence beads in common. When a sentence bead is common to all active partial alignments, we consider it to be safe to train on.
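A sketch of the thresholded dynamic program: states are prefix pairs (i, j), expanded wave by wave, and states falling far below the best state of each wave are pruned. The bead-scoring function, bead-type inventory, and beam width here are illustrative, not the paper's exact settings:

```python
def align(e_sents, f_sents, bead_log_prob, beam=10.0):
    # Bead types: (number of English sentences, number of French sentences).
    bead_types = [(1, 1), (1, 0), (0, 1), (2, 1), (1, 2)]
    best = {(0, 0): (0.0, None)}      # state (i, j) -> (log prob, backpointer)
    frontier = [(0, 0)]
    goal = (len(e_sents), len(f_sents))
    while frontier:
        new = set()
        for (i, j) in frontier:
            score = best[(i, j)][0]
            for (de, df) in bead_types:
                if i + de > goal[0] or j + df > goal[1]:
                    continue
                bead = (tuple(e_sents[i:i + de]), tuple(f_sents[j:j + df]))
                s = score + bead_log_prob(bead)
                state = (i + de, j + df)
                if state not in best or s > best[state][0]:
                    best[state] = (s, (i, j, bead))
                    new.add(state)
        if not new:
            break
        top = max(best[s][0] for s in new)
        # Thresholding: keep only prefixes within `beam` of the best one.
        frontier = [s for s in new if best[s][0] >= top - beam]
    # Read the best alignment back through the backpointers.
    path, state = [], goal
    while best[state][1] is not None:
        i, j, bead = best[state][1]
        path.append(bead)
        state = (i, j)
    return list(reversed(path))
```

With an accurate lexical model, correct prefixes dominate and the surviving beam is narrow, which is what makes the search effectively linear.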
3.5 Deletion Identification

Deletions are automatically handled within the standard dynamic programming framework. However, because of thresholding, we must handle large deletions using a separate mechanism. Because lexical information is used, correct alignments receive vastly greater probabilities than incorrect alignments. Consequently, thresholding is generally very aggressive and our search beam in the dynamic programming array is narrow. However, when there is a large deletion in one of the parallel corpora, consistent lexical correspondences disappear, so no one alignment has a much higher probability than the others and our search beam becomes wide. When the search beam reaches a certain width, we take this to indicate the beginning of a deletion.

To identify the end of a deletion, we search linearly through both corpora simultaneously. All occurrences of words whose frequency is below a certain value are recorded in a hash table. Whenever we notice the occurrence of a rare word in one corpus and its translation in the other, we take this as a candidate location for the end of the deletion. For each candidate location, we examine the forty sentences following the occurrence of the rare word in each of the two parallel corpora. We use dynamic programming to find the probability of the best alignment of these two blocks of sentences. If this probability is sufficiently high, we take the candidate location to be the end of the deletion. Because it is extremely unlikely that there are two very similar sets of forty sentences in a corpus, this deletion identification algorithm is robust. In addition, because we key off of rare words in considering ending points, deletion identification requires time linear in the length of the deletion.

4 Results

Using this algorithm, we have aligned three large English/French corpora.
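The rare-word keying step for locating candidate deletion ends can be sketched as follows; the toy corpora and translation dictionary are illustrative, and the verification of each candidate with a forty-sentence local alignment is omitted:

```python
from collections import Counter

def candidate_deletion_ends(e_corpus, f_corpus, translations, max_freq=3):
    # Index the first position of each rare English word.
    freq = Counter(w for sent in e_corpus for w in sent.split())
    rare_pos = {}
    for i, sent in enumerate(e_corpus):
        for w in sent.split():
            if freq[w] <= max_freq:
                rare_pos.setdefault(w, i)
    # Scan the French corpus for a known translation of a rare word;
    # each hit is a candidate (English, French) sentence-position pair
    # for the end of the deletion, to be verified by a local alignment.
    candidates = []
    for j, sent in enumerate(f_corpus):
        for fw in sent.split():
            for ew, i in rare_pos.items():
                if translations.get(ew) == fw:
                    candidates.append((i, j))
    return candidates
```

Because only rare words generate candidates, the number of candidate locations stays small and the scan is linear in the length of the deletion.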
We have aligned a corpus of 3,000,000 sentences (of both English and French) of the Canadian Hansards, a corpus of 1,000,000 sentences of newer Hansard proceedings, and a corpus of 2,000,000 sentences of proceedings from the European Economic Community. In each case, we first bootstrapped the translation model by training on 100 previously aligned sentence pairs. We then trained the model further on 20,000 sentences of the target corpus. Note that these 20,000 sentences were not previously aligned.

Because of the very low error rates involved, instead of direct sampling we decided to estimate the error of the old Hansard corpus through comparison with the alignment found by Brown of the same corpus. We manually inspected over 500 locations where the two alignments differed to estimate our error rate on the alignments disagreed upon. Taking the error rate of the Brown alignment to be 0.6%, we estimated the overall error rate of our alignment to be 0.4%.

In addition, in the Brown alignment approximately 10% of the corpus was discarded because of indications that it would be difficult to align. Their error rate of 0.6% holds on the remaining sentences. Our error rate of 0.4% holds on the entire corpus. Gale reports an approximate error rate of 2% on a different body of Hansard data with no discarding, and an error rate of 0.4% if 20% of the sentences can be discarded.

Hence, with our algorithm we can achieve at least as high accuracy as the Brown and Gale algorithms without discarding any data. This is especially significant since, presumably, the sentences discarded by the Brown and Gale algorithms are those sentences most difficult to align. In addition, the errors made by our algorithm are generally of a fairly trivial nature. We randomly sampled 300 alignments from the newer Hansard corpus. The two errors we found are displayed in Figures 3 and 4. In the first error, E1 was aligned with F1 and E2 was aligned with F2.
The correct alignment maps E1 and E2 to F1, and F2 to nothing. In the second error, E1 was aligned with F1 and F2 was aligned to nothing. Both of these errors could have been avoided with improved sentence boundary detection. Because length-based alignment algorithms ignore lexical information, their errors can be of a more spectacular nature.

The rate of alignment ranged from 2,000 to 5,000 sentences of both English and French per hour on an IBM RS/6000 530H workstation. The alignment algorithm lends itself well to parallelization; we can use the deletion identification mechanism to automatically identify locations where we can subdivide a bilingual corpus. While it required on the order of 500 machine-hours to align the newer Hansard corpus, it took only 1.5 days of real time to complete the job on fifteen machines.

5 Discussion

We have described an accurate, robust, and fast algorithm for sentence alignment. The algorithm can handle large deletions in text, it is language independent, and it is parallelizable. It requires a minimum of human intervention; for each language pair, 100 sentences need to be aligned by hand to bootstrap the translation model.

The use of lexical information requires a great computational cost. Even with numerous approximations, this algorithm is tens of times slower than the Brown and Gale algorithms. This is acceptable given that alignment is a one-time cost and given available computing power. It is unclear, though, how much further it is worthwhile to proceed. The natural next step in sentence alignment is to account for word ordering in the translation model, e.g., the models described in (Brown et al., 1993) could be used. However, substantially greater computing power is required before these approaches can become practical, and there is not much room for further improvements in accuracy.

References

(Bellman, 1957) Richard Bellman. Dynamic Programming. Princeton University Press, Princeton, N.J., 1957.
(Brown et al., 1990) Peter F. Brown, John Cocke, Stephen A. DellaPietra, Vincent J. DellaPietra, Frederick Jelinek, John D. Lafferty, Robert L. Mercer, and Paul S. Roossin. A statistical approach to machine translation. Computational Linguistics, 16(2):79-85, June 1990.

(Brown et al., 1991a) Peter F. Brown, Stephen A. DellaPietra, Vincent J. DellaPietra, and Robert L. Mercer. Word sense disambiguation using statistical methods. In Proceedings of the 29th Annual Meeting of the ACL, pages 265-270, Berkeley, CA, June 1991.

(Brown et al., 1991b) Peter F. Brown, Jennifer C. Lai, and Robert L. Mercer. Aligning sentences in parallel corpora. In Proceedings of the 29th Annual Meeting of the ACL, pages 169-176, Berkeley, CA, June 1991.

(Brown et al., 1993) Peter F. Brown, Stephen A. DellaPietra, Vincent J. DellaPietra, and Robert L. Mercer. The mathematics of machine translation: Parameter estimation. Computational Linguistics, 1993. To appear.

(Catizone et al., 1989) Roberta Catizone, Graham Russell, and Susan Warwick. Deriving translation data from bilingual texts. In Proceedings of the First International Acquisition Workshop, Detroit, Michigan, August 1989.

E1: If there is some evidence that it ... and I will see that it does.
E2: \SCM{} Translation \ECM{}
F1: Si on peut prouver que elle ... je verrais à ce que elle se y conforme. \SCM{} Language = French \ECM{}
F2: \SCM{} Paragraph \ECM{}

Figure 3: An Alignment Error

E1: Motion No. 22 that Bill C-84 be amended in ... and substituting the following therefor : second anniversary of.
F1: Motion No 22 que on modifie le projet de loi C-84 ... et en la remplaçant par ce qui suit : ' 18.
F2: Deux ans après : '.

Figure 4: Another Alignment Error

(Chen, 1993) Stanley F. Chen. Aligning sentences in bilingual corpora using lexical information. Technical Report TR-12-93, Harvard University, 1993.

(Dagan et al., 1991) Ido Dagan, Alon Itai, and Ulrike Schwall. Two languages are more informative than one.
In Proceedings of the 29th Annual Meeting of the ACL, pages 130-137, 1991.

(Dempster et al., 1977) A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(B):1-38, 1977.

(Gale and Church, 1991) William A. Gale and Kenneth W. Church. A program for aligning sentences in bilingual corpora. In Proceedings of the 29th Annual Meeting of the ACL, Berkeley, California, June 1991.

(Gale et al., 1992) William A. Gale, Kenneth W. Church, and David Yarowsky. Using bilingual materials to develop word sense disambiguation methods. In Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation, pages 101-112, Montréal, Canada, June 1992.

(Kay, 1991) Martin Kay. Text-translation alignment. In ACH/ALLC '91: "Making Connections" Conference Handbook, Tempe, Arizona, March 1991.

(Klavans and Tzoukermann, 1990) Judith Klavans and Evelyne Tzoukermann. The BICORD system. In COLING-90, pages 174-179, Helsinki, Finland, August 1990.

(Sadler, 1989) V. Sadler. The Bilingual Knowledge Bank - A New Conceptual Basis for MT. BSO/Research, Utrecht, 1989.

(Warwick and Russell, 1990) Susan Warwick and Graham Russell. Bilingual concordancing and bilingual lexicography. In EURALEX 4th International Congress, Málaga, Spain, 1990.
INTENTION-BASED SEGMENTATION: HUMAN RELIABILITY AND CORRELATION WITH LINGUISTIC CUES

Rebecca J. Passonneau
Department of Computer Science
Columbia University
New York, NY 10027
[email protected]

Abstract

Certain spans of utterances in a discourse, referred to here as segments, are widely assumed to form coherent units. Further, the segmental structure of discourse has been claimed to constrain and be constrained by many phenomena. However, there is weak consensus on the nature of segments and the criteria for recognizing or generating them. We present quantitative results of a two-part study using a corpus of spontaneous, narrative monologues. The first part evaluates the statistical reliability of human segmentation of our corpus, where speaker intention is the segmentation criterion. We then use the subjects' segmentations to evaluate the correlation of discourse segmentation with three linguistic cues (referential noun phrases, cue words, and pauses), using information retrieval metrics.

INTRODUCTION

A discourse consists not simply of a linear sequence of utterances,¹ but of meaningful relations among the utterances. As in much of the literature on discourse processing, we assume that certain spans of utterances, referred to here as discourse segments, form coherent units. The segmental structure of discourse has been claimed to constrain and be constrained by disparate phenomena: cue phrases (Hirschberg and Litman, 1993; Grosz and Sidner, 1986; Reichman, 1985; Cohen, 1984); lexical cohesion (Morris and Hirst, 1991); plans and intentions (Carberry, 1990; Litman and Allen, 1990; Grosz and Sidner, 1986); prosody (Grosz and Hirschberg, 1992; Hirschberg and Grosz, 1992; Hirschberg and Pierrehumbert, 1986); reference (Webber, 1991; Grosz and Sidner, 1986; Linde, 1979); and tense (Webber, 1988; Hwang and Schubert, 1992; Song and Cohen, 1991).
However, there is weak consensus on the nature of segments and the criteria for recognizing or generating them in a natural language processing system. Until recently, little empirical work has been directed at establishing objectively verifiable segment boundaries, even though this is a precondition for avoiding circularity in relating segments to linguistic phenomena.

¹ We use the term utterance to mean a use of a sentence or other linguistic unit, whether in text or spoken language.

Diane J. Litman
AT&T Bell Laboratories
600 Mountain Avenue
Murray Hill, NJ 07974
[email protected]

SEGMENT 1
Okay. tsk There's [a farmer], he looks like ay uh Chicano American, he is picking pears. A-nd u-m he's just picking them, he comes off of the ladder, a-nd he- u-h puts his pears into the basket.

SEGMENT 2
U-h a number of people are going by, and one is um /you know/ I don't know, I can't remember the first ... the first person that goes by. Oh. A u-m a man with a goat comes by. It see it seems to be a busy place. You know, fairly busy, it's out in the country, maybe in u-m u-h the valley or something.

um [he] goes up the ladder, A-nd and picks some more pears.

Figure 1: Discourse Segment Structure

We present the results of a two-part study on the reliability of human segmentation, and correlation with linguistic cues. We show that human subjects can reliably perform discourse segmentation using speaker intention as a criterion. We use the segmentations produced by our subjects to quantify and evaluate the correlation of discourse segmentation with three linguistic cues: referential noun phrases, cue words, and pauses.
This segmental structure is crucial for determining that the boxed pronoun he corefers with the boxed noun phrase a farmer. Without the segmentation, the referent of the underlined noun phrase a man with a goat is a potential referent of the pronoun because it is the most recent noun phrase consistent with the number and gender restrictions of the pronoun. With the segmentation analysis, a man with a goat is ruled out on structural grounds; this noun phrase occurs in segment 2, while the pronoun occurs after resumption of segment 1. A farmer is thus the most recent noun phrase that is both consistent with, and in the relevant interpretation context of, the pronoun in question.

One problem in trying to model such discourse structure effects is that segmentation has been observed to be rather subjective (Mann et al., 1992; Johnson, 1985). Several researchers have begun to investigate the ability of humans to agree with one another on segmentation. Grosz and Hirschberg (Grosz and Hirschberg, 1992; Hirschberg and Grosz, 1992) asked subjects to structure three AP news stories (averaging 450 words in length) according to the model of Grosz and Sidner (1986). Subjects identified hierarchical structures of discourse segments, as well as local structural features, using text alone as well as text and professionally recorded speech. Agreement ranged from 74%-95%, depending upon discourse feature. Hearst (1993) asked subjects to place boundaries between paragraphs of three expository texts (length 77 to 160 sentences), to indicate topic changes. She found agreement greater than 80%. We present results of an empirical study of a large corpus of spontaneous oral narratives, with a large number of potential boundaries per narrative. Subjects were asked to segment transcripts using an informal notion of speaker intention.
As we will see, we found agreement ranging from 82%-92%, with very high levels of statistical significance (from $p = .114 \times 10^{-6}$ to $p < .6 \times 10^{-9}$).

One of the goals of such empirical work is to use the results to correlate linguistic cues with discourse structure. By asking subjects to segment discourse using a non-linguistic criterion, the correlation of linguistic devices with independently derived segments can be investigated. Grosz and Hirschberg (Grosz and Hirschberg, 1992; Hirschberg and Grosz, 1992) derived a discourse structure for each text in their study, by incorporating the structural features agreed upon by all of their subjects. They then used statistical measures to characterize these discourse structures in terms of acoustic-prosodic features. Morris and Hirst (1991) structured a set of magazine texts using the theory of Grosz and Sidner (1986). They developed a lexical cohesion algorithm that used the information in a thesaurus to segment text, then qualitatively compared their segmentations with the results. Hearst (1993) derived a discourse structure for each text in her study, by incorporating the boundaries agreed upon by the majority of her subjects. Hearst developed a lexical algorithm based on information retrieval measurements to segment text, then qualitatively compared the results with the structures derived from her subjects, as well as with those produced by Morris and Hirst. Iwanska (1993) compares her segmentations of factual reports with segmentations produced using syntactic, semantic, and pragmatic information. We derive segmentations from our empirical data based on the statistical significance of the agreement among subjects, or boundary strength. We develop three segmentation algorithms, based on results in the discourse literature.
We use measures from information retrieval to quantify and evaluate the correlation between the segmentations produced by our algorithms and those derived from our subjects.

RELIABILITY

The correspondence between discourse segments and more abstract units of meaning is poorly understood (see (Moore and Pollack, 1992)). A number of alternative proposals have been presented which directly or indirectly relate segments to intentions (Grosz and Sidner, 1986), RST relations (Mann et al., 1992) or other semantic relations (Polanyi, 1988). We present initial results of an investigation of whether naive subjects can reliably segment discourse using speaker intention as a criterion.

Our corpus consists of 20 narrative monologues about the same movie, taken from Chafe (1980) (N ≈ 14,000 words). The subjects were introductory psychology students at the University of Connecticut and volunteers solicited from electronic bulletin boards. Each narrative was segmented by 7 subjects. Subjects were instructed to identify each point in a narrative where the speaker had completed one communicative task, and began a new one. They were also instructed to briefly identify the speaker's intention associated with each segment. Intention was explained in common sense terms and by example (details in (Litman and Passonneau, 1993)). To simplify data collection, we did not ask subjects to identify the type of hierarchical relations among segments illustrated in Figure 1. In a pilot study we conducted, subjects found it difficult and time-consuming to identify non-sequential relations. Given that the average length of our narratives is 700 words, this is consistent with previous findings (Rotondo, 1984) that non-linear segmentation is impractical for naive subjects in discourses longer than 200 words.

Since prosodic phrases were already marked in the transcripts, we restricted subjects to placing boundaries between prosodic phrases.
In principle, this makes it more likely that subjects will agree on a given boundary than if subjects were completely unrestricted. However, previous studies have shown that the smallest unit subjects use in similar tasks corresponds roughly to a breath group, prosodic phrase, or clause (Chafe, 1980; Rotondo, 1984; Hirschberg and Grosz, 1992). Using smaller units would have artificially lowered the probability for agreement on boundaries. Figure 2 shows the responses of subjects at each potential boundary site for a portion of the excerpt from Figure 1. Prosodic phrases are numbered sequentially, with the first field indicating prosodic phrases with sentence-final contours, and the second

3.3 [.35+ [.35] a-nd] he- u-h [.3] puts his pears into the basket.
    [6 SUBJECTS]  NP, PAUSE
4.1 [1.0 [.5] U-h] a number of people are going by,
    CUE, PAUSE
4.2 [.35+ and [.35]] one is [1.15 um] /you know/ I don't know,
4.3 I can't remember the first ... the first person that goes by.
    [1 SUBJECT]  PAUSE
5.1 Oh.
    [1 SUBJECT]  NP
6.1 A u-m .. a man with a goat [.2] comes by.
    [2 SUBJECTS]  NP, PAUSE
7.1 [.25] It see it seems to be a busy place.
    PAUSE
8.1 [.1] You know,
8.2 fairly busy,
    [1 SUBJECT]
8.3 it's out in the country,
    PAUSE
8.4 [.4] maybe in u-m [.8] u-h the valley or something.
    [7 SUBJECTS]  NP, CUE, PAUSE
9.1 [2.95 [.9] A-nd um [.25] [.35]] he goes up the ladder,

Figure 2: Excerpt from 9, with Boundaries

field indicating phrase-final contours.² Line spaces between prosodic phrases represent potential boundary sites. Note that a majority of subjects agreed on only 2 of the 11 possible boundary sites: after 3.3 (n=6) and after 8.4 (n=7). (The symbols NP, CUE and PAUSE will be explained later.) Figure 2 typifies our results. Agreement among subjects was far from perfect, as shown by the presence here of 4 boundary sites identified by only 1 or 2 subjects.
Nevertheless, as we show in the following sections, the degree of agreement among subjects is high enough to demonstrate that segments can be reliably identified. In the next section we discuss the percent agreement among subjects. In the subsequent section we show that the frequency of boundary sites where a majority of subjects assign a boundary is highly significant.

AGREEMENT AMONG SUBJECTS

We measure the ability of subjects to agree with one another, using a figure called percent agreement. Percent agreement, defined in (Gale et al., 1992), is the ratio of observed agreements with the majority opinion to possible agreements with the majority opinion. Here, agreement among four, five, six, or seven subjects on whether or not there is a segment boundary between two adjacent prosodic phrases constitutes a majority opinion. Given a transcript of length n prosodic phrases, there are n-1 possible boundaries. The total possible agreements with the majority corresponds to the number of subjects times n-1. Total observed agreements equals the number of times that subjects' boundary decisions agree with the majority opinion. As noted above, only 2 of the 11 possible boundaries in Figure 2 are boundaries using the majority opinion criterion. There are 77 possible agreements with the majority opinion, and 71 observed agreements. Thus, percent agreement for the excerpt as a whole is 71/77, or 92%. The breakdown of agreement on boundary and non-boundary majority opinions is 13/14 (93%) and 58/63 (92%), respectively. The figures for percent agreement with the majority opinion for all 20 narratives are shown in Table 1. The columns represent the narratives in our corpus.

²The transcripts presented to subjects did not contain line numbering or pause information (pauses indicated here by bracketed numbers).
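The percent agreement computation just described can be sketched as follows. This is a minimal illustration, not the authors' code; it assumes boundary responses are encoded as a subjects-by-sites 0/1 matrix:

```python
def percent_agreement(matrix):
    """matrix[i][j] = 1 if subject i marked a boundary at site j.

    Percent agreement = observed agreements with the majority
    opinion / possible agreements (subjects * sites)."""
    n_subjects = len(matrix)
    n_sites = len(matrix[0])
    observed = 0
    for j in range(n_sites):
        ones = sum(matrix[i][j] for i in range(n_subjects))
        # majority opinion: boundary iff a majority of subjects marked it
        majority_is_boundary = ones >= (n_subjects + 1) // 2
        observed += ones if majority_is_boundary else n_subjects - ones
    return observed / (n_subjects * n_sites)
```

With 7 subjects, a site marked by 6 subjects contributes 6 observed agreements (boundary majority), while a site marked by 2 contributes 5 (non-boundary majority).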
The first two rows give the absolute number of potential boundary sites in each narrative (i.e., n-1) followed by the corresponding percent agreement figure for the narrative as a whole. Percent agreement in this case averages 89% (variance σ=.0006; max.=92%; min.=82%). The next two pairs of rows give the figures when the majority opinions are broken down into boundary and non-boundary opinions, respectively. Non-boundaries, with an average percent agreement of 91% (σ=.0006; max.=95%; min.=84%), show greater agreement among subjects than boundaries, where average percent agreement is 73% (σ=.003; max.=80%; min.=60%). This partly reflects the fact that non-boundaries greatly outnumber boundaries, an average of 89 versus 11 majority opinions per transcript. The low variances, or spread around the average, show that subjects are also consistent with one another. Defining a task so as to maximize percent agreement can be difficult. The high and consistent levels of agreement for our task suggest that we have found a useful experimental formulation of the task of discourse segmentation. Furthermore, our percent agreement figures are comparable with the results of other segmentation studies discussed above. While studies of other tasks have achieved stronger results (e.g., 96.8% in a word-sense disambiguation study (Gale et al., 1992)), the meaning of percent agreement in isolation is unclear. For example, a percent agreement figure of less than 90% could still be very meaningful if the probability of obtaining such a figure is low. In the next section we demonstrate the significance of our findings.

STATISTICAL SIGNIFICANCE

We represent the segmentation data for each narrative as an i x j matrix of height i=7 subjects and width j=n-1. The value in each cell c_i,j is a one if the ith subject assigned a boundary at site j, and a zero if they did not.
We use Cochran's test (Cochran, 1950) to evaluate significance of differences across columns in the matrix.³ Cochran's test assumes that the number of 1s within a single row of the matrix is fixed by observation, and that the totals across rows can vary. Here a row total corresponds to the total number of boundaries assigned by subject i. In the case of narrative 9 (j=96), one of the subjects assigned 8 boundaries. The probability of a 1 in any of the j cells of the row is thus 8/96, with (96 choose 8) ways for the 8 boundaries to be distributed. Taking this into account for each row, Cochran's test evaluates the null hypothesis that the number of 1s in a column, here the total number of subjects assigning a boundary at the jth site, is randomly distributed. Where the row totals are u_i, the column totals are T_j, and the average column total is T, the statistic is given by:

    Q = j(j-1) Σ_j (T_j - T)² / (j Σ_i u_i - Σ_i u_i²)

Q approximates the χ² distribution with j-1 degrees of freedom (Cochran, 1950). Our results indicate that the agreement among subjects is extremely highly significant. That is, the number of 0s or 1s in certain columns is much greater than would be expected by chance. For the 20 narratives, the probabilities of the observed distributions range from p = .114 x 10^-6 to p < .6 x 10^-9.

³We thank Julia Hirschberg for suggesting this test.

Table 1: Percent Agreement with the Majority Opinion

Narrative:       1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17   18   19   20
All Opinions:  138  121   55   63   69   83   90   50   96  195  110  160  108  113  112   46  151   85   94   56
% Agreement:    87   82   91   89   89   90   90   90   90   88   92   90   91   89   85   89   92   91   91   86
Boundary:       21   16    7   10    6    5   11    5    8   22   13   17    9   11    8    7   15   11   10    6
% Agreement:    74   70   76   77   60   80   79   69   75   70   74   75   73   71   68   73   77   71   80   74
Non-Boundary:  117  105   48   53   63   78   79   45   88  173   97  143   99  102  104   39  136   74   84   50
% Agreement:    89   84   93   91   92   91   92   92   92   90   95   91   93   91   87   92   93   94   93   88
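Cochran's Q can be computed directly from the 0/1 response matrix. The following is a minimal sketch (not the authors' code), using the standard form of the statistic; the resulting Q would then be compared against a chi-square table with j-1 degrees of freedom:

```python
def cochran_q(matrix):
    """Cochran's Q for an i-subjects x j-sites 0/1 matrix.

    Q = j(j-1) * sum_j (T_j - T_bar)^2 / (j * sum_i u_i - sum_i u_i^2),
    approximately chi-square with j-1 degrees of freedom."""
    j = len(matrix[0])
    u = [sum(row) for row in matrix]                           # row totals
    t = [sum(row[col] for row in matrix) for col in range(j)]  # column totals
    t_bar = sum(t) / j                                         # average column total
    num = j * (j - 1) * sum((tj - t_bar) ** 2 for tj in t)
    den = j * sum(u) - sum(ui * ui for ui in u)
    return num / den, j - 1                                    # (Q, degrees of freedom)
```

For a tiny 3-subject, 4-site matrix where most subjects agree on the first and last sites, Q = 81/11 ≈ 7.36 with 3 degrees of freedom.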
The percent agreement analysis classified all the potential boundary sites into two classes, boundaries versus non-boundaries, depending on how the majority of subjects responded. This is justified by further analysis of Q. As noted in the preceding section, the proportion of non-boundaries agreed upon by most subjects (i.e., where 0 ≤ T_j ≤ 3) is higher than the proportion of boundaries they agree on (4 ≤ T_j ≤ 7). That agreement on non-boundaries is more probable suggests that the significance of Q owes most to the cases where columns have a majority of 1's. This assumption is borne out when Q is partitioned into distinct components for each possible value of T_j (0 to 7), based on partitioning the sum of squares in the numerator of Q into distinct samples (Cochran, 1950). We find that Q_j is significant for each distinct T_j ≥ 4 across all narratives. For T_j=4, .0002 < p < .30 x 10^-5; probabilities become more significant for higher levels of T_j, and the converse. At T_j=3, p is sometimes above our significance level of .01, depending on the narrative.

DISCUSSION OF RESULTS

We have shown that an atheoretical notion of speaker intention is understood sufficiently uniformly by naive subjects to yield significant agreement across subjects on segment boundaries in a corpus of oral narratives. We obtained high levels of percent agreement on boundaries as well as on non-boundaries. Because the average narrative length is 100 prosodic phrases and boundaries are relatively infrequent (average boundary frequency=16%), percent agreement among 7 subjects (row one in Table 1) is largely determined by percent agreement on non-boundaries (row three). Thus, total percent agreement could be very high, even if subjects did not agree on any boundaries. However, our results show that percent agreement on boundaries is not only high (row two), but also statistically significant.
We have shown that boundaries agreed on by at least 4 subjects are very unlikely to be the result of chance. Rather, they most likely reflect the validity of the notion of segment as defined here. In Figure 2, 6 of the 11 possible boundary sites were identified by at least 1 subject. Of these, only two were identified by a majority of subjects. If we take these two boundaries, appearing after prosodic phrases 3.3 and 8.4, to be statistically validated, we arrive at a linear version of the segmentation used in Figure 1. In the next section we evaluate how well statistically validated boundaries correlate with the distribution of linguistic cues.

CORRELATION

In this section we present and evaluate three discourse segmentation algorithms, each based on the use of a single linguistic cue: referential noun phrases (NPs), cue words, and pauses.⁴ While the discourse effects of these and other linguistic phenomena have been discussed in the literature, there has been little work on examining the use of such effects for recognizing or generating segment boundaries,⁵ or on evaluating the comparative utility of different phenomena for these tasks.

⁴The input to each algorithm is a discourse transcription labeled with prosodic phrases. In addition, for the NP algorithm, noun phrases need to be labeled with anaphoric relations. The pause algorithm requires pauses to be noted.

⁵A notable exception is the literature on pauses.

The algorithms reported here were developed based on ideas in the literature, then evaluated on a representative set of 10 narratives. Our results allow us to directly compare the performance of the three algorithms, to understand the utility of the individual knowledge sources. We have not yet attempted to create comprehensive algorithms that would incorporate all possible relevant features. In subsequent phases of our work, we will tune the algorithms by adding and
refining features, using the initial 10 narratives as a training set. Final evaluation will be on a test set corresponding to the 10 remaining narratives. The initial results reported here will provide us with a baseline for quantifying improvements resulting from distinct modifications to the algorithms. We use metrics from the area of information retrieval to evaluate the performance of our algorithms. The correlation between the boundaries produced by an algorithm and those independently derived from our subjects can be represented as a matrix, as shown in Table 2. The value a (in cell c_1,1) represents the number of potential boundaries identified by both the algorithm and the subjects, b the number identified by the algorithm but not the subjects, c the number identified by the subjects but not the algorithm, and d the number neither the algorithm nor the subjects identified. Table 2 also shows the definition of the four evaluation metrics in terms of these values.

Table 2: Evaluation Metrics

                         Subjects
    Algorithm        Boundary   Non-Boundary
    Boundary             a           b
    Non-Boundary         c           d

    Recall  = a/(a+c)        Precision = a/(a+b)
    Fallout = b/(b+d)        Error     = (b+c)/(a+b+c+d)

Recall errors represent the false rejection of a boundary, while precision errors represent the false acceptance of a boundary. An algorithm with perfect performance segments a discourse by placing a boundary at all and only those locations with a subject boundary. Such an algorithm has 100% recall and precision, and 0% fallout and error. For each narrative, our human segmentation data provides us with a set of boundaries classified by 7 levels of subject strength: 1 ≤ T_j ≤ 7. That is, boundaries of strength 7 are the set of possible boundaries identified by all 7 subjects. As a baseline for examining the performance of our algorithms, we compare the boundaries produced by the algorithms to boundaries of strength T_j ≥ 4.
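The four evaluation metrics can be computed directly from the counts a, b, c, d; a minimal sketch (illustrative only, not the authors' code):

```python
def evaluate(a, b, c, d):
    """a: boundaries found by both algorithm and subjects; b: algorithm only;
    c: subjects only; d: neither."""
    return {
        "recall": a / (a + c),        # fraction of subject boundaries found
        "precision": a / (a + b),     # fraction of proposed boundaries correct
        "fallout": b / (b + d),       # false-acceptance rate over non-boundaries
        "error": (b + c) / (a + b + c + d),
    }
```

For instance, an algorithm that finds both subject boundaries but also proposes four spurious ones (a=2, b=4, c=0, d=5) has perfect recall but precision of only 1/3.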
These are the statistically validated boundaries discussed above, i.e., those boundaries identified by 4 or more subjects. Note that recall for T_j ≥ 4 corresponds to percent agreement for boundaries. We also examine the evaluation metrics for each algorithm, cross-classified by the individual levels of boundary strength.

REFERENTIAL NOUN PHRASES

Our procedure for encoding the input to the referring expression algorithm takes 4 factors into account, as documented in (Passonneau, 1993a). Briefly, we construct a 4-tuple for each referential NP: <FIC, NP, i, I>. FIC is clause location, NP is surface form, i is referential identity, and I is a set of inferential relations. Clause location is determined by sequentially assigning distinct indices to each functionally independent clause (FIC); an FIC is roughly equivalent to a tensed clause that is neither a verb argument nor a restrictive relative. Figure 3 illustrates the coding of an NP, wheels. Its location is FIC number 25. The surface form is the string wheels. The wheels are new to the discourse, so the referential index 13 is new. The inferential relation (13 r1 12) indicates that the wheels entity is related to the bicycle entity (index 12) by a part/whole relation.⁶ The input to the segmentation algorithm is a list of 4-tuples representing all the referential NPs in a narrative. The output is a set of boundaries B, represented as ordered pairs of adjacent clauses: (FIC_n, FIC_n+1). Before describing how boundaries are assigned, we explain that the potential boundary locations for the algorithm, between each FIC, differ from the potential boundary locations for the human study, between each prosodic phrase.

25  16.1 You could hear the bicycle_12,
    16.2 wheels_13 going round.
CODING: <25, wheels, 13, (13 r1 12)>

Figure 3: Sample Coding (from Narrative 4)
Cases where multiple prosodic phrases map to one FIC, as in Figure 3, simply reflect the use of additional linguistic features to reject certain boundary sites, e.g., (16.1, 16.2). However, the algorithm has the potential to assign multiple boundaries between adjacent prosodic phrases. The example shown in Figure 4 has one boundary site available to the human subjects, between 3.1 and 3.2. Because 3.1 consists of multiple FICs (6 and 7) the algorithm can and does assign 2 boundaries here: (6,7) and (7,8). To normalize the algorithm output, we reduce multiple boundaries at a boundary site to one, here (7,8). A total of 5 boundaries are eliminated in 3 of the 10 test narratives (out of 213 in all 10). All the remaining boundaries (here (3.1,3.2)) fall into class b of Table 2. The algorithm operates on the principle that if an NP in the current FIC provides a referential link to the current segment, the current segment continues. However, NPs and pronouns are treated differently based on the notion of focus (cf. Passonneau, 1993a). A third person definite pronoun provides a referential link if its index occurs anywhere in the current segment. Any other NP type provides a referential link if its index occurs in the immediately preceding FIC. The symbol NP in Figure 2 indicates boundaries assigned by the algorithm. Boundary (3.3,4.1) is assigned because the sole NP in 4.1, a number of people, refers to a new entity, one that cannot be inferred from any entity mentioned in 3.3. Boundary

⁶We use 5 inferrability relations (Passonneau, 1993a). Since there is a phrase boundary between the bicycle and wheels, we do not take bicycle to modify wheels.

6  3.1 A-nd he's not ...
        paying all that much attention
   NP BOUNDARY
7       because you know the pears fall,
   NP BOUNDARY (no subjects)
8  3.2 and he doesn't really notice,

Figure 4: Multiple FICs in One Prosodic Phrase

FORALL FIC_n, 1 < n ≤ last
    IF CD_n ∩ CD_n-1 ≠ ∅ THEN CDs = CDs ∪ CD_n
        % (COREFERENTIAL LINK TO NP IN FIC_n-1)
    ELSE IF F_n ∩ CD_n-1 ≠ ∅ THEN CDs = CDs ∪ CD_n
        % (INFERENTIAL LINK TO NP IN FIC_n-1)
    ELSE IF PRO_n ∩ CDs ≠ ∅ THEN CDs = CDs ∪ CD_n
        % (DEFINITE PRONOUN LINK TO SEGMENT)
    ELSE B = B ∪ {(FIC_n-1, FIC_n)}
        % (IF NO LINK, ADD A BOUNDARY)

Figure 5: Referential NP Algorithm

(8.4,9.1) results from the following facts about the NPs in 9.1: 1) the full NP the ladder is not referred to implicitly or explicitly in 8.4, 2) the third person pronoun he refers to an entity, the farmer, that was last mentioned in 3.3, and 3 NP boundaries have been assigned since then. If the farmer had been referred to anywhere in 7.1 through 8.4, no boundary would be assigned at (8.4,9.1). Figure 5 illustrates the three decision points of the algorithm. FIC_n is the current clause (at location n); CD_n is the set of all indices for NPs in FIC_n; F_n is the set of entities that are inferentially linked to entities in CD_n; PRO_n is the subset of CD_n where NP is a third person definite pronoun; CD_n-1 is the contextual domain for the previous FIC, and CDs is the contextual domain for the current segment. FIC_n continues the current segment if it is anaphorically linked to the preceding clause 1) by a coreferential NP, or 2) by an inferential relation, or 3) if a third person definite pronoun in FIC_n refers to an entity in the current segment. If no boundary is added, CDs is updated with CD_n. If all 3 tests fail, FIC_n is determined to begin a new segment, and (FIC_n-1, FIC_n) is added to B. Table 3 shows the average performance of the referring expression algorithm (row labelled NP) on the 4 measures we use here.
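The decision procedure of Figure 5 can be rendered as a short executable sketch. This is an illustration, not the authors' code: the dict keys cd, inf, and pro are hypothetical encodings of the coded NP features, and the contextual domain is assumed to restart at CD_n whenever a boundary is added:

```python
def np_boundaries(fics):
    """fics: list of dicts with (hypothetical) keys:
    'cd'  - set of referential indices for NPs in this FIC,
    'inf' - indices inferentially linked to entities in this FIC,
    'pro' - subset of 'cd' realized as third-person definite pronouns."""
    boundaries = []
    cds = set(fics[0]['cd'])              # contextual domain of current segment
    for n in range(1, len(fics)):
        prev_cd, cur = fics[n - 1]['cd'], fics[n]
        if cur['cd'] & prev_cd:           # 1) coreferential link to previous FIC
            cds |= cur['cd']
        elif cur['inf'] & prev_cd:        # 2) inferential link to previous FIC
            cds |= cur['cd']
        elif cur['pro'] & cds:            # 3) pronoun link to current segment
            cds |= cur['cd']
        else:                             # no link: start a new segment
            boundaries.append((n - 1, n))
            cds = set(cur['cd'])
    return boundaries
```

Note the asymmetry between the tests: full NPs are checked only against the immediately preceding FIC, while definite pronouns are checked against the whole current segment.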
Recall is .66 (σ=.068; max=1; min=.25), precision is .25 (σ=.013; max=.44; min=.09), fallout is .16 (σ=.004) and error rate is 0.17 (σ=.005). Note that the error rate and fallout, which in a sense are more sensitive measures of inaccuracy, are both much lower than the precision and have very low variance. Both recall and precision have a relatively high variance.

CUE WORDS

Cue words (e.g., "now") are words that are sometimes used to explicitly signal the structure of a discourse. We develop a baseline segmentation algorithm based on cue words, using a simplification of one of the features shown by Hirschberg and Litman (1993) to identify discourse usages of cue words. Hirschberg and Litman (1993) examine a large set of cue words proposed in the literature and show that certain prosodic and structural features, including a position of first in prosodic phrase, are highly correlated with the discourse uses of these words. The input to our lower bound cue word algorithm is a sequential list of the prosodic phrases constituting a given narrative, the same input our subjects received. The output is a set of boundaries B, represented as ordered pairs of adjacent phrases (P_n, P_n+1), such that the first item in P_n+1 is a member of the set of cue words summarized in Hirschberg and Litman (1993). That is, if a cue word occurs at the beginning of a prosodic phrase, the usage is assumed to be discourse and thus the phrase is taken to be the beginning of a new segment. Figure 2 shows 2 boundaries (CUE) assigned by the algorithm, both due to and. Table 3 shows the average performance of the cue word algorithm for statistically validated boundaries. Recall is 72% (σ=.027; max=.88; min=.40), precision is 15% (σ=.003; max=.23; min=.04), fallout is 53% (σ=.006) and error is 50% (σ=.005). While recall is quite comparable to human performance (row 4 of the table), the precision is low while fallout and error are quite high.
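This phrase-initial test can be sketched as follows. The sketch is illustrative only: the cue word list here is a small hypothetical subset, not the full set summarized in Hirschberg and Litman (1993):

```python
CUE_WORDS = {"and", "now", "so", "but", "well", "then", "or"}  # hypothetical subset

def cue_boundaries(phrases):
    """Assign a boundary before any prosodic phrase whose first word is a cue word."""
    boundaries = []
    for n in range(len(phrases) - 1):
        first_word = phrases[n + 1].split()[0].lower().strip(",.")
        if first_word in CUE_WORDS:
            boundaries.append((n, n + 1))
    return boundaries
```

On a toy transcript, any phrase beginning with "and" would trigger a boundary, mirroring the two CUE boundaries in Figure 2.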
Precision, fallout and error have much lower variance, however.

PAUSES

Grosz and Hirschberg (Grosz and Hirschberg, 1992; Hirschberg and Grosz, 1992) found that in a corpus of recordings of AP news texts, phrases beginning discourse segments are correlated with duration of preceding pauses, while phrases ending discourse segments are correlated with subsequent pauses. We use a simplification of these results to develop a baseline algorithm for identifying boundaries in our corpus using pauses. The input to our pause segmentation algorithm is a sequential list of all prosodic phrases constituting a given narrative, with pauses (and their durations) noted. The output is a set of boundaries B, represented as ordered pairs of adjacent phrases (P_n, P_n+1), such that there is a pause between P_n and P_n+1. Unlike Grosz and Hirschberg, we do not currently take phrase duration into account. In addition, since our segmentation task is not hierarchical, we do not note whether phrases begin, end, suspend, or resume segments. Figure 2 shows boundaries (PAUSE) assigned by the algorithm. Table 3 shows the average performance of the pause algorithm for statistically validated boundaries. Recall is 92% (σ=.008; max=1; min=.73), precision is 18% (σ=.002; max=.25; min=.09), fallout is 54% (σ=.004), and error is 49% (σ=.004). Our algorithm thus performs with recall higher than human performance. However, precision is low,
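A minimal sketch of this pause-based segmenter (illustrative only; pairing each phrase with the duration of its preceding pause is a hypothetical input encoding, not the authors' transcript format):

```python
def pause_boundaries(phrases):
    """phrases: list of (text, pause_before_seconds) tuples.

    A boundary is placed wherever any pause, of whatever duration,
    precedes a phrase."""
    return [(n, n + 1) for n in range(len(phrases) - 1)
            if phrases[n + 1][1] > 0.0]
```

Because every pause triggers a boundary, this segmenter over-generates, which is consistent with the high recall and low precision reported above.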
DISCUSSION OF RESULTS In order to evaluate the performance measures for the algorithms, it is important to understand how individual humans perform on all 4 measures. Row 4 of Table 3 reports the average individual perfor- mance for the 70 subjects on the 10 narratives. The average recall for humans is .74 (~=.038), ~ and the average precision is .55 (a=.027), much lower than the ideal scores of 1. The fallout and error rates of .09 (~=.004) and .11 (a=.003) more closely approx- imate the ideal scores of 0. The low recall and preci- sion reflect the considerable variation in the number of boundaries subjects assign, as well as the imper- fect percent agreement (Table 1). To compare algorithms, we must take into ac- count the dimensions along which they differ apart from the different cues. For example, the referring expression algorithm (RA) differs markedly from the pause and cue algorithms (PA, CA) in using more knowledge. CA and PA depend only on the ability to identify boundary sites, potential cue words and pause locations while RA relies on 4 features of NPs to make 3 different tests (Figure 5). Unsurprisingly, RA performs most like humans. For both CA and PA, the recall is relatively high, but the precision is very low, and the fallout and error rate are both very high. For lZA, recall and precision are not as different, precision is higher than CA and PA, and fallout and error rate are both relatively low. A second dimension to consider in comparing 7Human recall is equivalent to percent agreement for boundaries. However, the average shown here represents only 10 narratives, while the average from Table 1 rep- resents all 20. performance is that humans and RA assign bound- aries based on a global criterion, in contrast to CA and PA. Subjects typically use a relatively gross level of speaker intention. By default, RA assumes that the current segment continues, and assigns a boundary under relatively narrow criteria. 
However, CA and PA rely on cues that are relevant at the local as well as the global level, and consequently assign boundaries more often. This leads to a preponder- ance of cases where PA and CA propose a boundary but where a majority of humans did not, category b from Table 2. High b lowers precision, reflected in the low precision for CA and PA. We are optimistic that all three algorithms can be improved, for example, by discriminating among types of pauses, types of cue words, and features of referential NPs. We have enhanced RA with cer- tain grammatical role features following (Passon- neau, 1993b). In a preliminary experiment using boundaries from our first set of subjects (4 per nar- rative instead of 7), this increased both recall and precision by ,~ 10%. The statistical results validate boundaries agreed on by a majority of subjects, but do not thereby invalidate boundaries proposed by only 1-3 subjects. We evaluate how performance varies with boundary strength (1 _< 7) _< 7). Table 4 shows recall and precision of RA, PA, CA and humans when boundaries are broken down into those identi- fied by exactly 1 subject, exactly 2, and so on up to 7. 8 There is a strong tendency for recall to increase and precision to decrease as boundary strength in- creases. We take this as evidence that the presence of a boundary is not a binary decision; rather, that boundaries vary in perceptual salience. CONCLUSION We have shown that human subjects can reliably perform linear discourse segmentation in a corpus of transcripts of spoken narratives, using an infor- mal notion of speaker intention. We found that per- cent agreement with the segmentations produced by the majority of subjects ranged from 82%-92%, with an average across all narratives of 89% (~=.0006). We found that these agreement results were highly significant, with probabilities of randomly achiev- ing our findings ranging from p = .114 x 10 -6 to p < .6 x 10 -9. 
We have investigated the correlation of our intention-based discourse segmentations with referential noun phrases, cue words, and pauses. We developed segmentation algorithms based on the use of each of these linguistic cues, and quantitatively evaluated their performance in identifying the statistically validated boundaries independently produced by our subjects. We found that compared to human performance, the recall of the three algorithms was comparable, the precision was much lower, and the fallout and error of only the noun phrase algorithm was comparable. We also found a tendency for recall to increase and precision to decrease with exact boundary strength, suggesting that the cognitive salience of boundaries is graded. While our initial results are promising, there is certainly room for improvement. In future work on our data, we will attempt to maximize the correlation of our segmentations with linguistic cues by improving the performance of our individual algorithms, and by investigating ways to combine our algorithms (cf. Grosz and Hirschberg (1992)). We will also explore the use of alternative evaluation metrics (e.g. string matching) to support close as well as exact correlation.

⁸Fallout and error rate do not vary much across T_j.

ACKNOWLEDGMENTS

The authors wish to thank W. Chafe, K. Church, J. DuBois, B. Gale, V. Hatzivassiloglou, M. Hearst, J. Hirschberg, J. Klavans, D. Lewis, E. Levy, K. McKeown, E. Siegel, and anonymous reviewers for helpful comments, references and resources. Both authors' work was partially supported by DARPA and ONR under contract N00014-89-J-1782; Passonneau was also partly supported by NSF grant IRI-91-13064.

REFERENCES

S. Carberry. 1990. Plan Recognition in Natural Language Dialogue. MIT Press, Cambridge, MA.

W. L. Chafe. 1980. The Pear Stories: Cognitive, Cultural and Linguistic Aspects of Narrative Production. Ablex Publishing Corporation, Norwood, NJ.

W. G. Cochran. 1950.
The comparison of percentages in matched samples. Biometrika, 37:256-266.

R. Cohen. 1984. A computational theory of the function of clue words in argument understanding. In Proc. of COLING84, pages 251-258, Stanford.

W. Gale, K. W. Church, and D. Yarowsky. 1992. Estimating upper and lower bounds on the performance of word-sense disambiguation programs. In Proc. of ACL, pages 249-256, Newark, Delaware.

B. Grosz and J. Hirschberg. 1992. Some intonational characteristics of discourse structure. In Proc. of the International Conference on Spoken Language Processing.

B. J. Grosz and C. L. Sidner. 1986. Attention, intentions and the structure of discourse. Computational Linguistics, 12:175-204.

M. A. Hearst. 1993. TextTiling: A quantitative approach to discourse segmentation. Technical Report 93/24, Sequoia 2000 Technical Report, University of California, Berkeley.

J. Hirschberg and B. Grosz. 1992. Intonational features of local and global discourse structure. In Proc. of Darpa Workshop on Speech and Natural Language.

J. Hirschberg and D. Litman. 1993. Empirical studies on the disambiguation of cue phrases. Computational Linguistics, 19.

J. Hirschberg and J. Pierrehumbert. 1986. The intonational structuring of discourse. In Proc. of ACL.

C. H. Hwang and L. K. Schubert. 1992. Tense trees as the 'fine structure' of discourse. In Proc. of the 30th Annual Meeting of the ACL, pages 232-240.

L. Iwanska. 1993. Discourse structure in factual reporting (in preparation).

N. S. Johnson. 1985. Coding and analyzing experimental protocols. In T. A. Van Dijk, editor, Handbook of Discourse Analysis, Vol. 2: Dimensions of Discourse. Academic Press, London.

C. Linde. 1979. Focus of attention and the choice of pronouns in discourse. In T. Givon, editor, Syntax and Semantics: Discourse and Syntax, pages 337-354. Academic Press, New York.

D. Litman and J. Allen. 1990. Discourse processing and commonsense plans. In P. R. Cohen, J. Morgan, and M. E.
Pollack, editors, Intentions in Communication. MIT Press, Cambridge, MA.

D. Litman and R. Passonneau. 1993. Empirical evidence for intention-based discourse segmentation. In Proc. of the ACL Workshop on Intentionality and Structure in Discourse Relations.

W. C. Mann, C. M. Matthiessen, and S. A. Thompson. 1992. Rhetorical structure theory and text analysis. In W. C. Mann and S. A. Thompson, editors, Discourse Description. J. Benjamins Pub. Co., Amsterdam.

J. D. Moore and M. E. Pollack. 1992. A problem for RST: The need for multi-level discourse analysis. Computational Linguistics, 18:537-544.

J. Morris and G. Hirst. 1991. Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17:21-48.

R. J. Passonneau. 1993a. Coding scheme and algorithm for identification of discourse segment boundaries on the basis of the distribution of referential noun phrases. Technical report, Columbia University.

R. J. Passonneau. 1993b. Getting and keeping the center of attention. In R. Weischedel and M. Bates, editors, Challenges in Natural Language Processing. Cambridge University Press.

L. Polanyi. 1988. A formal model of the structure of discourse. Journal of Pragmatics, 12:601-638.

R. Reichman. 1985. Getting Computers to Talk Like You and Me. MIT Press, Cambridge, Massachusetts.

J. A. Rotondo. 1984. Clustering analysis of subject partitions of text. Discourse Processes, 7:69-88.

F. Song and R. Cohen. 1991. Tense interpretation in the context of narrative. In Proc. of AAAI, pages 131-136.

B. L. Webber. 1988. Tense as discourse anaphor. Computational Linguistics, 14:113-122.

B. L. Webber. 1991. Structure and ostension in the interpretation of discourse deixis. Language and Cognitive Processes, pages 107-135.
1993
20
A LANGUAGE-INDEPENDENT ANAPHORA RESOLUTION SYSTEM FOR UNDERSTANDING MULTILINGUAL TEXTS

Chinatsu Aone and Douglas McKee
Systems Research and Applications (SRA)
2000 15th Street North
Arlington, VA 22201
[email protected], [email protected]

Abstract

This paper describes a new discourse module within our multilingual NLP system. Because of its unique data-driven architecture, the discourse module is language-independent. Moreover, the use of hierarchically organized multiple knowledge sources makes the module robust and trainable using discourse-tagged corpora. Separating discourse phenomena from knowledge sources makes the discourse module easily extensible to additional phenomena.

1 Introduction

This paper describes a new discourse module within our multilingual natural language processing system which has been used for understanding texts in English, Spanish and Japanese (cf. [1, 2]).1 The following design principles underlie the discourse module:

• Language-independence: No processing code depends on language-dependent facts.
• Extensibility: It is easy to handle additional phenomena.
• Robustness: The discourse module does its best even when its input is incomplete or wrong.
• Trainability: The performance can be tuned for particular domains and applications.

In the following, we first describe the architecture of the discourse module. Then, we discuss how its performance is evaluated and trained using discourse-tagged corpora. Finally, we compare our approach to other research.

1 Our system has been used in several data extraction tasks and a prototype machine translation system.
Figure 1: Discourse Architecture

2 Discourse Architecture

Our discourse module consists of two discourse processing submodules (the Discourse Administrator and the Resolution Engine), and three discourse knowledge bases (the Discourse Knowledge Source KB, the Discourse Phenomenon KB, and the Discourse Domain KB). The Discourse Administrator is a development-time tool for defining the three discourse KB's. The Resolution Engine, on the other hand, is the run-time processing module which actually performs anaphora resolution using these discourse KB's.

The Resolution Engine also has access to an external discourse data structure called the global discourse world, which is created by the top-level text processing controller. The global discourse world holds syntactic, semantic, rhetorical, and other information about the input text derived by other parts of the system. The architecture is shown in Figure 1.

2.1 Discourse Data Structures

There are four major discourse data types within the global discourse world: Discourse World (DW), Discourse Clause (DC), Discourse Marker (DM), and File Card (FC), as shown in Figure 2.

The global discourse world corresponds to an entire text, and its sub-discourse worlds correspond to subcomponents of the text such as paragraphs. Discourse worlds form a tree representing a text's structure.

A discourse clause is created for each syntactic structure of category S by the semantics module. It can correspond to either a full sentence or a part of a full sentence. Each discourse clause is typed according to its syntactic properties.

A discourse marker (cf. Kamp [14], or "discourse entity" in Ayuso [3]) is created for each noun or verb in the input sentence during semantic interpretation. A discourse marker is static in that once it is introduced to the discourse world, the information within it is never changed. Unlike a discourse marker, a file card (cf.
Heim [11], "discourse referent" in Karttunen [15], or "discourse entity" in Webber [19]) is dynamic in the sense that it is continually updated as the discourse processing proceeds. While an indefinite discourse marker starts a file card, a definite discourse marker updates an already existing file card corresponding to its antecedent. In this way, a file card keeps track of all its co-referring discourse markers, and accumulates semantic information within them.

2.2 Discourse Administrator

Our discourse module is customized at development time by creating and modifying the three discourse KB's using the Discourse Administrator. First, a discourse domain is established for a particular NLP application. Next, a set of discourse phenomena which should be handled within that domain by the discourse module is chosen (e.g. definite NP, 3rd person pronoun, etc.) because some phenomena may not be necessary to handle for a particular application domain. Then, for each selected discourse phenomenon, a set of discourse knowledge sources are chosen which are applied during anaphora resolution, since different discourse phenomena require different sets of knowledge sources.

2.2.1 Discourse Knowledge Source KB

The discourse knowledge source KB houses small well-defined anaphora resolution strategies. Each knowledge source (KS) is an object in the hierarchically organized KB, and information in a specific KS can be inherited from a more general KS.

Figure 3: Discourse Knowledge Source KB

There are three kinds of KS's: a generator, a filter and an orderer. A generator is used to generate possible antecedent hypotheses from the global discourse world. Unlike other discourse systems, we have multiple generators because different discourse phenomena exhibit different antecedent distribution patterns (cf. Guindon et al. [10]).
A filter is used to eliminate impossible hypotheses, while an orderer is used to rank possible hypotheses in a preference order. The KS tree is shown in Figure 3.

Each KS contains three slots: ks-function, ks-data, and ks-language. The ks-function slot contains a functional definition of the KS. For example, the functional definition of the Syntactic-Gender filter defines when the syntactic gender of an anaphor is compatible with that of an antecedent hypothesis. A ks-data slot contains data used by ks-function. The separation of data from function is desirable because a parent KS can specify ks-function while its sub-KS's inherit the same ks-function but specify their own data. For example, in languages like English and Japanese, the syntactic gender of a pronoun imposes a semantic gender restriction on its antecedent. An English pronoun "he", for instance, can never refer to an NP whose semantic gender is female like "Ms. Smith". The top-level Semantic-Gender KS, then, defines only ks-function, while its sub-KS's for English and Japanese specify their own ks-data and inherit the same ks-function. A ks-language slot specifies languages if a particular KS is applicable for specific languages.

Most of the KS's are language-independent (e.g.
all the generators and the semantic type filters), and even when they are language-specific, the function definitions are shared. In this way, much of the discourse knowledge source KB is sharable across different languages.

(defframe discourse-world (discourse-data-structure)  ; DW
  date                          ; date of the text
  location                      ; location where the text originated
  topics                        ; semantic concepts which correspond to global topics of the text
  position                      ; the corresponding character position in the text
  discourse-clauses             ; a list of discourse clauses in the current DW
  sub-discourse-worlds)         ; a list of DWs subordinate to the current one

(defframe discourse-clause (discourse-data-structure)  ; DC
  discourse-markers             ; a list of discourse markers in the current DC
  syntax                        ; an f-structure for the current DC
  parse-tree                    ; a parse tree of this S
  semantics                     ; a semantic (KB) object representing the current DC
  position                      ; the corresponding character position in the text
  date                          ; date of the current DC
  location                      ; location of the current DC
  subordinate-discourse-clause  ; a DC subordinate to the current DC
  coordinate-discourse-clauses) ; coordinate DC's which a conjoined sentence consists of

(defframe discourse-marker (discourse-data-structure)  ; DM
  position                      ; the corresponding character position in the text
  discourse-clause              ; a pointer back to DC
  syntax                        ; an f-structure for the current DM
  semantics                     ; a semantic (KB) object
  file-card)                    ; a pointer to the file card

(defframe file-card (discourse-data-structure)         ; FC
  co-referring-discourse-markers  ; a list of co-referring DM's
  updated-semantic-info)          ; a semantic (KB) object which contains cumulative semantics

Figure 2: Discourse World, Discourse Clause, Discourse Marker, and File Card

2.2.2 Discourse Phenomenon KB

The discourse phenomenon KB contains hierarchically organized discourse phenomenon objects as shown in Figure 4.
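As a rough illustration of the KS organization just described, a parent KS can hold the shared ks-function while language-specific sub-KS's supply only their own ks-data. The sketch below uses hypothetical Python class and method names (the paper's actual system is frame-based, not Python):

```python
class SemanticGenderFilter:
    """Parent KS: defines the shared ks-function; sub-KS's override ks-data only."""
    # ks-data: maps a pronoun's syntactic gender to the allowed semantic genders
    gender_data = {}

    def passes(self, anaphor_gender, hypothesis_gender):
        # ks-function, inherited unchanged by every sub-KS
        allowed = self.gender_data.get(anaphor_gender)
        return allowed is None or hypothesis_gender in allowed

class EnglishSemanticGenderFilter(SemanticGenderFilter):
    ks_language = "English"
    gender_data = {"he": {"male"}, "she": {"female"}, "it": {"neuter"}}

f = EnglishSemanticGenderFilter()
print(f.passes("he", "female"))  # False: "he" cannot refer to "Ms. Smith"
print(f.passes("he", "male"))    # True
```

A Japanese sub-KS would supply its own gender_data table while reusing the identical passes function, which is the sharing the paper describes.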
Each discourse phenomenon object has four slots (dp-definition, dp-main-strategy, dp-backup-strategy, and dp-language) whose values can be inherited. The dp-definition of a discourse phenomenon object specifies a definition of the discourse phenomenon so that an anaphoric discourse marker can be classified as one of the discourse phenomena. The dp-main-strategy slot specifies, for each phenomenon, a set of KS's to apply to resolve this particular discourse phenomenon. The dp-backup-strategy slot, on the other hand, provides a set of backup strategies to use in case the main strategy fails to propose any antecedent hypothesis. The dp-language slot specifies languages when the discourse phenomenon is only applicable to certain languages (e.g. Japanese "dou" ellipsis).

When different languages use different sets of KS's for main strategies or backup strategies for the same discourse phenomenon, language-specific dp-main-strategy or dp-backup-strategy values are specified. For example, when an anaphor is a 3rd person pronoun in a partitive construction (i.e. 3PRO-Partitive-Parent)2, Japanese uses a different generator for the main strategy (Current-and-Previous-DC) than English and Spanish (Current-and-Previous-Sentence).

2 e.g. "three of them" in English, "tres de ellos" in Spanish, "uchi san-nin" in Japanese

Because the discourse KS's are independent of discourse phenomena, the same discourse KS can be shared by different discourse phenomena. For example, the Semantic-Superclass filter is used by both Definite-NP and Pronoun, and the Recency orderer is used by most discourse phenomena.

2.2.3 Discourse Domain KB

The discourse domain KB contains discourse domain objects each of which defines a set of discourse phenomena to handle
Since texts in different domains exhibit different sets of dis- course phenomena, and since different applications even within the same domain may not have to handle the same set of discourse phenomena, the discourse domain KB is a way to customize and constrain the workload of the discourse module. 2.3 Resolution Engine The Resolution Engine is the run-time processing module which finds the best antecedent hypothesis for a given anaphor by using data in both the global discourse world and the discourse KB's. The Resolu- tion Engine's basic operations are shown in Figure 5. 2.3.1 Finding Antecedents The Resolution Engine uses the discourse phe- nomenon KB to classify an anaphor as one of the discourse phenomena (using dp-definition values) and to determine a set of KS's to apply to the anaphor (using dp-main-strategy values). The Engine then applies the generator KS to get an initial set of hy- potheses and removes those that do not pass tile filter 158 ; • -~ . ~ _ . ..... _-._~_-'~ ~, ~,-,~-~ ..................... Figure 4: Discourse Phenomenon KB For each anaphoric discourse marker ill the current sentence: Find-Antecedent Input: aalaphor to resolve, global discourse world Get-KSs-for-Discourse-Phenomenon Input: anaphor to resolve, discourse phenomenon KB Output: a set of discourse KS's Apply-KSs hlput: aalaphor to resolve, global discourse world, discourse KS's Output: the best hypothesis Output: the best hypothesis Update-Discourse-World Input: anaphor, best hypothesis, global discourse world Output: updated global discourse world Figure 5: Resolution Engine Operations KS's. If only one hypothesis rernains, it is returned as the anaphor's referent, but there may be more than one hypothesis or none at all. When there is more than one hypothesis, orderer KS's are invoked. However, when more than one or- derer KS could apply to the anaphor, we face the problem of how to combine the preference values re- turned by these multiple orderers. 
Some anaphora resolution systems (cf. Carbonell and Brown [6], Rich and LuperFoy [16], Rimon et al. [17]) assign scores to antecedent hypotheses, and the hypotheses are ranked according to their scores. Deciding the scores output by the orderers as well as the way the scores are combined requires more research with larger data. In our current system, therefore, when there are multiple hypotheses left, the most "promising" orderer is chosen for each discourse phenomenon. In Section 3, we discuss how we choose such an orderer for each discourse phenomenon by using statistical preference. In the future, we will experiment with ways for each orderer to assign "meaningful" scores to hypotheses.

When there is no hypothesis left after the main strategy for a discourse phenomenon is performed, a series of backup strategies specified in the discourse phenomenon KB are invoked. Like the main strategy, a backup strategy specifies which generators, filters, and orderers to use. For example, a backup strategy may choose a new generator which generates more hypotheses, or it may turn off some of the filters used by the main strategy to accept previously rejected hypotheses. How to choose a new generator or how to use only a subset of filters can be determined by training the discourse module on a corpus tagged with discourse relations, which is discussed in Section 3.

Thus, for example, in order to resolve a 3rd person pronoun in a partitive in an appositive (e.g. anaphor ID=1023 in Figure 7), the phenomenon KB specifies the following main strategy for Japanese: generator = Head-NP, filters = {Semantic-Amount, Semantic-Class, Semantic-Superclass}, orderer = Recency. This particular generator is chosen because in almost every example in 50 Japanese texts, this type of anaphora has its antecedent in its head NP. No syntactic filters are used because the anaphor has no useful syntactic information.
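The generate-filter-order control flow with backup strategies can be sketched as follows. The function and strategy names here are hypothetical stand-ins; the actual Resolution Engine obtains its strategies from the phenomenon KB rather than a hard-coded list:

```python
def resolve(anaphor, strategies):
    """Try a main strategy, then backups; each is a (generator, filters, orderer) triple."""
    for generator, filters, orderer in strategies:
        hypotheses = generator(anaphor)
        for f in filters:
            # each filter eliminates impossible hypotheses
            hypotheses = [h for h in hypotheses if f(anaphor, h)]
        if len(hypotheses) == 1:
            return hypotheses[0]          # unique survivor is the referent
        if hypotheses:
            # orderer ranks the survivors; take the best-ranked one
            return orderer(anaphor, hypotheses)[0]
        # no hypothesis left: fall through to the next (backup) strategy
    return None

# Toy main strategy: a generator proposing two candidates, one filter, one orderer.
main = (lambda a: ["committee", "patients"],
        [lambda a, h: h != "committee"],
        lambda a, hs: sorted(hs, reverse=True))
print(resolve("them", [main]))
```

Backup strategies simply appear as further triples in the list, typically with a wider generator or fewer filters, matching the fallback behavior described above.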
As a backup strategy, a new generator, Adjacent-NP, is chosen in case the parse fails to create an appositive relation between the antecedent NP ID=1022 and the anaphor.

    The AIDS Surveillance Committee confirmed 7 AIDS patients yesterday.
        DM-1 semantics: Patient.101
    Three of them were hemophiliac.
        DM-2 semantics: Person.102
    FC-5 co-referring-DM's: {DM-1 DM-2}
         semantics: Patient.101 ^ Person.102

Figure 6: Updating Discourse World

2.3.2 Updating the Global Discourse World

After each anaphor resolution, the global discourse world is updated as it would be in File Change Semantics (cf. Heim [11]), and as shown in Figure 6. First, the discourse marker for the anaphor is incorporated into the file card to which its antecedent discourse marker points so that the co-referring discourse markers point to the same file card. Then, the semantics information of the file card is updated so that it reflects the union of the information from all the co-referring discourse markers. In this way, a file card accumulates more information as the discourse processing proceeds.

The motivation for having both discourse markers and file cards is to make the discourse processing a monotonic operation. Thus, the discourse processing does not replace an anaphoric discourse marker with its antecedent discourse marker, but only creates or updates file cards. This is both theoretically and computationally advantageous because the discourse processing can be redone by just retracting the file cards and reusing the same discourse markers.

2.4 Advantages of Our Approach

Now that we have described the discourse module in detail, we summarize its unique advantages. First, it is the only working language-independent discourse system we are aware of. By "language-independent," we mean that the discourse module can be used for different languages if discourse knowledge is added for a new language.
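The monotonic file-card update can be sketched as below. This is a simplification with invented names: the real system stores full KB objects, not sets of predicate strings, but the key property is the same — discourse markers are never modified, while the file card accumulates them and the union of their semantics:

```python
class FileCard:
    def __init__(self):
        self.coreferring_dms = []
        self.semantics = set()   # cumulative semantic information

    def add(self, dm_name, dm_semantics):
        # Monotonic update: the DM itself stays untouched; only the card grows.
        self.coreferring_dms.append(dm_name)
        self.semantics |= dm_semantics

# Mirroring Figure 6: "7 AIDS patients" starts the card, "three of them" updates it.
fc = FileCard()
fc.add("DM-1", {"Patient.101"})
fc.add("DM-2", {"Person.102"})
print(fc.coreferring_dms, sorted(fc.semantics))
```

Because the markers are immutable, redoing resolution only requires discarding the file cards, which is the redo property the paragraph above points out.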
Second, since the anaphora resolution algorithm is not hard-coded in the Resolution Engine, but is kept in the discourse KB's, the discourse module is extensible to a new discourse phenomenon by choosing existing discourse KS's or adding new discourse KS's which the new phenomenon requires.

Making the discourse module robust is another important goal especially when dealing with real-world input, since by the time the input is processed and passed to the discourse module, the syntactic or semantic information of the input is often not as accurate as one would hope. The discourse module must be able to deal with partial information to make a decision. By dividing such decision-making into multiple discourse KS's and by letting just the applicable KS's fire, our discourse module handles partial information robustly.

Robustness of the discourse module is also manifested when the imperfect discourse KB's or an inaccurate input cause initial anaphor resolution to fail. When the main strategy fails, a set of backup strategies specified in the discourse phenomenon KB provides alternative ways to get the best antecedent hypothesis. Thus, the system tolerates its own insufficiency in the discourse KB's as well as degraded input in a robust fashion.

3 Evaluating and Training the Discourse Module

In order to choose the most effective KS's for a particular phenomenon, as well as to debug and track progress of the discourse module, we must be able to evaluate the performance of discourse processing. To perform objective evaluation, we compare the results of running our discourse module over a corpus with a set of manually created discourse tags. Examples of discourse-tagged text are shown in Figure 7. The metrics we use for evaluation are detailed in Figure 8.

3.1 Evaluating the Discourse Module

We evaluate overall performance by calculating recall and precision of anaphora resolution results.
The higher these measures are, the better the discourse module is working. In addition, we evaluate the discourse performance over new texts, using blackbox evaluation (e.g. scoring the results of a data extraction task).

To calculate a generator's failure rate, a filter's false positive rate, and an orderer's effectiveness, the algorithms in Figure 9 are used.3

3.2 Choosing Main Strategies

The uniqueness of our approach to discourse analysis is also shown by the fact that our discourse module can be trained for a particular domain, similar to the ways grammars have been trained (cf. Black

3 "The remaining antecedent hypotheses" are the hypotheses left after all the filters are applied for an anaphor.
For each anaphor exhibiting a given discourse phenomenon in the corpus, given the remaining antecedent hypotheses for the anaphor, for each applicable orderer, test if the orderer chooses the correct antecedent as the best hypothesis. Figure 9: Algorithms for Evaluating Discourse Knowledge Sources 161 <DM ID=-I000>T 1 ' ~'.~.~4S]~<./DM> (<DM ID=1001 Type=3PARTA [The AIDS Surveillance Corru~ttee of the Health and Welfare Ministry (Chairman, Prof¢.~or Emeritus Junlchi Sh/okawa), on the 6~h, newly COnfirmed 7 AIDS patients (of them 3 arc dead) and 17 iafec~d pcop!¢.] <DM IDol 020 Typc-~DNP Ref=1000>~'/',: ~-?'~)~ ~ ~,:.~.~" J~D M > (7)-~ "k~<DM ID=1021>IKIJ~.</DM>~<DM lD=1022 Type=BE Ref=1021> ~[~']~.:~'~</DM> (<DM ID=1023 Type=3PARTA Ref=1021>5 </DM>~-'Jx) . <DM ID=I02AType-ZPARTF Ref=1020></DM>--j ~, ~'-~.~'~.~1~)~. <DM ID=1025 Typc--ZPARTF Ref=1020></DM> <[}M ID=I026>~J~,</DM> (<DM ID=1027 Typc=JDEL Ref=1026>~ [4 of ~ 7 ~:wly discovered patients were male homosexuals<t022> (of them<1023> 2 are dead), I is heterosexual woaran, and 2 (ditto l) are by contaminated blood product.] La Comisio~n de Te'cnicos del SIDA informo' dyer de que existen <DM ID=2000>196 enfermos de <DM ID=2OOI>SIDA</DM></DM> en la Comunidad Valenciana. De <DM ID=2002 Type=PRO Reffi000>ellos </DM>, 147 corresponden a Valencia; 34, a Alicante; y 15, a Castello'n. Mayoritariamente <DM ID=2003 Type=DNP Ref=2001>la enfermedad</DM> afecta a <DM ID=2004 Type=GEN~Ios hombres</DM>, con 158 cases. Entre <DN ID=2OOfi Type=DNP Ref=2OOO>los afectados </DM> se encuentran nueve nin~os menores de 13 an'os. Figure 7: Discourse Tagged Corpora [4]). As Walker [lS] reports, different discourse algo- rithms (i.e. Brennan, Friedman and Pollard's center- ing approach [5] vs. Hobbs' algorithm [12]) perform differently on different types of data. This suggests that different sets of KS's are suitable for different domains. 
In order to determine, for each discourse phe- nomenon, the most effective combination of gener- ators, filters, and orderers, we evaluate overall per- formance of the discourse module (cf. Section 3.1) at different rate settings. We measure particular gen- erators, filters, and orders for different phenomena to identify promising strategies. We try to mini- mize the failure rate and the false positive rate while minimizing the average number of hypotheses that the generator suggests and maximizing the number of hypotheses that the filter eliminates. As for or- derers, those with highest effectiveness measures are chosen for each phenomenon. The discourse module is "trained" until a set of rate settings at which the overall performance of the discourse module becomes highest is obtained. Our approach is more general than Dagan and Itai [7], which reports on training their anaphora reso- lution component so that "it" can be resolved to its correct antecedent using statistical data on lexical re- lations derived from large corpora. We will certainly incorporate such statistical data into our discourse KS's. 3.3 Determining Backup Strategies If the main strategy for resolving a particular anaphor fails, a backup strategy that includes either a new set of filters or a new generator is atternpted. Since backup strategies are eml)loyed only when the main strategy does not return a hypothesis, a backup strat- egy will either contain fewer filters than the main strategy or it will employ a generator that returns more hypotheses. If the generator has a non-zero failure rate 4, a new generator with more generating capability is chosen from the generator tree in the knowledge source KB as a backup strategy. Filters that occur in the main strategy but have false positive rates above a certain threshold are not included in the backup strategy. 
4 Related Work Our discourse module is similar to Carbonell and Brown [6] and Rich and LuperFoy's [16] work in us- ing multiple KS's rather than a monolithic approach (cf. Grosz, Joshi and Weinstein [9], Grosz and Sidner [8], Hobbs [12], Ingria and Stallard [13]) for anaphora resolution. However, the main difference is that our system can deal with multiple languages as well as multiple discourse phenomena 5 because of our more fine-grained and hierarchically organized KS's. Also, our system can be evaluated and tuned at a low level because each KS is independent of discourse phenom- ena and can be turned off and on for automatic eval- uation. This feature is very important because we use our system to process real-world data in different domains for tasks involving text understanding. References [i] Chinatsu Aone, Hatte Blejer, Sharon Flank, Douglas McKee, and Sandy Shinn. The Murasaki Project: Multilingual Natural Lan- guage Understanding. In Proceedings of the ARPA Human Language Technology Workshop, 1993. [2] Chinatsu Aone, Doug McKee, Sandy Shinn, and Hatte Blejer. SRA: Description of the SOLOMON System as Used for MUC-4. In Pro- ceedings of Fourth Message Understanding Con- ferencc (MUC-4), 1992. 4 Zero failure rate means that tile hypotheses generated by a generator always contained tile correct antecedent. SCarbonell and Brown's system handles only intersentential 3rd person pronotms and some defilfite NPs, and Rich and LuperFoy's system handles only 3rd person pronouns. 162 [3] Damaris Ayuso. Discourse Entities in JANUS. In Proceedings of 27th Annual Meeting of the ACL, 1989. [4] Ezra Black, John Lafferty, and Salim Roukos. Development and Evaluation of a Broad- (:',overage Probablistic Grammar of English- Language Computer Manuals. In Proceedings of 30lh Annual Meeting of the ACL, 1992. [5] Susan Brennan, Marilyn Friedman, and Carl Pollard. A Centering Approach to Pronouns. In Proceedings of 25th Annual Meeting of the A(,'L, 1987. [6] Jairne G. 
Carbonell and Ralf D. Brown. Anaphora Resolution: A Multi-Strategy Ap- /)roach. In Proceedings of the 12lh International Conference on Computational Linguistics, 1988. [7] Ido Dagan and Alon Itai. Automatic Acquisition of Constraints for the Resolution of Anaphora References and Syntactic Ambiguities. In Pro- ceedings of the 13th International Conference on Computational Linguistics, 1990. [8] Barbara Crosz and Candace L. Sidner. Atten- tions, Intentions and the Structure of Discourse. Computational Linguistics, 12, 1986. [9] Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. Providing a Unified Account of Def- inite Noun Phrases in Discourse. In Proceedings of 21st Annual Meeting of the ACL, 1983. [10] Raymonde Guindon, Paul Stadky, Hans Brun- net, and Joyce Conner. The Structure of User- Adviser Dialogues: Is there Method in their Madness? In Proceedings of 24th Annual Meet- ing of the ACL, 1986. [11] Irene Helm. The Semantics of Definite and In- definite Noun Phrases. PhD thesis, University of Massachusetts, 1982. [12] Jerry R. Hohbs. Pronoun Resolution. Technical Report 76-1, Department of Computer Science, City College, City University of New York, 1976. [13] Robert Ingria and David Stallard. A Computa- tional Mechanism for Pronominal Reference. In Proceedings of 27th Annual Meeting of the ACL, 1989. [14] Hans Kamp. A Theory of Truth and Semantic Representation. In J. Groenendijk et al., edi- tors, Formal Methods in the Study of Language. Mathematical Centre, Amsterdam, 1981. [15] Lauri Karttunen. Discourse Referents. In J. Mc- Cawley, editor, Syntax and Semantics 7. Aca- demic Press, New York, 1976. [16] Elaine Rich and Susan LuperFoy. An Architec- ture for Anaphora Resolution. In Proceedings of the Second Conference on Applied Natural Lan- guage Processing, 1988. [17] Mort Rimon, Michael C. McCord, Ulrike Schwall, and Pilar Mart~nez. Advances in Ma- chine Translation Research in IBM. In Proceed- zngs of Machine Translation Summit IIl, 1991. 
[18] Marilyn A. Walker. Evaluating Discourse Processing Algorithms. In Proceedings of 27th Annual Meeting of the ACL, 1989.

[19] Bonnie Webber. A Formal Approach to Discourse Anaphora. Technical report, Bolt, Beranek, and Newman, 1978.
CONTEXTUAL WORD SIMILARITY AND ESTIMATION FROM SPARSE DATA

Ido Dagan
AT&T Bell Laboratories
600 Mountain Avenue
Murray Hill, NJ 07974
[email protected]

Shaul Marcus
Computer Science Department
Technion
Haifa 32000, Israel
[email protected]

Shaul Markovitch
Computer Science Department
Technion
Haifa 32000, Israel
[email protected]

Abstract

In recent years there is much interest in word cooccurrence relations, such as n-grams, verb-object combinations, or cooccurrence within a limited context. This paper discusses how to estimate the probability of cooccurrences that do not occur in the training data. We present a method that makes local analogies between each specific unobserved cooccurrence and other cooccurrences that contain similar words, as determined by an appropriate word similarity metric. Our evaluation suggests that this method performs better than existing smoothing methods, and may provide an alternative to class based models.

1 Introduction

Statistical data on word cooccurrence relations play a major role in many corpus based approaches for natural language processing. Different types of cooccurrence relations are in use, such as cooccurrence within a consecutive sequence of words (n-grams), within syntactic relations (verb-object, adjective-noun, etc.) or the cooccurrence of two words within a limited distance in the context. Statistical data about these various cooccurrence relations is employed for a variety of applications, such as speech recognition (Jelinek, 1990), language generation (Smadja and McKeown, 1990), lexicography (Church and Hanks, 1990), machine translation (Brown et al., ; Sadler, 1989), information retrieval (Maarek and Smadja, 1989) and various disambiguation tasks (Dagan et al., 1991; Hindle and Rooth, 1991; Grishman et al., 1986; Dagan and Itai, 1990).
A major problem for the above applications is how to estimate the probability of cooccurrences that were not observed in the training corpus. Due to data sparseness in unrestricted language, the ag- gregate probability of such cooccurrences is large and can easily get to 25% or more, even for a very large training corpus (Church and Mercer, 1992). Since applications often have to compare alterna- tive hypothesized cooccurrences, it is important to distinguish between those unobserved cooccur- rences that are likely to occur in a new piece of text and those that are not. These distinctions ought to be made using the data that do occur in the cor- pus. Thus, beyond its own practical importance, the sparse data problem provides an informative touchstone for theories on generalization and anal- ogy in linguistic data. The literature suggests two major approaches for solving the sparse data problem: smoothing and class based methods. Smoothing methods es- timate the probability of unobserved cooccurrences using frequency information (Good, 1953; Katz, 1987; Jelinek and Mercer, 1985; Church and Gale, 1991). Church and Gale (Church and Gale, 1991) show, that for unobserved bigrams, the estimates of several smoothing methods closely agree with the probability that is expected using the frequencies of the two words and assuming that their occurrence is independent ((Church and Gale, 1991), figure 5). Furthermore, using held out data they show that this is the probability that should be estimated by a smoothing method that takes into account the fre- quencies of the individual words. Relying on this result, we will use frequency based es~imalion (using word frequencies) as representative for smoothing estimates of unobserved cooccurrences, for compar- ison purposes. As will be shown later, the problem with smoothing estimates is that they ignore the expected degree of association between the specific words of the cooccurrence. 
For example, we would not like to estimate the same probability for two cooccurrences like 'eat bread' and 'eat cars', despite the fact that both 'bread' and 'cars' may have the same frequency. Class based models (Brown et al., ; Pereira et al., 1993; Hirschman, 1986; Resnik, 1992) distinguish between unobserved cooccurrences using classes of "similar" words. The probability of a specific cooccurrence is determined using generalized parameters about the probability of class cooccurrence. This approach, which follows long traditions in semantic classification, is very appealing, as it attempts to capture "typical" properties of classes of words. However, it is not clear at all that unrestricted language is indeed structured the way it is assumed by class based models. In particular, it is not clear that word cooccurrence patterns can be structured and generalized to class cooccurrence parameters without losing too much information. This paper suggests an alternative approach which assumes that class based generalizations should be avoided, and therefore eliminates the intermediate level of word classes. Like some of the class based models, we use a similarity metric to measure the similarity between cooccurrence patterns of words. But then, rather than using this metric to construct a set of word classes, we use it to identify the most specific analogies that can be drawn for each specific estimation. Thus, to estimate the probability of an unobserved cooccurrence of words, we use data about other cooccurrences that were observed in the corpus, and contain words that are similar to the given ones. For example, to estimate the probability of the unobserved cooccurrence 'negative results', we use cooccurrences such as 'positive results' and 'negative numbers', that do occur in our corpus. The analogies we make are based on the assumption that similar word cooccurrences have similar values of mutual information.
Accordingly, our similarity metric was developed to capture similarities between vectors of mutual information values. In addition, we use an efficient search heuristic to identify the most similar words for a given word, thus making the method computationally affordable. Figure 1 illustrates a portion of the similarity network induced by the similarity metric (only some of the edges, with relatively high values, are shown). This network may be found useful for other purposes, independently of the estimation method. The estimation method was implemented using the relation of cooccurrence of two words within a limited distance in a sentence. The proposed method, however, is general and is applicable for any type of lexical cooccurrence. The method was evaluated in two experiments. In the first one we achieved a complete scenario of the use of the estimation method, by implementing a variant of the disambiguation method in (Dagan et al., 1991), for sense selection in machine translation. The estimation method was then successfully used to increase the coverage of the disambiguation method by 15%, with an increase of the overall precision compared to a naive, frequency based, method. In the second experiment we evaluated the estimation method on a data recovery task. The task simulates a typical scenario in disambiguation, and also relates to theoretical questions about redundancy and idiosyncrasy in cooccurrence data. In this evaluation, which involved 300 examples, the performance of the estimation method was by 27% better than frequency based estimation.

2 Definitions

We use the term cooccurrence pair, written as (x, y), to denote a cooccurrence of two words in a sentence within a distance of no more than d words. When computing the distance d, we ignore function words such as prepositions and determiners. In the experiments reported here d = 3.
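The cooccurrence-pair relation can be made concrete with a short sketch. The whitespace tokenization and the small function-word list below are illustrative assumptions of ours, not the authors' actual preprocessing:

```python
# Sketch of cooccurrence-pair extraction within a window of d words,
# skipping function words when measuring distance (d = 3, as in the paper).
# FUNCTION_WORDS and the tokenizer are illustrative assumptions.

FUNCTION_WORDS = {"a", "an", "the", "of", "in", "to", "for", "with"}

def cooccurrence_pairs(sentence, d=3):
    """Return directional pairs (x, y): y follows x within d content words
    (function words are not counted toward the distance)."""
    content = [w for w in sentence.lower().split() if w not in FUNCTION_WORDS]
    pairs = []
    for i, x in enumerate(content):
        for y in content[i + 1 : i + 1 + d]:
            pairs.append((x, y))  # directional: (x, y) != (y, x)
    return pairs

pairs = cooccurrence_pairs("the chapter describes the estimation method")
# ('chapter', 'describes') is among the extracted pairs
```

Note that the pairs are directional, matching the definition above: extracting ('chapter', 'describes') does not also produce ('describes', 'chapter').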
A cooccurrence pair can be viewed as a generalization of a bigram, where a bigram is a cooccurrence pair with d = 1 (without ignoring function words). As with bigrams, a cooccurrence pair is directional, i.e. (x, y) ≠ (y, x). This captures some information about the asymmetry in the linear order of linguistic relations, such as the fact that verbs tend to precede their objects and follow their subjects. The mutual information of a cooccurrence pair, which measures the degree of association between the two words (Church and Hanks, 1990), is defined as (Fano, 1961):

I(x, y) = log2 [P(x, y) / (P(x) P(y))] = log2 [P(x|y) / P(x)] = log2 [P(y|x) / P(y)]   (1)

where P(x) and P(y) are the probabilities of the events x and y (occurrences of words, in our case) and P(x, y) is the probability of the joint event (a cooccurrence pair). We estimate mutual information values using the Maximum Likelihood Estimator (MLE):

Î(x, y) = log2 [P̂(x, y) / (P̂(x) P̂(y))] = log2 [(N/d) · f(x, y) / (f(x) f(y))]   (2)

where f denotes the frequency of an event and N is the length of the corpus. While better estimates for small probabilities are available (Good, 1953; Church and Gale, 1991), MLE is the simplest to implement and was adequate for the purpose of this study. Due to the unreliability of measuring negative mutual information values in corpora that are not extremely large, we have considered in this work any negative value to be 0. We also set Î(x, y) to 0 if f(x, y) = 0. Thus, we assume in both cases that the association between the two words is as expected by chance.

[Figure 1: A portion of the similarity network. The shown edges link words such as 'paper', 'articles', 'papers', 'conference', 'workshop', 'symposium', 'book' and 'documentation', with similarity values roughly between 0.10 and 0.14.]

3 Estimation for an Unobserved Cooccurrence

Assume that we have at our disposal a method for determining similarity between cooccurrence patterns of two words (as described in the next section).
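Equation 2, together with the truncation of negative and undefined values to 0, can be sketched as follows. The counts are taken from Table 1 of the paper; the function itself is our sketch:

```python
import math

def mutual_information(f_xy, f_x, f_y, N, d=3):
    """MLE estimate of I(x, y) per equation (2):
    log2((N/d) * f(x,y) / (f(x) * f(y))).
    Following the paper, values for unseen pairs and negative values
    are truncated to 0."""
    if f_xy == 0 or f_x == 0 or f_y == 0:
        return 0.0
    value = math.log2((N / d) * f_xy / (f_x * f_y))
    return max(value, 0.0)

# Reproducing a Table 1 value for (introduction, describes),
# with corpus length N = 8,871,126 and window d = 3:
mi = mutual_information(f_xy=5, f_x=464, f_y=277, N=8_871_126)  # about 6.85
```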
We say that two cooccurrence pairs, (w1, w2) and (w'1, w'2), are similar if w'1 is similar to w1 and w'2 is similar to w2. A special (and stronger) case of similarity is when the two pairs differ only in one of their words (e.g. (w1, w'2) and (w1, w2)). This special case is less susceptible to noise than unrestricted similarity, as we replace only one of the words in the pair. In our experiments, which involved rather noisy data, we have used only this restricted type of similarity. The mathematical formulations, though, are presented in terms of the general case. The question that arises now is what analogies can be drawn between two similar cooccurrence pairs, (w1, w2) and (w'1, w'2). Their probabilities cannot be expected to be similar, since the probabilities of the words in each pair can be different. However, since we assume that w1 and w'1 have similar cooccurrence patterns, and so do w2 and w'2, it is reasonable to assume that the mutual information of the two pairs will be similar (recall that mutual information measures the degree of association between the words of the pair). Consider for example the pair (chapter, describes), which does not occur in our corpus¹. This pair was found to be similar to the pairs (introduction, describes), (book, describes) and (section, describes), that do occur in the corpus. Since these pairs occur in the corpus, we estimate their mutual information values using equation 2, as shown in Table 1. We then take the average of these mutual information values as the similarity based estimate for I(chapter, describes), denoted as Ĩ(chapter, describes)². This represents the assumption that the word 'describes' is associated with the word 'chapter' to a similar extent as it is associated with the words 'introduction', 'book' and 'section'.

¹ We used a corpus of about 9 million words of texts in the computer domain, taken from articles posted to the USENET news system.
Table 2 demonstrates how the analogy is carried out also for a pair of unassociated words, such as (chapter, knows). In our current implementation, we compute Ĩ(w1, w2) using up to 6 most similar words to each of w1 and w2, and averaging the mutual information values of similar pairs that occur in the corpus (6 is a parameter, tuned for our corpus; in some cases the similarity method identifies less than 6 similar words). Having an estimate for the mutual information of a pair, we can estimate its expected frequency in a corpus of the given size using a variation of equation 2:

f̃(w1, w2) = (d/N) f(w1) f(w2) 2^Ĩ(w1, w2)   (3)

In our example, f(chapter) = 395, N = 8,871,126 and d = 3, getting a similarity based estimate of f̃(chapter, describes) = 3.15. This value is much higher than the frequency based estimate (0.037), reflecting the plausibility of the specific combination of words³. On the other hand, the similarity based estimate for f̃(chapter, knows) is 0.124, which is identical to the frequency based estimate, reflecting the fact that there is no expected association between the two words (notice that the frequency based estimate is higher for the second pair, due to the higher frequency of 'knows').

(w1, w2)                   Î(w1, w2)   f(w1, w2)   f(w1)   f(w2)
(introduction, describes)    6.85          5         464     277
(book, describes)            6.27         13        1800     277
(section, describes)         6.12          6         923     277
Average: 6.41

Table 1: The similarity based estimate as an average on similar pairs: Ĩ(chapter, describes) = 6.41

(w1, w2)                   Î(w1, w2)   f(w1, w2)   f(w1)   f(w2)
(introduction, knows)        0             0         464     928
(book, knows)                0             0        1800     928
(section, knows)             0             0         923     928
Average: 0

Table 2: The similarity based estimate for a pair of unassociated words: Ĩ(chapter, knows) = 0

² We use Ĩ for similarity based estimates, and reserve Î for the traditional maximum likelihood estimate. The similarity based estimate will be used for cooccurrence pairs that do not occur in the corpus.
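Putting the averaging step together with equation 3 reproduces the numbers for (chapter, describes). N, d and the frequencies are the paper's; the code itself is our sketch:

```python
# Similarity based estimation for the unobserved pair (chapter, describes).
# Mutual information values of the similar pairs are taken from Table 1;
# the helper function is our own sketch of equation (3).

N, d = 8_871_126, 3

def expected_frequency(mi, f_w1, f_w2, N=N, d=d):
    """Equation (3): invert the mutual information formula to obtain an
    expected cooccurrence frequency for a corpus of length N."""
    return (d / N) * f_w1 * f_w2 * 2 ** mi

similar_mi = [6.85, 6.27, 6.12]              # (introduction/book/section, describes)
mi_est = sum(similar_mi) / len(similar_mi)   # average, about 6.41

f_est = expected_frequency(mi_est, f_w1=395, f_w2=277)   # similarity based
f_freq = (d / N) * 395 * 277                             # frequency based
# f_est is close to the paper's 3.15, while f_freq is about 0.037
```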
4 The Similarity Metric

Assume that we need to determine the degree of similarity between two words, w1 and w2. Recall that if we decide that the two words are similar, then we may infer that they have similar mutual information with some other word, w. This inference would be reasonable if we find that on average w1 and w2 indeed have similar mutual information values with other words in the lexicon. The similarity metric therefore measures the degree of similarity between these mutual information values. We first define the similarity between the mutual information values of w1 and w2 relative to a single other word, w. Since cooccurrence pairs are directional, we get two measures, defined by the position of w in the pair. The left context similarity of w1 and w2 relative to w, termed simL(w1, w2, w), is defined as the ratio between the two mutual information values, having the larger value in the denominator:

simL(w1, w2, w) = min(I(w, w1), I(w, w2)) / max(I(w, w1), I(w, w2))   (4)

This way we get a uniform scale between 0 and 1, in which higher values reflect higher similarity. If both mutual information values are 0, then simL(w1, w2, w) is defined to be 0. The right context similarity, simR(w1, w2, w), is defined equivalently, for I(w1, w) and I(w2, w)⁴. Using definition 4 for each word w in the lexicon, we get 2·l similarity values for w1 and w2, where l is the size of the lexicon. The general similarity between w1 and w2, termed sim(w1, w2), is defined as a weighted average of these 2·l values.

³ The frequency based estimate for the expected frequency of a cooccurrence pair, assuming independent occurrence of the two words and using their individual frequencies, is (d/N) f(w1) f(w2). As mentioned earlier, we use this estimate as representative for smoothing estimates of unobserved cooccurrences.
It is necessary to use some weighting mechanism, since small values of mutual information tend to be less significant and more vulnerable to noisy data. We found that the maximal value involved in computing the similarity relative to a specific word provides a useful weight for this word in computing the average. Thus, the weight for a specific left context similarity value, WL(w1, w2, w), is defined as:

WL(w1, w2, w) = max(I(w, w1), I(w, w2))   (5)

(notice that this is the same as the denominator in definition 4). This definition provides intuitively appropriate weights, since we would like to give more weight to context words that have a large mutual information value with at least one of w1 and w2. The mutual information value with the other word may then be large, providing a strong "vote" for similarity, or may be small, providing a strong "vote" against similarity. The weight for a specific right context similarity value is defined equivalently. Using these weights, we get the weighted average in Figure 2 as the general definition of similarity⁵.

sim(w1, w2) = Σ_{w ∈ lexicon} [simL(w1, w2, w) · WL(w1, w2, w) + simR(w1, w2, w) · WR(w1, w2, w)] / Σ_{w ∈ lexicon} [WL(w1, w2, w) + WR(w1, w2, w)]
            = Σ_{w ∈ lexicon} [min(I(w, w1), I(w, w2)) + min(I(w1, w), I(w2, w))] / Σ_{w ∈ lexicon} [max(I(w, w1), I(w, w2)) + max(I(w1, w), I(w2, w))]   (6)

Figure 2: The definition of the similarity metric.

Exhaustive Search           Approximation
similar words     sim       similar words     sim
aspects          1.000      aspects          1.000
topics           0.100      topics           0.100
areas            0.088      areas            0.088
expert           0.079      expert           0.079
issues           0.076      issues           0.076
approaches       0.072      concerning       0.069

Table 3: The most similar words of aspects: heuristic and exhaustive search produce nearly the same results.

⁴ In the case of cooccurrence pairs, a word may be involved in two types of relations, being the left or right argument of the pair. The definitions can be easily adopted to cases in which there are more types of relations, such as provided by syntactic parsing.
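The metric of Figure 2 can be sketched directly from equation 6. The toy mutual-information table below is an illustrative assumption, not corpus data:

```python
# Sketch of the similarity metric, equation (6). `mi` maps directional
# pairs (x, y) to their (truncated) mutual information; pairs not listed
# are taken as 0. The table and lexicon are toy examples.

def sim(w1, w2, lexicon, mi):
    def I(x, y):
        return mi.get((x, y), 0.0)
    num = den = 0.0
    for w in lexicon:
        # left context (w precedes w1/w2) and right context (w follows)
        num += min(I(w, w1), I(w, w2)) + min(I(w1, w), I(w2, w))
        den += max(I(w, w1), I(w, w2)) + max(I(w1, w), I(w2, w))
    return num / den if den else 0.0

mi = {("this", "paper"): 4.0, ("this", "article"): 3.0,
      ("paper", "describes"): 5.0, ("article", "describes"): 5.0}
lexicon = ["this", "describes", "paper", "article"]
s = sim("paper", "article", lexicon, mi)   # (3 + 5) / (4 + 5) = 8/9
```

The reflexivity property noted below holds for this sketch as well: sim(w, w) comes out as 1 whenever the word has any nonzero association.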
The values produced by our metric have an intuitive interpretation, as denoting a "typical" ratio between the mutual information values of each of the two words with another third word. The metric is reflexive (sim(w, w) = 1), symmetric (sim(w1, w2) = sim(w2, w1)), but is not transitive (the values of sim(w1, w2) and sim(w2, w3) do not imply anything on the value of sim(w1, w3)). The left column of Table 3 lists the six most similar words to the word 'aspects' according to this metric, based on our corpus. More examples of similarity were shown in Figure 1.

4.1 An efficient search heuristic

The estimation method of section 3 requires that we identify the most similar words of a given word w. Doing this by computing the similarity between w and each word in the lexicon is computationally very expensive (O(l²), where l is the size of the lexicon, and O(l³) to do this in advance for all the words in the lexicon). To account for this problem we developed a simple heuristic that searches for words that are potentially similar to w, using thresholds on mutual information values and frequencies of cooccurrence pairs. The search is based on the property that when computing sim(w1, w2), words that have high mutual information values with both w1 and w2 make the largest contributions to the value of the similarity measure. Also, high and reliable mutual information values are typically associated with relatively high frequencies of the involved cooccurrence pairs. We therefore search first for all the "strong neighbors" of w, which are defined as words whose cooccurrence with w has high mutual information and high frequency, and then search for all their "strong neighbors".

⁵ The nominator in our metric resembles the similarity metric in (Hindle, 1990). We found, however, that the difference between the two metrics is important, because the denominator serves as a normalization factor.
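The two-stage neighbor search can be sketched as follows. The statistics table, the thresholds, and the function names are illustrative assumptions:

```python
# Sketch of the "strong neighbors of strong neighbors" heuristic.
# `pair_stats` maps directional pairs (x, y) -> (frequency, mutual_info);
# the thresholds are illustrative, not the authors' tuned values.

def strong_neighbors(w, pair_stats, min_freq=5, min_mi=3.0):
    """Words whose cooccurrence with w (in either direction) has both
    high frequency and high mutual information."""
    out = set()
    for (x, y), (freq, mi) in pair_stats.items():
        if freq >= min_freq and mi >= min_mi:
            if x == w:
                out.add(y)
            elif y == w:
                out.add(x)
    return out

def candidate_similar_words(w, pair_stats):
    """Candidates for the most similar words of w: the strong neighbors
    of w's strong neighbors. The full metric is then computed only for
    these candidates instead of the whole lexicon."""
    candidates = set()
    for n in strong_neighbors(w, pair_stats):
        candidates |= strong_neighbors(n, pair_stats)
    candidates.discard(w)
    return candidates

stats = {("paper", "describes"): (10, 5.0),
         ("article", "describes"): (8, 4.5),
         ("paper", "reviews"): (2, 1.0)}
cands = candidate_similar_words("paper", stats)   # {'article'}
```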
The words found this way ("the strong neighbors of the strong neighbors of w") are considered as candidates for being similar words of w, and the similarity value with w is then computed only for these words. We thus get an approximation for the set of words that are most similar to w. For the example given in Table 3, the exhaustive method required 17 minutes of CPU time on a Sun 4 workstation, while the approximation required only 7 seconds. This was done using a data base of 1,377,653 cooccurrence pairs that were extracted from the corpus, along with their counts.

5 Evaluations

5.1 Word sense disambiguation in machine translation

The purpose of the first evaluation was to test whether the similarity based estimation method can enhance the performance of a disambiguation technique. Typically in a disambiguation task, different cooccurrences correspond to alternative interpretations of the ambiguous construct. It is therefore necessary that the probability estimates for the alternative cooccurrences will reflect the relative order between their true probabilities. However, a consistent bias in the estimate is usually not harmful, as it still preserves the correct relative order between the alternatives. To carry out the evaluation, we implemented a variant of the disambiguation method of (Dagan et al., 1991), for sense disambiguation in machine translation. We term this method as TWS, for Target Word Selection. Consider for example the Hebrew phrase 'laxtom xoze shalom', which translates as 'to sign a peace treaty'. The word 'laxtom', however, is ambiguous, and can be translated to either 'sign' or 'seal'. To resolve the ambiguity, the TWS method first generates the alternative lexical cooccurrence patterns in the target language, that correspond to alternative selections of target words.

                 Precision   Applicability
TWS                 85.5         64.3
Augmented TWS       83.6         79.6
Word Frequency      66.9        100

Table 4: Results of TWS, Augmented TWS and Word Frequency methods
Then, it prefers those target words that generate more frequent patterns. In our example, the word 'sign' is preferred over the word 'seal', since the pattern 'to sign a treaty' is much more frequent than the pattern 'to seal a treaty'. Similarly, the word 'xoze' is translated to 'treaty' rather than 'contract', due to the high frequency of the pattern 'peace treaty'⁶. In our implementation, cooccurrence pairs were used instead of lexical cooccurrence within syntactic relations (as in the original work), to save the need of parsing the corpus. We randomly selected from a software manual a set of 269 examples of ambiguous Hebrew words in translating Hebrew sentences to English. The expected success rate of random selection for these examples was 23%. The similarity based estimation method was used to estimate the expected frequency of unobserved cooccurrence pairs, in cases where none of the alternative pairs occurred in the corpus (each pair corresponds to an alternative target word). Using this method, which we term Augmented TWS, 41 additional cases were disambiguated, relative to the original method. We thus achieved an increase of about 15% in the applicability (coverage) of the TWS method, with a small decrease in the overall precision. The performance of the Augmented TWS method on these 41 examples was about 15% higher than that of a naive, Word Frequency method, which always selects the most frequent translation. It should be noted that the Word Frequency method is equivalent to using the frequency based estimate, in which higher word frequencies entail a higher estimate for the corresponding cooccurrence. The results of the experiment are summarized in Table 4.
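The selection step of TWS, and the similarity based fallback that defines Augmented TWS, can be sketched as follows. The pattern counts and the function names are our illustrative assumptions, not the actual system:

```python
# Sketch of Target Word Selection: prefer the translation whose target
# language cooccurrence pattern is most frequent; when no alternative
# pattern was observed, fall back to a similarity based frequency
# estimate (the Augmented TWS idea). Counts here are invented.

def select_translation(alternatives, pattern_freq, estimate=None):
    """alternatives: list of (translation, pattern) pairs.
    pattern_freq: observed corpus counts for patterns.
    estimate: optional fallback estimator for unobserved patterns."""
    observed = [(pattern_freq.get(p, 0), t) for t, p in alternatives]
    if any(freq > 0 for freq, _ in observed):
        return max(observed)[1]
    if estimate is not None:  # similarity based fallback
        return max((estimate(p), t) for t, p in alternatives)[1]
    return None  # method not applicable

freqs = {("sign", "treaty"): 120, ("seal", "treaty"): 2}
choice = select_translation(
    [("sign", ("sign", "treaty")), ("seal", ("seal", "treaty"))], freqs)
# choice == "sign"
```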
5.2 A data recovery task

In the second evaluation, the estimation method had to distinguish between members of two sets of cooccurrence pairs, one of them containing pairs with relatively high probability and the other pairs with low probability. To a large extent, this task simulates a typical scenario in disambiguation, as demonstrated in the first evaluation. Ideally, this evaluation should be carried out using a large set of held out data, which would provide good estimates for the true probabilities of the pairs in the test sets. The estimation method should then use a much smaller training corpus, in which none of the example pairs occur, and then should try to recover the probabilities that are known to us from the held out data. However, such a setting requires that the held out corpus would be several times larger than the training corpus, while the latter should be large enough for robust application of the estimation method. This was not feasible with the size of our corpus, and the rather noisy data we had. To avoid this problem, we obtained the set of pairs with high probability from the training corpus, selecting pairs that occur at least 5 times. We then deleted these pairs from the data base that is used by the estimation method, forcing the method to recover their probabilities using the other pairs of the corpus. The second set, of pairs with low probability, was obtained by constructing pairs that do not occur in the corpus. The two sets, each of them containing 150 pairs, were constructed randomly and were restricted to words with individual frequencies between 500 and 2500.

⁶ It should be emphasized that the TWS method uses only a monolingual target corpus, and not a bilingual corpus as in other methods ((Brown et al., 1991; Gale et al., 1992)). The alternative cooccurrence patterns in the target language, which correspond to the alternative translations of the ambiguous source words, are constructed using a bilingual lexicon.
We term these two sets as the occurring and non-occurring sets. The task of distinguishing between members of the two sets, without access to the deleted frequency information, is by no means trivial. Trying to use the individual word frequencies will result in performance close to that of using random selection. This is because the individual frequencies of all participating words are within the same range of values. To address the task, we used the following procedure: the frequency of each cooccurrence pair was estimated using the similarity based estimation method. If the estimated frequency was above 2.5 (which was set arbitrarily as the average of 5 and 0), the pair was recovered as a member of the occurring set. Otherwise, it was recovered as a member of the non-occurring set. Out of the 150 pairs of the occurring set, our method correctly identified 119 (79%). For the non-occurring set, it correctly identified 126 pairs (84%). Thus, the method achieved an overall accuracy of 81.6%. Optimal tuning of the threshold, to a value of 2, improves the overall accuracy to 85%, where about 90% of the members of the occurring set and 80% of those in the non-occurring set are identified correctly. This is contrasted with the optimal discrimination that could be achieved by frequency based estimation, which is 58%. Figures 3 and 4 illustrate the results of the experiment. Figure 3 shows the distributions of the expected frequency of the pairs in the two sets, using similarity based and frequency based estimation. It clearly indicates that the similarity based method gives high estimates mainly to members of the occurring set and low estimates mainly to members of the non-occurring set. Frequency based estimation, on the other hand, makes a much poorer distinction between the two sets. Figure 4 plots the two types of estimation for pairs in the occurring set as a function of their true frequency in the corpus.
It can be seen that while the frequency based estimates are always low (by construction) the similarity based estimates are in most cases closer to the true value.

[Figure 3: Frequency distributions of estimated frequency values for occurring and non-occurring sets, under similarity based estimation (estimated values 0 to 12) and frequency based estimation (estimated values 0 to 2.2), with the optimal threshold (85%) marked.]

6 Conclusions

In both evaluations, similarity based estimation performs better than frequency based estimation. This indicates that when trying to estimate cooccurrence probabilities, it is useful to consider the cooccurrence patterns of the specific words and not just their frequencies, as smoothing methods do. Comparing with class based models, our approach suggests the advantage of making the most specific analogies for each word, instead of making analogies with all members of a class, via general class parameters. This raises the question whether generalizations over word classes, which follow long traditions in semantic classification, indeed provide the best means for inferencing about properties of words.

Acknowledgements

We are grateful to Alon Itai for his help in initiating this research. We would like to thank Ken Church and David Lewis for their helpful comments on earlier drafts of this paper.

REFERENCES

Peter Brown, Vincent Della Pietra, Peter deSouza, Jenifer Lai, and Robert Mercer. Class-based n-gram models of natural language. Computational Linguistics. (To appear).

P. Brown, S. Della Pietra, V. Della Pietra, and R. Mercer. 1991. Word sense disambiguation using statistical methods. In Proc. of the Annual Meeting of the ACL.

Kenneth W. Church and William A. Gale. 1991. A comparison of the enhanced Good-Turing
[Figure 4: Similarity based estimation ('+') and frequency based estimation ('o') for the expected frequency of members of the occurring set, as a function of the true frequency (shown for true frequencies up to about 18).]

and deleted estimation methods for estimating probabilities of English bigrams. Computer Speech and Language, 5:19-54.

Kenneth W. Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22-29.

Kenneth W. Church and Robert L. Mercer. 1992. Introduction to the special issue in computational linguistics using large corpora. Computational Linguistics. (In press).

Ido Dagan and Alon Itai. 1990. Automatic acquisition of constraints for the resolution of anaphora references and syntactic ambiguities. In Proc. of COLING.

Ido Dagan, Alon Itai, and Ulrike Schwall. 1991. Two languages are more informative than one. In Proc. of the Annual Meeting of the ACL.

R. Fano. 1961. Transmission of Information. Cambridge, Mass: MIT Press.

William Gale, Kenneth Church, and David Yarowsky. 1992. Using bilingual materials to develop word sense disambiguation methods. In Proc. of the International Conference on Theoretical and Methodological Issues in Machine Translation.

I. J. Good. 1953. The population frequencies of species and the estimation of population parameters. Biometrika, 40:237-264.

R. Grishman, L. Hirschman, and Ngo Thanh Nhan. 1986. Discovery procedures for sublanguage selectional patterns - initial experiments. Computational Linguistics, 12:205-214.

D. Hindle and M. Rooth. 1991. Structural ambiguity and lexical relations. In Proc. of the Annual Meeting of the ACL.

D. Hindle. 1990. Noun classification from predicate-argument structures. In Proc. of the Annual Meeting of the ACL.

L. Hirschman. 1986. Discovering sublanguage structures. In R. Grishman and R.
Kittredge, editors, Analyzing Language in Restricted Domains: Sublanguage Description and Processing, pages 211-234. Lawrence Erlbaum Associates.

F. Jelinek and R. Mercer. 1985. Probability distribution estimation from sparse data. IBM Technical Disclosure Bulletin, 28:2591-2594.

Frederick Jelinek. 1990. Self-organized language modeling for speech recognition. In Alex Waibel and Kai-Fu Lee, editors, Readings in Speech Recognition, pages 450-506. Morgan Kaufmann Publishers, Inc., San Mateo, California.

Slava M. Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech, and Signal Processing, 35(3):400-401.

Yoelle Maarek and Frank Smadja. 1989. Full text indexing based on lexical relations - An application: Software libraries. In Proc. of SIGIR.

Fernando Pereira, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In Proc. of the Annual Meeting of the ACL.

Philip Resnik. 1992. Wordnet and distributional analysis: A class-based approach to lexical discovery. In AAAI Workshop on Statistically-based Natural Language Processing Techniques, July.

V. Sadler. 1989. Working with analogical semantics: Disambiguation techniques in DLT. Foris Publications.

Frank Smadja and Katheleen McKeown. 1990. Automatically extracting and representing collocations for language generation. In Proc. of the Annual Meeting of the ACL.
TOWARDS THE AUTOMATIC IDENTIFICATION OF ADJECTIVAL SCALES: CLUSTERING ADJECTIVES ACCORDING TO MEANING

Vasileios Hatzivassiloglou and Kathleen R. McKeown, Department of Computer Science, 450 Computer Science Building, Columbia University, New York, N.Y. 10027. Internet: [email protected], [email protected]

ABSTRACT

In this paper we present a method to group adjectives according to their meaning, as a first step towards the automatic identification of adjectival scales. We discuss the properties of adjectival scales and of groups of semantically related adjectives and how they imply sources of linguistic knowledge in text corpora. We describe how our system exploits this linguistic knowledge to compute a measure of similarity between two adjectives, using statistical techniques and without having access to any semantic information about the adjectives. We also show how a clustering algorithm can use these similarities to produce the groups of adjectives, and we present results produced by our system for a sample set of adjectives. We conclude by presenting evaluation methods for the task at hand, and analyzing the significance of the results obtained.

1. INTRODUCTION

As natural language processing systems become more oriented towards solving real-world problems like machine translation or spoken language understanding in a limited domain, their need for access to vast amounts of knowledge increases. While a model of the general rules of the language at various levels (morphological, syntactic, etc.) can be hand-encoded, knowledge which pertains to each specific word is harder to encode manually, if only because of the size of the lexicon. Most systems currently rely on human linguists or lexicographers who compile lexicon entries by hand. This approach requires significant amounts of time and effort for expanding the system's lexicon.
Furthermore, if the compiled information depends in any way on the domain of the application, the acquisition of lexical knowledge must be repeated whenever the system is transported to another domain. For systems which need access to large lexicons, some form of at least partial automation of the lexical knowledge acquisition phase is needed. One type of lexical knowledge which is useful for many natural language (NL) tasks is the semantic relatedness between words of the same or different syntactic categories. Semantic relatedness subsumes hyponymy, synonymy, and antonymy-incompatibility. Special forms of relatedness are represented in the lexical entries of the WordNet lexical database (Miller et al., 1990). Paradigmatic semantic relations in WordNet have been used for diverse NL problems, including disambiguation of syntactic structure (Resnik, 1993) and semi-automatic construction of a large-scale ontology for machine translation (Knight, 1993). In this paper, we focus on a particular case of semantic relatedness: relatedness between adjectives which describe the same property. We describe a technique for automatically grouping adjectives according to their meaning based on a given text corpus, so that all adjectives placed in one group describe different values of the same property. Our method is based on statistical techniques, augmented with linguistic information derived from the corpus, and is completely domain independent. It demonstrates how high-level semantic knowledge can be computed from large amounts of low-level knowledge (essentially plain text, part-of-speech rules, and optionally syntactic relations). The problem of identifying semantically related words has received considerable attention, both in computational linguistics (e.g. in connection with thesaurus or dictionary construction (Sparck-Jones, 1986)) and in psychology (Osgood et al., 1957).
However, only recently has work been done on the automatic computation of such relationships from text, quantifying similarity between words and clustering them ((Brown et al., 1992), (Pereira et al., 1993)). In comparison, our work emphasizes the use of shallow linguistic knowledge in addition to a statistical model and is original in the use of negative knowledge to constrain the search space. Furthermore, we use a flexible architecture which will allow us to easily incorporate additional knowledge sources for computing similarity.

While our current system does not distinguish between scalar and non-scalar adjectives, it is a first step in the automatic identification of adjectival scales, since the scales can be subsequently ordered and the non-scalar adjectives filtered on the basis of independent tests, done in part automatically and in part by hand in a post-editing phase. The result is a semi-automated system for the compilation of adjectival scales.

In the following sections, we first provide background on scales, then describe our algorithm in detail, present the results obtained, and finally provide a formal evaluation of the results.

2. BACKGROUND

A linguistic scale is a set of words, of the same grammatical category, which can be ordered by their semantic strength or degree of informativeness (Levinson, 1983). For example, lukewarm, warm, and hot fall along a single adjectival scale since they indicate a variation in the intensity of temperature of the modified noun (at least when used in their non-metaphorical senses; metaphorical usage of scalar words normally also follows the order of the scale by analogy). Scales are not limited to adjectives; for example, {may, should, must} and {sometimes, often, always} (Horn, 1972) are linguistic scales consisting of auxiliary verbs expressing obligation and of adverbs expressing frequency respectively.
In the case of adjectives, the above definition is commonly relaxed to replace the total order among the elements of the scale by a partial one, so that the elements of the scale may be partitioned into two groups (sub-scales), within each of which the order is total. The two sub-groups correspond to positive and negative degrees of the common property that the scale describes. For example, the set of adjectives {cold, lukewarm, warm, hot} are normally considered part of one scale, even though no direct ordering of semantic strength exists between cold and hot.

Linguistic scales are known to possess interesting properties, derived from conventional logical entailment on the linear ordering of their elements and from Gricean scalar implicature (Levinson, 1983). Despite these properties and their potential usefulness in both understanding and generating natural language text, dictionary entries are largely incomplete for adjectives in this regard. Yet, if systems are to use the information encoded in adjectival scales for generation or interpretation (e.g. for selecting an adjective with a particular degree of semantic strength (Elhadad, 1991, Elhadad, 1993), or for handling negation), they must have access to the sets of words comprising a scale.

Linguists have presented various tests for accepting or rejecting a particular scalar relationship between any two adjectives. For example, Horn (1969) proposed a test using the phrase "x even y" for two elements x and y of a totally ordered scale. More refined tests locate the position of an adjective in a scale relative to the neutral element or to the extremes of the scale (Bolinger, 1977).

[Figure 1: System architecture. Word patterns are extracted from the corpus and fed to similarity modules 1 through n, whose outputs are combined into similarities between words.]

The common problem with these methods is that they are designed to be applied by a human who incorporates the two adjectives in specific sentential frames (e.g.
"X is warm, even hot") and assesses the semantic validity of the resulting sentences. Such tests cannot be used computationally to identify scales in a domain, since the specific sentences do not occur frequently enough in a corpus to produce an adequate description of the adjectival scales in the domain (Smadja, 1991). As scales vary across domains, the task of compiling such information is compounded. 3. ALGORITHM Our algorithm, whose overall architecture is depicted in Figure 1, operates in four stages. First, we extract linguistic data from the parsed corpus in the form of syntactically related word pairs, or, more generally, sequences of syntactically related words; this co-occurrence information is processed by a mor- phology component and tabulated. In the second stage, the various types of co-occurrence relations which have been identified in the text are forwarded 173 to a set of independent similarity modules, which operate in parallel. Each similarity module uses some linguistic criterion to judge the similarity or dis- similarity between any two adjectives, producing a real number between 0 and l; a module may also refrain from making any judgement. The third stage combines the opinions of the various similarity modules in a single dissimilarity measure for any pair of adjectives. Finally, the fourth stage clusters the adjectives into groups according to the dissimilarity measure, so that adjectives with a high degree of pairwise similarity fall in the same cluster (and, con- sequently, adjectives with a low degree of similarity fall in different clusters). The algorithm currently uses two similarity modules based on two sources of linguistic data: data that help establish that two adjectives are related, and data that indicate that two adjectives are unrelated. 
First, we extract adjective-noun pairs that occur in a modification relation in order to identify the distribution of nouns an adjective modifies and, ultimately, determine which adjectives it is related to. This is based on the expectation that adjectives describing the same property tend to modify approximately the same set of nouns. For example, temperature is normally defined for physical objects and we can expect to find that adjectives conveying different values of temperature will all modify physical objects. Therefore, our algorithm finds the distribution of nouns that each adjective modifies and categorizes adjectives as similar if they have similar distributions.

Second, we use adjective-adjective pairs occurring as pre-modifiers within the same NP as a strong indication that the two adjectives do not belong in the same group. There are three cases:

1. If both adjectives modify the head noun and the two adjectives are antithetical, the NP would be self-contradictory, as in the scalar sequence hot cold or the non-scalar red black.

2. For non-antithetical scalar adjectives which both modify the head noun, the NP would violate the Gricean maxim of Manner (Levinson, 1983) since the same information is conveyed by the strongest of the two adjectives (e.g. hot warm).

3. Finally, if one adjective modifies the other, the modifying adjective has to qualify the modified one in a different dimension. For example, in light blue shirt, blue is a value of the property color, while light indicates the shade[1].

The use of multiple types of linguistic data, in addition to statistical measures, is a unique property of our work and significantly improves the accuracy of our results.

[1] Note that sequences such as blue-green are usually hyphenated and thus better considered as a compound.
One other published model for grouping semantically related words (Brown et al., 1992) is based on a statistical model of bigrams and trigrams and produces word groups using no linguistic knowledge, but no evaluation of the results is reported.

3.1. Stage One: Extracting Word Pairs

During the first stage, the system extracts adjective-noun and adjective-adjective pairs from the corpus. To determine the syntactic category of each word, and identify the NP boundaries and the syntactic relations among the words, we used the Fidditch parser (Hindle, 1989). For each NP, we then determine its minimal NP, that part of an NP consisting of the head noun and its adjectival pre-modifiers[2]. We match a set of regular expressions, consisting of syntactic categories and representing the different forms a minimal NP can take, against the NPs. From the minimal NP, we produce the different pairs of adjectives and nouns, assuming that all adjectives modify the head noun[3]. This assumption is rarely invalid, because a minimal NP with multiple adjectives all modifying the head noun is far more common than a minimal NP with multiple adjectives where one of them modifies another. Furthermore, minimal NPs with multiple adjectives are relatively rare in the first place; most minimal NPs consist simply of a noun or an adjective and a noun.

The resulting adjective-adjective and adjective-noun pairs are filtered by a morphology component, which removes pairs that contain erroneous information (such as mistyped words, proper names, and closed-class words which may be mistakenly classified as adjectives (e.g. possessive pronouns)). This component also reduces the number of different pairs without losing information by transforming words to an equivalent, base form (e.g. plural nouns are converted to singular) so that the expected and actual frequencies of each pair are higher.

[2] This part of an NP has been used by many researchers (e.g. (Hobbs et al., 1993), who call it a noun group), mostly because of the relative ease with which it can be identified.

[3] We take into account possessives however and correct the result, so that the minimal NP (the) tall man's wife will correctly produce the pair (tall, man) instead of (tall, wife).
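As an illustration of the pair extraction step, the following sketch (our own code, not the authors'; it assumes the corpus has already been reduced to POS-tagged minimal NPs, whereas the system uses the Fidditch parser and regular expressions over syntactic categories) produces the adjective-noun and adjective-adjective pairs from one minimal NP:

```python
def extract_pairs(minimal_np):
    """Produce (adjective, head noun) and (adjective, adjective) pairs
    from a minimal NP, given as (word, POS-tag) tuples.  All adjectives
    are assumed to modify the head noun, as described in the text."""
    adjectives = [w for w, tag in minimal_np if tag == "JJ"]
    nouns = [w for w, tag in minimal_np if tag.startswith("NN")]
    head = nouns[-1] if nouns else None  # head noun is the last noun
    adj_noun = [(a, head) for a in adjectives] if head else []
    # every unordered pair of co-occurring pre-modifiers is negative evidence
    adj_adj = [(adjectives[i], adjectives[j])
               for i in range(len(adjectives))
               for j in range(i + 1, len(adjectives))]
    return adj_noun, adj_adj
```

For instance, extract_pairs([("severe", "JJ"), ("economic", "JJ"), ("problem", "NN")]) yields the adjective-noun pairs (severe, problem) and (economic, problem), plus the negative-evidence pair (severe, economic).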
Stage one then produces as output a simple list of adjective-adjective pairs that occurred within the same minimal NP and a table with the observed frequencies of every adjective-noun combination. Each row in the table contains the frequencies of modified nouns for a given adjective.

3.2. Stage Two: Computing Similarities Between Adjectives

This stage currently employs two similarity modules, each of which processes a part of the output of stage one and produces a measure of similarity for each possible pair of adjectives.

The first module processes the adjective-noun frequency table; for each possible pair in the table we compare the two distributions of nouns. We use a robust non-parametric method to compute the similarity between the modified noun distributions for any two adjectives, namely Kendall's tau coefficient (Kendall, 1938) for two random variables with paired observations. In our case, the two random variables are the two adjectives we are comparing, and each paired observation is their frequency of co-occurrence with a given noun. Kendall's tau coefficient compares the two variables by repeatedly comparing two pairs of their corresponding observations. Formally, if (Xi, Yi) and (Xj, Yj) are two pairs of observations for the adjectives X and Y on the nouns i and j respectively, we call these pairs concordant if Xi > Xj and Yi > Yj, or if Xi < Xj and Yi < Yj; otherwise these pairs are discordant. We discard ties, that is, pairs of observations where Xi = Xj or Yi = Yj.
For example, Table 1 shows the frequencies observed for the co-occurrences of the nouns coordination and market and the adjectives global and international in the test corpus which is described in Section 4. From the table we observe that for i = coordination, j = market, X = global, and Y = international, we have Xi = 16 < 24 = Xj and Yi = 19 < 33 = Yj, so this particular pair of paired observations is concordant and contributes positively to the similarity between global and international.

                   global    international
    coordination     16           19
    market           24           33

    Table 1: Example adjective-noun frequencies.

In general, if the distributions for the two adjectives are similar, we expect a large number of concordances, and a small number of discordances. Kendall's tau is defined as

    tau = Pc - Pd

where Pc and Pd are the probabilities of observing a concordance or discordance respectively. tau ranges from -1 to +1, with +1 indicating complete concordance, -1 complete discordance, and 0 no correlation between X and Y.

An unbiased estimator of tau is the statistic

    T = (C - Q) / (n(n-1)/2)

where n is the number of paired observations in the sample and C and Q are the numbers of observed concordances and discordances respectively (Wayne, 1990). We compute T for each pair of adjectives, adjusting for possible ties in the values of each variable, so that our statistic remains an unbiased estimator of tau. We determine concordances and discordances by sorting the pairs of observations (noun frequencies) on one of the variables (adjectives), and computing how many of the n(n-1)/2 pairs of paired observations agree or disagree with the expected order on the other adjective. We normalize the result to the range 0 to 1 using a simple linear transformation.

The second similarity module utilizes the knowledge offered by the observed adjective-adjective pairs. We know that the adjectives which appear in any such pair cannot be part of the same group, so the module produces zero similarity for all such pairs.
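The concordance-based similarity just described can be sketched as follows (a simplified rendering of our own: it estimates tau over the untied pairs of paired observations and rescales linearly to [0, 1]; the function name and the 0.5 fallback for all-tied inputs are our choices, not the authors'):

```python
from itertools import combinations

def kendall_similarity(x, y):
    """Similarity between two adjectives from their noun-frequency
    vectors x and y (paired by noun), via an estimate of Kendall's tau
    rescaled from [-1, 1] to [0, 1].  Ties are discarded, as in the text."""
    concordant = discordant = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        if xi == xj or yi == yj:
            continue  # tie: discard this pair of observations
        if (xi > xj) == (yi > yj):
            concordant += 1
        else:
            discordant += 1
    total = concordant + discordant
    if total == 0:
        return 0.5  # no evidence in either direction
    t = (concordant - discordant) / total  # tau estimate over untied pairs
    return (t + 1) / 2  # simple linear transformation to [0, 1]
```

With the Table 1 data, kendall_similarity([16, 24], [19, 33]) gives 1.0, since the single available comparison is concordant.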
The module does not output any similarity value for pairs of adjectives which have not been observed together in the same minimal NP.

The two modules produce results of a significantly different character. The adjective-noun module always outputs a similarity value for any pair of adjectives, but these values tend to be around the middle of the range of possible values; rarely will the pattern of similarity or dissimilarity be strong enough to produce a value which has a large deviation from 0.5. This compression of the range of the similarity values can be attributed to the existence of many ties and many adjective-noun pairs with low frequencies, as would be expected by Zipf's law (Zipf, 1949). However, the expected number of concordances and discordances which can be attributed to chance will be the same (a random pair can produce a concordance or discordance with probability 0.5 for each), so the effect of chance fluctuations on T is not very significant. Furthermore, the robustness of the method guarantees that it will not be significantly influenced by any outliers (this is true for all rank-based methods). Therefore, although we cannot have complete confidence in a statistical estimate like T, we expect the module to produce useful estimates of similarity.

On the other hand, the adjective-adjective module produces similarity values with absolute certainty, since once two adjectives have been seen in the same NP even once, we can deduce that they do not belong in the same group. However, this negative knowledge is computed only for a few of the possible pairs of adjectives, and it cannot be propagated to more pairs as dissimilarity is not a transitive relation. As a result we can make some inferences with very high confidence, but we cannot make very many of them.

3.3. Stage Three: Combining The Similarity Estimates

In stage three we combine the values produced by the various similarity modules in stage two using a pre-specified algorithm.
The output of this stage is a single table of dissimilarity values (as required by the next stage) having one entry for each adjective pair. Currently we have only the two similarity modules described in the previous subsection, so we employ the following simple algorithm:

    for any pair of adjectives (x, y) do
        if the adjective-adjective module has no opinion on (x, y) then
            dissimilarity = 1 - (the similarity reported by the adjective-noun module)
        else
            dissimilarity = some constant k > 1

As can be easily seen, the algorithm has complete confidence in the results of the adjective-adjective module whenever that module has an opinion; when it does not, the algorithm uses the similarity value produced by the adjective-noun module, after a simple linear transformation is applied to convert it to a dissimilarity. The choice of the constant k reflects how undesirable it is to place in the same group two adjectives which have been observed in the same minimal NP. Since we consider the results of the adjective-adjective module more reliable than the adjective-noun module, we use a high value for k, k = 10; this practically guarantees that a suggestion by the adjective-adjective module will be respected by the clustering algorithm unless the evidence for the contrary is overwhelming.

Note that by placing complete confidence in the output of the adjective-adjective module, the algorithm of stage three is sensitive to small errors that this module may make. An incorrect suggestion would make possibly related adjectives be kept separate. However, this problem looks more severe than it really is. An erroneous opinion produced by that module must correspond to a violation of one of the three linguistic principles listed at the start of this section; such violations do not occur in carefully written English (as is our test corpus of Associated Press news reports).
In fact, during the analysis of the corpus for our test set of adjectives we found no erroneously identified pairs of adjectives; however, if the system is used with a less well written, or even spoken, corpus, the complete confidence in the adjective-adjective module may need to be reduced. This can be accomplished by taking into account the frequency of an adjective-adjective pair, and making our confidence an increasing function of this frequency.

When new similarity modules, such as the ones discussed in Section 6, are added to the system, the above algorithm will be inadequate for combining their suggestions. We plan to extend the algorithm to compute an extended weighted average of the similarities and/or dissimilarities produced by these modules, and add a separate training component which will determine the appropriate value for the weight of each module.

3.4. Stage Four: Clustering The Adjectives

In stage four we form groups of adjectives (a partition) according to the combined dissimilarity values computed in the previous stage. We want to find a partition which is optimal, in the sense that adjectives with high dissimilarity are placed in different groups. We use a non-hierarchical clustering algorithm, since such algorithms are in general stronger than hierarchical methods (Kaufman and Rousseeuw, 1990). The number of clusters produced is an input parameter. The algorithm uses the exchange method (Spath, 1985) since the more commonly used K-means method (Kaufman and Rousseeuw, 1990) is not applicable; the K-means method, like all centroid methods, requires the measure d between the clustered objects to be a distance; this means, among other conditions, that for any three objects x, y, and z the triangle inequality applies. However, this inequality does not necessarily hold for our dissimilarity measure. If the adjectives x and y were observed in the same minimal NP, their dissimilarity is quite large.
If neither z and x nor z and y were found in the same minimal NP, then it is quite possible that the sum of their dissimilarities could be less than the dissimilarity between x and y.

The algorithm tries to produce a partition of the set of adjectives as close as possible to the optimal one. This is accomplished by minimizing an objective function Phi which scores a partition P. The objective function we use is

    Phi(P) = sum over clusters C in P of [ (1/|C|) * sum over pairs x,y in C of d(x,y) ]

The algorithm starts by producing a random partition of the adjectives, computing its Phi value and then for each adjective computing the improvement in Phi for every cluster where it can be moved; the adjective is moved to the cluster that yields the best improvement of Phi if there is such a cluster, and the next adjective is considered. This procedure is repeated until no more moves lead to an improvement of Phi. This is a hill-climbing method and therefore is guaranteed to converge, but it may lead to a local minimum of Phi, inferior to the global minimum that corresponds to the optimal solution. To alleviate this problem, the partitioning algorithm is called repeatedly with different random starting partitions and the best solution in these runs is kept. As with many practical optimization problems, computing the optimal solution is NP-complete (Brucker, 1978).

    antitrust  new  big  old  economic  political  financial
    potential  foreign  real  global  serious  international
    severe  legal  staggering  little  technical  major
    unexpected  mechanical

    Figure 2: Adjectives to be grouped.

Note that if the problem's search space had been relatively small, then we could have computed the optimal partition by enumerating all possible solutions and keeping the best one. However, again as with many other practical optimization problems, the search space turns out to be intractably large.
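A minimal sketch of the exchange method, together with a count of the search space it avoids enumerating (illustrative code of our own; implementation details such as the handling of empty clusters are not specified in the text and are our assumptions):

```python
import random
from math import comb, factorial

def phi(partition, d):
    """Objective function Phi: for each cluster, the sum of pairwise
    dissimilarities divided by the cluster size, summed over clusters."""
    total = 0.0
    for cluster in partition:
        if len(cluster) > 1:
            pairs = sum(d[x][y] for i, x in enumerate(cluster)
                        for y in cluster[i + 1:])
            total += pairs / len(cluster)
    return total

def exchange_cluster(items, d, m, rng):
    """Hill-climbing exchange method: starting from a random partition
    into m clusters, move single items between clusters while a move
    improves Phi.  Guaranteed to converge, possibly to a local minimum."""
    clusters = [[] for _ in range(m)]
    for item in items:  # random starting partition
        clusters[rng.randrange(m)].append(item)
    improved = True
    while improved:
        improved = False
        for item in items:
            src = next(c for c in clusters if item in c)
            best, best_score = None, phi(clusters, d)
            for dst in clusters:
                if dst is src:
                    continue
                src.remove(item); dst.append(item)   # try the move
                score = phi(clusters, d)
                if score < best_score:
                    best, best_score = dst, score
                dst.remove(item); src.append(item)   # undo
            if best is not None:
                src.remove(item); best.append(item)
                improved = True
    return [c for c in clusters if c]

def stirling2(n, m):
    """Stirling number of the second kind: the number of partitions of
    n objects into m non-empty subsets, i.e. the size of the search space."""
    return sum((-1) ** k * comb(m, k) * (m - k) ** n
               for k in range(m + 1)) // factorial(m)
```

In practice the search is restarted from several random partitions and the best result is kept; stirling2(21, 9) confirms the magnitude quoted in the text for the 21-adjective test set (about 1.23 x 10^14 possible partitions).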
The number of possible partitions of n objects into m non-empty subsets with m < n is equal to the corresponding Stirling number of the second kind (Knuth, 1973), and this number grows exponentially with n for all but trivial values of m. For example, for our test set of adjectives presented in the next section, we have n = 21 and m = 9; the corresponding number of possible partitions is roughly 1.23 x 10^14.

4. RESULTS

We tested our system on a 8.2 million word corpus of stock market reports from the Associated Press news wire. A subset of 21 of the adjectives in the corpus (Figure 2) was selected for practical reasons (mainly for keeping the evaluation task tractable). We selected adjectives that have one modified noun in common (problem) to ensure some semantic relatedness, and we included only adjectives that occurred frequently so that our similarity measure would be meaningful.

The partition produced by the system for 9 clusters appears in Figure 3.

    1. foreign global international
    2. old
    3. potential
    4. new real unexpected
    5. little staggering
    6. economic financial mechanical political technical
    7. antitrust
    8. big major serious severe
    9. legal

    Figure 3: Partition found for 9 clusters.

Before presenting a formal evaluation of the results, we note that this partition contains interesting data. First, the results contain two clusters of gradable adjectives which fall in the same scale. Groups 5 and 8 contain adjectives that indicate the size, or scope, of a problem; by augmenting the system with tests to identify when an adjective is gradable, we could separate out these two groups from other potential scales, and perhaps consider combining them. Second, groups 1 and 6 clearly identify separate sets of non-gradable adjectives. The first contains adjectives that describe the geographical scope of the problem. Although at first sight we would classify these adjectives as non-scalar, we observed that the phrase international even global problem is acceptable while the phrase *global even international problem is not. These patterns seem to suggest at least some degree of scalability. On the other hand, group 6 contains non-scalar relational adjectives that specify the nature of the problem. It is interesting to note here that the clustering algorithm discourages large groups, with the expected number of adjectives per cluster being 21/9 = 2.33; nevertheless, the evidence for the adjectives in group 6 is strong enough to allow the creation of a group with more than twice the expected number of members. Finally, note that even in group 4, which is the weakest group produced, there is a positive semantic correlation between the adjectives new and unexpected. To summarize, the system seems to be able to identify many of the existent semantic relationships among the adjectives, while its mistakes are limited to creating singleton groups containing adjectives that are related to other adjectives in the test set (e.g., missing the semantic associations between new-old and potential-real) and "recognizing" a non-significant relationship between real and new-unexpected in group 4.

We produced good results with a relatively small corpus of 8.2 million words[4], out of which only 34,359 total / 3,073 distinct adjective-noun pairs involving 1,509 distinct nouns were relevant to our test set of 21 adjectives (Figure 2). The accuracy of the results can be improved if a larger, homogeneous corpus is used to provide the raw data. Also, we can increase the size of the adjective-noun and adjective-adjective data that we are using if we introduce more syntactic patterns in stage one to extract more complex cases of pairs. Furthermore, some of the associations between adjectives that the system reports appear to be more stable than others; these associations remain in the same group when we vary the number of clusters in the partition.
We have noticed that adjectives with a higher degree of semantic content (e.g. international or severe) appear to form more stable associations than relatively semantically empty adjectives (e.g. little or real). This observation can be used to filter out adjectives which are too general to be meaningfully clustered in groups.

[4] Corpora up to 366 million words have been used for similar classification tasks.

5. EVALUATION

To evaluate the performance of our system we compared its output to a model solution for the problem designed by humans. Nine human judges were presented with the set of adjectives to be partitioned, a description of the domain, and a simple example. They were told that clusters should not overlap but they could select any number of clusters (the judges used from 6 to 11 clusters, with an average of 8.56[5] and a sample standard deviation of 1.74). Note that this evaluation method differs significantly from the alternative method of asking the humans to directly estimate the goodness of the system's results (e.g. (Matsukawa, 1993)). It requires an explicit construction of a model from the human judge and places the burden of the comparison between the model and the system's output on the system instead of the judge. It has been repeatedly demonstrated that in complex evaluation tasks humans can easily find arguments to support observed data, leading to biased results and to an inflation of the evaluation scores.

To score our results, we converted the comparison of two partitions to a series of yes-no questions, each of which has a correct answer (as dictated by the model) and an answer assigned by the system. For each pair of adjectives, we asked if they fell in the same cluster ("yes") or not ("no").

                          Answer should be Yes   Answer should be No
    The system says Yes            a                      b
    The system says No             c                      d

    Table 2: Contingency table model for evaluation.
Since human judges did not always agree, we used fractional values for the correctness of each answer instead of 0 ("incorrect") and 1 ("correct"). We defined the correctness of each answer as the relative frequency of the association between the two adjectives among the human models and the incorrectness of each answer as 1 - correctness; in this way, associations receive a correctness value proportional to their popularity among the human judges. For example, in the sample set of adjectives discussed in the previous section, the association (foreign, international) received a correctness value of 1, since all the humans placed these two adjectives in the same group, while the association (legal, severe) received a correctness value of 0. The pair (economic, political) on the other hand received a correctness value of 0.67, since two thirds of the judges placed the two adjectives in the same group. Once correctness and incorrectness values have been defined, we can generalize measures such as "the number of correct associations retrieved by the system" by using summation of those values instead of counting. Then the contingency table model (Swets, 1969), widely used in Information Retrieval and Psychology, is applicable. Referring to the classification of the yes-no answers in Table 2, the following measures are defined:

    Recall    = a / (a + c) * 100%
    Precision = a / (a + b) * 100%
    Fallout   = b / (b + d) * 100%

In other words, recall is the percentage of correct "yes" answers that the system found among the model "yes" answers, precision is the percentage of correct "yes" answers among the total of "yes" answers that the system reported, and fallout is the percentage of incorrect "yes" answers relative to the total number of "no" answers[6]. Note that in our generalized contingency table model, the symbols a, b, c, and d do not represent numbers of observed associations but rather sums of correctness or incorrectness values.
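The generalized measures can be computed directly from the fractional correctness values; the sketch below (function and variable names are ours) sums correctness for the a and c cells and incorrectness for the b and d cells, as the text specifies, and also applies the F-measure combination of recall and precision used later in this section:

```python
def generalized_metrics(system_yes, all_pairs, correctness, beta=1.0):
    """Generalized recall, precision, fallout and F-measure, where the
    contingency cells a, b, c, d are sums of fractional correctness or
    incorrectness values rather than counts of associations."""
    a = sum(correctness[p] for p in system_yes)                       # correct "yes"
    b = sum(1 - correctness[p] for p in system_yes)                   # incorrect "yes"
    c = sum(correctness[p] for p in all_pairs if p not in system_yes) # missed "yes"
    d = sum(1 - correctness[p] for p in all_pairs if p not in system_yes)
    recall = 100 * a / (a + c)
    precision = 100 * a / (a + b)
    fallout = 100 * b / (b + d)
    f = (beta ** 2 + 1) * precision * recall / (beta ** 2 * precision + recall)
    return recall, precision, fallout, f
```

For instance, with correctness 1.0 for (foreign, international), 0 for (legal, severe), and 2/3 for (economic, political), a system answering "yes" to the first two pairs only gets recall 60%, precision 50%, and fallout 75% under these definitions.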
These sums use correctness values for the quantities in the first column of Table 2 and incorrectness values for the quantities in the second column of Table 2. Furthermore, the summation is performed over all pairs reported or not reported by the system for quantities in the first or second row of Table 2 respectively. Consequently, the information-theoretic measures represent the generalized counterparts of their original definitions. In the case of perfect agreement between the models, or of only one model, the generalized measures reduce to their original definitions.

We also compute a combined measure for recall and precision, the F-measure (Van Rijsbergen, 1979), which always takes a value between the values of recall and precision, and is higher when recall and precision are closer; it is defined as

    F = (beta^2 + 1) * Precision * Recall / (beta^2 * Precision + Recall)

where beta is the weight of recall relative to precision; we use beta = 1.0, which corresponds to equal weighting of the two measures. The results of applying our evaluation method to the system output (Figure 3) are shown in Table 3, which also includes the scores obtained for several other sub-optimal choices of the number of clusters.

                  Recall    Precision   Fallout   F-measure (beta=1)
    7 clusters    50.78%    43.56%      7.48%     46.89%
    8 clusters    37.31%    38.10%      6.89%     37.70%
    9 clusters    49.74%    46.38%      6.54%     48.00%
    10 clusters   35.23%    41.98%      5.54%     38.31%

    Table 3: Evaluation results.

[5] This is the reason that we presented the partition with 9 clusters, as this is the closest integer to the average number of clusters used by the humans.

[6] Another measure used in information retrieval, overgeneration, is in our case always equal to (100 - precision)%.

We have made these observations related to the evaluation mechanism:

1. Recall is inversely related to fallout and precision. Decreasing the number of clusters generally increases the recall and fallout and simultaneously decreases precision.

2.
We have found fallout to be a better measure overall than precision, since, in addition to its decision-theoretic advantages (Swets, 1969), it appears to be more consistent across evaluations of partitions with different numbers of clusters. This has also been reported by other researchers in different evaluation problems (Lewis and Tong, 1992).

3. The problem of assessing the meaning of the evaluation scores in an absolute sense is a non-trivial one. For example, there has been increasing concern that the scoring methods used for evaluating the goodness of parsers are producing values which seem extremely good (in the >90% range), while in fact the parse trees produced are not so satisfactory; the blame for this inflation of the scores can be assigned to an inadequate comparison technique, which essentially considers a tree fragment correct when it is a part of (although not exactly matching) the corresponding fragment in the model. For other tasks, such as part-of-speech assignment to free text, the comparison techniques are sound, but very high levels of performance (e.g., 90%) can be obtained by a zero-parameter model which operates at random; clearly this makes the assessment of the significance of an improvement over the base line of the random algorithm much harder.

As a consequence of point (3) made above, we need to understand the significance of the scores produced by our evaluation methods (for example, the limits of their ranges) before trying to interpret them. There are theoretical principles which indicate that the evaluation metrics will produce lower values much more easily than higher ones. Because of the multiple models used, perfect scores are not attainable.
Also, because each pair of adjectives in a cluster is considered an observed association, the relationship between the number of associations produced by a cluster and the number of adjectives in the cluster is not linear (a cluster with k adjectives will produce (k choose 2) = O(k²) associations). This leads to lower values of recall, since moving a single adjective out of a cluster with k elements in the model will cause the system to miss k−1 associations. As an example of this phenomenon, consider the hypothetical (single) model and partition of Figure 4; while the partition differs from the model only in that the first cluster has been split into two, the recall score abruptly falls to 50%.

Model:     1. ABCDE   2. FG   3. HI
Partition: 1. ABC   2. DE   3. FG   4. HI

Figure 4: A hypothetical model where a small perturbation leads to a recall score of 50%.

                            Recall   Precision  Fallout  F-measure (β=1)
Without negative knowledge  33.16%   32.32%     7.90%    32.74%
With both modules           49.74%   46.38%     6.54%    48.00%

Table 4: Comparison of the system's performance (9 clusters) with and without the negative knowledge module.

In order to provide empirical evidence in addition to the theoretical discussion above, and be able to estimate an upper bound on the values of the evaluation metrics, we evaluated each human model against all the other human models, using the same evaluation method which was used for the system; the results ranged from 38 to 72% for recall, 1 to 12% for fallout, 38 to 81% for precision, and, covering a remarkably short range, 49 to 59% for F-measure⁷, indicating that the performance of the system is not far behind human performance. In order to provide a lower bound for the evaluation metrics and thus show that the system's scores are not close to the scores of the human judges simply by chance, we performed a Monte Carlo analysis (Rubinstein, 1981) for the evaluation metrics, by repeatedly creating random partitions of the sample adjectives and evaluating the results.
Then we estimated a smoothed probability density function for each metric from the resulting histograms; the results obtained are shown in Figure 5 for F-measure and fallout using 9 clusters. We observed that the system's performance (indicated by a square in the diagrams) was significantly better than what we would expect under the null hypothesis of random performance; the probability of getting a better partition than the system's is extremely small for all metrics (no occurrence in 20,000 trials) except for fallout, for which a random system may be better 4.9% of the time. The estimated density functions also show that the metrics are severely constrained by the structure imposed by the clustering as they tend to peak at some point and then fall rapidly.

Figure 5: Estimated probability densities for F-measure and fallout with 9 clusters.

Finally, we performed another study to quantify the impact of using negative knowledge obtained from adjective-adjective pairs. We ran our system in a mode where the suggestions of the adjective-adjective module were ignored (i.e. stage three simply passed to the output the similarities computed by the adjective-noun module, after converting them to dissimilarities), and evaluated the results produced. The values of the metrics for the partition with 9 clusters appear in Table 4, alongside the corresponding values produced when the system uses both modules. When both modules are used, we can see a significant improvement of about 15 points, which is a 43% to 50% improvement for all metrics (except for fallout where the improvement is about 17%). This represents a definite improvement even though for our test set of 21 adjectives (Figure 2) we observed in our corpus only 41 distinct adjective-adjective pairs, out of a possible (21 choose 2) = 210 pairs. Although the observed pairs represent only 19.52% of the possible pairs, their importance is considerable. Note that the sparsity of the adjective-adjective pairs does not allow us to perform a comparable study for the partition produced using the adjective-adjective module alone, since such a partition would be largely determined by chance.

⁷Thus indicating that human models which fared well on the precision metric tended to perform badly on recall, and vice versa; remember that the values of the metrics are related to the number of clusters used, and that the human judges were allowed to select the number of clusters they considered most appropriate; consequently, the models with high recall/low precision are the ones with a small number of clusters, while the opposite pattern of scores characterizes the models with a large number of clusters.

6. CONCLUSIONS AND FUTURE WORK

We have described a system for extracting groups of semantically related adjectives from large text corpora, with a flexible architecture which allows for multiple knowledge sources influencing similarity to be easily incorporated into the system. Our evaluation reveals that it has significantly high performance levels, comparable to humans, using only a relatively small amount of input data; in addition, it shows the usefulness of negative knowledge, an original feature of our approach. The system's results can be filtered to produce scalar adjectives that are applicable in any given domain. Furthermore, while we have demonstrated the algorithm on adjectives, it can be directly applied to other word classes once sources of linguistic information for judging their similarity have been identified.
Our immediate plans are to incorporate more similarity modules into stage two of the system and add a training component to stage three so that the relative weights of the various modules can be estimated. We have identified several additional sources of linguistic knowledge which look promising, namely pairs of adjectives separated by connectives and adverb-adjective pairs. We also plan to extend the adjective-noun module to cover adjectives in predicative positions, in addition to our current use of attributive adjectives. These extensions not only will provide us with a better way of exploiting the information in the corpus but may also help us categorize the adjectives as relational or attributive (Levi, 1978); such a categorization may be useful in classifying them as either scalar or non-scalar. For determining whether a group of adjectives is scalar, we also plan to use the gradability of the adjectives as observed in the corpus. In addition, we are exploring tests for determining whether two adjectives are antonymous, essentially in the opposite direction of the work by Justeson and Katz (1991), and tests for comparing the relative semantic strength of two adjectives. Furthermore, we plan to consider alternative evaluation methods and test our system on a much larger set of adjectives. That was not done for the current evaluation because of the difficulty for humans of constructing large models. We are considering an evaluation method which would use a thesaurus to judge similarity, as well as a supplementary method based on mathematical properties of the clustering. Neither of these methods would access any human models. The mathematical method, which uses cluster silhouettes and the silhouette coefficient (Kaufman and Rousseeuw, 1990), can also be used to automatically determine the proper number of clusters, one of the hardest problems in cluster analysis.
We also plan a formal study to evaluate the appropriateness of the clustering method used, by computing and evaluating the results when a hierarchical algorithm is employed instead in stage four. Eventually, we plan to evaluate the system's output by using it to augment adjective entries in a lexicon and test the augmented lexicon in an application such as language generation.

ACKNOWLEDGEMENTS

This work was supported jointly by DARPA and ONR under contract N00014-89-J-1782, by NSF GER-90-24069, and by New York State Center for Advanced Technology Contract NYSSTF-CAT(91)-053. We wish to thank Diane Litman and Donald Hindle for providing us with access to the Fidditch parser at AT&T Bell Labs, and Karen Kukich and Frank Smadja for providing us with access to the Associated Press news wire corpus. Finally, we thank Rebecca Passonneau and the anonymous reviewers for providing us with useful comments on earlier versions of the paper.

REFERENCES

Bolinger, D. (1977). Neutrality, Norm, and Bias. Bloomington, IN: Indiana University Linguistics Club.
Brown P., Della Pietra V., deSouza P., Lai J., and Mercer R. (1992). Class-based n-gram Models of Natural Language. Computational Linguistics, 18:4, 467-479.
Brucker, P. (1978). On the complexity of clustering problems. In Henn, R., Korte, B., and Oletti, W. (Eds.), Lecture Notes in Economics and Mathematical Systems. Optimierung und Operations Research. Berlin: Springer. Quoted in (Garey and Johnson, 1979).
Elhadad, Michael. (1991). Generating Adjectives to Express the Speaker's Argumentative Intent. Proceedings of 9th National Conference on Artificial Intelligence (AAAI-91). Anaheim.
Elhadad, Michael. (1993). Using Argumentation to Control Lexical Choice: A Unification-Based Implementation. Doctoral dissertation, Computer Science Department, Columbia University.
Garey, M.R., and Johnson, D.S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman.
Hindle, D. M. (1989).
Acquiring Disambiguation Rules from Text. Proceedings of the 27th meeting of the Association for Computational Linguistics. Vancouver, B.C.
Hobbs J.R., Appelt D., Bear J., Israel D., Kameyama M., and Tyson M. (1993). FASTUS: A System for Extracting Information from Text. Proceedings of the ARPA Workshop on Human Language Technology. ARPA Information Science and Technology Office.
Horn, L. (1969). A Presuppositional Analysis of Only and Even. Papers from the Fifth Regional Meeting. Chicago Linguistics Society.
Horn, L.R. (1972). On the Semantic Properties of the Logical Operators in English. Bloomington, IN: Indiana University Linguistics Club.
Justeson, J.S. and Katz, S.M. (1991). Co-occurrences of Antonymous Adjectives and Their Contexts. Computational Linguistics, 17:1, 1-19.
Kaufman, L. and Rousseeuw, P.J. (1990). Wiley Series in Probability and Mathematical Statistics. Finding Groups in Data: An Introduction to Cluster Analysis. New York: Wiley.
Kendall, M.G. (1938). A New Measure of Rank Correlation. Biometrika, 30, 81-93.
Knight, Kevin. (1993). Building a Large Ontology for Machine Translation. Proceedings of the ARPA Workshop on Human Language Technology. ARPA Information Science and Technology Office.
Knuth, D.E. (1973). The Art of Computer Programming. Vol. 1: Fundamental Algorithms (2nd ed.). Reading, Mass.: Addison-Wesley.
Levi, Judith N. (1978). The Syntax and Semantics of Complex Nominals. New York: Academic Press.
Levinson, S.C. (1983). Pragmatics. Cambridge, England: Cambridge University Press.
Lewis, D. and Tong, R. (1992). Text Filtering in MUC-3 and MUC-4. Proceedings of the Fourth Message Understanding Conference (MUC-4). DARPA Software and Intelligent Systems Technology Office.
Matsukawa, Tomoyoshi. (1993). Hypothesizing Word Association From Untagged Text. Proceedings of the ARPA Workshop on Human Language Technology. ARPA Information Science and Technology Office.
Miller, G.A. (ed.). (1990). WordNet: An On-Line Lexical Database.
International Journal of Lexicography (special issue), 3:4, 235-312.
Osgood, C.E., Suci, G.S. and Tannenbaum, P.H. (1957). The measurement of meaning. Urbana, Illinois: University of Illinois Press.
Pereira F., Tishby N., and Lee L. (1993). Distributional Clustering of English Words. Proceedings of the 31st Conference of the ACL. Columbus, Ohio: Association for Computational Linguistics.
Resnik, Philip. (1993). Semantic Classes and Syntactic Ambiguity. Proceedings of the ARPA Workshop on Human Language Technology. ARPA Information Science and Technology Office.
Rubinstein, R.Y. (1981). Wiley Series in Probability and Mathematical Statistics. Simulation and the Monte Carlo method. New York: Wiley.
Smadja, F. (1991). Retrieving Collocational Knowledge from Textual Corpora. An Application: Language Generation. Doctoral dissertation, Department of Computer Science, Columbia University.
Sparck-Jones, Karen. (1986). Synonymy and Semantic Classification. Edinburgh, Great Britain: Edinburgh University Press. Based on the author's Ph.D. thesis, University of Cambridge, 1964.
Spath, Helmuth. (1985). Ellis Horwood Series in Computers and their Applications. Cluster Dissection and Analysis: Theory, FORTRAN Programs, Examples. Chichester, West Sussex, England: Ellis Horwood.
Swets, J.A. (January 1969). Effectiveness of Information Retrieval Methods. American Documentation, 20, 72-89.
Van Rijsbergen, C.J. (1979). Information Retrieval (2nd ed.). London: Butterworths.
Wayne, D.W. (1990). The Duxbury Advanced Series in Statistics and Decision Sciences. Applied Nonparametric Statistics (2nd ed.). Boston: PWS-KENT Publishing Company.
Zipf, G.K. (1949). Human Behavior and the Principle of Least Effort: An Introduction to Human Ecology. Reading, Mass.: Addison-Wesley.
DISTRIBUTIONAL CLUSTERING OF ENGLISH WORDS

Fernando Pereira, AT&T Bell Laboratories, 600 Mountain Ave., Murray Hill, NJ 07974, USA (pereira@research.att.com)
Naftali Tishby, Dept. of Computer Science, Hebrew University, Jerusalem 91904, Israel (tishby@cs.huji.ac.il)
Lillian Lee, Dept. of Computer Science, Cornell University, Ithaca, NY 14850, USA (llee@cs.cornell.edu)

Abstract

We describe and evaluate experimentally a method for clustering words according to their distribution in particular syntactic contexts. Words are represented by the relative frequency distributions of contexts in which they appear, and relative entropy between those distributions is used as the similarity measure for clustering. Clusters are represented by average context distributions derived from the given words according to their probabilities of cluster membership. In many cases, the clusters can be thought of as encoding coarse sense distinctions. Deterministic annealing is used to find lowest distortion sets of clusters: as the annealing parameter increases, existing clusters become unstable and subdivide, yielding a hierarchical "soft" clustering of the data. Clusters are used as the basis for class models of word cooccurrence, and the models evaluated with respect to held-out test data.

INTRODUCTION

Methods for automatically classifying words according to their contexts of use have both scientific and practical interest. The scientific questions arise in connection to distributional views of linguistic (particularly lexical) structure and also in relation to the question of lexical acquisition both from psychological and computational learning perspectives. From the practical point of view, word classification addresses questions of data sparseness and generalization in statistical language models, particularly models for deciding among alternative analyses proposed by a grammar.
It is well known that a simple tabulation of frequencies of certain words participating in certain configurations, for example of frequencies of pairs of a transitive main verb and the head noun of its direct object, cannot be reliably used for comparing the likelihoods of different alternative configurations. The problem is that for large enough corpora the number of possible joint events is much larger than the number of event occurrences in the corpus, so many events are seen rarely or never, making their frequency counts unreliable estimates of their probabilities. Hindle (1990) proposed dealing with the sparseness problem by estimating the likelihood of unseen events from that of "similar" events that have been seen. For instance, one may estimate the likelihood of a particular direct object for a verb from the likelihoods of that direct object for similar verbs. This requires a reasonable definition of verb similarity and a similarity estimation method. In Hindle's proposal, words are similar if we have strong statistical evidence that they tend to participate in the same events. His notion of similarity seems to agree with our intuitions in many cases, but it is not clear how it can be used directly to construct word classes and corresponding models of association. Our research addresses some of the same questions and uses similar raw data, but we investigate how to factor word association tendencies into associations of words to certain hidden senses classes and associations between the classes themselves. While it may be worth basing such a model on preexisting sense classes (Resnik, 1992), in the work described here we look at how to derive the classes directly from distributional data. More specifically, we model senses as probabilistic concepts or clusters c with corresponding cluster membership probabilities p(c|w) for each word w.
Most other class-based modeling techniques for natural language rely instead on "hard" Boolean classes (Brown et al., 1990). Class construction is then combinatorially very demanding and depends on frequency counts for joint events involving particular words, a potentially unreliable source of information as noted above. Our approach avoids both problems.

Problem Setting

In what follows, we will consider two major word classes, 𝒱 and 𝒩, for the verbs and nouns in our experiments, and a single relation between them, in our experiments the relation between a transitive main verb and the head noun of its direct object. Our raw knowledge about the relation consists of the frequencies f_vn of occurrence of particular pairs (v, n) in the required configuration in a training corpus. Some form of text analysis is required to collect such a collection of pairs. The corpus used in our first experiment was derived from newswire text automatically parsed by Hindle's parser Fidditch (Hindle, 1993). More recently, we have constructed similar tables with the help of a statistical part-of-speech tagger (Church, 1988) and of tools for regular expression pattern matching on tagged corpora (Yarowsky, 1992). We have not yet compared the accuracy and coverage of the two methods, or what systematic biases they might introduce, although we took care to filter out certain systematic errors, for instance the misparsing of the subject of a complement clause as the direct object of a main verb for report verbs like "say". We will consider here only the problem of classifying nouns according to their distribution as direct objects of verbs; the converse problem is formally similar. More generally, the theoretical basis for our method supports the use of clustering to build models for any n-ary relation in terms of associations between elements in each coordinate and appropriate hidden units (cluster centroids) and associations between those hidden units.
For the noun classification problem, the empirical distribution of a noun n is then given by the conditional distribution p_n(v) = f_vn / Σ_v′ f_v′n. The problem we study is how to use the p_n to classify the n ∈ 𝒩. Our classification method will construct a set C of clusters and cluster membership probabilities p(c|n). Each cluster c is associated to a cluster centroid p_c, which is a distribution over 𝒱 obtained by averaging appropriately the p_n.

Distributional Similarity

To cluster nouns n according to their conditional verb distributions p_n, we need a measure of similarity between distributions. We use for this purpose the relative entropy or Kullback-Leibler (KL) distance between two distributions

D(p ‖ q) = Σ_x p(x) log ( p(x) / q(x) ) .

This is a natural choice for a variety of reasons, which we will just sketch here.¹ First of all, D(p ‖ q) is zero just when p = q, and it increases as the probability decreases that p is the relative frequency distribution of a random sample drawn according to q. More formally, the probability mass given by q to the set of all samples of length n with relative frequency distribution p is bounded by exp(−n D(p ‖ q)) (Cover and Thomas, 1991). Therefore, if we are trying to distinguish among hypotheses q_i when p is the relative frequency distribution of observations, D(p ‖ q_i) gives the relative weight of evidence in favor of q_i. Furthermore, a similar relation holds between D(p ‖ p′) for two empirical distributions p and p′ and the probability that p and p′ are drawn from the same distribution q. We can thus use the relative entropy between the context distributions for two words to measure how likely they are to be instances of the same cluster centroid.

¹A more formal discussion will appear in our paper Distributional Clustering, in preparation.

From an information theoretic perspective D(p ‖ q) measures how inefficient on average it would be to use a code based on q to encode a variable distributed according to p.
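These definitions translate directly into code; a minimal sketch (names are ours) builds each noun's verb distribution from (verb, noun) pair counts and compares distributions by relative entropy:

```python
import math
from collections import Counter

def verb_distribution(pairs, noun):
    """Empirical conditional distribution p_n(v) from (verb, noun) pairs."""
    counts = Counter(v for v, n in pairs if n == noun)
    total = sum(counts.values())
    return {v: c / total for v, c in counts.items()}

def kl(p, q):
    """Relative entropy D(p || q) between distributions given as dicts;
    assumes q(x) > 0 wherever p(x) > 0."""
    return sum(px * math.log(px / q[x]) for x, px in p.items() if px > 0)
```

Note that D is asymmetric and D(p ‖ p) = 0, matching the properties discussed above.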
With respect to our problem, D(p_n ‖ p_c) thus gives us the information loss in using cluster centroid p_c instead of the actual distribution p_n for word n when modeling the distributional properties of n. Finally, relative entropy is a natural measure of similarity between distributions for clustering because its minimization leads to cluster centroids that are a simple weighted average of member distributions. One technical difficulty is that D(p ‖ p′) is not defined when p′(x) = 0 but p(x) > 0. We could sidestep this problem (as we did initially) by smoothing zero frequencies appropriately (Church and Gale, 1991). However, this is not very satisfactory because one of the goals of our work is precisely to avoid the problems of data sparseness by grouping words into classes. It turns out that the problem is avoided by our clustering technique, since it does not need to compute the KL distance between individual word distributions, but only between a word distribution and average distributions, the current cluster centroids, which are guaranteed to be nonzero whenever the word distributions are. This is a useful advantage of our method compared with agglomerative clustering techniques that need to compare individual objects being considered for grouping.

THEORETICAL BASIS

In general, we are interested in how to organize a set of linguistic objects such as words according to the contexts in which they occur, for instance grammatical constructions or n-grams. We will show elsewhere that the theoretical analysis outlined here applies to that more general problem, but for now we will only address the more specific problem in which the objects are nouns and the contexts are verbs that take the nouns as direct objects. Our problem can be seen as that of learning a joint distribution of pairs from a large sample of pairs.
The pair coordinates come from two large sets 𝒩 and 𝒱, with no preexisting internal structure, and the training data is a sequence S of N independently drawn pairs

S_i = (n_i, v_i), 1 ≤ i ≤ N .

From a learning perspective, this problem falls somewhere in between unsupervised and supervised learning. As in unsupervised learning, the goal is to learn the underlying distribution of the data. But in contrast to most unsupervised learning settings, the objects involved have no internal structure or attributes allowing them to be compared with each other. Instead, the only information about the objects is the statistics of their joint appearance. These statistics can thus be seen as a weak form of object labelling analogous to supervision.

Distributional Clustering

While clusters based on distributional similarity are interesting on their own, they can also be profitably seen as a means of summarizing a joint distribution. In particular, we would like to find a set of clusters C such that each conditional distribution p_n(v) can be approximately decomposed as

p̂_n(v) = Σ_{c∈C} p(c|n) p_c(v) ,

where p(c|n) is the membership probability of n in c and p_c(v) = p(v|c) is v's conditional probability given by the centroid distribution for cluster c. The above decomposition can be written in a more symmetric form as

p̂(n, v) = Σ_{c∈C} p(c, n) p(v|c)
         = Σ_{c∈C} p(c) p(n|c) p(v|c)     (1)

assuming that p(n) and p̂(n) coincide. We will take (1) as our basic clustering model. To determine this decomposition we need to solve the two connected problems of finding suitable forms for the cluster membership p(c|n) and the centroid distributions p(v|c), and of maximizing the goodness of fit between the model distribution p̂(n, v) and the observed data. Goodness of fit is determined by the model's likelihood of the observations. The maximum likelihood (ML) estimation principle is thus the natural tool to determine the centroid distributions p_c(v).
As for the membership probabilities, they must be determined solely by the relevant measure of object-to-cluster similarity, which in the present work is the relative entropy between object and cluster centroid distributions. Since no other information is available, the membership is determined by maximizing the configuration entropy for a fixed average distortion. With the maximum entropy (ME) membership distribution, ML estimation is equivalent to the minimization of the average distortion of the data. The combined entropy maximization and distortion minimization is carried out by a two-stage iterative process similar to the EM method (Dempster et al., 1977). The first stage of an iteration is a maximum likelihood, or minimum distortion, estimation of the cluster centroids given fixed membership probabilities. In the second stage of each iteration, the entropy of the membership distribution is maximized for a fixed average distortion. This joint optimization searches for a saddle point in the distortion-entropy parameters, which is equivalent to minimizing a linear combination of the two known as free energy in statistical mechanics. This analogy with statistical mechanics is not coincidental, and provides a better understanding of the clustering procedure.

Maximum Likelihood Cluster Centroids

For the maximum likelihood argument, we start by estimating the likelihood of the sequence S of N independent observations of pairs (n_i, v_i). Using (1), the sequence's model log likelihood is

l(S) = Σ_{i=1}^N log Σ_{c∈C} p(c) p(n_i|c) p(v_i|c) .

Fixing the number of clusters (model size) |C|, we want to maximize l(S) with respect to the distributions p(n|c) and p(v|c). The variation of l(S) with respect to these distributions is

δl(S) = Σ_{i=1}^N (1 / p̂(n_i, v_i)) Σ_{c∈C} p(c) [ p(v_i|c) δp(n_i|c) + p(n_i|c) δp(v_i|c) ]     (2)

with p(n|c) and p(v|c) kept normalized. Using Bayes's formula, we have

1 / p̂(n_i, v_i) = p(c|n_i, v_i) / ( p(c) p(n_i|c) p(v_i|c) )     (3)

for any c.² Substituting (3) into (2), we obtain

δl(S) = Σ_{i=1}^N Σ_{c∈C} p(c|n_i, v_i) [ δ log p(n_i|c) + δ log p(v_i|c) ]     (4)

since δ log p = δp / p. This expression is particularly useful when the cluster distributions p(n|c) and p(v|c) have an exponential form, precisely what will be provided by the ME step described below.

At this point we need to specify the clustering model in more detail. In the derivation so far we have treated p(n|c) and p(v|c) symmetrically, corresponding to clusters not of verbs or nouns but of verb-noun associations. In principle such a symmetric model may be more accurate, but in this paper we will concentrate on asymmetric models in which cluster memberships are associated to just one of the components of the joint distribution and the cluster centroids are specified only by the other component. In particular, the model we use in our experiments has noun clusters with cluster memberships determined by p(n|c) and centroid distributions determined by p(v|c). The asymmetric model simplifies the estimation significantly by dealing with a single component, but it has the disadvantage that the joint distribution p(n, v) has two different and not necessarily consistent expressions in terms of asymmetric models for the two coordinates.

²As usual in clustering models (Duda and Hart, 1973), we assume that the model distribution and the empirical distribution are interchangeable at the solution of the parameter estimation equations, since the model is assumed to be able to represent correctly the data at that solution point. In practice, the data may not come exactly from the chosen model class, but the model obtained by solving the estimation equations may still be the closest one to the data.

Maximum Entropy Cluster Membership

While variations of p(n|c) and p(v|c) in equation (4) are not independent, we can treat them separately.
First, for fixed average distortion between the cluster centroid distributions p(v|c) and the data p(v|n), we find the cluster membership probabilities, which are the Bayes inverses of the p(n|c), that maximize the entropy of the cluster distributions. With the membership distributions thus obtained, we then look for the p(v|c) that maximize the log likelihood l(S). It turns out that this will also be the values of p(v|c) that minimize the average distortion between the asymmetric cluster model and the data. Given any similarity measure d(n, c) between nouns and cluster centroids, the average cluster distortion is

⟨D⟩ = Σ_{n∈𝒩} Σ_{c∈C} p(c|n) d(n, c)     (5)

If we maximize the cluster membership entropy

H = − Σ_{n∈𝒩} Σ_{c∈C} p(c|n) log p(c|n)     (6)

subject to normalization of p(n|c) and fixed (5), we obtain the following standard exponential forms (Jaynes, 1983) for the class and membership distributions

p(n|c) = (1 / Z_c) exp(−β d(n, c))     (7)

p(c|n) = (1 / Z_n) exp(−β d(n, c))     (8)

where the normalization sums (partition functions) are Z_c = Σ_n exp(−β d(n, c)) and Z_n = Σ_c exp(−β d(n, c)). Notice that d(n, c) does not need to be symmetric for this derivation, as the two distributions are simply related by Bayes's rule. Returning to the log-likelihood variation (4), we can now use (7) for p(n|c) and the assumption for the asymmetric model that the cluster membership stays fixed as we adjust the centroids, to obtain

δl(S) = − Σ_{i=1}^N Σ_{c∈C} p(c|n_i) [ β δd(n_i, c) + δ log Z_c ]     (9)

where the variation of p(v|c) is now included in the variation of d(n, c). For a large enough sample, we may replace the sum over observations in (9) by the average over 𝒩

δl(S) = − N Σ_{n∈𝒩} p̂(n) Σ_{c∈C} p(c|n) [ β δd(n, c) + δ log Z_c ]

which, applying Bayes's rule, becomes

δl(S) = − N Σ_{c∈C} p̂(c) [ β Σ_{n∈𝒩} p(n|c) δd(n, c) + δ log Z_c ] .

At the log-likelihood maximum, this variation must vanish.
We will see below that the use of relative entropy for similarity measure makes δ log Z_c vanish at the maximum as well, so the log likelihood can be maximized by minimizing the average distortion with respect to the class centroids while class membership is kept fixed,

Σ_{c∈C} p̂(c) Σ_{n∈𝒩} p(n|c) δd(n, c) = 0 ,

or, sufficiently, if each of the inner sums vanishes,

Σ_{n∈𝒩} p(n|c) δd(n, c) = 0     (10)

Minimizing the Average KL Distortion

We first show that the minimization of the relative entropy yields the natural expression for cluster centroids

p(v|c) = Σ_{n∈𝒩} p(n|c) p(v|n)     (11)

To minimize the average distortion (10), we observe that the variation of the KL distance between noun and centroid distributions with respect to the centroid distribution p(v|c), with each centroid distribution normalized by the Lagrange multiplier λ_c, is given by

δd(n, c) = δ [ − Σ_{v∈𝒱} p(v|n) log p(v|c) + λ_c ( Σ_{v∈𝒱} p(v|c) − 1 ) ]
         = Σ_{v∈𝒱} ( − p(v|n) / p(v|c) + λ_c ) δp(v|c) .

Substituting this expression into (10), we obtain

Σ_{v∈𝒱} δp(v|c) Σ_{n∈𝒩} p(n|c) ( λ_c − p(v|n) / p(v|c) ) = 0 .

Since the δp(v|c) are now independent, we obtain immediately the desired centroid expression (11), which is the desired weighted average of noun distributions. We can now see that the variation δ log Z_c vanishes for centroid distributions given by (11), since it follows from (10) that

δ log Z_c = − (β / Z_c) Σ_n exp(−β d(n, c)) δd(n, c)
          = − β Σ_n p(n|c) δd(n, c) = 0 .

The Free Energy Function

The combined minimum distortion and maximum entropy optimization is equivalent to the minimization of a single function, the free energy

F = − (1/β) Σ_n log Z_n = ⟨D⟩ − H/β ,

where ⟨D⟩ is the average distortion (5) and H is the cluster membership entropy (6). The free energy determines both the distortion and the membership entropy through

⟨D⟩ = ∂(βF)/∂β ,   H = − ∂F/∂T ,

where T = β⁻¹ is the temperature. The most important property of the free energy is that its minimum determines the balance between the "disordering" maximum entropy and "ordering" distortion minimization in which the system is most likely to be found.
In fact, the probability of finding the system in a given configuration is exponential in F,

  P ∝ exp(−βF),

so the system is most likely to be found in its minimal free energy configuration.

Hierarchical Clustering

The analogy with statistical mechanics suggests a deterministic annealing procedure for clustering (Rose et al., 1990), in which the number of clusters is determined through a sequence of phase transitions by continuously increasing the parameter β following an annealing schedule.

The higher β is, the more local is the influence of each noun on the definition of centroids. Distributional similarity plays here the role of distortion. When the scale parameter β is close to zero, the similarity is almost irrelevant: all words contribute about equally to each centroid, and so the lowest average distortion solution involves just one cluster whose centroid is the average of all word distributions. As β is slowly increased, a critical point is eventually reached for which the lowest-F solution involves two distinct centroids. We say then that the original cluster has split into the two new clusters.

In general, if we take any cluster c and a twin c' of c such that the centroid p_c' is a small random perturbation of p_c, then below the critical β at which c splits, the membership and centroid reestimation procedure given by equations (8) and (11) will make p_c and p_c' converge; that is, c and c' are really the same cluster. But with β above the critical value for c, the two centroids will diverge, giving rise to two daughters of c.

Our clustering procedure is thus as follows. We start with a very low β and a single cluster whose centroid is the average of all noun distributions. For any given β, we have a current set of leaf clusters corresponding to the current free energy (local) minimum. To refine such a solution, we search for the lowest β which is the critical value at which some current leaf cluster splits.
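The twin-perturbation split test can be sketched as a small self-contained routine: perturb a copy of a centroid, run the membership/centroid reestimation of (8) and (11) at a fixed β, and see whether the two centroids collapse or diverge. This is a toy rendering of the idea (our own names and thresholds, uniform p(n), KL distortion), not the authors' code.

```python
import math, random

def kl(p, q):
    """KL divergence D(p || q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def split_test(centroid, nouns, beta, steps=50, eps=1e-3):
    """Perturb a twin of `centroid` and reestimate at fixed beta.
    Return True if the twin diverges, i.e. beta is above the critical
    value and the cluster splits into two daughters."""
    random.seed(0)
    twin = [p + random.uniform(-eps, eps) for p in centroid]
    z = sum(twin)
    twin = [p / z for p in twin]
    cents = [centroid[:], twin]
    for _ in range(steps):
        # memberships p(c|n) by (8), uniform p(n) assumed
        mem = []
        for pn in nouns:
            w = [math.exp(-beta * kl(pn, pc)) for pc in cents]
            s = sum(w)
            mem.append([x / s for x in w])
        # centroid update by (11)
        for c in range(2):
            col = [mem[n][c] for n in range(len(nouns))]
            s = sum(col)
            cents[c] = [sum(col[n] / s * nouns[n][v] for n in range(len(nouns)))
                        for v in range(len(centroid))]
    return kl(cents[0], cents[1]) > 10 * eps
```

Below the critical β the reestimation contracts the perturbation and the twins merge; above it the perturbation is amplified and the centroids separate, mirroring the phase-transition behavior described in the text.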
Ideally, there is just one split at that critical value, but for practical performance and numerical accuracy reasons we may have several splits at the new critical point. The splitting procedure can then be repeated to achieve the desired number of clusters or model cross-entropy.

[Figure 1: Direct object clusters for "fire" — for each cluster in the hierarchy, the four words most similar to the cluster centroid, with the corresponding word-centroid KL distances.]

CLUSTERING EXAMPLES

All our experiments involve the asymmetric model described in the previous section. As explained there, our clustering procedure yields for each value of β a set C_β of clusters minimizing the free energy F, and the asymmetric model for β estimates the conditional verb distribution for a noun n by

  p̂_n(v) = Σ_{c∈C_β} p(c|n) p(v|c)

where p(c|n) also depends on β.

As a first experiment, we used our method to classify the 64 nouns appearing most frequently as heads of direct objects of the verb "fire" in one year (1988) of Associated Press newswire. In this corpus, the chosen nouns appear as direct object heads of a total of 2147 distinct verbs, so each noun is represented by a density over the 2147 verbs.

Figure 1 shows the four words most similar to each cluster centroid, and the corresponding word-centroid KL distances, for the four clusters resulting from the first two cluster splits. It can be seen that the first split separates the objects corresponding to the weaponry sense of "fire" (cluster 1) from the ones corresponding to the personnel action (cluster 2). The second split then further refines the weaponry sense into a projectile sense (cluster 3) and a gun sense (cluster 4). That split is somewhat less sharp, possibly because not enough distinguishing contexts occur in the corpus.
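The cluster-smoothed estimate above is just a membership-weighted mixture of centroids; a one-function sketch (our notation, not the paper's code):

```python
def smoothed(p_c_n, p_v_c):
    """p_hat_n(v) = sum_c p(c|n) p(v|c): estimate a noun's verb distribution
    as a mixture of cluster centroids weighted by its memberships."""
    return [sum(pc * p_v_c[c][v] for c, pc in enumerate(p_c_n))
            for v in range(len(p_v_c[0]))]
```

This is the sense in which the model smooths: a noun's estimated distribution borrows probability mass from the centroids it belongs to, even for verbs never seen with that noun.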
Figure 2 shows the four closest nouns to the centroid of each of a set of hierarchical clusters derived from verb-object pairs involving the 1000 most frequent nouns in the June 1991 electronic version of Grolier's Encyclopedia (10 million words).

[Figure 2: Noun Clusters for Grolier's Encyclopedia — a hierarchical cluster tree showing, for each cluster, the closest nouns and their distances to the cluster centroid.]

[Figure 3: Asymmetric Model Evaluation, AP88 Verb-Direct Object Pairs — average relative entropy (bits) as a function of the number of clusters, for the train, test, and new sets.]

[Figure 4: Pairwise Verb Comparisons, AP88 Verb-Direct Object Pairs — decision error rate as a function of the number of clusters, for all triples and for exceptional triples.]

MODEL EVALUATION

The preceding qualitative discussion provides some indication of what aspects of distributional relationships may be discovered by clustering. However, we also need to evaluate clustering more rigorously as a basis for models of distributional relationships. So far, we have looked at two kinds of measurements of model quality: (i) relative entropy between held-out data and the asymmetric model, and (ii) performance on the task of deciding which of two verbs is more likely to take a given noun as direct object when the data relating one of the verbs to the noun has been withheld from the training data.

The evaluation described below was performed on the largest data set we have worked with so far, extracted from 44 million words of 1988 Associated Press newswire with the pattern matching techniques mentioned earlier. This collection process yielded 1112041 verb-object pairs. We then selected the subset involving the 1000 most frequent nouns in the corpus for clustering, and randomly divided it into a training set of 756721 pairs and a test set of 81240 pairs.

Relative Entropy

Figure 3 plots the unweighted average relative entropy, in bits, of several test sets to asymmetric clustered models of different sizes, given by

  (1/|N_t|) Σ_{n∈N_t} D(t_n ‖ p̂_n),

where N_t is the set of direct objects in the test set and t_n is the relative frequency distribution of verbs taking n as direct object in the test set.
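The held-out evaluation reduces to averaging D(t_n ‖ p̂_n) over the test nouns; a minimal sketch (function names are ours):

```python
import math

def avg_relative_entropy(test_dists, model_dists):
    """Unweighted average, in bits, of D(t_n || p_hat_n) over test nouns,
    pairing each empirical test distribution with its model estimate."""
    def kl_bits(p, q):
        return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return sum(kl_bits(t, m) for t, m in zip(test_dists, model_dists)) / len(test_dists)
```

A perfect model scores 0 bits; the gap between the train and test curves in Figure 3 is exactly this quantity computed on seen versus held-out data.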
For each critical value of β, we show the relative entropy with respect to the asymmetric model based on C_β of the training set (set train), of a randomly selected held-out test set (set test), and of held-out data for a further 1000 nouns that were not clustered (set new).³ Unsurprisingly, the training set relative entropy decreases monotonically. The test set relative entropy decreases to a minimum at 206 clusters, and then starts increasing, suggesting that larger models are overtrained.

The new noun test set is intended to test whether clusters based on the 1000 most frequent nouns are useful classifiers for the selectional properties of nouns in general. Since the nouns in the test set pairs do not occur in the training set, we do not have the cluster membership probabilities that are needed in the asymmetric model. Instead, for each noun n in the test set, we classify it with respect to the clusters by setting

  p(c|n) = exp(−β D(p_n ‖ p_c)) / Z_n,

where p_n is the empirical conditional verb distribution for n given by the test set. These cluster membership estimates were then used in the asymmetric model, and the test set relative entropy was calculated as before. As the figure shows, the cluster model provides over one bit of information about the selectional properties of the new nouns, but the overtraining effect is even sharper than for the held-out data involving the 1000 clustered nouns.

³We use unweighted averages because we are interested here in how well the noun distributions are approximated by the cluster model. If we were interested in the total information loss of using the asymmetric model to encode a test corpus, we would instead use the weighted average Σ_{n∈N_t} f_n D(t_n ‖ p̂_n), where f_n is the relative frequency of n in the test set.

Decision Task

We also evaluated asymmetric cluster models on a verb decision task closer to possible applications to disambiguation in language analysis. The task consists of judging which of two verbs v and v' is more likely to take a given noun n as object, when all occurrences of (v,n) in the training set were deliberately deleted. Thus this test evaluates how well the models reconstruct missing data in the verb distribution for n from the cluster centroids close to n.

The data for this test was built from the training data for the previous one in the following way, based on a suggestion by Dagan et al. (1993). 104 noun-verb pairs with a fairly frequent verb (between 500 and 5000 occurrences) were randomly picked, and all occurrences of each pair in the training set were deleted. The resulting training set was used to build a sequence of cluster models as before. Each model was used to decide which of two verbs v and v' is more likely to appear with a noun n where the (v,n) data was deleted from the training set, and the decisions were compared with the corresponding ones derived from the original event frequencies in the initial data set. The error rate for each model is simply the proportion of disagreements for the selected (v,n,v') triples.

Figure 4 shows the error rates for each model for all the selected (v,n,v') triples (all) and for just those exceptional triples in which the conditional ratio p(n,v)/p(n,v') is on the opposite side of 1 from the marginal ratio p(v)/p(v'). In other words, the exceptional cases are those in which predictions based just on the marginal frequencies, which the initial one-cluster model represents, would be consistently wrong. Here too we see some overtraining for the largest models considered, although not for the exceptional verbs.

CONCLUSIONS

We have demonstrated that a general divisive clustering procedure for probability distributions can be used to group words according to their participation in particular grammatical relations with other words.
The resulting clusters are intuitively informative, and can be used to construct class-based word cooccurrence models with substantial predictive power.

While the clusters derived by the proposed method seem in many cases semantically significant, this intuition needs to be grounded in a more rigorous assessment. In addition to predictive power evaluations of the kind we have already carried out, it might be worth comparing automatically-derived clusters with human judgements in a suitable experimental setting.

Moving further in the direction of class-based language models, we plan to consider additional distributional relations (for instance, adjective-noun) and apply the results of clustering to the grouping of lexical associations in lexicalized grammar frameworks such as stochastic lexicalized tree-adjoining grammars (Schabes, 1992).

ACKNOWLEDGMENTS

We would like to thank Don Hindle for making available the 1988 Associated Press verb-object data set, the Fidditch parser and a verb-object structure filter, Mats Rooth for selecting the objects of "fire" data set and for many discussions, David Yarowsky for help with his stemming and concordancing tools, and Ido Dagan for suggesting ways of testing cluster models.

REFERENCES

Peter F. Brown, Vincent J. Della Pietra, Peter V. de Souza, Jenifer C. Lai, and Robert L. Mercer. 1990. Class-based n-gram models of natural language. In Proceedings of the IBM Natural Language ITL, pages 283-298, Paris, France, March.

Kenneth W. Church and William A. Gale. 1991. A comparison of the enhanced Good-Turing and deleted estimation methods for estimating probabilities of English bigrams. Computer Speech and Language, 5:19-54.

Kenneth W. Church. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In Proceedings of the Second Conference on Applied Natural Language Processing, pages 136-143, Austin, Texas. Association for Computational Linguistics, Morristown, New Jersey.

Thomas M.
Cover and Joy A. Thomas. 1991. Elements of Information Theory. Wiley-Interscience, New York, New York.

Ido Dagan, Shaul Marcus, and Shaul Markovitch. 1993. Contextual word similarity and estimation from sparse data. In these proceedings.

A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1-38.

Richard O. Duda and Peter E. Hart. 1973. Pattern Classification and Scene Analysis. Wiley-Interscience, New York, New York.

Donald Hindle. 1990. Noun classification from predicate-argument structures. In 28th Annual Meeting of the Association for Computational Linguistics, pages 268-275, Pittsburgh, Pennsylvania. Association for Computational Linguistics, Morristown, New Jersey.

Donald Hindle. 1993. A parser for text corpora. In B.T.S. Atkins and A. Zampolli, editors, Computational Approaches to the Lexicon. Oxford University Press, Oxford, England. To appear.

Edwin T. Jaynes. 1983. Brandeis lectures. In Roger D. Rosenkrantz, editor, E. T. Jaynes: Papers on Probability, Statistics and Statistical Physics, number 158 in Synthese Library, chapter 4, pages 40-76. D. Reidel, Dordrecht, Holland.

Philip Resnik. 1992. WordNet and distributional analysis: A class-based approach to lexical discovery. In AAAI Workshop on Statistically-Based Natural-Language-Processing Techniques, San Jose, California, July.

Kenneth Rose, Eitan Gurewitz, and Geoffrey C. Fox. 1990. Statistical mechanics and phase transitions in clustering. Physical Review Letters, 65(8):945-948.

Yves Schabes. 1992. Stochastic lexicalized tree-adjoining grammars. In Proceedings of the 14th International Conference on Computational Linguistics, Nantes, France.

David Yarowsky. 1992. CONC: Tools for text corpora. Technical Memorandum 11222-921222-29, AT&T Bell Laboratories.
1993
TRANSFERS OF MEANING

Geoffrey Nunberg
Xerox Palo Alto Research Center and Stanford University
Xerox PARC, 3333 Coyote Hill Road, Palo Alto CA, 94304 USA
Internet: [email protected]

Abstract

In one form or another, the phenomena associated with "meaning transfer" have become central issues in a lot of recent work on semantics. Speaking very roughly, we can partition approaches to the phenomenon along two dimensions, which yield four basic points of departure. In the first two, people have considered transfer in basically semantic or linguistic terms. Some have concentrated on what we might call the paradigmatic aspects of transfer, focusing on the productive lexical processes that map semantic features into features -- for example, the "grinding" rule that applies to turn the names of animals into mass terms denoting their meat or fur. This is the approach that's involved in most recent work on "regular polysemy," "systematic polysemy," and the like, for example by Apresjan, Ostler and Atkins, Briscoe and Copestake, Nunberg and Zaenen, Wilensky, Kilgarriff and a number of other people. Other people have emphasized the syncategorematic aspects of transfer; that is, the ways meaning shifts and specifications are coerced in the course of semantic composition. This is an approach that has been developed in particular by James Pustejovsky and his collaborators, building on earlier work on type shifting.

As opposed to these, there are conceptual and pragmatic approaches to transfer, which focus on the extralinguistic circumstances that license transfers of various types. Here again there are both paradigmatic and syncategorematic approaches, loosely speaking. The first is exemplified in a lot of recent work on metaphor by people associated with the "cognitive linguistics" school, which has focused chiefly on the relations between domains of experience that metaphor variously exploits and imputes.
The second is represented by work on indirect speech within Gricean pragmatics, Relevance Theory, and the like, which has been chiefly concerned with specifying the conversational conditions that give rise to metaphor, irony, and analogous phenomena.

Of course this categorization is somewhat factitious. The borders between these approaches are highly porous, and most work on transfer overlaps several of them. This is entirely appropriate, since these are in no sense competing theories or accounts of the phenomena. Transfer is clearly a linguistic process, and in many of its most important forms a lexical one. But it just as clearly has its basis in very general cognitive and communicative principles. And while it's reasonable that people should choose to focus on one or another of these considerations relative to their immediate interests, it is also useful to keep the Big Picture in mind, lest we inadvertently ascribe to one domain of explanation a responsibility that more properly belongs to another.

This is the picture I want to sketch out in this talk. A comprehensive account of transfer has to make appeal to three different kinds of regularities or rules. The first are nonlinguistic: the correspondences between domains, real or imputed, that transfer invokes, and the communicative interests that may make these invocations useful or instructive -- they enable us to identify one thing in virtue of its relation to another, explain an abstract domain by reference to a concrete one, and so forth. Second, there is the repertory of general linguistic processes of transfer that exploit these correspondences and principles.
By these I have in mind not traditional categories like metaphor, synecdoche, and metonymy -- distinctions that have basically to do with the kinds of domain correspondences that transfer exploits -- but the various types of operations that make possible type-shifting and sortal reassignment of expressions, syntactic recategorizations, and deferred indexical reference. These processes may cross-cut the types of domain correspondences that they exploit, and I'll show that we often find a single type of domain correspondence underlying two or more distinct semantic processes of transfer. Third, there are the language-specific instantiations of these operations, for example in the form of constructions or lexical rules that license particular types or subtypes of transfers (for example, some languages, like Greenlandic Eskimo, permit "grinding" of tree names to yield names of types of woods, but not of animal names to yield the names of furs or meats).

In the first part of this talk, I'll focus on one of the general processes that underlie transfer: the semantic operation of "predicate transfer," which licenses the sortal reassignment of expressions denoting properties and relations, and which underlies a great deal of lexical polysemy. The process I have in mind is illustrated by an example like "I am parked out back." This is usually regarded as a classic instance of metonymy -- i.e., "person" for "car" -- where we use an expression that would conventionally denote one thing to refer to some other thing to which it is connected by a "relation of contiguity." But I'll show that on consideration there are compelling reasons for supposing that I here refers to the speaker, rather than his car.
(For one thing, the number of the pronoun doesn't vary according to the number of cars involved: if you had two cars parked out back you wouldn't say "We are parked out back," though of course this would be the appropriate thing to say to refer to a single car owned by two or more people.) And other morphological and syntactic observations support the same conclusion: in examples like this what has been transferred is the meaning of the predicate, rather than its argument. That is, the predicate parked out back has a transferred reading here: it denotes the property that the speaker acquires in virtue of his relation to a car that has the property of being parked out back.

Two conditions have to be satisfied before predicate transfers like this one are licensed. First, there has to be a salient correspondence (more specifically, an injective function) between the properties of things in one domain and the properties of things in another; e.g., between the locations of cars in a lot and the properties that distinguish the owner of one car from the owner of another. Second, it has to be either useful or interesting to know that these acquired or inherited properties apply to their carriers: that's why we can say "I am parked out back" to someone who is about to go get the car, whereas it is hard to imagine a context in which one would want to say "I was once driven by Ricardo Montalban."

I will give a simple formal account of these conditions on predicate transfer, and then show how it resolves some familiar syntactic and semantic difficulties. Take Jackendoff's example, "Ringo squeezed himself into a narrow parking space." If we analyze this as involving a metonymy, we will have to say that the reflexive here denotes something distinct from its antecedent, and so make provision for certain sortal shifts in giving the identity conditions on reflexivization and other rules and constructions that ordinarily require coreference of pronoun and antecedent.
Whereas now we will take squeeze into a narrow parking place as a transferred predicate that denotes a relation between persons: in virtue of having squeezed his car into a space, that is, Ringo has also done something noteworthy to himself. More generally, I'll argue that the conditions on rules of anaphora and similar operations need never provide for sortal shifts; sortally speaking, we must always take "syntactic identity" in the strictest possible way.

In the second part of this paper, I'll show how predicate transfer is instantiated lexically in the rules that provide for systematic polysemy. I'll mention several familiar cases: grinding, conversion of the names of artists to the names of their works (e.g., a Picasso, an Agatha Christie), and the use of the names of publications like newspaper and magazine for the organizations that produce them. Each of these processes is subject to a variety of constraints, which may answer to any of several different principles. Some are due to the absence of perceived domain correspondences of the appropriate type (for example, the reluctance of words like mammal and bird to undergo grinding). Some are explained by the fact that the acquired property denoted by the transferred predicate is insufficiently noteworthy or criterial: that is why we don't say She was reading a Kafka. Still others are due to the absence of specific lexical licenses for certain types of transfer; this explains why we don't generally use the "artist for work" rule to derive the names of musical works (?two Beethovens, ?several Elvises), or why grinding does not apply in English to derive the names of liquids (?We always cook with olive). All of this by way of showing why it is important to bear in mind the heterogeneity of the mechanisms that underlie transfers of all types.
A COMPLETE AND RECURSIVE FEATURE THEORY*

Rolf Backofen and Gert Smolka
German Research Center for Artificial Intelligence (DFKI)
W-6600 Saarbrücken, Germany
{backofen,smolka}@dfki.uni-sb.de

Abstract

Various feature descriptions are being employed in constraint-based grammar formalisms. The common notational primitive of these descriptions are functional attributes called features. The descriptions considered in this paper are the possibly quantified first-order formulae obtained from a signature of features and sorts. We establish a complete first-order theory FT by means of three axiom schemes and construct three elementarily equivalent models. One of the models consists of so-called feature graphs, a data structure common in computational linguistics. The other two models consist of so-called feature trees, a record-like data structure generalizing the trees corresponding to first-order terms.

Our completeness proof exhibits a terminating simplification system deciding validity and satisfiability of possibly quantified feature descriptions.

1 Introduction

Feature descriptions provide for the typically partial description of abstract objects by means of functional attributes called features. They originated in the late seventies with so-called unification grammars [14], a by now popular family of declarative grammar formalisms for the description and processing of natural language. More recently, the use of feature descriptions in logic programming has been advocated and studied [2, 3, 4, 17, 16]. Essentially, feature descriptions provide a logical version of records, a data structure found in many programming languages.

Feature descriptions have been proposed in various forms with various formalizations [1, 13, 9, 15, 5, 10]. We will follow the logical approach pioneered by [15], which accommodates feature descriptions as standard first-order formulae interpreted in first-order structures.
In this approach, a semantics for feature descriptions can be given by means of a feature theory (i.e., a set of closed feature descriptions having at least one model). There are two complementary ways of specifying a feature theory: either by explicitly constructing a standard model and taking all sentences valid in it, or by stating axioms and proving their consistency. Both possibilities are exemplified in [15]: the feature graph algebra F is given as a standard model, and the class of feature algebras is obtained by means of an axiomatization.

Both approaches to fixing a feature theory have their advantages. The construction of a standard model provides for a clear intuition and yields a complete feature theory (i.e., if φ is a closed feature description, then either φ or ¬φ is valid). The presentation of a recursively enumerable axiomatization has the advantage that we inherit from predicate logic a sound and complete deduction system for valid feature descriptions.

The ideal case then is to specify a feature theory by both a standard model and a corresponding recursively enumerable axiomatization. The existence of such a double characterization, however, is by no means obvious since it implies that the feature theory is decidable. In fact, so far no decidable, consistent and complete feature theory has been known.

In this paper we will establish a complete and decidable feature theory FT by means of three axiom schemes. We will also construct three models of FT, two consisting of so-called feature trees, and one consisting of so-called feature graphs.

*We appreciate discussions with Joachim Niehren and Ralf Treinen, who read a draft version of this paper. The research reported in this paper has been supported by the Bundesminister für Forschung und Technologie under contracts ITW 90002 0 (DISCO) and ITW 9105 (Hydra). For space limitations, proofs are omitted; they can be found in the complete paper [6].
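For a finite interpretation, the first two axiom schemes introduced below (features are functional relations, distinct sorts are disjoint sets) amount to simple checks; the sketch below uses a hypothetical encoding of our own, with features as sets of pairs and sorts as sets of elements.

```python
def satisfies_ax1_ax2(features, sorts):
    """features: dict mapping each feature name to a set of (x, y) pairs;
    sorts: dict mapping each sort name to a set of elements.
    Check (Ax1): every feature is a functional (right-unique) relation,
    and (Ax2): distinct sorts denote disjoint sets."""
    for rel in features.values():
        seen = {}
        for x, y in rel:
            # setdefault returns an earlier value for x if one exists;
            # a different y at the same x violates functionality
            if seen.setdefault(x, y) != y:
                return False
    names = list(sorts)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if sorts[a] & sorts[b]:  # shared element violates disjointness
                return False
    return True
```

In the theory itself these are axiom schemes, one instance per feature and per pair of sorts; the finite check above just enumerates those instances over a concrete interpretation.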
Since FT is complete, all three models are elementarily equivalent (i.e., satisfy exactly the same first-order formulae). While the feature graph model captures intuitions common in linguistically motivated investigations, the feature tree model provides the connection to the tree constraint systems [8, 11, 12] employed in logic programming.

Our proof of FT's completeness will exhibit a simplification algorithm that computes for every feature description an equivalent solved form from which the solutions of the description can be read off easily. For a closed feature description the solved form is either ⊤ (which means that the description is valid) or ⊥ (which means that the description is invalid). For a feature description with free variables the solved form is ⊥ if and only if the description is unsatisfiable.

1.1 Feature Descriptions

Feature descriptions are first-order formulae built over an alphabet of binary predicate symbols, called features, and an alphabet of unary predicate symbols, called sorts. There are no function symbols. In admissible interpretations features must be functional relations, and distinct sorts must be disjoint sets. This is stated by the first and second axiom scheme of FT:

  (Ax1)  ∀x∀y∀z (f(x,y) ∧ f(x,z) → y = z)   (for every feature f)

  (Ax2)  ∀x (A(x) ∧ B(x) → ⊥)   (for every two distinct sorts A and B)

A typical feature description written in matrix notation is

  x : [ woman
        father : [ engineer, age : y ]
        husband : [ painter, age : y ] ]

with y existentially quantified. It may be read as saying that x is a woman whose father is an engineer, whose husband is a painter, and whose father and husband are both of the same age. Written in plain first-order syntax we obtain the less suggestive formula

  ∃y, F, H ( woman(x) ∧ father(x,F) ∧ engineer(F) ∧ age(F,y) ∧
             husband(x,H) ∧ painter(H) ∧ age(H,y) ).

The axiom schemes (Ax1) and (Ax2) still admit trivial models where all features and sorts are empty.
The third and final axiom scheme of FT states that certain "consistent" descriptions have solutions. Three examples of instances of FT's third axiom scheme are

  ∃x, y, z (f(x,y) ∧ A(y) ∧ g(x,z) ∧ B(z))

  ∀u, z ∃x, y (f(x,y) ∧ g(y,u) ∧ h(y,z) ∧ yf↑)

  ∀z ∃x, y (f(x,y) ∧ g(y,x) ∧ h(y,z) ∧ yf↑),

where yf↑ abbreviates ¬∃z(f(y,z)). Note that the third description f(x,y) ∧ g(y,x) ∧ h(y,z) ∧ yf↑ is "cyclic" with respect to the variables x and y.

1.2 Feature Trees

A feature tree (examples are shown in Figure 1) is a tree whose edges are labeled with features, and whose nodes are labeled with sorts. As one would expect, the labeling with features must be deterministic; that is, the direct subtrees of a feature tree must be uniquely identified by the features of the edges leading to them. Feature trees can be seen as a mathematical model of records in programming languages. Feature trees without subtrees model atomic values (e.g., numbers). Feature trees may be finite or infinite, where infinite feature trees provide for the convenient representation of cyclic data structures. The last example in Figure 1 gives a finite graph representation of an infinite feature tree, which may arise as the representation of the recursive type equation nat = 0 + s(nat).

[Figure 1: Examples of feature trees — a point with xval and yval features, a colored point, a circle, and a finite graph representation of the infinite feature tree for nat = 0 + s(nat).]

Feature descriptions are interpreted over feature trees as one would expect:

• Every sort symbol A is taken as a unary predicate, where a sort constraint A(x) holds if and only if the root of the tree x is labeled with A.

• Every feature symbol f is taken as a binary predicate, where a feature constraint f(x,y) holds if and only if the tree x has the direct subtree y at feature f.

The theory of the corresponding first-order structure (i.e., the set of all closed formulae valid in this structure) is called FT.
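A feature tree can be modeled as a sorted node carrying a feature-indexed map of direct subtrees; the sketch below (our own encoding, not from the paper) shows how the sort and feature constraints of this interpretation are evaluated.

```python
class FTree:
    """A feature tree node: a sort label plus a deterministic
    feature -> direct-subtree map."""
    def __init__(self, sort, subtrees=None):
        self.sort = sort
        self.subtrees = subtrees or {}

def holds_sort(A, x):
    """Sort constraint A(x): the root of the tree x is labeled with A."""
    return x.sort == A

def holds_feature(f, x, y):
    """Feature constraint f(x, y): x has direct subtree y at feature f."""
    return x.subtrees.get(f) is y
```

Using a dict for the subtrees makes the determinism requirement automatic: a node can have at most one subtree per feature, which is exactly what axiom scheme (Ax1) demands of admissible interpretations.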
We will show that FT is in fact exactly the theory specified by the three axiom schemes outlined above, provided the alphabets of sorts and features are both taken to be infinite. Hence FT is complete (since it is the theory of the feature tree structure) and decidable (since it is complete and specified by a recursive set of axioms).

Another, elementarily equivalent, model of FT is the substructure of the feature tree structure obtained by admitting only rational feature trees (i.e., finitely branching trees having only finitely many subtrees). Yet another model of FT can be obtained from so-called feature graphs, which are finite, directed, possibly cyclic graphs labelled with sorts and features similar to feature trees. In contrast to feature trees, nodes of feature graphs may or may not be labelled with sorts. Feature graphs correspond to the so-called feature structures commonly found in linguistically motivated investigations [14, 7].

1.3 Organization of the Paper

Section 2 recalls the necessary notions and notations from Predicate Logic. Section 3 defines the theory FT by means of three axiom schemes. Section 4 establishes the overall structure of the completeness proof by means of a lemma. Section 5 studies quantifier-free conjunctive formulae, gives a solved form, and introduces path constraints. Section 6 defines feature trees and graphs and establishes the respective models of FT. Section 7 studies the properties of so-called prime formulae, which are the basic building blocks of the solved form for general feature constraints. Section 8 presents the quantifier elimination lemmas and completes the completeness proof.

2 Preliminaries

Throughout this paper we assume a signature SOR ⊎ FEA consisting of an infinite set SOR of unary predicate symbols called sorts and an infinite set FEA of binary predicate symbols called features.
For the completeness of our axiomatization it is essential that there are both infinitely many sorts and infinitely many features. The letters A, B, C will always denote sorts, and the letters f, g, h will always denote features.

A path is a word (i.e., a finite, possibly empty sequence) over the set of all features. The symbol ε denotes the empty path, which satisfies εp = p = pε for every path p. A path p is called a prefix of a path q if there exists a path p′ such that pp′ = q.

We also assume an infinite alphabet of variables and adopt the convention that x, y, z always denote variables, and X, Y always denote finite, possibly empty sets of variables. Under our signature SOR ⊎ FEA, every term is a variable, and an atomic formula is either a feature constraint xfy (f(x, y) in standard notation), a sort constraint Ax (A(x) in standard notation), an equation x ≐ y, ⊥ ("false"), or ⊤ ("true"). Compound formulae are obtained as usual with the connectives ∧, ∨, →, ↔, ¬ and the quantifiers ∃ and ∀. We use ∃̃φ [∀̃φ] to denote the existential [universal] closure of a formula φ. Moreover, V(φ) is taken to denote the set of all variables that occur free in a formula φ. The letters φ and ψ will always denote formulae.

We assume that the conjunction of formulae is an associative and commutative operation that has ⊤ as neutral element. This means that we identify φ ∧ (ψ ∧ θ) with θ ∧ (ψ ∧ φ), and φ ∧ ⊤ with φ (but not, for example, xfy ∧ xfy with xfy). A conjunction of atomic formulae can thus be seen as the finite multiset of these formulae, where conjunction is multiset union, and ⊤ (the "empty conjunction") is the empty multiset. We will write ψ ⊆ φ (or ψ ∈ φ, if ψ is an atomic formula) if there exists a formula ψ′ such that ψ ∧ ψ′ = φ. Moreover, we identify ∃x∃yφ with ∃y∃xφ. If X = {x₁, …, xₙ}, we write ∃Xφ for ∃x₁…∃xₙφ. If X = ∅, then ∃Xφ stands for φ.

Structures and satisfaction of formulae are defined as usual.
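The path notions just introduced are straightforward to model with tuples (an encoding of our own): concatenation is tuple addition, ε is the empty tuple, and the prefix test follows the definition pp′ = q directly:

```python
def is_prefix(p, q):
    """p is a prefix of q iff some p' satisfies p + p' == q,
    i.e. q starts with p."""
    return q[:len(p)] == p

eps = ()            # the empty path ε
p = ("f", "g")      # the path fg
```

The empty path is the neutral element of concatenation, and every path is a (non-proper) prefix of itself.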
A valuation into a structure 𝒜 is a total function from the set of all variables into the universe |𝒜| of 𝒜. A valuation α′ into 𝒜 is called an x-update [X-update] of a valuation α into 𝒜 if α′ and α agree everywhere but possibly on x [X]. We use φ^𝒜 to denote the set of all valuations α such that 𝒜, α ⊨ φ. We write φ ⊨ ψ ("φ entails ψ") if φ^𝒜 ⊆ ψ^𝒜 for all structures 𝒜, and φ ≡ ψ ("φ is equivalent to ψ") if φ^𝒜 = ψ^𝒜 for all structures 𝒜.

A theory is a set of closed formulae. A model of a theory is a structure that satisfies every formula of the theory. A formula φ is a consequence of a theory T (T ⊨ φ) if ∀̃φ is valid in every model of T. A formula φ entails a formula ψ in a theory T (φ ⊨_T ψ) if φ^𝒜 ⊆ ψ^𝒜 for every model 𝒜 of T. Two formulae φ, ψ are equivalent in a theory T (φ ≡_T ψ) if φ^𝒜 = ψ^𝒜 for every model 𝒜 of T. A theory T is complete if for every closed formula φ either φ or ¬φ is a consequence of T. A theory is decidable if the set of its consequences is decidable. Since the consequences of a recursively enumerable theory are recursively enumerable (completeness of first-order deduction), a complete theory is decidable if and only if it is recursively enumerable.

Two first-order structures 𝒜, ℬ are elementarily equivalent if, for every first-order formula φ, φ is valid in 𝒜 if and only if φ is valid in ℬ. Note that all models of a complete theory are elementarily equivalent.

3 The Axioms

The first axiom scheme says that features are functional:

(Ax1)  ∀x∀y∀z (xfy ∧ xfz → y ≐ z)   (for every feature f).

The second scheme says that sorts are mutually disjoint:

(Ax2)  ∀x (Ax ∧ Bx → ⊥)   (for every two distinct sorts A and B).

The third and final axiom scheme will say that certain "consistent feature descriptions" are satisfiable. For its formulation we need the important notion of a solved clause.
An exclusion constraint is an additional atomic formula of the form xf↑ ("f undefined on x") taken to be equivalent to ¬∃y (xfy) (for some variable y ≠ x). A solved clause is a possibly empty conjunction φ of atomic formulae of the form xfy, Ax and xf↑ such that the following conditions are satisfied:

1. no atomic formula occurs twice in φ
2. if Ax ∈ φ and Bx ∈ φ, then A = B
3. if xfy ∈ φ and xfz ∈ φ, then y = z
4. if xfy ∈ φ, then xf↑ ∉ φ.

Figure 2 gives a graph representation of the solved clause

xfu ∧ xgv ∧ xh↑ ∧ Cu ∧ uhx ∧ ugy ∧ ufz ∧ Av ∧ vgz ∧ vhw ∧ vf↑ ∧ Bw ∧ wf↑ ∧ wg↑.

Figure 2: A graph representation of a solved clause.

As in the example, a solved clause can always be seen as the graph whose nodes are the variables appearing in the clause and whose arcs are given by the feature constraints xfy. The constraints Ax, xf↑ appear as labels of the node x.

A variable x is constrained in a solved clause φ if φ contains a constraint of the form Ax, xfy or xf↑. We use CV(φ) to denote the set of all variables that are constrained in φ. The variables in V(φ) − CV(φ) are called the parameters of a solved clause φ. In the graph representation of a solved clause the parameters appear as leaves that are not labeled with a sort or a feature exclusion. The parameters of the solved clause in Figure 2 are y and z.

We can now state the third axiom scheme. It says that the constrained variables of a solved clause have solutions for all values of the parameters:

(Ax3)  ∀̃∃Xφ   (for every solved clause φ and X = CV(φ)).

The theory FT is the set of all sentences that can be obtained as instances of the axiom schemes (Ax1), (Ax2) and (Ax3). The theory FT₀ is the set of all sentences that can be obtained as instances of the first two axiom schemes. As the main result of this paper we will show that FT is a complete and decidable theory. By using an adaptation of the proof of Theorem 8.3 in [15] one can show that FT₀ is undecidable.
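The four conditions defining a solved clause can be checked mechanically. A sketch, using a tagged-tuple encoding of our own, ("sort", A, x), ("feat", x, f, y), ("excl", x, f):

```python
def is_solved_clause(clause):
    """Check the four solved-clause conditions over a list of atomic
    constraints given as tagged tuples."""
    if len(set(clause)) != len(clause):            # 1. no duplicate atoms
        return False
    sorts, feats, excls = {}, {}, set()
    for c in clause:
        if c[0] == "sort":
            _, a, x = c
            if sorts.setdefault(x, a) != a:        # 2. one sort per variable
                return False
        elif c[0] == "feat":
            _, x, f, y = c
            if feats.setdefault((x, f), y) != y:   # 3. features are functional
                return False
        else:
            _, x, f = c
            excls.add((x, f))
    # 4. no feature both defined and excluded on the same variable
    return all(k not in excls for k in feats)

# a fragment of the Figure 2 clause
good = [("feat", "x", "f", "u"), ("feat", "x", "g", "v"),
        ("excl", "x", "h"), ("sort", "C", "u"), ("feat", "u", "h", "x")]
```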
4 Outline of the Completeness Proof

The completeness of FT will be shown by exhibiting a simplification algorithm for FT. The following lemma gives the overall structure of the algorithm, which is the same as in Maher's [12] completeness proof for the theory of constructor trees.

Lemma 4.1 Suppose there exists a set of so-called prime formulae such that:

1. every sort constraint Ax, every feature constraint xfy, and every equation x ≐ y such that x ≠ y is a prime formula
2. ⊤ is a prime formula, and there is no other closed prime formula
3. for every two prime formulae β and β′ one can compute a formula δ that is either prime or ⊥ and satisfies β ∧ β′ ≡_FT δ and V(δ) ⊆ V(β ∧ β′)
4. for every prime formula β and every variable x one can compute a prime formula β′ such that ∃xβ ≡_FT β′ and V(β′) ⊆ V(∃xβ)
5. if β, β₁, …, βₙ are prime formulae, then ∃x(β ∧ ⋀ⁿᵢ₌₁ ¬βᵢ) ≡_FT ⋀ⁿᵢ₌₁ ∃x(β ∧ ¬βᵢ)
6. for every two prime formulae β, β′ and every variable x one can compute a Boolean combination δ of prime formulae such that ∃x(β ∧ ¬β′) ≡_FT δ and V(δ) ⊆ V(∃x(β ∧ ¬β′)).

Then one can compute for every formula φ a Boolean combination δ of prime formulae such that φ ≡_FT δ and V(δ) ⊆ V(φ).

Proof. Suppose a set of prime formulae as required exists. Let φ be a formula. We show by induction on the structure of φ how to compute a Boolean combination δ of prime formulae such that φ ≡_FT δ and V(δ) ⊆ V(φ). If φ is an atomic formula Ax, xfy or x ≐ y, then φ is either a prime formula, or φ is a trivial equation x ≐ x, in which case it is equivalent to the prime formula ⊤. If φ is ¬ψ, ψ ∧ ψ′ or ψ ∨ ψ′, then the claim follows immediately with the induction hypothesis.

It remains to show the claim for φ = ∃xψ. By the induction hypothesis we know that we can compute a Boolean combination δ of prime formulae such that δ ≡_FT ψ and V(δ) ⊆ V(ψ). Now δ can be transformed to a disjunctive normal form where prime formulae play the role of atomic formulae; that is, δ is equivalent to σ₁ ∨ …
∨ σₙ, where every "clause" σᵢ is a conjunction of prime and negated prime formulae. Hence

∃xψ ≡ ∃x(σ₁ ∨ … ∨ σₙ) ≡ ∃xσ₁ ∨ … ∨ ∃xσₙ,

where all three formulae have exactly the same free variables. It remains to show that one can compute for every clause σ a Boolean combination δ of prime formulae such that ∃xσ ≡_FT δ and V(δ) ⊆ V(∃xσ). We distinguish the following cases.

(i) σ = β for some prime formula β. Then the claim follows by assumption (4).

(ii) σ = β ∧ ⋀ⁿᵢ₌₁ ¬βᵢ, n > 0. Then the claim follows with assumptions (5) and (6).

(iii) σ = ⋀ⁿᵢ₌₁ ¬βᵢ, n > 0. Then σ ≡_FT ⊤ ∧ ⋀ⁿᵢ₌₁ ¬βᵢ and the claim follows with case (ii) since ⊤ is a prime formula by assumption (2).

(iv) σ = β₁ ∧ … ∧ βₖ ∧ ¬β′₁ ∧ … ∧ ¬β′ₙ, k > 1, n ≥ 0. Then we know by assumption (3) that either β₁ ∧ … ∧ βₖ ≡_FT ⊥ or β₁ ∧ … ∧ βₖ ≡_FT β for some prime formula β. In the former case we choose δ = ¬⊤, and in the latter case the claim follows with case (i) or (ii). □

Note that, provided a set of prime formulae with the required properties exists, the preceding lemma yields the completeness of FT since every closed formula can be simplified to ⊤ or ¬⊤ (since ⊤ is the only closed prime formula). In the following we will establish a set of prime formulae as required.

5 Solved Formulae

In this section we introduce a solved form for conjunctions of atomic formulae. A basic formula is either ⊥ or a possibly empty conjunction of atomic formulae of the form Ax, xfy, and x ≐ y. Note that ⊤ is a basic formula since ⊤ is the empty conjunction.

Every basic formula φ ≠ ⊥ has a unique decomposition φ = φ_N ∧ φ_G into a possibly empty conjunction φ_N of equations "x ≐ y" and a possibly empty conjunction φ_G of sort constraints "Ax" and feature constraints "xfy". We call φ_N the normalizer and φ_G the graph of φ. We say that a basic formula φ binds x to y if x ≐ y ∈ φ and x occurs only once in φ.
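The decomposition into normalizer and graph, and the "binds" condition, can be sketched as follows (the tagged-tuple encoding of atomic constraints, ("eq", x, y), ("sort", A, x), ("feat", x, f, y), is our own):

```python
def decompose(phi):
    """Split a basic formula into its normalizer (the equations) and
    its graph (the sort and feature constraints)."""
    normalizer = [c for c in phi if c[0] == "eq"]
    graph = [c for c in phi if c[0] != "eq"]
    return normalizer, graph

def var_occurrences(phi, x):
    """Count how often the variable x occurs in the basic formula phi."""
    count = 0
    for c in phi:
        if c[0] == "eq":
            count += (c[1] == x) + (c[2] == x)
        elif c[0] == "sort":
            count += (c[2] == x)
        else:                                      # ("feat", x, f, y)
            count += (c[1] == x) + (c[3] == x)
    return count

def binds(phi, x, y):
    """phi binds x to y iff x ≐ y is in phi and x occurs only once."""
    return ("eq", x, y) in phi and var_occurrences(phi, x) == 1

phi = [("eq", "x", "y"), ("sort", "A", "y"), ("feat", "y", "f", "z")]
```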
Here it is important to note that we consider equations as directed, that is, assume that x ≐ y is different from y ≐ x if x ≠ y. We say that φ eliminates x if φ binds x to some variable y. A solved formula is a basic formula γ ≠ ⊥ such that the following conditions are satisfied:

1. an equation x ≐ y appears in γ if and only if γ eliminates x
2. the graph of γ is a solved clause.

Note that a solved clause not containing exclusion constraints is a solved formula, and that a solved formula not containing equations is a solved clause. The letter γ will always denote a solved formula.

We will see that every basic formula is equivalent in FT₀ to either ⊥ or a solved formula. Figure 3 shows the so-called basic simplification rules. With φ[x ← y] we denote the formula that is obtained from φ by replacing every occurrence of x with y. We say that a formula φ simplifies to a formula ψ by a simplification rule ρ if φ/ψ is an instance of ρ. We say that a basic formula φ simplifies to a basic formula ψ if either φ = ψ or φ simplifies to ψ in finitely many steps each licensed by one of the basic simplification rules in Figure 3.

1.  xfy ∧ xfz ∧ φ  /  xfz ∧ y ≐ z ∧ φ

2.  Ax ∧ Bx ∧ φ  /  ⊥      if A ≠ B

3.  Ax ∧ Ax ∧ φ  /  Ax ∧ φ

4.  x ≐ y ∧ φ  /  x ≐ y ∧ φ[x ← y]      if x ∈ V(φ) and x ≠ y

5.  x ≐ x ∧ φ  /  φ

Figure 3: The basic simplification rules.

Note that the basic simplification rules (1) and (2) correspond to the first and second axiom scheme, respectively. Thus they are equivalence transformations with respect to FT₀. The remaining three simplification rules are equivalence transformations in general.

Proposition 5.1 The basic simplification rules are terminating and perform equivalence transformations with respect to FT₀. Moreover, a basic formula φ ≠ ⊥ is solved if and only if no basic simplification rule applies to it.

Proposition 5.2 Let φ be a formula built from atomic formulae with conjunction. Then one can
In the quantifier elimination proofs to come it will be convenient to use so-called path constraints, which provide a flexible syntax for atomic formulae closed under conjunction and existential quantifica- tion. We start by defining the denotation of a path. The interpretations fit, g~ of two features f, g in a structure .4 are binary relations on the universe 1"41 of .4; hence their composition fA o g.a is again a binary relation on 1-41 satisfying a(f A o gA)b ¢=:¢, 3c ~ 1"41: af Ac A cfAb for all a, b E 1"41. Consequently we define the deno- tation p~t of a path p = fl "'" .In in a structure .4 as the composition (fl...fn) A :---- f:o...ofn A, where the empty path ~ is taken to denote the iden- tity relation. If.4 is a model of the theory FTo, then every paths denotes a unary partial function on the universe of .4. Given an element a E [.41, p~t is thus either undefined on a or leads from a to exactly one b ~ 1.41. Let p, q be paths, x, y be variables, and A be a sort. Then path constraints are defined as follows: .4, a ~ zpy :¢:~ o~(x) pA a(y) .4, a ~ xp.~yq :¢:=~ 3a E 1.41: °t(x)pa aAa(y)q A a .4, a~Azp :~=~3ael.41: a(z)p'4a^aeA "~. Note that path constraints xpy generalize feature constraints x fy. A proper path constraint is a path constraint of the form "Axp" or "xp ~. yq". Every path constraint can be expressed with the already existing formulae, as can be seen from the following equivalences: x~y ~ x - y xfpy ~ 3z(xfz A zpy) (z ~£ x,y) xpl yq N 3z(xpz ^ uqz) (z # ~, ~) mxp ~ 3y(xpy A my) (y • x). 197 The closure [3`] of a solved formula 3` is the closure of the atomic formulae occurring in 7 with respect to the following deduction rules: x-y xpy yfz xpz yqz Ay xpy xEx xey zpf z xp I Yq Axp Recall that we assume that equations x - y are di- rected, that is, are ordered pairs of variables. Hence, xey E [71 and yex ~ [71 if x - y E 7. The closure of a solved clause 6 is defined anal- ogously. Proposition 5.3 Let 7 be a solved formula. Then: I. 
1. if π ∈ [γ], then γ ⊨ π
2. xεy ∈ [γ] iff x = y or x ≐ y ∈ γ
3. xfy ∈ [γ] iff xfy ∈ γ or ∃z: x ≐ z ∈ γ and zfy ∈ γ
4. xpfy ∈ [γ] iff ∃z: xpz ∈ [γ] and zfy ∈ γ
5. if p ≠ ε and xpy, xpz ∈ [γ], then y = z
6. it is decidable whether a path constraint is in [γ].

6 Feature Trees and Feature Graphs

In this section we establish three models of FT consisting of either feature trees or feature graphs. Since we will show that FT is a complete theory, all three models are in fact elementarily equivalent.

A tree domain is a nonempty set D ⊆ FEA* of paths that is prefix-closed, that is, if pq ∈ D, then p ∈ D. Note that every tree domain contains the empty path.

A feature tree is a partial function σ: FEA* → SOR whose domain is a tree domain. The paths in the domain of a feature tree represent the nodes of the tree; the empty path represents its root. We use D_σ to denote the domain of a feature tree σ. A feature tree is called finite [infinite] if its domain is finite [infinite]. The letters σ and τ will always denote feature trees.

The subtree pσ of a feature tree σ at a path p ∈ D_σ is the feature tree defined by (in relational notation)

pσ := {(q, A) | (pq, A) ∈ σ}.

A feature tree σ is called a subtree of a feature tree τ if σ is a subtree of τ at some path p ∈ D_τ, and a direct subtree if p = f for some feature f. A feature tree σ is called rational if (1) σ has only finitely many subtrees and (2) σ is finitely branching (i.e., for every p ∈ D_σ, the set {pf ∈ D_σ | f ∈ FEA} is finite). Note that for every rational feature tree σ there exist finitely many features f₁, …, fₙ such that D_σ ⊆ {f₁, …, fₙ}*.

The feature tree structure 𝒯 is the SOR ⊎ FEA-structure defined as follows:

• the universe of 𝒯 is the set of all feature trees
• σ ∈ A^𝒯 iff σ(ε) = A (i.e., σ's root is labeled with A)
• (σ, τ) ∈ f^𝒯 iff f ∈ D_σ and τ = fσ (i.e., τ is the subtree of σ at f).

The rational feature tree structure ℛ is the substructure of 𝒯 consisting only of the rational feature trees.
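The relational definition of the subtree pσ translates directly into code. Here finite feature trees are dicts from feature tuples to sorts, as before an encoding of our own, and the concrete tree and its feature names are made up for illustration:

```python
def subtree(sigma, p):
    """The subtree of sigma at path p: keep the pairs (pq, A) and strip
    the prefix p, exactly as in the relational definition."""
    n = len(p)
    return {q[n:]: a for q, a in sigma.items() if q[:n] == p}

# a hypothetical tree: a circle whose center is a point
sigma = {(): "circle",
         ("center",): "point",
         ("center", "xval"): "1",
         ("center", "yval"): "2"}
```

The subtree at the empty path is the tree itself, matching ε denoting the identity relation.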
Theorem 6.1 The feature tree structures 𝒯 and ℛ are models of the theory FT.

A feature pregraph is a pair (x, γ) consisting of a variable x (called the root) and a solved clause γ not containing exclusion constraints such that, for every variable y occurring in γ, there exists a path p satisfying xpy ∈ [γ]. If one deletes the exclusion constraints in Figure 2, one obtains the graphical representation of a feature pregraph whose root is x. A feature pregraph (x, γ) is called a subpregraph of a feature pregraph (y, δ) if γ ⊆ δ and x = y or x ∈ V(δ). Note that a feature pregraph has only finitely many subpregraphs.

We say that two feature pregraphs are equivalent if they are equal up to consistent variable renaming. For instance, (x, xfy ∧ ygx) and (u, ufx ∧ xgu) are equivalent feature pregraphs. A feature graph is an element of the quotient of the set of all feature pregraphs with respect to equivalence as defined above. We use ⟨x, γ⟩ to denote the feature graph obtained as the equivalence class of the feature pregraph (x, γ). In contrast to feature trees, not every node of a feature graph must carry a sort.

The feature graph structure 𝒢 is the SOR ⊎ FEA-structure defined as follows:

• the universe of 𝒢 is the set of all feature graphs
• ⟨x, γ⟩ ∈ A^𝒢 iff Ax ∈ γ
• (⟨x, γ⟩, σ) ∈ f^𝒢 iff there exists a maximal feature subpregraph (y, δ) of (x, γ) such that xfy ∈ γ and σ = ⟨y, δ⟩.

Theorem 6.2 The feature graph structure 𝒢 is a model of the theory FT.

Let ℱ be the structure whose domain consists of all feature pregraphs and that is otherwise defined analogously to 𝒢. Note that 𝒢 is in fact the quotient of ℱ with respect to equivalence of feature pregraphs.

Proposition 6.3 The feature pregraph structure ℱ is a model of FT₀ but not of FT.

7 Prime Formulae

We now define a class of prime formulae having the properties required by Lemma 4.1.
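Equivalence of feature pregraphs up to consistent renaming can be decided by computing a canonical renaming along a breadth-first traversal from the root; this is well defined because every variable of a pregraph is reachable from the root. The sketch below ignores sort labels for brevity and uses our own edge encoding, a dict (x, f) -> y:

```python
def canonical(root, feats):
    """Rename variables to 0, 1, 2, ... in breadth-first order, exploring
    the features of each node in alphabetical order, and return the
    renamed edge set. Two pregraphs with equal canonical edge sets are
    equal up to consistent variable renaming."""
    names, order = {root: 0}, [root]
    i = 0
    while i < len(order):
        x = order[i]
        i += 1
        for f in sorted(f for (z, f) in feats if z == x):
            y = feats[(x, f)]
            if y not in names:
                names[y] = len(names)
                order.append(y)
    return {(names[x], f, names[y]) for (x, f), y in feats.items()}

# the two equivalent pregraphs from the text, and a non-equivalent one
a = canonical("x", {("x", "f"): "y", ("y", "g"): "x"})
b = canonical("u", {("u", "f"): "w", ("w", "g"): "u"})
c = canonical("x", {("x", "f"): "y", ("y", "f"): "x"})
```

Because features themselves are not renamed, the alphabetical exploration order makes the traversal, and hence the numbering, independent of the original variable names.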
The prime formulae will turn out to be solved forms for formulae built from atomic formulae with conjunction and existential quantification. A prime formula is a formula ∃Xγ such that

1. γ is a solved formula
2. X has no variable in common with the normalizer of γ
3. every x ∈ X can be reached from a free variable, that is, there exists a path constraint ypx ∈ [γ] such that y ∉ X.

The letter β will always denote a prime formula. Note that ⊤ is the only closed prime formula, and that ∃Xγ is a prime formula if ∃x∃Xγ is a prime formula. Moreover, every solved formula is a prime formula, and every quantifier-free prime formula is a solved formula.

The definition of prime formulae certainly fulfills the requirements (1) and (2) of Lemma 4.1. The fulfillment of the requirements (3) and (4) will be shown in this section, and the fulfillment of the requirements (5) and (6) will be shown in the next section.

Proposition 7.1 Let ∃Xγ be a prime formula, 𝒜 be a model of FT, and 𝒜, α ⊨ ∃Xγ. Then there exists one and only one X-update α′ of α such that 𝒜, α′ ⊨ γ.

The next proposition establishes that prime formulae are closed under existential quantification (property (4) of Lemma 4.1).

Proposition 7.2 For every prime formula β and every variable x one can compute a prime formula β′ such that ∃xβ ≡_FT β′ and V(β′) ⊆ V(∃xβ).

Proposition 7.3 If β is a prime formula, then FT ⊨ ∃̃β.

The next proposition establishes that prime formulae are closed under consistent conjunction (property (3) of Lemma 4.1).

Proposition 7.4 For every two prime formulae β and β′ one can compute a formula δ that is either prime or ⊥ and satisfies β ∧ β′ ≡_FT δ and V(δ) ⊆ V(β ∧ β′).

Proposition 7.5 Let φ be a formula that is built from atomic formulae with conjunction and existential quantification. Then one can compute a formula δ that is either prime or ⊥ such that φ ≡_FT δ and V(δ) ⊆ V(φ).
The closure of a prime formula ∃Xγ is defined as follows:

[∃Xγ] := {π ∈ [γ] | V(π) ∩ X = ∅}.

The proper closure of a prime formula β is defined as follows:

[β]* := {π ∈ [β] | π is a proper path constraint}.

Proposition 7.6 If β is a prime formula and π ∈ [β], then β ⊨ π (and hence ¬π ⊨ ¬β).

We now know that the closure [β], taken as an infinite conjunction, is entailed by β. We are going to show that, conversely, β is entailed by certain finite subsets of its closure [β].

An access function for a prime formula β = ∃Xγ is a function that maps every x ∈ V(γ) − X to the rooted path xε, and every x ∈ X to a rooted path x′p such that x′px ∈ [γ] and x′ ∉ X. Note that every prime formula has at least one access function, and that the access function of a prime formula is injective on V(γ) (follows from Proposition 5.3 (5)).

The projection of a prime formula β = ∃Xγ with respect to an access function @ for β is the conjunction of the following proper path constraints:

{Ax′p | Ax ∈ γ, x′p = @x}  ∪  {x′pf↓y′q | xfy ∈ γ, x′p = @x, y′q = @y}.

Obviously, one can compute for every prime formula an access function and hence a projection. Furthermore, if λ is a projection of a prime formula β, then λ taken as a set is a finite subset of the closure [β].

Proposition 7.7 Let λ be a projection of a prime formula β. Then λ ⊆ [β]* and λ ≡_FT β.

As a consequence of this proposition one can compute for every prime formula an equivalent quantifier-free conjunction of proper path constraints.

We close this section with a few propositions stating interesting properties of closures of prime formulae. These propositions will not be used in the proofs to come.

Proposition 7.8 If β is a prime formula, then β ≡_FT [β]*.

Proposition 7.9 If β is a prime formula and π is a proper path constraint, then π ∈ [β]* ⇔ β ⊨_FT π.

Proposition 7.10 Let β, β′ be prime formulae. Then β ⊨_FT β′ ⇔ [β]* ⊇ [β′]*.
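An access function can be computed by breadth-first search over the graph of γ, mapping free variables to xε and bound variables to a rooted path from a free root. The encoding is our own, and taking shortest paths is one valid choice among many:

```python
from collections import deque

def access_function(feats, bound):
    """feats: the graph of γ as a dict (x, f) -> y; bound: the set X.
    Returns a dict mapping each variable to a rooted path (root, p)."""
    vars_ = {x for (x, _) in feats} | set(feats.values())
    acc = {x: (x, ()) for x in vars_ - bound}    # every free x is mapped to xε
    queue = deque(acc)
    while queue:
        y = queue.popleft()
        root, p = acc[y]
        for (z, f) in sorted(feats):             # deterministic exploration
            w = feats[(z, f)]
            if z == y and w not in acc:
                acc[w] = (root, p + (f,))        # reach w via the edge yfw
                queue.append(w)
    return acc

acc = access_function({("x", "f"): "u", ("u", "g"): "v"}, bound={"u", "v"})
```

For a prime formula, condition 3 of the definition guarantees that the search reaches every bound variable.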
Proposition 7.11 Let β, β′ be prime formulae, and let λ′ be a projection of β′. Then β ⊨_FT β′ ⇔ [β]* ⊇ λ′.

Proposition 7.11 gives us a decision procedure for "β ⊨_FT β′" since membership in [β]* is decidable, λ′ is finite, and λ′ can be computed from β′.

8 Quantifier Elimination

In this section we show that our prime formulae satisfy the requirements (5) and (6) of Lemma 4.1 and thus obtain the completeness of FT. We start with the definition of the central notion of a joker.

A rooted path xp consists of a variable x and a path p. A rooted path xp is called unfree in a prime formula β if

∃ prefix p′ of p ∃ yq: x ≠ y and xp′↓yq ∈ [β].

A rooted path is called free in a prime formula β if it is not unfree in β.

Proposition 8.1 Let β = ∃Xγ be a prime formula. Then:

1. if xp is free in β, then x does not occur in the normalizer of γ
2. if x ∉ V(β), then xp is free in β for every path p.

A proper path constraint π is called an x-joker for a prime formula β if π ∉ [β] and one of the following conditions is satisfied:

1. π = Axp and xp is free in β
2. π = xp↓yq and xp is free in β
3. π = yp↓xq and xq is free in β.

Proposition 8.2 It is decidable whether a rooted path is free in a prime formula, and whether a path constraint is an x-joker for a prime formula.

Lemma 8.3 Let β be a prime formula, x be a variable, π be a proper path constraint that is not an x-joker for β, 𝒜 be a model of FT, 𝒜, α ⊨ β, 𝒜, α′ ⊨ β, and α′ be an x-update of α. Then 𝒜, α ⊨ π if and only if 𝒜, α′ ⊨ π.

Lemma 8.4 Let β be a prime formula and π₁, …, πₙ be x-jokers for β. Then

∃xβ ≡_FT ∃x(β ∧ ⋀ⁿᵢ₌₁ ¬πᵢ).

The proof of this lemma uses the third axiom scheme, the existence of infinitely many features, and the existence of infinitely many sorts.

Lemma 8.5 Let β, β′ be prime formulae and α be a valuation into a model 𝒜 of FT such that 𝒜, α ⊨ ∃x(β ∧ β′) and 𝒜, α ⊨ ∃x(β ∧ ¬β′). Then every projection of β′ contains an x-joker for β.
Lemma 8.6 If β, β₁, …, βₙ are prime formulae, then

∃x(β ∧ ⋀ⁿᵢ₌₁ ¬βᵢ)  ≡_FT  ⋀ⁿᵢ₌₁ ∃x(β ∧ ¬βᵢ).

Lemma 8.7 For every two prime formulae β, β′ and every variable x one can compute a Boolean combination δ of prime formulae such that ∃x(β ∧ ¬β′) ≡_FT δ and V(δ) ⊆ V(∃x(β ∧ ¬β′)).

Theorem 8.8 For every formula φ one can compute a Boolean combination δ of prime formulae such that φ ≡_FT δ and V(δ) ⊆ V(φ).

Corollary 8.9 FT is a complete and decidable theory.

References

[1] H. Aït-Kaci. An algebraic semantics approach to the effective resolution of type equations. Theoretical Computer Science, 45:293-351, 1986.
[2] H. Aït-Kaci and R. Nasr. LOGIN: A logic programming language with built-in inheritance. The Journal of Logic Programming, 3:185-215, 1986.
[3] H. Aït-Kaci and A. Podelski. Towards a meaning of LIFE. In Proc. of the PLILP'91, Springer LNCS vol. 528, pages 255-274. Springer-Verlag, 1991.
[4] H. Aït-Kaci, A. Podelski, and G. Smolka. A feature-based constraint system for logic programming with entailment. In Fifth Generation Computer Systems 1992, pages 1012-1021, Tokyo, Japan, June 1992. Institute for New Generation Computer Technology.
[5] F. Baader, H.-J. Bürckert, B. Nebel, W. Nutt, and G. Smolka. On the expressivity of feature logics with negation, functional uncertainty, and sort equations. Research Report RR-91-01, German Research Center for Artificial Intelligence (DFKI), January 1991.
[6] R. Backofen and G. Smolka. A complete and recursive feature theory. Research Report RR-92-30, German Research Center for Artificial Intelligence (DFKI), July 1992.
[7] B. Carpenter. The Logic of Typed Feature Structures, volume 32 of Cambridge Tracts in Theoretical Computer Science. Cambridge University Press, Cambridge, UK, 1992.
[8] A. Colmerauer. Equations and inequations on finite and infinite trees. In Proceedings of the 2nd International Conference on Fifth Generation Computer Systems, pages 85-99, 1984.
[9] M. Johnson.
Attribute-Value Logic and the Theory of Grammar. CSLI Lecture Notes 16. Center for the Study of Language and Information, Stanford University, CA, 1988.
[10] M. Johnson. Logic and feature structures. In Proceedings of IJCAI-91, Sydney, Australia, 1991.
[11] J.-L. Lassez, M. Maher, and K. Marriot. Unification revisited. In J. Minker, editor, Foundations of Deductive Databases and Logic Programming. Morgan Kaufmann, Los Altos, CA, 1988.
[12] M. J. Maher. Complete axiomatizations of the algebras of finite, rational and infinite trees. In Proceedings of the 3rd Annual Symposium on Logic in Computer Science, pages 348-457, Edinburgh, Scotland, July 1988.
[13] W. C. Rounds and R. T. Kasper. A complete logical calculus for record structures representing linguistic information. In Proceedings of the 1st IEEE Symposium on Logic in Computer Science, pages 38-43, Boston, MA, 1986.
[14] S. M. Shieber. An Introduction to Unification-Based Approaches to Grammar, volume 4 of CSLI Lecture Notes. Center for the Study of Language and Information, Stanford University, CA, 1986.
[15] G. Smolka. Feature constraint logics for unification grammars. Journal of Logic Programming, 12:51-87, 1992.
[16] G. Smolka, M. Henz, and J. Würtz. Object-oriented concurrent constraint programming in Oz. Research Report RR-93-16, German Research Center for Artificial Intelligence (DFKI), Apr. 1993.
[17] G. Smolka and R. Treinen. Records for logic programming. In Proceedings of the 1992 Joint International Conference and Symposium on Logic Programming, pages 240-254, Washington, DC, 1992.
1993
ON THE DECIDABILITY OF FUNCTIONAL UNCERTAINTY*

Rolf Backofen
German Research Center for Artificial Intelligence (DFKI)
W-6600 Saarbrücken, Germany
[email protected]

Abstract

We show that feature logic extended by functional uncertainty is decidable, even if one admits cyclic descriptions. We present an algorithm, which solves feature descriptions containing functional uncertainty in two phases, both phases using a set of deterministic and non-deterministic rewrite rules. We then compare our algorithm with the one of Kaplan and Maxwell, that does not cover cyclic feature descriptions.

1 Introduction

Feature logic is the main device of unification grammars, the currently predominant paradigm in computational linguistics. More recently, feature descriptions have been proposed as a constraint system for logic programming (e.g. see [11]). They provide for partial descriptions of abstract objects by means of functional attributes called features.

Formalizations of feature logic have been proposed in various forms (for more details see [3] in this volume). We will follow the logical approach introduced by Smolka [9, 10], where feature descriptions are standard first-order formulae interpreted in first-order structures. In this formalization features are considered as functional relations. Atomic formulae (which we will call atomic constraints) are of either the form A(x) or xfy, where x, y are first-order variables, A is some sort predicate and f is a feature (written in infix notation). The constraints of the form xfy can be generalized to constraints of the form xwy, where w = f₁⋯fₙ is a finite feature path. This does not affect the computational properties.

In this paper we will be concerned with an extension to feature descriptions, which has been introduced as "functional uncertainty" by Kaplan and Zaenen [7] and Kaplan and Maxwell [5].
This formal device plays an important role in the framework of LFG in modelling so-called long distance dependencies and constituent coordination. For a detailed linguistic motivation see [7], [6] and [5]; a more general use of functional uncertainty can be found in [8]. Functional uncertainty consists of constraints of the form xLy, where L is a finite description of a regular language of feature paths. A constraint xLy holds if there is a path w ∈ L such that xwy holds. Under this existential interpretation, a constraint xLy can be seen as the disjunction

∨ {xwy | w ∈ L}.

Certainly, this disjunction may be infinite, thus functional uncertainty yields additional expressivity. Note that a constraint xwy is a special case of a functional uncertainty constraint.

To see some possible application of functional uncertainty we briefly recall an example that is given in Kaplan and Maxwell [5, page 1]. Consider the topicalized sentence Mary John telephoned yesterday. Using s as a variable denoting the whole sentence, the LFG-like clause s topic x ∧ s obj x specifies that in s Mary should be interpreted as the object of the relation telephoned. The sentence could be extended by introducing additional complement predicates, as e.g. in sentences like Mary John claimed that Bill telephoned; Mary John claimed that Bill said that … Henry telephoned yesterday; …. For this family of sentences the clauses s topic x ∧ s comp obj x, s topic x ∧ s comp comp obj x and so on would be appropriate; specifying all possibilities would yield an infinite disjunction.

*This work was supported by a research grant, ITW 9002 0, from the German Bundesministerium für Forschung und Technologie to the DFKI project DISCO. I would like to thank Jochen Dörre, Joachim Niehren and Ralf Treinen for reading draft versions of this paper. For space limitations most of the proofs are omitted; they can be found in the complete paper [2].
This changes if we make use of functional uncertainty, which allows us to specify the above as the single clause s topic x ∧ s comp* obj x. Kaplan and Maxwell [5] have shown that consistency of feature descriptions is decidable, provided that a certain acyclicity condition is met. More recently, Baader et al. [1] have proven that consistency is not decidable if we add negation. But it is an open problem whether consistency of feature descriptions without negation and without additional restrictions (such as acyclicity) is decidable. In the work presented here we show that it indeed is decidable.

2 The Method

We will first briefly describe the main part of solving the standard feature terms and then turn to their extension with functional uncertainty. Consider a clause φ = xp1y1 ∧ xp2y2 (from now on we will refer to pure conjunctive formulae as clauses). A standard method for solving feature terms would rewrite φ in order to achieve a solved form. This rewriting depends on the paths p1 and p2. If p1 equals p2, we know that y1 and y2 must be equal. This implies that φ is equivalent to xp1y1 ∧ y1 ≐ y2. If p1 is a prefix of p2 and hence p2 = p1p2′, we can transform φ equivalently into the formula xp1y1 ∧ y1p2′y2. The reverse case is treated in a similar fashion. If neither prefix nor equality holds between the paths, there is nothing to be done. By and large, clauses where this holds for every x and every pair of different constraints xp1y and xp2z are the solved forms in Smolka [9], which are consistent. If we consider a clause of the form φ = xL1y1 ∧ xL2y2, then we again have to check the relation between y1 and y2. But now there is in general no unique relation determined by φ, since this depends on which paths p1 and p2 we choose out of L1 and L2. Hence, we have to guess the relation between p1 and p2 before we can calculate the relation between y1 and y2.
However, there is a problem with the original syntax, namely that it does not allow one to express any relation between the chosen paths (in a later section we will compare our algorithm to the one of Kaplan/Maxwell, thus showing where exactly the problem occurs in their syntax). Therefore, we extend the syntax by introducing so-called path variables (written α, β, α′, ...), which are interpreted as feature paths (we will call the other variables first order variables). Hence, if we use the modified subterm relation xαy and a restriction constraint α ∈ L, a constraint xLy can equivalently be expressed as xαy ∧ α ∈ L (α new). The interpretation of xαy is done in two steps. Given a valuation V_P of the path variables as feature paths, a constraint xαy in φ is substituted by xV_P(α)y. This constraint is then interpreted using the valuation for the first order variables in the way such constraints are usually interpreted. By using this extended (two-sorted) syntax we are now able to reason about the relations between different path variables. In doing so, we introduce additional constraints α ≐ β (equality), α ≺ β (prefix) and α ∥ β (divergence). Divergence holds if neither equality nor prefix holds. Now we can describe a normal form equivalent to the solved clauses in Smolka's work, which we will call pre-solved clauses. A clause φ is pre-solved iff for each pair of different constraints xαy1 and xβy2 in φ there is a constraint α ∥ β in φ. We call these clauses pre-solved, since such clauses are not necessarily consistent. It may happen that the divergence constraints together with the restrictions of the form α ∈ L are inconsistent (e.g. think of the clause α ∈ f+ ∧ β ∈ ff+ ∧ α ∥ β). But pre-solved clauses have the property that if we find a valuation for the path variables, then the clause is consistent.
Our algorithm first transforms a clause into a set of pre-solved clauses, which is (seen as a disjunction) equivalent to the initial clause. In a second phase the pre-solved clauses are checked for consistency with respect to the path variables. In this paper we will concentrate on the first phase, since it is the more difficult one. Before looking at the technical part we will illustrate the first phase. For the rest of the paper we will write clauses as sets of atomic constraints. Now consider the clause γ = {xαy, α ∈ L1, xβz, β ∈ L2}. The first step is to guess the relation between the path variables α and β. Therefore, γ can be expressed equivalently by the set of clauses

γ1 = {α ∥ β} ∪ γ    γ3 = {α ≺ β} ∪ γ
γ2 = {α ≐ β} ∪ γ    γ4 = {β ≺ α} ∪ γ

The clause γ1 is pre-solved. For the others we have to evaluate the relation between α and β, which is done as follows. For γ2 we substitute β by α and z by y, which yields {y ≐ z, xαy, α ∈ L1, α ∈ L2}. We keep only the equality constraint for the first order variables, since we are only interested in their valuation. Combining {α ∈ L1, α ∈ L2} to {α ∈ (L1 ∩ L2)} then will give us the equivalent pre-solved clause. For γ3 we know that the variable β can be split into two parts, one of them covered by α. We can use concatenation of path variables to express this, i.e. we can replace β by the term α·β′, where β′ is new. Thus we get the clause γ3′ = {xαy, α ∈ L1, yβ′z, α·β′ ∈ L2}. The only thing that we have to do additionally in order to achieve a pre-solved clause is to resolve the constraint α·β′ ∈ L2. To do this we have to guess a so-called decomposition P, S of L2 with P·S ⊆ L2 such that α ∈ P and β′ ∈ S. In general, there can be an infinite number of decompositions (think of the possible decompositions of the language f*g). But as we use regular languages, there is a finite set of regular decompositions covering all possibilities.
Finally, reducing {α ∈ L1, α ∈ P} to {α ∈ (L1 ∩ P)} will yield a pre-solved clause. Note that the evaluation of the prefix relation in γ3 has the additional effect of introducing a new constraint yβ′z. This implies that there again may be some path variables the relation of which is unknown. Hence, after reducing the terms of the form α ≐ β or α ≺ β we may have to repeat the non-deterministic choice of relations between path variables. In the end, the only remaining constraints between path variables will be of the form α ∥ β. We have to consider some additional point, namely that the rules we present will (naturally) loop in some cases. Roughly speaking, one can say that this always occurs if a cycle in the graph coincides with a cycle in the regular language. To see this let us vary the above example and let γ now be the clause {xαx, α ∈ f, xβz, β ∈ f*g}. Then a possible looping derivation could be

1. add α ≺ β: {α ≺ β, xαx, α ∈ f, xβz, β ∈ f*g}
2. split β into α·β′: {xαx, α ∈ f, xβ′z, α·β′ ∈ f*g}
3. decompose α·β′ ∈ f*g: {xαx, α ∈ f, α ∈ f*, xβ′z, β′ ∈ f*g}
4. join α-restrictions: {xαx, α ∈ f, xβ′z, β′ ∈ f*g}

However, we will prove that the rule system is quasi-terminating, which means that the rule system may cycle, but produces only finitely many different clauses (see [4]). This means that checking for cyclic derivations will give us an effective algorithm. Quasi-termination is achieved by the following measures: first we will guarantee that the rules do not introduce additional variables; second we restrict concatenation to length 2; and third we will show that the rule system produces only finitely many regular languages. In order to show that our rewrite system is complete, we also have to show that every solution can be found in a pre-solved clause.

3 Preliminaries

Our signature consists of a set of sorts S (A, B, ...), first order variables X (x, y, ...), path variables P (α, β, ...) and features F (f, g, ...).
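The effectiveness argument can be sketched in a few lines of Python (an illustration under our own simplifications, not the paper's rule system): a quasi-terminating rewrite system may cycle but only ever visits finitely many clauses, so memoizing the visited clauses turns rewriting into a terminating procedure.

```python
def saturate(clause, rewrite):
    """Apply `rewrite` until a previously seen clause recurs (cycle check)."""
    seen = set()
    while clause not in seen:
        seen.add(clause)
        clause = rewrite(clause)
    return seen

# A toy rewrite that loops between two clauses, mimicking the f / f*g
# derivation above (clauses abstracted to frozensets of constraint strings):
def toy_rewrite(c):
    a = frozenset({"x a x", "a in f", "x b z", "b in f*g"})
    b = frozenset({"x a x", "a in f", "x b' z", "b' in f*g"})
    return b if c == a else a

start = frozenset({"x a x", "a in f", "x b z", "b in f*g"})
visited = saturate(start, toy_rewrite)
print(len(visited))  # 2: the derivation cycles between two clauses and stops
```

The cycle check is sound here because, as the paper argues, every pre-solved clause reachable via a cyclic derivation is also reachable via a cycle-free one.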
We will assume a finite set of features and infinite sets of variables and sorts. A path is a finite string of features. A path u is a prefix of a path v (written u ≺ v) if there is a non-empty path w such that v = uw. Note that ≺ is neither symmetric nor reflexive. Two paths u, v diverge (written u ∥ v) if there are features f, g with f ≠ g and possibly empty paths w, w1, w2 such that u = wfw1 ∧ v = wgw2. Clearly, ∥ is a symmetric relation.

Proposition 3.1 Given two paths u and v, then exactly one of the relations u = v, u ≺ v, v ≺ u or u ∥ v holds.

A path term (p, q, ...) is either a path variable α or a concatenation of path variables α·β. We will allow complex path terms only in divergence constraints and not in prefix or equality constraints. Hence, the set of atomic constraints is given by

φ → Ax              sort restriction
    x ≐ y           agreement
    x f1·...·fn y   subterm agreement 1
    x α y           subterm agreement 2
    p ∈ L           path restriction
    p ∥ q           divergence¹
    α ≺ β           prefix
    α ≐ β           path equality

We exclude empty paths in subterm agreement since xεy is equivalent to x ≐ y. Therefore, we require f1·...·fn ∈ F+ and L ⊆ F+. A clause is a finite set of atomic constraints denoting their conjunction. We will say that a path term α·β is contained (or used) in some clause φ if φ contains either a constraint α·β ∈ L or a constraint α·β ∥ q. Constraints of the form p ∈ L, p ∥ q, α ≺ β and α ≐ β will be called path constraints. An interpretation I is a standard first order structure, where every feature f ∈ F is interpreted as a binary, functional relation f^I and where sort symbols are interpreted as unary, disjoint predicates (hence A^I ∩ B^I = ∅ for A ≠ B). A valuation is a pair (V_X, V_P), where V_X is a standard first order valuation of the variables in X and V_P is a function V_P : P → F+.

¹We will not differentiate between p ∥ q and q ∥ p.
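The trichotomy of Proposition 3.1 (with divergence as the fourth, catch-all case) can be checked mechanically. The following sketch (our own illustration) models paths as tuples of feature names and returns exactly one of the four relations:

```python
def path_relation(u, v):
    """Classify two paths: equal, u ≺ v, v ≺ u, or divergent (Prop. 3.1)."""
    if u == v:
        return "equal"
    n = min(len(u), len(v))
    if u[:n] == v[:n]:                 # one path is a proper prefix of the other
        return "prefix" if len(u) < len(v) else "reverse-prefix"
    return "diverge"                   # paths part ways after a common prefix

print(path_relation(("f",), ("f", "g")))           # prefix
print(path_relation(("f", "g"), ("f", "h", "g")))  # diverge
```

Since the four outcomes are mutually exclusive and exhaustive, this is precisely the case split that the non-deterministic rule (PathRel) later enumerates symbolically.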
We define V_P(α·β) to be V_P(α)V_P(β). The validity of an atomic constraint in an interpretation I under a valuation (V_X, V_P) is defined as follows:

(V_X, V_P) ⊨_I Ax     :⟺ V_X(x) ∈ A^I
(V_X, V_P) ⊨_I x ≐ y  :⟺ V_X(x) = V_X(y)
(V_X, V_P) ⊨_I x p y  :⟺ V_X(x) (f1^I ∘ ... ∘ fn^I) V_X(y)
(V_X, V_P) ⊨_I x α y  :⟺ (V_X, V_P) ⊨_I x V_P(α) y
(V_X, V_P) ⊨_I p ∈ L  :⟺ V_P(p) ∈ L
(V_X, V_P) ⊨_I p δ q  :⟺ V_P(p) δ V_P(q)   for δ ∈ {∥, ≺, ≐}

where p is the path f1·...·fn and the fi^I are the interpretations of the fi in I. For a set Ξ ⊆ X we define =_Ξ to be the following relation on first order valuations: V_X =_Ξ V_X′ iff ∀x ∈ Ξ : V_X(x) = V_X′(x). Similarly, we define =_π with π ⊆ P for path valuations. Let Θ ⊆ X ∪ P be a set of variables. For a given interpretation I we say that a valuation (V_X, V_P) is a Θ-solution of a clause φ in I if there is a valuation (V_X′, V_P′) in I such that V_X =_{X∩Θ} V_X′, V_P =_{P∩Θ} V_P′ and (V_X′, V_P′) ⊨_I φ. The set of all Θ-solutions of φ in I is denoted by [φ]_I^Θ. We will call X-solutions just solutions and write [φ]_I instead of [φ]_I^X. For checking satisfiability we will use transformation rules. A rule R is Θ-sound if φ →_R γ implies [φ]_I^Θ ⊇ [γ]_I^Θ for every interpretation I. R is called Θ-preserving if φ →_R γ implies [φ]_I^Θ ⊆ [γ]_I^Θ. R is globally Θ-preserving if [φ]_I^Θ ⊆ ∪_{φ →_R γ} [γ]_I^Θ.

4 The First Phase

4.1 A Set of Rules

Recall that we have switched from the original syntax to a (two-sorted) syntax by translating constraints xLy into {xαy, α ∈ L}, where α is new. The result of the translation constitutes a special class of clauses, namely the class of prime clauses, which will be defined below. Hence, it suffices to show decidability of consistency of prime clauses. They are the input clauses for the first phase. Let φ be some clause and x, y be different variables. We say that φ binds y to x if x ≐ y ∈ φ and y occurs only once in φ. Here it is important that we consider equations as directed, i.e. we assume that x ≐ y is different from y ≐ x.
We say that φ eliminates y if φ binds y to some variable x. A clause φ is called basic if

1. x ≐ y appears in φ iff φ eliminates y,
2. for every path variable α used in φ there is at most one constraint xαy ∈ φ.

(Eq)      {α ≐ β, xαy, xβz} ∪ φ  →  {y ≐ z, xαy} ∪ φ[β ↦ α, z ↦ y]
(Pre)     {α ≺ β, xαy, xβz} ∪ φ  →  {xαy} ∪ {yβz} ∪ φ[β ↦ α·β]
(Join)    {α ∈ L, α ∈ L′} ∪ φ  →  {α ∈ (L ∩ L′)} ∪ φ    (L ≠ L′)
(Div1)    {α ∥ β′} ∪ {α·β ∥ β′} ∪ φ  →  {α ∥ β′} ∪ φ
(Div2)    {α·β ∥ α·β′} ∪ φ  →  {β ∥ β′} ∪ φ
(DClash1) {α·β ∥ α} ∪ φ  →  ⊥
(DClash2) {α ∥ α} ∪ φ  →  ⊥
(Empty)   {α ∈ ∅} ∪ φ  →  ⊥

Figure 1: Simplification rules. Note that (Pre) does not introduce a new variable.

A basic clause φ is called prime if φ does not contain an atomic constraint of the form p ∥ q, α ≺ β or α ≐ β. Every clause φ in the original Kaplan/Maxwell syntax can be translated into a prime clause γ such that φ is consistent iff γ is consistent. Now let's turn to the output clauses of the first step. A basic clause is said to be pre-solved if the following holds:

1. Ax ∈ φ and Bx ∈ φ implies A = B.
2. α ∈ L in φ and α ∈ L′ in φ implies L = L′. Furthermore, α ∈ ∅ is not in φ.
3. α·β, α ≐ β or α ≺ β are not contained in φ.
4. α ∥ β ∈ φ iff α ≠ β, xαy ∈ φ and xβz ∈ φ.

Lemma 4.1 A pre-solved clause φ is consistent iff there is a path valuation V_P with V_P ⊨ φ_p, where φ_p is the set of path constraints in φ.

Now let's turn to the rule system. As we have explained informally, the first rule adds non-deterministically relational constraints between path variables. In one step we will add the relations between one fixed variable α and all other path variables β which are used under the same node x as α. Furthermore, we will consider only the constraints α ≐ β, α ≺ β and α ∥ β and not additionally the constraint β ≺ α.
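The deterministic rules (Join) and (Empty) are simple enough to sketch directly. The following Python fragment (our own illustration; for simplicity it approximates regular languages by finite sets of path strings) intersects all restrictions on a path variable and signals bottom when an intersection becomes empty:

```python
def simplify(restrictions):
    """restrictions: dict mapping each path variable to a list of languages.
    (Join) intersects all restrictions on the same variable; (Empty) signals
    inconsistency (bottom) when an intersection is empty."""
    joined = {}
    for var, langs in restrictions.items():
        lang = set(langs[0])
        for other in langs[1:]:
            lang &= set(other)      # (Join): α ∈ L, α ∈ L'  →  α ∈ (L ∩ L')
        if not lang:                # (Empty): α ∈ ∅ is inconsistent
            return None             # bottom
        joined[var] = lang
    return joined

print(simplify({"alpha": [{"f", "fg"}, {"f", "g"}]}))  # {'alpha': {'f'}}
print(simplify({"beta": [{"f"}, {"g"}]}))              # None: reduces to bottom
```

In the actual rule system the languages are regular and the intersection is computed on their finite-automaton representations, but the control flow is the same.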
For better readability we will use pseudo-code for describing this rule (using the usual don't care/don't know distinction for non-determinism):

(PathRel)
  Choose x ∈ Vars_X(φ) (don't care)
  Choose xαy ∈ φ (don't know)
  For each xβz ∈ φ with α ≠ β and α ∥ β ∉ φ
    add α δ β with δ ∈ {≐, ≺, ∥} (don't know)

"Don't care non-determinism" means that one is free to choose an arbitrary alternative at this choice point, whereas "don't know" means that one has to consider every alternative in parallel (i.e. for every alternative of the don't care non-determinism a clause φ is equivalent to the set of all don't know alternatives that can be generated by applying the rule to φ). Note that the order of rule application is another example of don't care non-determinism in our rule system. Although we have restricted the relations δ to {≐, ≺, ∥}, this rule is globally preserving since we have non-deterministically chosen xαy. To see this let φ be a clause, I be an interpretation and (V_X, V_P) be a valuation in I with (V_X, V_P) ⊨_I φ. To find an instance of (PathRel) such that (V_X, V_P) ⊨_I γ, where γ is the result of applying this instance, we choose xαy ∈ φ such that V_P(α) is prefix-minimal in {V_P(β) | xβz ∈ φ}. Then for each xβz ∈ φ with α ≠ β and α ∥ β ∉ φ we add α δ β where V_P(α) δ V_P(β) holds. Note that δ equal to ≻ will not occur since we have chosen a path variable α whose interpretation is prefix-minimal. Therefore, the restriction δ ∈ {≐, ≺, ∥} is satisfied. We have defined (PathRel) in a very special way. The reason for this is that only by using this special definition can we maintain the condition that concatenation of path variables is restricted to binary concatenation. E.g. assume that we had added both β1 ≺ α and α ≺ β2 to a clause γ. Then first splitting up the variable α into β1·α′ and then β2 into α·β2′ would result in a substitution of β2 in γ by β1·α′·β2′. By the definition of (PathRel) we have ensured that this does not occur.
The second non-deterministic rule is used in the decomposition of regular languages. For decomposition we have the following rules:

(DecClash)   {α·β ∈ L} ∪ φ  →  ⊥    if {w ∈ L | |w| > 1} = ∅

(LangDec_Λ)  {α·β ∈ L} ∪ φ  →  {α ∈ P} ∪ {β ∈ S} ∪ φ    with P·S ⊆ L,

where P, S, L ⊆ F+ and Λ is a finite set of regular languages with L, P, S ∈ Λ. L must contain a word w with |w| > 1. The clash rule is needed since we require regular languages not to contain the empty path. The remaining rules are listed in Figure 1. We use Λ in (LangDec_Λ) as a global restriction, i.e. for every Λ we get a different rule (LangDec_Λ) (and hence a different rule system R_Λ). This is done because the rule system is quasi-terminating. By restricting (LangDec_Λ) we can guarantee that only finitely many regular languages are produced. For (LangDec_Λ) to be globally preserving we need to find a suitable pair P, S in Λ for every possible valuation of α and β. Therefore, we require Λ to satisfy

∀L ∈ Λ, ∀w1, w2 ≠ ε : [w1w2 ∈ L ⟹ ∃P, S ∈ Λ : (P·S ⊆ L ∧ w1 ∈ P ∧ w2 ∈ S)].

We will call Λ closed under decomposition if it satisfies this condition. Additionally we have to ensure that L ∈ Λ for every L that is contained in some clause φ. We will call such a set Λ φ-closed. Surely, we will not find a finite Λ that is closed under decomposition and φ-closed for arbitrary φ. But the next lemma states some weaker condition that suffices. We say that γ is a (φ, R_Λ)-derivative if γ is derivable from φ by using only rules from R_Λ. If R_Λ is clear from the context, we will just say that γ is a φ-derivative.

Lemma 4.2
1. If Λ is φ-closed and closed under intersection, then Λ is γ-closed for all (φ, R_Λ)-derivatives γ.
2. For every prime clause φ there is a finite Λ such that Λ is φ-closed and closed under intersection and decomposition.

The proof of this lemma (containing the construction of the set Λ) can be found in the appendix.
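The intuition behind the finitely many decompositions can be made concrete with a deterministic automaton (a sketch of our own, anticipating the dec(A) construction in the appendix): every split w1w2 of an accepted word passes through some intermediate state q, which witnesses a decomposition P·S ⊆ L with P the language from the start state to q and S the language from q to a final state.

```python
# DFA for f*g over the features {f, g}: state 0 loops on f, accepts on g.
delta = {(0, "f"): 0, (0, "g"): 1}
start, finals = 0, {1}

def run(state, word):
    """Run the DFA from `state` over `word`; None if a transition is missing."""
    for feat in word:
        state = delta.get((state, feat))
        if state is None:
            return None
    return state

def decompose(w1, w2):
    """Return the midpoint state witnessing w1 ∈ P and w2 ∈ S for w1w2 ∈ L."""
    q = run(start, w1)
    assert q is not None and run(q, w2) in finals, "w1w2 must be in L"
    return q

print(decompose("ff", "fg"))  # 0: here P = f* and S = f*g, with P·S ⊆ f*g
```

Since a DFA has finitely many states, only finitely many such languages P and S arise, which is exactly why a finite Λ closed under decomposition exists.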
4.2 Completeness and Quasi-Termination

The rule system serves as an algorithm to transform a prime clause into an equivalent set of pre-solved clauses. The rules are applied in arbitrary order until a pre-solved clause has been derived. If one of the non-deterministic rules is applied, a clause is substituted by a whole set of clauses, one for each of the don't know alternatives. Since the rule system is quasi-terminating, we may encounter cycles during the application of the rules. In this case we skip the corresponding alternative, since every pre-solved clause that can be produced via a cyclic derivation can also be produced via a derivation that does not contain a cycle.

Theorem 4.3 Let φ be a prime clause. If Λ is φ-closed, closed under intersection and decomposition, then [φ]_I = ∪_{γ∈Φ} [γ]_I for every interpretation I, where Φ is the set of pre-solved (φ, R_Λ)-derivatives. The set Φ is finite and effectively computable.

To prove this theorem we have to show that the rule system is sound and complete. Sound means that we do not add new solutions during the processing, whereas complete means that we find all solutions in the set of pre-solved derivatives. For the completeness it normally suffices to show that (1) every rule preserves (or globally preserves) the initial solutions and (2) the pre-solved clauses are exactly the R_Λ-irreducible clauses (i.e. if a clause is not pre-solved, then one rule applies). But in our case this is not sufficient as the rule system is quasi-terminating. A prime clause φ may have a solution V_X which is a solution of all (φ, R_Λ)-derivatives in some cyclic derivation, but cannot be found in any pre-solved (φ, R_Λ)-derivative. We have to show that this cannot happen. Since this part of the proof is unusual, we will explain the main idea (see the appendix for a more detailed outline of the proofs). Let φ be some (consistent) prime clause and let V_X ∈ [φ]_I for some I.
Then there exists a path valuation V_P such that (V_X, V_P) ⊨_I φ. We will find a pre-solved φ-derivative that has V_X as a solution by imposing an additional control that depends on V_P. This control will guarantee (1) finiteness of derivations, (2) that each derivation ends with a pre-solved clause, (3) that the initial solution is a solution of every clause that is derivable under this control. Since the (Pre) rule does not preserve the initial path valuation V_P (recall that the variable β is substituted by the term α·β), we have to change the path valuation V_P every time (Pre) is applied. It is important to notice that this control is only used for proof purposes and is not part of the algorithm. For the algorithm it suffices to encounter all pre-solved φ-derivatives. To understand this control, we will compare derivations in our syntax to derivations in standard feature logic. Recall that we have a two-level interpretation. A constraint xαy is valid under V_X and V_P if xV_P(α)y is valid under V_X. Hence, for each clause φ and each valuation V_X, V_P with φ valid under V_X and V_P there is a clause φ_{V_P} in standard feature logic syntax (not containing functional uncertainty) such that φ_{V_P} is valid under V_X. E.g. for the clause {xαx, α ∈ f, xβz, β ∈ f*g} and a path valuation V_P with V_P(α) = f and V_P(β) = g, the clause φ_{V_P} is {xfx, xgz}. The control we have mentioned requires (by and large) that only those rewrite rules will be applied that are compatible with the clause φ_{V_P} and thus preserve V_X. If one of the rules (Eq) or (Pre) is applied, we also have to rewrite φ_{V_P}. Taking the above example, we are only allowed to add α ∥ β to φ (using (PathRel)), since φ_{V_P} is already in pre-solved form. Now let's vary the example and let V_P be a path valuation with V_P(α) = f and V_P(β) = ffg. Then we have to add α ≺ β in the first step, since this relation holds between α and β. The next step is to apply (Pre) on α ≺ β. Here we have to rewrite both φ and φ_{V_P}.
Hence, the new clauses φ1 and φ_{V_P′} are {xαx, α ∈ f, xβz, α·β ∈ f*g} and {xfx, xfgz} respectively. Note that the constraint xffgz has been reduced to xfgz by the application of (Pre). Since infinite derivations must infinitely often use (Pre), this control guarantees that we find a pre-solved clause that has V_X as a solution.

5 The Second Phase

In the second phase we have to check consistency of pre-solved clauses. As we have mentioned, a pre-solved clause is consistent if we find some appropriate path valuation. This means that we have to check the consistency of divergence constraints of the form α1 ∥ α2 together with path restrictions α1 ∈ L1 and α2 ∈ L2. A constraint α1 ∥ α2 is valid under some valuation V_P if there are (possibly empty) words w, w1, w2 and features f ≠ g such that V_P(α1) = wfw1 and V_P(α2) = wgw2. This definition could directly be used for a rewrite rule that solves a single divergence constraint, which gives us

{α1 ∥ α2} ∪ φ  →  {α1′ ∈ fF*, α2′ ∈ gF*} ∪ φ′    f ≠ g; β, α1′, α2′ new,

where φ′ = φ[α1 ↦ β·α1′, α2 ↦ β·α2′]. By the application of this rule we will get constraints of the form β·α1′ ∈ L1 and β·α2′ ∈ L2. Decomposing these restriction constraints and joining the corresponding path restrictions for α1′ and α2′ will result in {β ∈ (P1 ∩ P2), α1′ ∈ (fF* ∩ S1), α2′ ∈ (gF* ∩ S2)} with P1·S1 ⊆ L1 and P2·S2 ⊆ L2, which completes the consistency check. Additionally, one has to consider the effects of introducing the path terms β·αi′. The main part of this task is to resolve constraints of the form β·α1′ ∥ α. There are two possibilities: either α also has β as a prefix, in which case we have to add β ≺ α; or β is not a prefix of α, which means that we have to add α ∥ β. After doing this, the introduced prefix constraints have to be evaluated using (Pre). (In the appendix we present a solution which is more appropriate for proving termination.)

6 Kaplan and Maxwell's Method

We are now able to compare our method with the one used by Kaplan and Maxwell.
In our method, the non-deterministic addition of path relations and the evaluation of these relations are done at different times. The evaluation of the introduced constraints α ≐ β and α ≺ β is done after (PathRel) in the first phase of the algorithm, whereas the evaluation of the divergence constraints is done in a separate second phase. In Kaplan and Maxwell's algorithm all these steps are combined into one single rule. Roughly, they substitute a clause {xL1y, xL2z} ∪ φ non-deterministically by one of the following clauses:²

{ x(L1∩L2)y, y ≐ z } ∪ φ
{ x(L1∩P)y, ySz } ∪ φ        P·S ⊆ L2
{ x(L2∩P)z, zSy } ∪ φ        P·S ⊆ L1
{ x(P1∩P2)u, u(f·S1)y, u(g·S2)z } ∪ φ
    with P1·f·S1 ⊆ L1, P2·g·S2 ⊆ L2, f ≠ g, u new

Recall that {xL1y, xL2z} is expressed in our syntax by the clause γ = {xαy, α ∈ L1, xβz, β ∈ L2}, which is the example we have used on page 2. The first three cases correspond exactly to the result of the derivations that have been described for γ2, γ3 and γ4. By and large, the last case is achieved if we first add α ∥ β to γ and then turn over to the second phase as described in the last section. The problem with Kaplan/Maxwell's algorithm is that one has to introduce a new variable u in the last case, since there is no other possibility to express divergence. If their rule system is applied to a cyclic description, it will not terminate as the last part introduces new variables. Hence it cannot be used for an algorithm in case of cyclic descriptions. The delaying of the evaluation of divergence constraints may not only be useful when applied to cyclic feature descriptions.

²This is not the way their algorithm was originally described in [5] as they use a slightly different syntax. Furthermore, they don't use non-deterministic rules, but use a single rule that produces a disjunction. However, the way we describe their method seems to be more appropriate in comparing both approaches.
As Kaplan and Maxwell pointed out, it is in general useful to postpone the consistency check for functional uncertainty. With the algorithm we have described it is also possible to delay single parts of the evaluation of constraints containing functional uncertainty.

Appendix

Proof of Lemma 4.2. The first claim is easy to prove. For the second claim let {L1, ..., Ln} ⊆ P(F+) be the set of regular languages used in φ and let A_i = (Q_{A_i}, i_{A_i}, σ_{A_i}, Fin_{A_i}) be finite, deterministic automata such that A_i recognizes L_i. For each A_i we define dec(A_i) to be the set dec(A_i) = {L_pq | p, q ∈ Q_{A_i}}, where L_pq = {w ∈ F+ | σ_{A_i}(p, w) = q}. It is easy to show that dec(A_i) is a set of regular languages that contains L_i and is closed under decomposition. Hence, the set Λ0 = ∪_{i=1}^n dec(A_i) contains each L_i and is closed under decomposition. Let Λ be the least set that contains Λ0 and is closed under intersection. Then Λ is finite and φ-closed, since it contains each L_i. We will prove that Λ is also closed under decomposition. Given some L ∈ Λ and a word w = w1w2 ∈ L, we have to find an appropriate decomposition P, S in Λ. Since each L in Λ can be written as a finite intersection L = ∩_{k=1}^m L_{i_k} where the L_{i_k} are in Λ0, we know that w = w1w2 is in L_{i_k} for k = 1..m. As Λ0 is closed under decomposition, there are languages P_{i_k} and S_{i_k} for k = 1..m with w1 ∈ P_{i_k}, w2 ∈ S_{i_k} and P_{i_k}·S_{i_k} ⊆ L_{i_k}. Let P = ∩_{k=1}^m P_{i_k} and S = ∩_{k=1}^m S_{i_k}. Clearly, w1 ∈ P, w2 ∈ S and P·S ⊆ L. Furthermore, P, S ∈ Λ as Λ is closed under intersection. This implies that P, S is an appropriate decomposition for w1w2. □

A.1 Phase I: Soundness, Completeness and Quasi-Termination

Proposition A.1 The rule (PathRel) is X ∪ P-sound and globally X ∪ P-preserving. If Λ is closed under decomposition, then (LangDec_Λ) is X ∪ P-sound and globally X ∪ P-preserving. The (Pre) rule is X-sound and X-preserving. All other rules are X ∪ P-sound and X ∪ P-preserving.
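The intersection closure used in this construction is the standard product construction on deterministic automata. As a self-contained sketch (our own illustration with hypothetical toy automata, not code from the paper), the following builds the product of two DFAs, each given as a (transitions, start, finals) triple, and decides membership in L1 ∩ L2:

```python
def product(d1, d2):
    """Product DFA recognizing the intersection of two DFA languages."""
    (t1, s1, f1), (t2, s2, f2) = d1, d2
    delta = {((p, q), a): (r1, r2)
             for (p, a), r1 in t1.items()
             for (q, b), r2 in t2.items() if a == b}
    finals = {(p, q) for p in f1 for q in f2}
    return delta, (s1, s2), finals

def accepts(dfa, word):
    delta, state, finals = dfa
    for a in word:
        state = delta.get((state, a))
        if state is None:
            return False
    return state in finals

ffg = ({(0, "f"): 1, (1, "f"): 2, (2, "g"): 3}, 0, {3})   # just the word ffg
fstar_g = ({(0, "f"): 0, (0, "g"): 1}, 0, {1})            # the language f*g
print(accepts(product(ffg, fstar_g), "ffg"))  # True: ffg is in ffg ∩ f*g
```

Since the product of automata with finitely many states again has finitely many states, closing a finite Λ0 under intersection keeps Λ finite, which is the point of the lemma.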
Next we will prove some syntactic properties of the clauses derivable by the rule system. For the rest of the paper we will call clauses that are derivable from prime clauses admissible.

Proposition A.2 Every admissible clause is basic. If α ≺ β, α ≐ β or α ∥ β is contained in some admissible clause φ, then there is a variable x such that xαy and xβz is in φ.

Note that (by this proposition) (Pre) (resp. (Eq)) can always be applied if a constraint α ≺ β (resp. α ≐ β) is contained in some admissible clause. The next lemma will show that different applications of (Pre) or (Eq) will not interact. This means the application of one of these rules to some prefix or path equality constraint will not change any other prefix or path equality constraint contained in the same clause. This is a direct consequence of the way (PathRel) was defined.

Lemma A.3 Given two admissible clauses γ, γ′ with γ →_r γ′ and r different from (PathRel). Then α ≐ β ∈ γ′ (resp. α ≺ β ∈ γ′) implies α ≐ β ∈ γ (resp. α ≺ β ∈ γ). Furthermore, if α·β is contained in γ′, then either α·β or α ≺ β is contained in γ.

Note that this lemma implies that new path equality or prefix constraints are only introduced by (PathRel). We can derive from this lemma some syntactic properties of admissible clauses which are needed for proving completeness and quasi-termination.

Lemma A.4 If φ is an admissible clause, then
1. If α ≺ β is contained in φ, then there is no other prefix or equality constraint in φ involving β. Furthermore, neither β·β′ nor β′·β is contained in φ.
2. If α·β ∥ β′ is in φ, then either β′ equals α or φ contains a constraint of the form α ∥ β′, α ≐ β′ or α ≺ β′.

The first property will guarantee that concatenation does not occur in prefix or equality constraints and that the length of path concatenation is restricted to 2. The second property ensures that a constraint α·β ∥ β′ is always reducible.
Theorem A.5 For every finite Λ the rule system R_Λ is quasi-terminating.

Proof. The rule system produces only finitely many different clauses since the rules introduce no additional variables or sort symbols and the set of used languages is finite. Additionally, the length of concatenation is restricted to 2. □

Lemma A.6 There are no infinite derivations using only finitely many instances of (Pre).

Since the rule system is quasi-terminating, the completeness proof consists of two parts. In the first part we will prove that pre-solved clauses are just the irreducible clauses. In the second part we will show that one finds for each solution V_X of a prime clause φ a pre-solved φ-derivative γ such that V_X is also a solution of γ.

Theorem A.7 (Completeness I) Given an admissible clause φ ≠ ⊥ such that φ is not in pre-solved form. If Λ is φ-closed and closed under decomposition, then φ is R_Λ-reducible.

Theorem A.8 (Completeness II) For every prime clause φ and for every Λ that is φ-closed, closed under decomposition and intersection we have

[φ]_I ⊆ ∪_{γ ∈ pre-solved(φ, R_Λ)} [γ]_I,

where pre-solved(φ, R_Λ) is the set of pre-solved (φ, R_Λ)-derivatives.

Proof (Sketch) We have to show that for each prime clause φ and each V_X, V_P, I with (V_X, V_P) ⊨_I φ there is a pre-solved (φ, R_Λ)-derivative γ such that V_X ∈ [γ]_I. We will do this by controlling the derivation using the valuation (V_X, V_P). The control will guarantee finiteness of derivations and will maintain the first completeness property, namely that the irreducible clauses are exactly the pre-solved clauses. We allow only those instances of the non-deterministic rules (PathRel) and (LangDec_Λ) which preserve exactly the valuation (V_X, V_P). That means if (V_X, V_P) ⊨_I φ and φ →_r γ for one of these rules, then (V_X, V_P) ⊨_I γ must hold. Note that the control depends only on V_P. E.g.
for the clause φ = {xαy, α ∈ L1, xβz, β ∈ L2} and arbitrary I, V_X this means that if V_P(α) = f, V_P(β) = g and (V_X, V_P) ⊨_I φ, the rule (PathRel) can transform φ only into {α ∥ β} ∪ φ. If V_P satisfies V_P(α) ∥ V_P(β) for all α different from β with xαy ∈ φ and xβz ∈ φ, we cannot add any prefix constraint using this control. Hence, (Pre) cannot be applied, which implies (by Lemma A.6) that in this case there is no infinite controlled derivation. We will call such path valuations prefix-free with respect to φ. If V_P is not prefix-free, then (Pre) will be applied during the derivations. In this case we have to change the path valuation, since (Pre) is not P-preserving. If (V_X, V_P) ⊨_I φ = {α ≺ β} ∪ φ′ and we apply (Pre) on α ≺ β yielding γ, then the valuation V_P′ with V_P′(β) = w, where V_P(β) = V_P(α)w, and V_P′(δ) = V_P(δ) for δ ≠ β will satisfy (V_X, V_P′) ⊨_I γ. We will use V_P′ for controlling the further derivations. If we change the path valuation in this way, there will again be only finite derivations. To see this, note that every time (Pre) is applied and the path valuation is changed, the valuation of one variable is shortened by a non-empty path. As the number of variables used in clauses does not increase, this shortening can only be done finitely many times. This implies that (Pre) can only finitely often be applied under this control. Hence (by Lemma A.6), there are again only finite controlled derivations. □
For the reformulation we use a special property of pre-solved clauses, namely that α ∦ β is in a pre-solved clause φ iff xαy and xβz are in φ. Hence, if α ∦ β and β ∦ δ are in φ, then α ∦ δ is also in φ. This implies that we can write φ as ∦(A1) ∪ ... ∪ ∦(An) ∪ ψ, where ∦(A) is syntactic sugar for ∦(A) = {α ∦ α' | α ≠ α' ∧ α, α' ∈ A}, A1, ..., An are disjoint sets of path variables and ψ does not contain divergence constraints. Note that for every Ai = {α1, ..., αn} there are variables x, y1, ..., yn such that {xα1y1, ..., xαnyn} ⊆ φ.

Now given such a constraint ∦(A), we assume that a whole set of path variables A1 ⊆ A diverges with the same prefix β. That means we can replace ∦(A1) ⊆ ∦(A) by A1 ≐ β·A'1 ∪ ∦_0(A'1), where β is new, A'1 = {α'1, ..., α'n} is a disjoint copy of A1 = {α1, ..., αn} and A1 ≐ β·A'1 is an abbreviation for the clause {α1 ≐ β·α'1, ..., αn ≐ β·α'n}. ∦_0(A) is defined similarly to ∦(A). Assuming additionally that the common prefix β is maximal implies that β ∦ α holds for α ∈ (A − A1). If we also consider the effects of A1 ≐ β·A'1 on the subterm agreements in φ that involve variables of A1, then we arrive at the following rule:

    xA1Y1 ∪ ∦(A) ∪ ψ
    ─────────────────────────────────────────────── (Red1)
    {xβz} ∪ zA'1Y1 ∪ ∦_0(A'1) ∪ ∦({β} ∪ A2) ∪ ψ'

where ψ' = ψ[α1 ← β·α'1, ..., αn ← β·α'n], A1 ∪ A2 = A, |A1| > 1 and z, β new. A'1 is a disjoint copy of A1. xA1Y1 is short for {xα1y1, ..., xαnyn}. ψ may not contain constraints of the form δ·δ' ∈ L.

Note that we have avoided constraints of the form α·β ∦ β'. The rule (Red2), which rewrites ∦(A) ∪ ψ into ∦_0(A) ∪ ψ, and the rule (Solv), which resolves direct divergence constraints by requiring distinct initial features f_α ≠ f_{α'} for α ≠ α', together with the rules (LangDecA), (Join) and (Empty), complete the rule system R^div. (Red2) is needed as path variables always denote non-empty paths. We will view (Red1) and (Red2) as one single rule (Reduce).
A clause φ is said to be solved if (1) α·β ∈ L and α ∈ ∅ are not in φ; (2) α ∈ L1 in φ and α ∈ L2 in φ implies L1 = L2; (3) φ does not contain constraints of the form α ∦ β, α ∦_0 β, α ≺ β, or α ≐ β; and (4) for every {xαy, xβz} ⊆ φ with α ≠ β there are features f ≠ g with {α ∈ fL1, β ∈ gL2} ⊆ φ.

It is easy to see that every solved clause is consistent. Note that every solved clause is also prime.

Lemma A.9 The rules (Reduce) = (Red1) + (Red2) and (Solv) are X-sound and globally X-preserving. Furthermore, R^solv is terminating.

Lemma A.10 Let φ be a pre-solved clause. If A is e-closed, closed under intersection and decomposition, then a (φ, R^solv)-derivative different from ⊥ is irreducible if and only if it is solved.

Finally we can combine both phases of the algorithm.

Theorem A.11 Consistency of prime clauses is decidable.
1993
A LOGICAL SEMANTICS FOR NONMONOTONIC SORTS

Mark A. Young & Bill Rounds
Artificial Intelligence Laboratory
The University of Michigan
1101 Beal Ave.
Ann Arbor, MI 48109
marky, rounds@engin.umich.edu

Abstract

Suppose we have a feature system, and we wish to add default values in a well-defined way. We might start with Kasper-Rounds logic, and use Reiter's example to form it into a default logic. Giving a node a default value would be equivalent to saying "if it is consistent for this node to have that value, then it does." Then we could use default theories to describe feature structures. The particular feature structure described would be the structure that supports the extension of the default theory. This is, in effect, what the theory of nonmonotonic sorts gives you. This paper describes how that theory derives from what is described above.

The original presentation of nonmonotonic sorts provided only a description of their operation and an informal description of their meaning. In this paper, we present a logical basis for NSs and nonmonotonically sorted feature structures (NSFSs). NSFSs are shown to be equivalent to default theories of default logic (Reiter 1980). In particular, we show how nonmonotonic sort unification is equivalent to finding the smallest default theory that describes both NSFSs; and also how taking a solution for a NSFS is the same as finding an extension for that theory.

INTRODUCTION

There have been many suggestions for incorporating defaults into unification-based grammar formalisms (Bouma 1990; Bouma 1992; Carpenter 1991; Kaplan 1987; Russell et al. 1992; Shieber 1986; Shieber 1987). Each of these proposes a non-commutative, non-associative default unification operation that combines one structure representing strict information with another representing default information. When presented with a set of structures, the result depends on the order in which the structures are combined.
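This order sensitivity is easy to see in a toy simulation (our own construction, not code from any of the systems cited above), where asymmetric default combination is modeled as dictionary overriding:

```python
# Hypothetical illustration: model feature structures as flat dicts and
# "default unification" as the asymmetric rule "strict side wins on conflict".

def default_unify(strict, default):
    """Defaults fill gaps but never override strict information."""
    out = dict(default)
    out.update(strict)      # every conflict is resolved in favour of `strict`
    return out

s1 = {'suffix': '+en'}                      # one structure
s2 = {'suffix': '+t', 'prefix': 'ge+'}      # another structure

# Which structure counts as "the default" depends on combination order:
left_first = default_unify(s1, s2)    # treat s2 as default: suffix stays '+en'
right_first = default_unify(s2, s1)   # treat s1 as default: suffix becomes '+t'
```

The two orders disagree on the suffix, which is exactly the order dependence that the method of nonmonotonic sorts is designed to avoid.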
This runs very much against the unification tradition, in which any set has a unique most general satisfier (if a satisfier exists at all).

A method that is free of these ordering effects was presented in (Young 1992). The method of nonmonotonic sorts (NSs) allows default labels to be assigned at any time, and used only in the absence of conflicting information. NSs replace the more traditional labels on feature structures to give nonmonotonically sorted feature structures (NSFSs). These structures can be combined by an associative and commutative unification operation. FSs are rederived from NSFSs by taking a solution--an operation defined in terms of information present in the NSFS.

FEATURE SYSTEMS

Unification-based grammar formalisms use formal objects called feature structures to encode linguistic information. We use a variant of the standard definition. Each structure has a sort (drawn from a finite set S), and a (possibly empty) set of attributes (drawn from a finite set F).

Definition 1 A feature structure is a tuple (Q, r, δ, Θ) where
• Q is a finite set of nodes,
• r ∈ Q is the root node,
• δ : Q × F → Q is a partial feature value function that gives the edges and their labels, and
• Θ : Q → S is a sorting function that gives the labels of the nodes.
This structure must be connected.

It is not unusual to require that these structures also be acyclic. For some systems Θ is defined only for sink nodes (PATR-II, for example). Fig. 1 shows a standard textual representation for a FS.

We sometimes want to refer to substructures of a FS. If A is a feature structure as described above, we write A/f for the feature structure rooted at δ(r, f). This feature structure is defined by Q' ⊆ Q, the set of nodes that can be reached from δ(r, f). We will use the letter p (possibly subscripted) to represent paths (that is, finite sequences from F*).
We will also extend δ to have paths in its second position, with the notion of iterated application of δ.

<subj agr person> isa 3rd
<subj agr number> isa singular
<subj agr> = <pred agr>
<pred actor> = <subj>
<pred rep> isa sleep
<pred tense> isa present

Figure 1: Textual Feature Structure: "Uther sleeps."

TRUE
FALSE
a where a ∈ S
p1 ≐ p2 where each pi ∈ F*
f : φ where f ∈ F and φ ∈ FML
φ ∧ ψ
φ ∨ ψ

Figure 2: SFML: the domain of sorted logical formulas.

We will assume that there is a partial order, ⊑, defined on S. This ordering is such that the greatest lower bound of any two sorts is unique, if it exists. In other words, (S ∪ {⊥}, ⊑) is a meet-semilattice (where ⊥ represents inconsistency or failure). This allows us to define the most general unifier of two sorts as their greatest lower bound, which we write as a ∧_S b. We also assume that there is a most general sort, ⊤, called top. The structure (S, ⊑) is called the sort hierarchy.

KASPER-ROUNDS LOGIC

(Kasper 1988) provides a logic for describing feature structures. Fig. 2 shows the domain of these logical formulas. We use the standard notion of satisfaction. Let A = (Q, r, δ, Θ).

1. A ⊨ TRUE always;
2. A ⊨ FALSE never;
3. A ⊨ a ⟺ Θ(r) ⊑ a;
4. A ⊨ p1 ≐ p2 ⟺ δ(r, p1) = δ(r, p2);
5. A ⊨ f : φ ⟺ A/f is defined and A/f ⊨ φ;
6. A ⊨ φ ∧ ψ ⟺ A ⊨ φ and A ⊨ ψ;
7. A ⊨ φ ∨ ψ ⟺ A ⊨ φ or A ⊨ ψ.

Note that item 3 is different from Kasper's original formulation. Kasper was working with a flat sort hierarchy and a version of FSs that allowed sorts only on sink nodes. The revised version allows for order-sorted hierarchies and internal sorted nodes.

NONMONOTONIC SORTS

Figure 3 shows a lexical inheritance hierarchy for a subset of German verbs.
The hierarchy specifies strict (isa) and default (default) values for various suffixes.

VERB template
  <past tense suffix> default +te
  <past participle prefix> isa ge+
  <past participle suffix> default +t

spiel lex VERB

MIDDLE-VERB template VERB
  <past participle suffix> isa +en

mahl lex MIDDLE-VERB

STRONG-VERB template MIDDLE-VERB
  <past tense suffix> isa 0

zwing lex STRONG-VERB
  <past tense stem> isa zwang
  <past participle stem> isa zwung

Figure 3: Example Lexicon with Defaults

If we ignore the difference between strict and default values, we find that the information specified for the past participle of mahl is inconsistent. The MIDDLE-VERB template gives +en as the suffix, while VERB gives +t. The declaration of the latter as a default tells the system that it should be dropped in favour of the former. The method of nonmonotonic sorts formalizes this notion of separating strict from default information.

Definition 2 A nonmonotonic sort is a pair (s, Δ) where s ∈ S, and Δ ⊆ S such that for each d ∈ Δ, d ⊏ s.

The first element, s, represents the strict information. The default sorts are gathered together in Δ. We write N for the set of nonmonotonic sorts. Given a pair of nonmonotonic sorts, we can unify them to get a third NS that represents their combined information.

Definition 3 The nonmonotonic sort unifier of nonmonotonic sorts (s1, Δ1) and (s2, Δ2) is the nonmonotonic sort (s, Δ) where
• s = s1 ∧_S s2, and
• Δ = {d ∧_S s | d ∈ Δ1 ∪ Δ2 ∧ (d ∧_S s) ⊏ s}.
The nonmonotonic sort unifier is undefined if s1 ∧_S s2 is undefined. We write n1 ∧_N n2 for the NS unifier of n1 and n2.

The method strengthens consistent defaults while eliminating redundant and inconsistent ones. It should be clear from this definition that NS unification is both commutative and associative. Thus we may speak of the NS unifier of a set of NSs, without regard to the order those NSs appear.
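Definition 3 can be sketched directly in code. The following is a minimal illustration, assuming a toy sort hierarchy encoded as an explicit greatest-lower-bound table; the sort names and helper functions are ours, not the paper's:

```python
# Toy sort hierarchy: glb of two sorts; missing pairs mean the glb is undefined.
GLB = {('T', '+t'): '+t', ('T', '+en'): '+en', ('T', 'T'): 'T',
       ('+t', '+t'): '+t', ('+en', '+en'): '+en'}

def glb(a, b):
    """Most general unifier of two sorts, or None if undefined."""
    return GLB.get((a, b)) or GLB.get((b, a))

def strictly_below(d, s):
    """d is strictly more specific than s."""
    return d != s and glb(d, s) == d

def ns_unify(n1, n2):
    """Nonmonotonic sort unification, in the manner of Definition 3."""
    (s1, d1), (s2, d2) = n1, n2
    s = glb(s1, s2)
    if s is None:
        return None                       # strict parts clash: failure
    # strengthen each default against s; drop inconsistent or redundant ones
    delta = set()
    for d in d1 | d2:
        ds = glb(d, s)
        if ds is not None and strictly_below(ds, s):
            delta.add(ds)
    return (s, delta)
```

On the mahl example, `ns_unify(('T', {'+t'}), ('+en', set()))` drops the inconsistent default and returns `('+en', set())`, independently of argument order.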
Looking back to our German verbs example, the past participle suffix in VERB is (⊤, {+t}), while that of MIDDLE-VERB is (+en, {}). The lexical entry for mahl gets their nonmonotonic sort unifier, which is (+en, {}). If +t ∧_S +en had been defined, and equal to, say, +ten, then the NS unifier of (⊤, {+t}) and (+en, {}) would have been (+en, {+ten}).

Once we have nonmonotonic sorts, we can create nonmonotonically sorted feature structures (NSFSs) by replacing the function Θ : Q → S by a function Ω : Q → N. The nodes of the graph are thus labeled by NSs instead of the usual sorts. NSFSs may be unified by the same procedures as before, only replacing sort unification at the nodes with nonmonotonic sort unification. NSFS unification, written with the symbol ⊓_N, is associative and commutative.

NSFSs allow us to carry around default sorts, but have so far given us no way to apply them. When we are done collecting information, we will want to return to the original system of FSs, using all and only the applicable defaults. To do that, we introduce the notions of explanation and solution.

Definition 4 A sort t is said to be explained by a nonmonotonic sort (s, Δ) if there is a D ⊆ Δ such that t = s ∧_S (∧_S D). If t is a maximally specific explained sort, then t is called a solution of n.

The solutions for (+en, {}) and (⊤, {+t}) are +en and +t respectively. The latter NS also explains ⊤. Note that, while D is maximal, it's not necessarily the case that D = Δ. If we have mutually inconsistent defaults in Δ, then we will have more than one maximal consistent set of defaults, and thus more than one solution. On the other hand, strict information can eliminate defaults during unification. That means that a particular template can inherit conflicting defaults and still have a unique solution--provided that enough strict information is given to disambiguate.

NSFS solutions are defined in much the same way as NS solutions.
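Continuing the sketch, solutions in the sense of Definition 4 can be found by brute-force enumeration of subsets of the default set. The hierarchy below is again a toy assumption of ours, with two mutually inconsistent defaults a and b:

```python
from itertools import combinations

# Toy hierarchy in which the defaults 'a' and 'b' are mutually inconsistent.
GLB = {('T', 'a'): 'a', ('T', 'b'): 'b', ('T', 'T'): 'T',
       ('a', 'a'): 'a', ('b', 'b'): 'b'}

def glb(x, y):
    return GLB.get((x, y)) or GLB.get((y, x))

def meet_all(s, defaults):
    """Meet s with every sort in `defaults`; None once any step is undefined."""
    for d in defaults:
        s = None if s is None else glb(s, d)
    return s

def ns_solutions(ns):
    """All maximally specific explained sorts of the NS (s, D)."""
    s, D = ns
    explained = set()
    for k in range(len(D) + 1):
        for subset in combinations(sorted(D), k):
            t = meet_all(s, subset)
            if t is not None:
                explained.add(t)
    # a solution is an explained sort with no strictly more specific rival
    return {t for t in explained
            if not any(u != t and glb(u, t) == u for u in explained)}
```

Here `ns_solutions(('T', {'a', 'b'}))` yields both a and b, while adding strict information, as in `ns_solutions(('a', {'b'}))`, disambiguates to a single solution.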
Definition 5 A FS (Q, r, δ, Θ) is said to be explained by a NSFS (Q, r, δ, Ω) if for each node q ∈ Q we have Ω(q) explains Θ(q). If A is a maximally specific explained FS, then A is called a solution.

If we look again at our German verbs example, we can see that the solution we get for mahl is the FS that we want. The inconsistent default suffix +t has been eliminated by the strict +en, and the sole remaining default must be applied.

For the generic way we have defined feature structures, a NSFS solution can be obtained simply by taking NS solutions at each node. More restricted versions of FSs may require more care. For instance, if sorts are not allowed on internal nodes, then defining an attribute for a node will eliminate any default sorts assigned to that node. Another example where care must be taken is with typed feature structures (Carpenter 1992). Here the application of a default at one node can add strict information at another (possibly making a default at the other node inconsistent). The definition of NSFS solution handles both of these cases (and others) by requiring that the solution be a FS as the original system defines them. In both of these cases, however, the work can be (at least partially) delegated to the unification routine (in the former by allowing labels with only defaults to be removed when attributes are defined, and in the latter by propagating type restrictions on strict sorts).

What is done in other systems in one step has been here broken into two steps--gathering information and taking a solution. It is important that the second step be carried out appropriately, since it re-introduces the nonmonotonicity that we've taken out of the first step. For a lexicon, templates exist in order to organize information about words. Thus it is appropriate to take the solution of a lexical entry (which corresponds to a word) but not of a higher template (which does not).
If the lexicon were queried for the lexical entry for mahl, then, it would collect the information from all appropriate templates using NSFS unification, and return the solution of that NSFS as the result.

DEFAULT LOGIC

The semantics for nonmonotonic sorts is motivated by default logic (Reiter 1980). What we want a default sort to mean is: "if it is consistent for this node to have that sort, then it does." But where Reiter based his DL on a first order language, we want to base ours on Kasper-Rounds logic. This will require some minor alterations to Reiter's formalism.

A default theory is a pair (D, W) where D is a set of default inferences and W is a set of sentences from the underlying logic. The default inferences are triples, written in the form α : M β / γ. Each of the Greek letters here represents a wff from the logic. The meaning of the default inference is that if α is believed and it is consistent to assume β, then γ can be believed. Given a default theory (D, W), we are interested in knowing what we can believe. Such a set of beliefs, called an extension, is a closure of W under the usual rules of inference combined with the default rules of inference given in D. An extension E is a minimal closed set containing W and such that if α : M β / γ is a default, and if α ∈ E and β is consistent with E then γ ∈ E (that is, if we believe α and β is consistent with what we believe, then we also believe γ).

Reiter can test a formula for consistency by testing for the absence of its negation. Since Kasper-Rounds logic does not have negation, we will not be able to do that. Fortunately, we do have our own natural notion of consistency--a set of formulas is consistent if it is satisfiable. Testing a set of Kasper-Rounds formulas for consistency thus simply reduces to finding a satisfier for that set.

Formally, we encode our logic as an information system (Scott 1982).
An information system (IS) is a triple (A, C, ⊢) where A is a countable set of "atoms," C is a class of finite subsets of A, and ⊢ is a binary relation between subsets of A and elements of A. A set X is said to be consistent if every finite subset of X is an element of C. A set G is closed if for every X ⊆ G such that X ⊢ a, we have a ∈ G. Following the style used for information systems, we will write Ḡ for the closure of G.

In our case, A is the wffs of SFML (except FALSE), and C is the class of satisfiable sets. The entailment relation encodes the semantics of the particular unification system we are using. That is, we have Γ ⊢ β if ∀F. F ⊨ ∧Γ implies F ⊨ β. For instance, p1 ≐ p2, p2 ≐ p3 ⊢ p1 ≐ p3 represents the transitivity of path equations.

DEFAULT KASPER-ROUNDS LOGIC

In the previous section we described the generic form of default logic. We will not need the full generality to describe default sorts. We will restrict our attention to closed precondition-free normal defaults. That is, all of our defaults will be of the form : M β / β. We will write D_β as an abbreviation for this default inference. Here β stands for a generic wff from the base language. Even this is more general than we truly need, since we are really only interested in default sorts. Nevertheless, we will prove things in the more general form.

Note that our default inferences are closed and normal. This means that we will always have an extension and that the extension(s) will be consistent if and only if W is consistent. These follow from our equivalents of Reiter's theorem 3.1 and corollaries 2.2 and 2.3.

Let's consider now how we would represent the information in Fig. 3 in terms of Kasper-Rounds default logic. The strict statements become normal KR formulas in W. For instance, the information for MIDDLE-VERBs (not counting the inheritance information) is represented as follows:

({}, {past : participle : suffix : +en})

The information for VERB will clearly involve some defaults.
In particular, we have two paths leading to default sorts. We interpret these statements as saying that the path exists, and that it has the value indicated by default. Thus we represent the VERB template as:

D = {D_past:tense:suffix:+te, D_past:participle:suffix:+t},
W = {past : tense : suffix : ⊤,
     past : participle : suffix : ⊤,
     past : participle : prefix : ge+}

Inheritance is done simply by pair-wise set union of ancestors in the hierarchy. Since the entry for mahl contains no local information, the full description for it is simply the union of the two sets above.

D = {D_past:tense:suffix:+te, D_past:participle:suffix:+t},
W = {past : tense : suffix : ⊤,
     past : participle : suffix : ⊤,
     past : participle : prefix : ge+,
     past : participle : suffix : +en}

We can then find an extension for that default theory and take the most general satisfier for that formula. It is easy to see that the only extension for mahl is the closure of:

past : tense : suffix : +te,
past : participle : suffix : +en,
past : participle : prefix : ge+

The default suffix +t is not applicable for the past participle due to the presence of +en. The suffix +te is applicable and so appears in the extension.

DKRL AND NONMONOTONIC SORTS

In the previous section we defined how to get the right answers from a system using default sorts. In this section we will show that the method of nonmonotonic sorts gives us the same answers. First we formalize the relation between NSFSs and default logic.

Definition 6 Let 𝒟 = (Q, r, δ, Ω) be a nonmonotonically sorted feature structure. The default theory of 𝒟 is

DT(𝒟) = ({D_p:t | Ω(δ(r, p)) = (s, Δ) ∧ t ∈ Δ},
          {p1 ≐ p2 | δ(r, p1) = δ(r, p2)}
          ∪ {p : s | Ω(δ(r, p)) = (s, Δ)})

The default part of DT(𝒟) encodes the default sorts, while the strict part encodes the path equations and strict sorts.

Theorem 1 The FS A is a solution for the NSFS 𝒟 if and only if {φ | A ⊨ φ} is an extension of DT(𝒟).
Because we are dealing with closed normal default theories, we can form extensions simply by taking maximal consistent sets of defaults. This, of course, is also how we form solutions, so the solution of a NSFS is an extension of its default theory.

We now need to show that NSFS unification behaves properly. That is, we must show that nonmonotonic sort unification doesn't create or destroy extensions. We will write (D1, W1) =Δ (D2, W2) to indicate that (D1, W1) and (D2, W2) have the same set of extensions. We will do this by combining a number of intermediate results.

Theorem 2 Let (D, W) be a closed normal default theory.
1. If α ∧ β ⟺ γ, then (D, W ∪ {α ∧ β}) =Δ (D, W ∪ {γ}).
2. If W ∪ {β} is inconsistent, then (D ∪ {D_β}, W) =Δ (D, W).
3. If W ⊢ β, then (D ∪ {D_β}, W) =Δ (D, W).
4. If W ⊢ α and α ∧ β ⟺ γ, then (D ∪ {D_β}, W) =Δ (D ∪ {D_γ}, W).

The formulas α and β represent the (path prefixed) sorts to be unified, and γ their (path prefixed) greatest lower bound. The first part deals with strict sort unification, and is a simple consequence of the fact that (D, W) has the same extensions as (D, W̄). The next two deal with inconsistent and redundant default sorts. They are similar to theorems proved in (Delgrande and Jackson 1991): inconsistent defaults are never applicable, while necessary ones are always applicable. The last part allows for strengthening of default sorts. It follows from the previous three. Together they show that nonmonotonic unification preserves the information present in the NSFSs being unified.

Theorem 3 Let 𝒟1 and 𝒟2 be NSFSs. Then DT(𝒟1 ⊓_N 𝒟2) =Δ DT(𝒟1) ∪ DT(𝒟2) (using pair-wise set union).

DISCUSSION

Most treatments of default unification to date have been presented very informally. (Bouma 1992) and (Russell et al. 1992), however, provide very thorough treatments of their respective methods.
Bouma's is more traditional in that it relies on "subtracting" inconsistent information from the default side of the unification. The method given in this paper is similar to Russell's method in that it relies on consistency to decide whether default information should be added.

Briefly, Bouma defines a default unification operation A ⊔! B = (A − B) ⊔ B, where A − B is derived from A by eliminating any path that either gets a label or shares a value in B. In the lexicon, each template has both "strict" and "default" information. The default information is combined with the inherited information by the usual unification. This information is then combined (using ⊔!) with the strict information to derive the FS associated with the template. This FS is then inherited by any children of the template. Note that the division into "strict" and "default" for Bouma is only local to the template. At the next level in the hierarchy, what was strict becomes default. Thus "defaultness" is not a property of the information itself, as it is with NSs, but rather a relation one piece of information has to another.

A template
  <f> isa a
  <g> default b

B template
  <f> default c
  <g> isa d

C lex A B

Figure 4: Multiple Default Inheritance

The method described in (Russell et al. 1992) also divides templates into strict and default parts¹. Here, though, the definitions of strict and default are closer to our own. Each lexical entry inherits from a list of templates, which are scanned in order. Starting from the lexical entry, at each template the strict information is added, and then all consistent defaults are applied.

The list of templates that the lexical entry inherits from is generated by a topological sort of the inheritance hierarchy. Thus the same set may give two different results based on two different orderings. This approach to multiple inheritance allows for conflicts between defaults to be resolved.
Note, however, that if template A gets scanned before template B, then A must not contain any defaults that conflict with the strict information in template B. Otherwise we will get a unification failure, as the default in A will already have been applied when we reach B. With NSs, the strict information will always override the default, regardless of the order information is received.

The treatment of default information with NSs allows strict and default information to be inherited from multiple parents. Consider Fig. 4. Assuming that the sorts do not combine at all, the resulting FS for lexical entry C should be

[f a, g d]

The two methods mentioned above would fail to get any answer for C: one default or the other would be applied before the other template was even considered. In order to handle this example correctly, they would have to state C's properties directly.

¹There may actually be multiple strict parts, which are treated as disjuncts, but that is not pertinent to the comparison.

One advantage of both Bouma and Russell is that exceptions to exceptions are allowed. With nonmonotonic sorts as we have presented them here, we would get conflicting defaults and thus multiple answers. However, it is straight-forward to add priorities to defaults. Each solution has a unique set of defaults it uses, and so we can compare the priorities of various solutions to choose the most preferred one. The priority scheme can be any partial order, though one that mirrored the lexical inheritance hierarchy would be most natural.

Another advantage that both might claim is that they deal with more than just default sorts. However, the theorems we proved above were proved for generic wffs of Kasper-Rounds logic. Thus any formula could be used as a default, and the only question is how best to represent the information. Nonmonotonic sorts are a concise and correct implementation of the kind of default inheritance we have defined here.
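The Fig. 4 example can be replayed end to end in a small self-contained sketch (our own toy setting: sorts a-d sit in a flat hierarchy under ⊤, so a/c and b/d are mutually inconsistent; all names are illustrative assumptions):

```python
# Flat toy hierarchy: T above a, b, c, d; distinct leaves do not unify.
GLB = {**{('T', s): s for s in 'abcd'}, **{(s, s): s for s in 'abcdT'}}

def glb(x, y):
    return GLB.get((x, y)) or GLB.get((y, x))

def ns_unify(n1, n2):
    """Nonmonotonic sort unification, in the manner of Definition 3."""
    (s1, d1), (s2, d2) = n1, n2
    s = glb(s1, s2)
    if s is None:
        return None
    delta = {glb(d, s) for d in d1 | d2
             if glb(d, s) is not None and glb(d, s) != s}
    return (s, delta)

def solve(ns):
    """Unique solution when the surviving defaults are jointly consistent."""
    s, delta = ns
    for d in delta:
        s = glb(s, d)
    return s

A = {'f': ('a', set()), 'g': ('T', {'b'})}   # <f> isa a,     <g> default b
B = {'f': ('T', {'c'}), 'g': ('d', set())}   # <f> default c, <g> isa d

C = {path: ns_unify(A[path], B[path]) for path in A}   # order-independent
result = {path: solve(ns) for path, ns in C.items()}   # {'f': 'a', 'g': 'd'}
```

Each inherited default is overridden by the other template's strict value, so C comes out with f = a and g = d, with no dependence on which template is folded in first.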
CONCLUSION

This paper has shown how the method of nonmonotonic sorts is grounded in the well-established theories of Kasper-Rounds logic and Reiter's default logic. This is, to our knowledge, the first attempt to combine Reiter's theory with feature systems. Most previous attempts to fuse defaults with feature structures have relied on procedural code--a state of affairs that is highly inconsistent with the declarative nature of feature systems. Methods that do not rely on procedures still suffer from the necessity to specify what order information is received in.

It seems to us that the major problem that has plagued attempts to add defaults to feature systems is the failure to recognize the difference in kind between strict and default information. The statement that the present participle suffix for English is '+ing' is a very different sort of statement than that the past participle suffix is '+ed' by default. The former is unassailable information. The latter merely describes a convention--that you should use '+ed' unless you're told otherwise. The method of nonmonotonic sorts makes this important distinction between strict and default information. The price of this method is in the need to find solutions to NSFSs. But much of the cost of finding solutions is dissipated through the unification process (through the elimination of inconsistent and redundant defaults). In a properly designed lexicon there will be only one solution, and that can be found simply by unifying all the defaults present (getting a unification failure here means that there is more than one solution--a situation that should indicate an error).

The semantics given for NSs can be extended in a number of ways. In particular, it suggests a semantics for one kind of default unification. It is possible to say that two values are by default equal by giving the formula D_{p1 ≐ p2}.
This would be useful in our German verbs example to specify that the past tense root is by default equal to the present tense root. This would fill in roots for spiel and mahl without confounding zwing. Another extension is to use a prioritized default logic to allow for resolution of conflicts between defaults. The natural prioritization would be parallel to the lexicon structure, but others could be imposed if they made more sense in the context.

References

Bouma, Gosse 1990. Defaults in unification grammar. In Proceedings of the 1990 Conference of the Association for Computational Linguistics. 165-172.

Bouma, Gosse 1992. Feature structures and nonmonotonicity. Computational Linguistics 18(2):183-203.

Carpenter, Bob 1991. Skeptical and credulous default unification with applications to templates and inheritance. In Default Inheritance Within Unification-Based Approaches to the Lexicon.

Carpenter, Bob 1992. The Logic of Typed Feature Structures. Cambridge University Press.

Delgrande, James P and Jackson, W Ken 1991. Default logic revisited. In Proceedings of the Second International Conference on the Principles of Knowledge Representation and Reasoning. 118-127.

Kaplan, Ronald 1987. Three seductions of computational linguistics. In Linguistic Theory and Computer Applications. Academic Press, London. 149-188.

Kasper, Bob 1988. Feature Structures: A Logical Theory with Applications to Language Analysis. Ph.D. Dissertation, University of Michigan, Ann Arbor.

Reiter, Ray 1980. A logic for default reasoning. Artificial Intelligence 13:81-132.

Russell, Graham; Ballim, Afzal; Carroll, John; and Warwick-Armstrong, Susan 1992. A practical approach to multiple default inheritance for unification-based lexicons. Computational Linguistics 18(3):311-337.

Scott, Dana 1982. Domains for Denotational Semantics, volume 140 of Lecture Notes in Computer Science.

Shieber, Stuart 1986.
An Introduction to Unification-Based Approaches to Grammar, volume 4 of CSLI Lecture Notes. University of Chicago Press, Chicago.

Shieber, Stuart 1987. Separating linguistic analyses from linguistic theories. In Linguistic Theory and Computer Applications. Academic Press, London. 1-36.

Young, Mark 1992. Nonmonotonic sorts for feature structures. In National Conference on Artificial Intelligence, San Jose, California. 596-601.
F-PATR: FUNCTIONAL CONSTRAINTS FOR UNIFICATION-BASED GRAMMARS

Kent Wittenburg
Bellcore
445 South St., MRE 2A-347
Morristown, NJ 07962-1910, USA
Internet: [email protected]

Abstract

Motivation for including relational constraints other than equality within grammatical formalisms has come from discontinuous constituency and partially free word order for natural languages as well as from the need to define combinatory operations at the most basic level for languages with a two-dimensional syntax (e.g., mathematical notation, chemical equations, and various diagramming languages). This paper presents F-PATR, a generalization of the PATR-II unification-based formalism, which incorporates relational constraints expressed as user-defined functions. An operational semantics is given for unification that is an adaptation and extension of the approach taken by Ait-Kaci and Nasr (1989). It is designed particularly for unification-based formalisms implemented in functional programming environments such as Lisp. The application of unification in a chart parser for relational set languages is discussed briefly.

1. INTRODUCTION

For the most part, unification-based grammar formalisms (e.g., Kaplan and Bresnan 1982; Pereira and Warren 1980; Shieber 1984) have adopted string rewriting conventions from context-free grammar rules, assuming string concatenation as the basic combining operator external to the unification process itself. Kay's Functional Unification Grammar (Kay 1979), while not borrowing the conventions of CFG rewriting rules, still assumed concatenation of strings as the underlying combining operation. However, recent work in HPSG (e.g., Pollard and Sag 1987, Reape 1990, Carpenter et al. 1991) and elsewhere has sought to incorporate constraints for combining operations into the unification-based representation directly.
Part of the motivation for doing so is to accommodate partially free word order and discontinuous constituency without the complication of passing along intermediate "threading" information within derivations. Such extensions to unification grammars require the use of nonequational constraints, i.e., constraints on values other than simple conditions of equality and the logical connectives built with them. Reape (1990) has proposed, for example, the relations permutation and sequence union to constrain word sequences in his HPSG fragment for German.

A different motivation for extending the constraint language for combination within unification grammars comes from languages with a two-dimensional syntax (e.g., mathematical notation, chemical equations, and various diagramming languages). Approaching such domains from a linguistic perspective requires that grammars be capable of dealing with a richer source of data types than just strings and also with specifying a richer set of combinatory operations than simple string concatenation. The approach taken by Helm and Marriott (1986, 1990) and Wittenburg, Weitzman, and Talley (1991) [hereafter WWT] is to augment declarative, unification-based grammars with relational constraints. Combinatory operations can then be defined out of the sets of relational constraints present in rule bodies. The approach in WWT includes a set-valued attribute called cover in feature structures. Relations such as above, below, north-east-of, and connected-to are examples that may be incorporated into cover constraints used in grammars for two-dimensional languages. These constraints apply to sets of the basic input vocabulary, whose members may themselves be complex objects. The use of sets in these grammars takes the place of strings, or sequences of words, as used in grammars for natural languages.

This paper presents a generalization of the PATR-II unification-based grammar formalism to incorporate relational constraints.
The extension has been primarily motivated by the demands of combinatory operations in the syntax for two-dimensional languages, although such constraints can be used to express more complex combinatory relations on strings as well as for other purposes (see, for example, work in CLG (Damas and Varile 1989; Balari et al. 1990)).

The approach described here arose as a result of extending a Lisp-based implementation of PATR-II used with a chart parser. A natural path was provided by Ait-Kaci and Nasr (1989), who proposed integrating logic and functional programming by allowing constraints to be specified with applicative expressions. This work has subsequently become one of the three cornerstones of the programming language Life (Ait-Kaci 1990). The key idea is to allow interpreted functional expressions to appear as bona fide arguments in logical statements. Unification operations then must allow for delaying the evaluation of functional expressions until such time as argument variables become grounded, a process that leads to what Ait-Kaci and Nasr call residuation.

For the most part, the adaptation of Ait-Kaci and Nasr's methods to an extension of PATR-II proved to be straightforward. However, there are two points on which the operational semantics of F-PATR unification as defined here differs from theirs. The first, a variation on dereferencing applicative values, was motivated by the demands of caching intermediate results imposed by chart parsing. The second, atomic disjunction, allows for more expressiveness in the grammar and also, again, was motivated by the parsing algorithm we assumed. We will return to these points in Section 6.

2. FUNCTIONAL CONSTRAINTS

From the graph perspective, the basic vocabulary of PATR-II (Shieber 1984) consists of a set of arc labels and a set of terminal (leaf) node labels, the latter including a variable (or null) value.
The graphs can have reentrancies at the leaf levels or higher up, which express identity (or unification) of structure.

Following Ait-Kaci and Nasr (1989), we incorporate applicative expressions (function specification followed by zero or more argument specifications) into our constraint language. Two uses of applicative expressions in the Ait-Kaci/Nasr language Le Fun concern us here. The first allows variables to equate to an (eventual) evaluation of some applicative expression whose arguments may contain variables. For example,

X = (union Y Z)

(Our convention will be to write applicative expressions using Lisp s-expression syntax, i.e., function name followed by zero or more arguments all enclosed in parentheses.) The second allows Le Fun clauses to be formed from arbitrary ground, decidable predicates, i.e., applicative expressions whose arguments also may start out as variables. For example, given the user-defined boolean function sw-of (south-west of), the following would be an acceptable statement: (sw-of X Y).

The analogous PATR-II extension to the first of these allows leaf nodes to be labeled with an applicative expression. Any "unbound" arguments in these applicative expressions will point to variable nodes elsewhere in the graph. Equations such as the following example will then be allowed in the language.

<mother cover> = (union <daught1 cover> <daught2 cover>)

In F-PATR, we restrict the types of nodes represented by paths to those that may appear as leaf values, i.e., atomic, a disjunction of atoms, null (variable), or another applicative value. This restriction is significant: it does not allow for arguments in functional constraints to be of the complex attribute-value type.

The second use of applicative expressions, as predicates, allows the inclusion of functional expressions into feature specifications as independent conditions on successful unification.
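The idea of a leaf node whose value is an applicative expression over other paths in the graph can be sketched as follows. This is an illustrative Python rendering, not the paper's Lisp implementation; the names `Appl` and `get_path` are invented here.

```python
def get_path(graph, path):
    """Follow a sequence of attribute labels down to a leaf value."""
    node = graph
    for label in path:
        node = node[label]
    return node

class Appl:
    """An applicative leaf value: a function plus argument paths.
    It can only be evaluated once every argument path is ground."""
    def __init__(self, fn, *arg_paths):
        self.fn = fn
        self.arg_paths = arg_paths

    def try_eval(self, graph):
        args = [get_path(graph, p) for p in self.arg_paths]
        if any(a is None for a in args):   # an argument is still a variable,
            return None                    # so evaluation must be delayed
        return self.fn(*args)

# <mother cover> = (union <daught1 cover> <daught2 cover>)
graph = {
    "mother":  {"cover": Appl(lambda x, y: x | y,
                              ("daught1", "cover"), ("daught2", "cover"))},
    "daught1": {"cover": {1, 2}},
    "daught2": {"cover": {3}},
}
assert graph["mother"]["cover"].try_eval(graph) == {1, 2, 3}
```

If either daughter cover were still `None`, `try_eval` would return `None`, mirroring the delayed evaluation that leads to residuation in the text.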
So here the evaluation of the expression is not associated with a leaf node's value. The statement below is an example of such a constraint on the value of a node that might be included in a graph. This predicate sw-of will be taken to be a condition on successful unification.

(sw-of <daught1 cover> <daught2 cover>)

The two statements above taken together would then correspond to the graph shown in Figure 1, a first approximation for a rule for forming exponent expressions in a grammar of mathematical notation. The unlabeled arcs linking arguments in applicative expressions to the variable nodes are a notational convenience, indicating a forwarding pointer. The arguments to these expressions are in fact the nodes themselves.

[Figure 1: An F-PATR graph, with mother and daughter cover arcs linked to the applicative values (union ...) and (sw-of ...).]

Our proposal for F-PATR feature structures begins with a vocabulary of the following types suitable for interpretive, functionally oriented programming languages such as Lisp.

Atom          Symbol or number
Fun-exp       Function, i.e., a symbol pointing to a function, or a lambda expression interpretable as a function, of type Atom x Atom x ... Atom -> Atom, or else Atom x Atom x ... Atom -> List-of-atoms (where List-of-atoms will be interpreted as a logical disjunction of atomic values)
S-expression  Any complete evaluatable expression without internal references to F-PATR nodes

The following then is a BNF grammar for F-PATR equations representing feature structures:

Feat-struct ::= Statement+
Statement   ::= Atom | Equation | Appl
Equation    ::= Path = Path | Path = Val | Path = Appl
Path        ::= < Atom+ >
Val         ::= Atom | { Atom Atom+ }
Appl        ::= ( Fun-exp Arg* )
Arg         ::= Path | Appl | S-expression

We will assume the existence of a familiar equivalent notation for these feature equations, in which graph reentrancies (or path equivalences) are expressed by a matrix with integers used for shared reference.
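Two of the BNF categories above, Path and Val, are regular and can be recognized with simple patterns. The sketch below is illustrative only (the `classify` helper is invented here, and Appl, being nested, is not handled by a regular expression):

```python
import re

# Path ::= < Atom+ >          e.g.  <mother cover>
# Val  ::= Atom | { Atom Atom+ }   e.g.  55  or  {55 43 42}
PATH = re.compile(r"<\s*\w+(\s+\w+)*\s*>$")
VAL = re.compile(r"(\w+|\{\s*\w+(\s+\w+)+\s*\})$")

def classify(s):
    """Classify a token string against the regular BNF categories."""
    if PATH.match(s):
        return "Path"
    if VAL.match(s):
        return "Val"
    return "other"

assert classify("<mother cover>") == "Path"
assert classify("{55 43 42}") == "Val"
assert classify("(union <a> <b>)") == "other"  # an Appl; needs a real parser
```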
Predicates will follow the core attribute-value matrix. For example,

[a: 1[]
 b: 2(foo <1>)]
(fie <1> <2>)

is equivalent to

<b> = (foo <a>)
(fie <a> <b>).

In addition to functional values and constraints, we augment the original PATR-II notation with atomic disjunction (interpreted as exclusive OR) as a possible value of leaf nodes. Such values are written with curly braces surrounding two or more atoms. Atomic disjunction is one of the most basic extensions to the PATR-II unification language and is in common use. If atomic values are considered to be singleton sets, unification of atomic disjunctions with other disjunctions or atoms can be operationally treated as set intersection. In F-PATR, atomic disjunctions may appear not only independently but also as arguments and values of applicative expressions.

3. DATA TYPES

In Ait-Kaci and Nasr (1989), functional expressions in feature structures are evaluated as soon as their arguments become bound. Otherwise, data structures will become residuated, a state representing incompletion with respect to determining constraints on unification. Ait-Kaci and Nasr's algorithms thus delay the resolution of functionally specified values or predicates until all variables are bound, but then resolve them as early as possible once bindings occur. Here we follow this same general approach for predicates only, but not for applicative values, which are checked for readiness to evaluate only when dereferenced. Further, we expand the routines to deal with atomic disjunction.
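The intersection treatment of atomic disjunction described above can be sketched directly. This is a minimal illustration under the stated convention (atoms as singleton sets), not the F-PATR source:

```python
def unify_atomic(v1, v2):
    """Unify two atomic values, where each value is either an atom or a
    disjunction (a set of atoms). Returns None on failure (empty meet),
    a bare atom if the meet is a singleton, or a set otherwise."""
    s1 = v1 if isinstance(v1, set) else {v1}
    s2 = v2 if isinstance(v2, set) else {v2}
    meet = s1 & s2
    if not meet:
        return None                    # unification failure
    if len(meet) == 1:
        return next(iter(meet))        # collapses back to an atom
    return meet                        # still a disjunction

assert unify_atomic({55, 43, 44}, {43, 55}) == {55, 43}
assert unify_atomic({55, 36}, 55) == 55
assert unify_atomic(55, 36) is None
```

The first assertion mirrors Appendix Example 6, where {55 43 44} unified with {43 55} yields the disjunction {55 43}.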
We assume the following data types for nodes in a feature structure graph:

:arc-list      a set of attribute labels and associated values, the latter of which may be of any type
:null          the uninstantiated "variable" type
:atomic        a singleton set of one symbol or number
:disjunct      a set of 2 or more atomic values
:appl          an applicative expression
:res-var       a residuated variable, i.e., a :null type that appears as an argument in at least one predicate
:res-disjunct  a residuated disjunction, i.e., a :disjunct type that appears as an argument in at least one predicate

The node types that may acquire residuations include :null, :disjunct, and :appl (a type for which we do not distinguish residuated from nonresiduated subtypes). There are two kinds of residuations: predicates not ready for evaluation and delayed unifications associated with the :appl type. Predicate residuations arise when a predicate contains any arguments of type :null or :appl, or else when a predicate has more than one argument of type :disjunct. During unification, any such arguments mutate to a residuated type (if they are unresiduated to start with), and the predicate is pushed onto their residuation list.

The second kind of residuation arises when unification is called for between a node of type :appl that is not ready for evaluation and any other non-:null type. The unification call itself must be delayed until such time as the function is ready for evaluation, and so a form that will provoke the unification is pushed onto the residuation list of the :appl node.

4. DEREFERENCING

The notion of dereferencing a data structure representing a feature value (or node) is common to most unification implementations. A field in the data structure indicates whether the value is to be found locally or else by following pointers to other data structures that may have been introduced through prior unification.
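A node record carrying a value plus a residuation list is enough to distinguish the leaf types listed above. The following dataclass is an illustrative reconstruction, not the MCC/Bellcore code; the `kind` property names follow the type vocabulary of this section:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    value: object = None                     # None = variable; an atom; a set
    residuations: list = field(default_factory=list)  # delayed predicates

    @property
    def kind(self):
        if callable(self.value):
            return ":appl"
        if self.value is None:
            return ":res-var" if self.residuations else ":null"
        if isinstance(self.value, set):
            return ":res-disjunct" if self.residuations else ":disjunct"
        return ":atomic"

n = Node()
assert n.kind == ":null"
n.residuations.append(lambda x: x >= 54)     # a predicate awaiting its argument
assert n.kind == ":res-var"
n.value = {55, 43}
assert n.kind == ":res-disjunct"
```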
Introducing residuations into the data structures adds the wrinkle that, during dereferencing, applicative expressions will be evaluated if they are ready. In F-PATR, dereferencing an :appl type node is in fact the only point at which to evaluate an applicative expression. This is a change from Le Fun, where arguments in applicative expressions may acquire applicative expressions as residuations that can be evaluated as argument terms become grounded during unification. This design change will be motivated in Section 6.

For each node type, the dereference function follows pointers in the usual way until no pointers remain. In addition, if the resulting node is of :appl type, we check to see if all its arguments are atomic or else Lisp s-expressions, an indication that the function is ready to be evaluated. If the function evaluates to a non-nil atom or a disjunctive list of atoms, then any residuations (delayed unifications) on the node are also called. Note then that dereferencing can itself fail as a result of provoking unifications that fail, which the top-level unification routines need to take account of.

5. UNIFICATION

The types associated with successful unifications of dereferenced leaf node types are shown in Table 1. Some cells contain more than one type since residuations and disjunctions may or may not be reduced in the result term. Note that an :appl type unified with any other type always yields another :appl type. This is a bit misleading, however, since the table does not take into account the effects of dereferencing, which, as we have just described, can provoke a chain of delayed unifications involving any types.

During unification, the evaluation of functions used in predicates and :appl nodes each may produce disjunctive values, but in different ways.
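The dereferencing regime just described (follow forwarding pointers, then evaluate an applicative value only if all of its arguments are ground) can be sketched as follows. The `Cell` record and tuple encoding of applicative values are illustrative choices, not the paper's representation:

```python
class Cell:
    """A mutable slot that may forward to another cell after unification."""
    def __init__(self, value=None):
        self.value = value        # None, an atom, or a tuple (fn, cell, ...)
        self.forward = None

def deref(cell):
    # Follow forwarding pointers in the usual way until none remain.
    while cell.forward is not None:
        cell = cell.forward
    # If the result is applicative and every argument is ground, evaluate it.
    # In F-PATR this is the only point at which evaluation happens.
    if isinstance(cell.value, tuple):
        fn, *args = cell.value
        vals = [deref(a).value for a in args]
        if all(v is not None and not isinstance(v, tuple) for v in vals):
            cell.value = fn(*vals)
    return cell

x, y = Cell(20), Cell(5)
s = Cell((lambda a, b: a + b, x, y))   # an :appl cell computing (+ x y)
assert deref(s).value == 25
```

A real implementation would also fire any delayed-unification residuations on the node after evaluation, which is why dereferencing itself can fail.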
Predicates can be evaluated when there is at most one disjunctive argument node, in which case we map the predicate over each of the disjunctions in the disjunctive argument, and collect successful results. If there is more than one successful result, then the result is a disjunction. Alternatively, for functions appearing in :appl nodes only, the function itself may produce a disjunctive value as directed by the internal definition of the function. But note that functions used in F-PATR graphs do not themselves take disjunctive arguments directly, as indicated in the discussion of data types above.

Table 1: Unification for leaf types

          :null   :atom          :disju                :appl   :r-var                :r-dis
:null     :null   :atom          :disju                :appl   :r-var                :r-dis
:atom     :atom   :atom          :atom                 :appl   :atom                 :atom, :r-dis
:disju    :disju  :atom          :disju                :appl   :atom, :disju         :atom, :disju, :r-dis
:appl     :appl   :appl          :appl                 :appl   :appl                 :appl
:r-var    :r-var  :atom          :atom, :disju         :appl   :r-var                :atom, :disju, :r-dis
:r-dis    :r-dis  :atom, :r-dis  :atom, :disju, :r-dis :appl   :atom, :disju, :r-dis :atom, :disju, :r-dis

There are a number of pairings in Table 1 that are capable of producing either residuated disjunctions, disjunctions, or atoms. These all involve a residuated predicate appearing in at least one of the leaf node arguments. If the initial intersection of the node's contents (independently from residuations) yields a value that still does not provoke evaluation of the predicate, then the result is a residuated disjunction. If the predicate is evaluated, then the unification process may yield an atomic value or a disjunctive value, as explained in the previous paragraph.

Space precludes us from further discussion of the unification algorithms here. With reference to Ait-Kaci and Nasr (1989) and Table 1, however, the details should emerge. See also the examples in the Appendix, which are taken from program output.
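The predicate-over-disjunction scheme at the start of this section (map the predicate over each atom of its single disjunctive argument and collect the successes) can be sketched as follows; the helper name is invented here:

```python
def apply_pred_over_disjunct(pred, disjunct, other_args):
    """Map a predicate over each atom of its one disjunctive argument and
    keep the atoms on which it succeeds. Returns None on failure, a bare
    atom if exactly one survives, or a disjunction (set) otherwise."""
    survivors = {d for d in disjunct if pred(d, *other_args)}
    if not survivors:
        return None
    if len(survivors) == 1:
        return next(iter(survivors))
    return survivors

# (> <1> 50) filtering the disjunction {55 43 42}, as in Appendix Example 7:
assert apply_pred_over_disjunct(lambda a, b: a > b, {55, 43, 42}, (50,)) == 55
# A weaker predicate leaves more than one survivor, so the result stays disjunctive:
assert apply_pred_over_disjunct(lambda a, b: a > b, {55, 43, 42}, (40,)) == {55, 43, 42}
```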
6. APPLICATION TO PARSING

The two significant design changes that we have introduced were motivated by our application of F-PATR to parsing of relational set grammars for graphical languages, which is discussed in detail in WWT. Initial experiments adopted the Ait-Kaci/Nasr approach of evaluating the functions of :appl nodes as soon as possible, which meant residuating the argument nodes of these functions. However, this approach led to difficulties in our chart parsing algorithm, which needed to cache the feature structures of active edges before any of the destructive effects of unification involving what we call expander functions took place. The root of the issue is that with the Ait-Kaci/Nasr approach, the control of function evaluation is within unification rather than with some external algorithm. In our approach, it was most natural to use external control to implement chart parsing.

This point may be clarified by considering an example, for which we need to summarize F-PATR relational set grammars. (See also Wittenburg (1992a, 1992b).) The feature structures for grammatical constituents include the primary attributes cover, syntax, and semantics. The attribute cover takes as value a reference to a subset of input objects. This scheme is analogous to HPSG feature structures, where the string-valued phonology attribute is replaced by the set-valued cover attribute. Rules have the form

[mother:  [cover: [] syntax: [] semantics: []]
 daught1: [cover: [] syntax: [] semantics: []]
 daughtn: [cover: [] syntax: [] semantics: []]]

with the condition that for the daughter elements of a rule D1...Dn, there must exist at least one expander relation between covers of each daughter Di, 2 < i < n, and a cover of daughter Dj where j < i. The expander relations are a subclass of relational constraints among sets of input objects used to define the combinatory possibilities of rules.
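Covers over the input objects can be represented as bit vectors, as the next passage suggests (bit i set means input object i belongs to the constituent's cover). The sketch below shows a `union_covers` in that spirit; the choice to fail on overlapping daughter covers is an assumption of this illustration, not something the paper specifies:

```python
def union_covers(*covers):
    """Union a sequence of bit-vector covers; returns None if any two
    covers overlap (this sketch assumes daughter covers are disjoint)."""
    out = 0
    for c in covers:
        if c & out:          # some input object already covered
            return None
        out |= c
    return out

# Three disjoint daughter covers combine into the mother's cover:
assert union_covers(0b0001, 0b0010, 0b0100) == 0b0111
# Overlapping covers are rejected under this sketch's assumption:
assert union_covers(0b0011, 0b0010) is None
```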
For parsing, the constraints are expressed as functions from cover-sets to cover-sets and appear as a functional value of cover attributes.

[mother:  [syntax: Exp
           cover: (union-covers <2> <3> <4>)
           semantics: (divide <6> <7>)]
 daught1: [syntax: horizontal-line
           cover: 2[]]
 daught2: [syntax: Exp
           cover: 3(what-is-above <2>)
           semantics: <6>]
 daught3: [syntax: Exp
           cover: 4(what-is-below <2>)
           semantics: <7>]]
(contains-in-x <2> <4>)
(contains-in-x <2> <3>)

The example above is the rule for vertical infixation for fractions, used in a grammar of mathematical notation. Let us consider now what the feature structure for an active chart-parsing edge for the fraction rule would look like after the first daughter had been unified in. The cover attribute would acquire a set-reference value (we will use a number in binary suggestive of the use of bit vectors to represent subsets).

Active edge feature structure:

[mother:  [syntax: Exp
           cover: (union-covers 0001 <3> <4>)
           semantics: (divide <6> <7>)]
 daught1: [syntax: horizontal-line
           cover: 0001]
 daught2: [syntax: Exp
           cover: 3(what-is-above 0001)
           semantics: <6>]
 daught3: [syntax: Exp
           cover: 4(what-is-below 0001)
           semantics: <7>]]
(contains-in-x 0001 <4>)
(contains-in-x 0001 <3>)

At this point the Ait-Kaci/Nasr algorithm for unification would provoke the evaluation of the what-is-above and what-is-below functions, since their arguments are now "grounded". However, this is not what we want for a chart parser since the features of the active edge graph shown here must be kept independent from each of its future advancements. That is, we want to evaluate these two functions at separate cycles in the parsing algorithm at the points when we are ready to extend this edge with the daughters in question. The more conservative approach to dereferencing and evaluation of :appl nodes and also the extension of disjunctions as possible values of expander functions provides an elegant solution.[1]
The functions what-is-above and what-is-below will be evaluated in independent expand steps of the WWT algorithm. In either case, the function is capable of returning a disjunction of values. But any such values must also meet the constraints of the predicate contains-in-x, the application of which may have the effect of reducing the set of values and perhaps eliminating all of them, leading to a unification failure. All this happens as it should with the approach to unification outlined above.

[1] Hassan Ait-Kaci (personal communication) has pointed out that a solution to the control problem is available in the Le Fun/LIFE framework. An extra unbound argument could be added to expander functions such as what-is-above so that evaluation would not be provoked at undesired times. A binding for this extra variable could later be offered when evaluation was wanted.

7. CONCLUDING REMARKS

One of the goals of this paper is to bring the work of Ait-Kaci and Nasr to the attention of the computational linguistics community. Their techniques for marrying declarative and functional programming paradigms are an important avenue to explore in expanding the expressiveness of formalisms for linguistic applications. The design issues encountered in building an implementation of F-PATR should be of interest to implementors of such a paradigm. Of course we do not address here issues in the logic of such feature structures or their declarative semantics. The significant differences of F-PATR from Le Fun include an alternative approach to dereferencing certain data types, a change motivated by an environment in which parsing control is outside the unification process, and also an extension to a simple form of disjunction. In contrast to the research projects that implement unification-based grammar formalisms on top of Prolog, this implementation has built a unification environment on top of Lisp.
The job of integrating the declarative and functional paradigms is made considerably easier by relying on Lisp for lambda conversion and function evaluation.

In the by now extensive literature on unification grammar frameworks, the current proposal figures as a somewhat conservative, and yet radically expressive, extension to PATR-II. It is conservative in that the logic of feature structures includes only minimal disjunction and no negation or conditionalization. But the extension leads to unlimited expressive power by bringing in the full power of function evaluation. It appears to be an extension appropriate for the representational problems we encountered, but it also has led to unanticipated uses. For example, in writing the semantics for graphical grammars we have been able to use functions in feature structures as a way of building forms that can simply be evaluated to invoke the appropriate operations for applications. Here again, having more control over when evaluation takes place external to the unification process has proved to be important.

There are limitations, however, to the expressive power of F-PATR as it stands. It cannot directly support some of the constraints envisioned in current HPSG literature, for example, because of F-PATR's restrictions on arguments to functional constraints. In HPSG, relations constrain not just atomic values but also general feature structures including lists and sets. Such an extension to F-PATR is not planned by the author but it may be of interest. From the logic grammar point of view, the work reported on here may be relevant as a source of ideas for efficiency. Constraints expressed as relations in frameworks such as Zajac (1992) could instead be expressed in F-PATR as compiled functions, leading perhaps to improved runtime speeds.

The MCC/Bellcore implementation of F-PATR includes both destructive and nondestructive versions of unification.
The destructive version is, as expected, more straightforward to implement but more expensive computationally given that over copying and early copying are profligate (see Wroblewski 1987). The algorithms for nondestructive unification have been influenced by Tomabechi (1991), but applicative expressions and residuations change the landscape significantly. There tends to be extensive circularity in the data structures: residuated argument nodes point to predicates that in turn point back to their arguments; residuations in applicative-valued nodes point to unification forms that in turn point back to the applicative nodes. There is a need for future work to address issues of space and time efficiency for extensions represented by F-PATR just as there has been such a need for other PATR-II extensions.

A line of research that the author is pursuing currently (Wittenburg 1992b) is to design a more specialized grammar formalism that finesses some of the complexity of residuation and unification through a version of "pseudo-unification" (Tomita 1990). In contrast to residuation, which manages function evaluation at runtime, the idea is to manage the order of evaluation for functional constraints at compile time. In grammar formalisms and parsers under investigation, it is possible for a compiler to order constraints within rule data structures such that evaluation readiness is a deterministic matter, circumventing the need for runtime checks and extra data structures required for delaying evaluation dynamically.

ACKNOWLEDGEMENTS

This research was carried out at MCC under the sponsorship of Bellcore and MCC. Louis Weitzman and Jim Talley worked closely with the author in exploring the use of F-PATR grammars in visual language applications. Roger Nasr was very helpful in consultations on the Le Fun unification algorithms.
Thanks to Phil Cannata and Jim Hollan for their support of the project and to the anonymous ACL reviewers for their helpful comments on the manuscript.

REFERENCES

Ait-Kaci, H. (1991) An Overview of LIFE. In J.W. Schmidt and A.A. Stogny (eds.), Next Generation Information System Technology, Proceedings of the 1st International East/West Data Base Workshop, Lecture Notes in Computer Science 504, Springer Verlag, pp. 42-58.

Ait-Kaci, H., and R. Nasr (1989) Integrating Logic and Functional Programming. Lisp and Symbolic Computation 2:51-89.

Balari, S., L. Damas, and G. B. Varile (1989) CLG: Constraint Logic Grammars. Proceedings of the 13th International Conference on Computational Linguistics, Helsinki, vol. 3, pp. 7-12.

Carpenter, B., C. Pollard, and A. Franz (1991) The Specification and Implementation of Constraint-Based Unification Grammars. In Proceedings IWPT 91, Second International Workshop on Parsing Technologies, pp. 143-153.

Damas, L., and G. Varile (1989) CLG: A Grammar Formalism based on Constraint Resolution. In E.M. Morgado and J.P. Martins (eds.), EPIA '89, Lecture Notes in Artificial Intelligence 390, Springer Verlag.

Helm, R., and K. Marriott (1986) Declarative Graphics. In Proceedings of the Third International Conference on Logic Programming, Lecture Notes in Computer Science 225, Springer-Verlag, pp. 513-527.

Helm, R., and K. Marriott (1990) Declarative Specification of Visual Languages. In 1990 IEEE Workshop on Visual Languages (Skokie, Illinois), pp. 98-103.

Kaplan, R., and J. Bresnan (1982) Lexical-Functional Grammar: A Formal System for Grammatical Representation. In J. Bresnan (ed.), The Mental Representation of Grammatical Relations, MIT Press, pp. 173-281.

Kay, M. (1979) Functional Grammar. In Proceedings of the Fifth Annual Meeting of the Berkeley Linguistic Society.

Pereira, F.C.N., and D. Warren (1980) Definite Clause Grammars for Language Analysis: A Survey of the Formalism and a Comparison with Augmented Transition Networks. Artificial Intelligence 13:231-278.

Pollard, C., and I. Sag (1987) Information-based Syntax and Semantics: Volume 1. Center for the Study of Language and Information.

Reape, M. (1990) Getting Things in Order. In Proceedings of the Symposium on Discontinuous Constituency, Institute for Language Technology and Artificial Intelligence, Tilburg University, The Netherlands, pp. 125-137.

Tomabechi, H. (1991) Quasi-Destructive Graph Unification. In Proceedings IWPT 91, Second International Workshop on Parsing Technologies, pp. 164-171.

Tomita, M. (1990) The Generalized LR Parser/Compiler V8-4: A Software Package for Practical NL Projects. In COLING-90, Volume 1, pp. 59-63.

Wittenburg, K. (1992a) Earley-style Parsing for Relational Grammars. In Proceedings of IEEE Workshop on Visual Languages, Sept. 15-18, 1992, Seattle, Washington, pp. 192-199.

Wittenburg, K. (1992b) The Relational Language System. Bellcore Technical Memorandum TM-ARH-022353.

Wittenburg, K., L. Weitzman, and J. Talley (1991) Unification-Based Grammars and Tabular Parsing for Graphical Languages. Journal of Visual Languages and Computing 2:347-370.

Wroblewski, D. (1987) Nondestructive Graph Unification. In Proceedings of AAAI 87, pp. 582-587.

Zajac, R. (1992) Inheritance and Constraint-Based Grammar Formalisms. Computational Linguistics 18:159-182.
APPENDIX: Unification Examples

Example 1: :appl with :atom
[obj: [length: 25]]
∪ [obj: [length: (+ 1[] 2[])] obj2: [length: <2>] obj1: [length: <1>]]
= [obj: [length: 1(+ 2[] 3[])] obj2: [length: <3>] obj1: [length: <2>]] (Unify <1> 25)
∪ [obj1: [length: 20]]
= [obj: [length: 1(+ 20 2[])] obj2: [length: <2>] obj1: [length: 20]] (Unify <1> 25)
∪ [obj2: [length: 5]]
= [obj: [length: 1(+ 20 5)] obj2: [length: 5] obj1: [length: 20]] (Unify <1> 25)

Example 2: :res-var with :atom
[obj: [length: 1[]] obj1: [length: 2[]]] (EQ <1> <2>)
∪ [obj: [length: 55]]
= [obj: [length: 55] obj1: [length: 1[]]] (EQ 55 <1>)
∪ [obj1: [length: 55]]
= [obj: [length: 55] obj1: [length: 55]]

Example 3: :res-var with :disjunct
[obj: [length: 1[]] obj1: [length: 2[]]] (EQ <1> <2>)
∪ [obj: [length: {55 36}]]
= [obj: [length: 1{55 36}] obj1: [length: 2[]]] (EQ <1> <2>)
∪ [obj1: [length: 55]]
= [obj: [length: 55] obj1: [length: 55]]

Example 4: :appl with :res-var
[obj1: [length: (+ 5 1[])] obj2: [length: <1>]]
∪ [obj1: [length: 1[]] obj2: [length: []]] (>= <1> 54)
= [obj1: [length: 1(+ 5 2[])] obj2: [length: <2>]] (>= <1> 54)
∪ [obj2: [length: 50]]
= [obj1: [length: 1(+ 5 50)] obj2: [length: 50]] (>= <1> 54)
∪ [obj1: [length: []]]
= [obj1: [length: 55] obj2: [length: 50]]

Example 5: :res-var with :res-var
[obj: [length: 1[]]] (<= <1> 60)
∪ [obj: [length: 1[]]] (>= <1> 54)
= [obj: [length: 1[]]] (>= <1> 54) (<= <1> 60)
∪ [obj: [length: 55]]
= [obj: [length: 55]]

Example 6: :res-disjunct with :atom and :disjunct
[obj: [length: 1{55 43 44}] obj1: [length: 2[]]] (EQ <1> <2>)
∪ [obj: [length: {43 55}]]
= [obj: [length: 1{55 43}] obj1: [length: 2[]]] (EQ <1> <2>)
∪ [obj: [length: 55]]
= [obj: [length: 55] obj1: [length: 1[]]] (EQ 55 <1>)
∪ [obj1: [length: 55]]
= [obj: [length: 55] obj1: [length: 55]]

Example 7: :appl with :res-disjunct
[obj1: [length: (+ 5 1[])] obj2: [length: <1>]]
∪ [obj1: [length: 1{55 43 42}] obj2: [length: 2[]]] (> <1> <2>)
= [obj1: [length: 1(+ 5 2[])] obj2: [length: <2>]] (Unify <1> {55 43 42}) (> <1> <2>)
∪ [obj2: [length: 50]]
= [obj1: [length: 1(+ 5 50)] obj2: [length: 50]] (> <1> 50) (Unify <1> {55 43 42})
∪ [obj1: [length: []]]
= [obj1: [length: 55] obj2: [length: 50]]

Example 8: :res-var with :res-disjunct
[obj: [length: 1{55 43 42}] obj1: [length: 2[]]] (EQ <1> <2>)
∪ [obj: [length: 1[]] obj1: [length: 2[]]] ((LAMBDA (X Y) (EQ (+ X Y) 110)) <1> <2>)
= [obj: [length: 1{55 43 42}] obj1: [length: 2[]]] ((LAMBDA (X Y) (EQ (+ X Y) 110)) <1> <2>) (EQ <1> <2>)
∪ [obj1: [length: 55]]
= [obj: [length: 55] obj1: [length: 55]]

Example 9: :appl with :appl
[obj: [length: (+ 5 1[])] obj2: [length: <1>]]
∪ [obj: [length: ((LAMBDA (X) (- (+ X 10) 5)) 1[])] obj2: [length: <1>]]
= [obj: [length: 1(+ 5 2[])] obj2: [length: <2>]] (Unify <1> ((LAMBDA (X) (- (+ X 10) 5)) <2>))
∪ [obj2: [length: 55]]
= [obj: [length: 1(+ 5 55)] obj2: [length: 55]] (Unify <1> ((LAMBDA (X) (- (+ X 10) 5)) 55))
∪ [obj: [length: []]]
= [obj: [length: 60] obj2: [length: 55]]

Example 10: :res-disjunct with :res-disjunct
[obj1: [length: 1{55 43 42}] obj2: [length: 2[]]] (EQ <1> <2>)
∪ [obj1: [length: 1{45 55 43}] obj2: [length: 2[]]] ((LAMBDA (X Y) (EQ (+ X Y) 110)) <1> <2>)
= [obj1: [length: 1{55 43}] obj2: [length: 2[]]] (EQ <1> <2>) ((LAMBDA (X Y) (EQ (+ X Y) 110)) <1> <2>)
∪ [obj2: [length: 55]]
= [obj1: [length: 55] obj2: [length: 55]]
AN ALGORITHM FOR FINDING NOUN PHRASE CORRESPONDENCES IN BILINGUAL CORPORA

Julian Kupiec
Xerox Palo Alto Research Center
3333 Coyote Hill Road, Palo Alto, CA 94304
[email protected]

Abstract

The paper describes an algorithm that employs English and French text taggers to associate noun phrases in an aligned bilingual corpus. The taggers provide part-of-speech categories which are used by finite-state recognizers to extract simple noun phrases for both languages. Noun phrases are then mapped to each other using an iterative re-estimation algorithm that bears similarities to the Baum-Welch algorithm which is used for training the taggers. The algorithm provides an alternative to other approaches for finding word correspondences, with the advantage that linguistic structure is incorporated. Improvements to the basic algorithm are described, which enable context to be accounted for when constructing the noun phrase mappings.

INTRODUCTION

Areas of investigation using bilingual corpora have included the following:

• Automatic sentence alignment [Kay and Röscheisen, 1988, Brown et al., 1991a, Gale and Church, 1991b].
• Word-sense disambiguation [Dagan et al., 1991, Brown et al., 1991b, Church and Gale, 1991].
• Extracting word correspondences [Gale and Church, 1991a].
• Finding bilingual collocations [Smadja, 1992].
• Estimating parameters for statistically-based machine translation [Brown et al., 1992].

The work described here makes use of the aligned Canadian Hansards [Gale and Church, 1991b] to obtain noun phrase correspondences between the English and French text.

The term "correspondence" is used here to signify a mapping between words in two aligned sentences. Consider an English sentence Ei and a French sentence Fi which are assumed to be approximate translations of each other. The subscript i denotes the i'th alignment of sentences in both languages.
A word sequence in Ei is defined here as the correspondence of another sequence in Fi if the words of one sequence are considered to represent the words in the other.

Single word correspondences have been investigated [Gale and Church, 1991a] using a statistic operating on contingency tables. An algorithm for producing collocational correspondences has also been described [Smadja, 1992]. The algorithm involves several steps. English collocations are first extracted from the English side of the corpus. Instances of the English collocation are found and the mutual information is calculated between the instances and various single word candidates in aligned French sentences. The highest ranking candidates are then extended by another word and the procedure is repeated until a corresponding French collocation having the highest mutual information is found.

An alternative approach is described here, which employs simple iterative re-estimation. It is used to make correspondences between simple noun phrases that have been isolated in corresponding sentences of each language using finite-state recognizers. The algorithm is applicable for finding single or multiple word correspondences and can accommodate additional kinds of phrases. In contrast to the other methods that have been mentioned, the algorithm can be extended in a straightforward way to enable correct correspondences to be made in circumstances where numerous low frequency phrases are involved. This is an important consideration because in large text corpora roughly a third of the word types only occur once.

Several applications for bilingual correspondence information have been suggested. They can be used in bilingual concordances, for automatically constructing bilingual lexicons, and probabilistically quantified correspondences may be useful for statistical translation methods.

COMPONENTS

Figure 1 illustrates how the corpus is analyzed.
The words in sentences are first tagged with their corresponding part-of-speech categories. Each tagger contains a hidden Markov model (HMM), which is trained using samples of raw text from the Hansards for each language. The taggers are robust and operate with a low error rate [Kupiec, 1992]. Simple noun phrases (excluding pronouns and digits) are then extracted from the sentences by finite-state recognizers that are specified by regular expressions defined in terms of part-of-speech categories. Simple noun phrases are identified because they are most reliably recognized; it is also assumed that they can be identified unambiguously. The only embedding that is allowed is by prepositional phrases involving "of" in English and "de" in French, as noun phrases involving them can be identified with relatively low error (revisions to this restriction are considered later).

Noun phrases are placed in an index to associate a unique identifier with each one. A noun phrase is defined by its word sequence, excluding any leading determiners. Singular and plural forms of common nouns are thus distinct and assigned different positions in the index. For each sentence corresponding to an alignment, the index positions of all noun phrases in the sentence are recorded in a separate data structure, providing a compact representation of the corpus.

So far it has been assumed (for the sake of simplicity) that there is always a one-to-one mapping between English and French sentences. In practice, if an alignment program produces blocks of several sentences in one or both languages, this can be accommodated by treating the block instead as a single bigger "compound sentence" in which noun phrases have a higher number of possible correspondences.

THE MAPPING ALGORITHM

Some terminology is necessary to describe the algorithm concisely. Let there be L total alignments in the corpus; then Ei is the English sentence for alignment i.
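The extraction step just described — tag the words, match a regular expression over the tag sequence, and drop any leading determiner — can be sketched as follows. This is an illustrative sketch, not the paper's recognizer: the tagset (DET, ADJ, NOUN, plus an OF tag standing in for "of"/"de") and the exact pattern are assumptions.

```python
import re

# Assumed pattern for a "simple noun phrase": optional determiner, any
# number of adjectives, one or more nouns, optionally extended by
# "of"/"de" prepositional phrases of the same shape.
NP_PATTERN = re.compile(r"(DET )?(ADJ )*(NOUN )+(OF (DET )?(ADJ )*(NOUN )+)*")

def extract_noun_phrases(tagged):
    """tagged: list of (word, tag) pairs.  Returns NP word sequences,
    with leading determiners dropped as in the paper's indexing."""
    # Encode the tag sequence as a string, one token per space-terminated tag.
    tags = "".join(tag + " " for _, tag in tagged)
    phrases = []
    for m in NP_PATTERN.finditer(tags):
        if not m.group(0):       # skip zero-width matches
            continue
        # Each tag ends in a space, so counting spaces converts character
        # offsets in the tag string back to token indices.
        start = tags[: m.start()].count(" ")
        end = tags[: m.end()].count(" ")
        words = [w for w, _ in tagged[start:end]]
        if tagged[start][1] == "DET":
            words = words[1:]    # exclude the leading determiner
        if words:
            phrases.append(" ".join(words))
    return phrases
```

On a toy tagging of "The whole issue of free trade has been mentioned", this yields the single phrase "whole issue of free trade", matching the indexing behaviour described above.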
Let the function φ(Ei) be the number of noun phrases identified in the sentence. If there are k of them, k = φ(Ei), and they can be referenced by j = 1...k. Considering the j'th noun phrase in sentence Ei, the function μ(Ei, j) produces an identifier for the phrase, which is the position of the phrase in the English index. If this phrase is at position s, then μ(Ei, j) = s.

In turn, the French sentence Fi will contain φ(Fi) noun phrases and given the p'th one, its position in the French index will be given by μ(Fi, p). It will also be assumed that there are a total of VE and VF phrases in the English and French indexes respectively. Finally, the indicator function I() has the value unity if its argument is true, and zero otherwise.

[Figure 1: Component Layout — the i'th alignment of the bilingual corpus supplies the English sentence Ei to the English tagger and English NP recognizer, which fill the English index, and the French sentence Fi to the French tagger and French NP recognizer, which fill the French index.]

Assuming these definitions, the algorithm is
All cor- respondences are thus equally weighted, reflecting a state of ignorance. Weights are summed over the corpus, so noun phrases that co-occur in sev- eral sentences will have larger sums. The weights C0(s, t) can be interpreted as the mean number of times that npF(t) corresponds to apE(s) given the corpus and the initial assumption of equiprobable correspondences. These weights can be used to form a new esti- mate of the probability that npF(t) corresponds to npE(s), by considering the mean number of times npF(t) corresponds to apE(s) as a fraction of the total mean number of correspondences for apE(s), as in Equation (2). The procedure is then iter- ated using Equations (3), and (2) to obtain suc- cessively refined, convergent estimates of the prob- 18 Co( ,t) = = cr( ,t) = r>O VE>s>I Vv>t>l L ¢(E~) ¢(F0 1 E E E I(tt(Ei' J) = s)l(tt(Fi' k) = t) ¢(F,) i=1 j=l k=l Cr-l(S,t) vF Eq=l Cr-l(s, q) L ¢(E0 ¢(F0 E E E I(#(Ei,j) = s)I(tt(Fi,k) = t)Pr_l(s,t) i=I j=l k=l (1) (2) (3) Figure 2: The Algorithm ability that ripE(t) corresponds to ripE(s). The probability of correspondences can be used as a method of ranking them (occurrence counts can be taken into account as an indication of the re- liability of a correspondence). Although Figure 2 defines the coefficients simply, the algorithm is not implemented literally from it. The algorithm em- ploys a compact representation of the correspon- dences for efficient operation. An arbitrarily large corpus can be accommodated by segmenting it ap- propriately. The algorithm described here is an instance of a general approach to statistical estimation, rep- resented by the EM algorithm [Dempster et al., 1977]. In contrast to reservations that have been expressed [Gale and Church, 1991a] about us- ing the EM algorithm to provide word correspon- dences, there have been no indications that pro- hibitive amounts of memory might be required, or that the approach lacks robustness. 
Unlike the other methods that have been mentioned, the approach has the capability to accommodate more context to improve performance.

RESULTS

A sample of the aligned corpus comprising 2,600 alignments was used for testing the algorithm (not all of the alignments contained sentences). 4,900 distinct English noun phrases and 5,100 distinct French noun phrases were extracted from the sample.

When forming correspondences involving long sentences with many clauses, it was observed that the position at which a noun phrase occurred in Ei was very roughly proportional to the position of the corresponding noun phrase in Fi. In such cases it was not necessary to form correspondences with all noun phrases in Fi for each noun phrase in Ei. Instead, the location of a phrase in Ei was mapped linearly to a position in Fi and correspondences were formed for noun phrases occurring in a window around that position. This resulted in a total of 34,000 correspondences. The mappings are stable within a few (2-4) iterations.

In discussing results, a selection of examples will be presented that demonstrates the strengths and weaknesses of the algorithm. To give an indication of noun phrase frequency counts in the sample, the highest ranking correspondences are shown in Table 1. The figures in columns (1) and (3) indicate the number of instances of the noun phrase to their right.

185  Mr. Speaker       187  M. Le Président
128  Government        141  gouvernement
 60  Prime Minister     65  Premier Ministre
 63  Hon. Member        66  député
 67  House              68  Chambre

Table 1: Common correspondences

To give an informal impression of overall performance, the hundred highest ranking correspondences were inspected and of these, ninety were completely correct. Less frequently occurring noun phrases are also of interest for purposes of evaluation; some of these are shown in Table 2.
32  Atlantic Canada Opportunities Agency   23  Agence de promotion économique du Canada atlantique
 5  DREE                                    4  MEER
 1  late spring                             1  fin du printemps
 1  whole issue of free trade               1  question du libre-échange

Table 2: Other correspondences

The table also illustrates an unembedded English noun phrase having multiple prepositional phrases in its French correspondent. Organizational acronyms (which may not be available in general-purpose dictionaries) are also extracted, as the taggers are robust. Even when a noun phrase only occurs once, a correct correspondence can be found if there are only single noun phrases in each sentence of the alignment. This is demonstrated in the last row of Table 2, which is the result of the following alignment:

Ei: "The whole issue of free trade has been mentioned."
Fi: "On a mentionné la question du libre-échange."

Table 3 shows some incorrect correspondences produced by the algorithm (in the table, "usine" means "factory"):

 1  […] training          […]  usine
 1  mix of on-the-job      6   usine

Table 3

The sentences that are responsible for these correspondences illustrate some of the problems associated with the correspondence model:

Ei: "They use what is known as the dual system in which there is a mix of on-the-job and off-the-job training."
Fi: "Ils ont recours à une formation mixte, partie en usine et partie hors usine."

The first problem is that the conjunctive modifiers in the English sentence cannot be accommodated by the noun phrase recognizer. The tagger also assigned "on-the-job" as a noun when adjectival use would be preferred. If verb correspondences were included, there is a mismatch between the three that exist in the English sentence and the single one in the French. If the English were to reflect the French for the correspondence model to be appropriate, the noun phrases would perhaps be "part in the factory" and "part out of the factory". Considered as a translation, this is lame.
The majority of errors that occur are not the result of incorrect tagging or noun phrase recognition, but are the result of the approximate nature of the correspondence model. The correspondences in Table 4 are likewise flawed (in the table, "souris" means "mouse" and "tigre de papier" means "paper tiger"):

 1  toothless tiger   1  souris
 1  toothless tiger   1  tigre de papier
 1  roaring rabbit    1  souris
 1  roaring rabbit    1  tigre de papier

Table 4

These correspondences are the result of the following sentences:

Ei: "It is a roaring rabbit, a toothless tiger."
Fi: "C'est un tigre de papier, un souris qui rugit."

In the case of the alliterative English phrase "roaring rabbit", the (presumably) rhetorical aspect is preserved as a rhyme in "souris qui rugit"; the result being that "rabbit" corresponds to "souris" (mouse). Here again, even if the best correspondence were made the result would be wrong because of the relatively sophisticated considerations involved in the translation.

EXTENSIONS

As regards future possibilities, the algorithm lends itself to a range of improvements and applications, which are outlined next.

Finding Word Correspondences: The algorithm finds corresponding noun phrases but provides no information about word-level correspondences within them. One possibility is simply to eliminate the tagger and noun phrase recognizer (treating all words as individual phrases of length unity and having a larger number of correspondences). Alternatively, the following strategy can be adopted, which involves fewer total correspondences. First, the algorithm is used to build noun phrase correspondences, then the phrase pairs that are produced are themselves treated as a bilingual noun phrase corpus. The algorithm is then employed again on this corpus, treating all words as individual phrases. This results in a set of single word correspondences for the internal words in noun phrases.
Reducing Ambiguity: The basic algorithm assumes that noun phrases can be uniquely identified in both languages, which is only true for simple noun phrases. The problem of prepositional phrase attachment is exemplified by the following correspondences:

16  Secretary of State   20  secrétaire d'Etat
16  Secretary of State   19  Affaires extérieures
16  External Affairs     19  Affaires extérieures
16  External Affairs     20  secrétaire d'Etat

Table 5

The correct English and French noun phrases are "Secretary of State for External Affairs" and "secrétaire d'Etat aux Affaires extérieures". If prepositional phrases involving "for" and "à" were also permitted, these phrases would be correctly identified; however many other adverbial prepositional phrases would also be incorrectly attached to noun phrases.

If all embedded prepositional phrases were permitted by the noun phrase recognizer, the algorithm could be used to reduce the degree of ambiguity between alternatives. Consider a sequence npe ppe of an unembedded English noun phrase npe followed by a prepositional phrase ppe, and likewise a corresponding French sequence npf ppf. Possible interpretations of this are:

1. The prepositional phrase attaches to the noun phrase in both languages.
2. The prepositional phrase attaches to the noun phrase in one language and does not in the other.
3. The prepositional phrase does not attach to the noun phrase in either language.

If the prepositional phrases attach to the noun phrases in both languages, they are likely to be repeated in most instances of the noun phrase; it is less likely that the same prepositional phrase will be used adverbially with each instance of the noun phrase. This provides a heuristic method for reducing ambiguity in noun phrases that occur several times. The only modifications required to the algorithm are that the additional possible noun phrases and correspondences between them must be included.
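The attachment heuristic can be sketched as a simple frequency test over the observed instances of a noun phrase: a prepositional phrase that genuinely attaches should recur across most instances, while adverbial ones vary from instance to instance. This is an illustrative sketch only; the thresholds and the observation format are assumptions, not the paper's.

```python
from collections import Counter

def likely_attachment(instances, min_count=3, min_ratio=0.5):
    """instances: list of (np, pp_or_None) observations for one noun phrase.
    Returns the PP judged to attach to the NP, or None if no PP recurs
    often enough to look like a genuine attachment."""
    pps = Counter(pp for _, pp in instances if pp is not None)
    if not pps:
        return None
    pp, count = pps.most_common(1)[0]
    # Require the candidate PP to recur, both absolutely and as a
    # fraction of all instances of this noun phrase.
    if count >= min_count and count / len(instances) >= min_ratio:
        return pp
    return None
```

A PP seen with four of five instances of a phrase passes the test; two different PPs each seen once do not, and the phrase is left unembedded.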
Given thresholds on the number of occurrences and the probability of the correspondence, the most likely correspondence can be predicted.

Including Context: In the algorithm, correspondences between source and target noun phrases are considered irrespectively of other correspondences in an alignment. This does not make the best use of the information available, and can be improved upon. For example, consider the following alignment:

Ei: "The Bill was introduced just before Christmas."
Fi: "Le projet de loi a été présenté juste avant le congé des Fêtes."

Here it is assumed that there are many instances of the correspondence "Bill" and "projet de loi", but only one instance of "Christmas" and "congé des Fêtes". This suggests that "Bill" corresponds to "projet de loi" with a high probability and that "Christmas" likewise corresponds strongly to "congé des Fêtes". However, the model will assert that "Christmas" corresponds to "projet de loi" and to "congé des Fêtes" with equal probability, no matter how likely the correspondence between "Bill" and "projet de loi".

The model can be refined to reflect this situation by considering the joint probability that a target npF(t) corresponds to a source npE(s) and all the other possible correspondences in the alignment are produced. This situation is very similar to that involved in training HMM text taggers, where joint probabilities are computed that a particular word corresponds to a particular part-of-speech, and the rest of the words in the sentence are also generated (e.g. [Cutting et al., 1992]).

CONCLUSION

The algorithm described in this paper provides a practical means for obtaining correspondences between noun phrases in a bilingual corpus. Linguistic structure is used in the form of noun phrase recognizers to select phrases for a stochastic model which serves as a means of minimizing errors due to the approximations inherent in the correspondence model.
The algorithm is robust, and extensible in several ways.

References

[Brown et al., 1991a] P. F. Brown, J. C. Lai, and R. L. Mercer. Aligning sentences in parallel corpora. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, pages 169-176, Berkeley, CA., June 1991.

[Brown et al., 1991b] P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. Word sense disambiguation using statistical methods. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, pages 264-270, Berkeley, CA., June 1991.

[Brown et al., 1992] P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, J. D. Lafferty, and R. L. Mercer. Analysis, statistical transfer, and synthesis in machine translation. In Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation, pages 83-100, Montreal, Canada., June 1992.

[Church and Gale, 1991] K. W. Church and W. A. Gale. Concordances for parallel text. In Proceedings of the Seventh Annual Conference of the UW Center for the New OED and Text Research, pages 40-62, September 1991.

[Cutting et al., 1992] D. Cutting, J. Kupiec, J. Pedersen, and P. Sibun. A practical part-of-speech tagger. In Proceedings of the Third Conference on Applied Natural Language Processing, Trento, Italy, April 1992. ACL.

[Dagan et al., 1991] I. Dagan, A. Itai, and U. Schwall. Two languages are more informative than one. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, pages 130-137, Berkeley, CA., June 1991.

[Dempster et al., 1977] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B39:1-38, 1977.

[Gale and Church, 1991a] W. A. Gale and K. W. Church. Identifying word correspondences in parallel texts.
In Proceedings of the Fourth DARPA Speech and Natural Language Workshop, pages 152-157, Pacific Grove, CA., February 1991. Morgan Kaufmann.

[Gale and Church, 1991b] W. A. Gale and K. W. Church. A program for aligning sentences in bilingual corpora. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, pages 177-184, Berkeley, CA., June 1991.

[Kay and Röscheisen, 1988] M. Kay and M. Röscheisen. Text-translation alignment. Technical Report P90-00143, Xerox Palo Alto Research Center, 3333 Coyote Hill Rd., Palo Alto, CA 94304, June 1988.

[Kupiec, 1992] J. M. Kupiec. Robust part-of-speech tagging using a hidden Markov model. Computer Speech and Language, 6:225-242, 1992.

[Smadja, 1992] F. Smadja. How to compile a bilingual collocational lexicon automatically. In C. Weir, editor, Proceedings of the AAAI-92 Workshop on Statistically-Based NLP Techniques, San Jose, CA, July 1992.
QUANTIFICATIONAL DOMAINS AND RECURSIVE CONTEXTS

Barbara Partee
University of Massachusetts
Department of Linguistics
Amherst, MA 01003, USA
Internet: [email protected]

Abstract

The implicit delimiting or narrowing of the domain of quantification, e.g., in the case of "unselective quantifiers" such as the adverbs of quantification always, usually, mostly, etc., is a heavily context-dependent phenomenon that has much in common with anaphora, presupposition projection, the dynamics of reference time, reference location, etc., and other of the context-dependent phenomena discussed in Partee (1979). While many non-linguistic factors clearly play a role in such phenomena, there are interesting issues at the intersection of discourse processing and sentence grammar, since in addition to context as constructed at the discourse level, there are subsentential "local contexts" which have limited lifespans and are constrained by aspects of sentence grammar, both syntactic and semantic.

So for example in the case of anaphora, while a pronoun can get its value from an entirely non-linguistic context, if the value of a pronoun is determined by a linguistic antecedent, there are grammatical constraints on the possible structural relations that may hold between antecedent and pronoun, as illustrated by the familiar "precede/command" conditions known since the early work of Ross and Langacker and illustrated in (1a-b) below with respect to the possibility of interpreting "some people" as the antecedent of "they".

(1a) Some people complain loudly in the middle of the night and they make so much noise upstairs that one can't sleep.

(1b) They make so much noise upstairs that one can't sleep and some people complain loudly in the middle of the night.
In examples (2a-b) we see a similar restriction on the possibility of restricting the domain of the quantifier usually by means of material accessible in the linguistic context; and the relevant notion of accessibility turns out to be the same for the wide range of phenomena mentioned above.

(2a) Henrik likes to travel. He goes to France in the summer and he usually travels by car. He goes to England for the spring holidays and he usually travels by ferry.

(2b) Henrik likes to travel. He usually travels by car and he goes to France in the summer. He usually travels by ferry and he goes to England for the spring holidays.

In the discourse (2b), unlike that in (2a), it is impossible to understand the domain of the quantifier usually to be limited to the trips to France and the trips to England on its two occurrences, so the discourse ends up sounding contradictory. This constraint on "backwards domain restriction" is analogous to constraints on backwards anaphora.

Similar constraints apply to the local satisfaction of presuppositions by virtue of material that has its source in the local linguistic context. And Heim has shown in her work on the presupposition projection problem that the relevant accessibility constraints are fundamentally semantic in nature, as can be seen from examples with propositional attitude verbs (which will be reviewed in the lecture) where examples with identical syntactic structure behave differently because of different presuppositional relationships among e.g., "belief worlds" and "hope worlds". Of course in many cases the semantic and syntactic structures are sufficiently parallel that the constraints can often be described either way.

The notions of topic and focus appear to be among the important linguistic notions that play a role in structuring these "recursive contexts"; recent work by Rooth and unpublished work by Von Fintel makes progress in relating focus structure to anaphoric structure more generally.
As Kempson has demonstrated, the same broad range of inferential processes that play a role in discourse anaphoric phenomena (e.g., in licensing the use of a definite article) also play a role in the corresponding phenomena when they show up in local subsentential contexts; so the fact that aspects of sentence grammar play a crucial role in defining accessibility relations for "antecedent" material in this whole family of phenomena does not mean that the phenomena themselves are to be described in sentence-grammar terms. One of the interesting issues, then, is the characterization of the nature of the interface between the grammatical and the extragrammatical mechanisms involved. Work by Sidner and Webber represents one early line of attack on related problems, and recent developments in dynamic semantics are another. This lecture will focus more on articulating the relationships among the different phenomena that appear to operate under common "accessibility" constraints than on choosing a particular formal approach to treating them.
TAILORING LEXICAL CHOICE TO THE USER'S VOCABULARY IN MULTIMEDIA EXPLANATION GENERATION

Kathleen McKeown
Jacques Robin
Michael Tanenblatt

Department of Computer Science
450 Computer Science Building
Columbia University
New York, N.Y. 10027
{kathy,robin,tanenbla}@cs.columbia.edu

ABSTRACT

In this paper, we discuss the different strategies used in COMET (COordinated Multimedia Explanation Testbed) for selecting words with which the user is familiar. When pictures cannot be used to disambiguate a word or phrase, COMET has four strategies for avoiding unknown words. We give examples for each of these strategies and show how they are implemented in COMET.

1. Introduction

A language generation system should select words that its user knows. While this would seem to involve simply selecting a known word instead of an unknown word (as is done, for example, in [1]), in many cases it requires entirely rephrasing the rest of the sentence. For example, in our domain of equipment maintenance and repair, if the user does not know the word "polarity," a sentence like "Check the polarity." will be rephrased as "Make sure the plus on the battery lines up with the plus on the battery compartment." Even when alternative words can be used instead of an unknown word (e.g., a descriptive expression can be used instead of an object name), the alternative phrase may interact with other parts of the sentence which then need to be reworded as well.

In this paper, we discuss the different strategies used in COMET for selecting words with which the user is familiar. Since COMET integrates text and pictures in a single explanation¹, unknown words are frequently disambiguated through accompanying pictures. For example, when the accompanying picture clearly shows the object and its location, COMET will use the most common object name even if the user is unfamiliar with the name².
When pictures cannot be used to disambiguate a word or phrase, COMET has four strategies for avoiding unknown words:

1. Selecting an alternative word or phrase (e.g., generating "some number" instead of "arbitrary number")

2. Rephrasing by providing conceptual definitions (e.g., generating "Make sure the plus on the battery lines up with the plus on the battery compartment." instead of "Check the polarity")

3. Rephrasing by generating descriptive referring expressions (e.g., generating "the cable that runs to the KY57" instead of "the COMSEC cable")

4. Using past discourse to construct a referring expression (e.g., generating "Test the cable you just removed." instead of "Test the COMSEC cable." if the user had previously been instructed to remove this cable.)

In the following sections, we first provide an overview of lexical choice in COMET, showing how and where it occurs in the overall system. Each of the strategies is then described in turn, prefaced by a brief discussion of disambiguation of unknown terms through pictures. Finally, we compare our work with previous work in the area.

¹See [2] for a system overview and [3, 4] for details on media coordination in COMET.
²This is similar to Appelt's [5] integration of language and physical actions for generating referring expressions.

[Figure 1: COMET System Architecture — the content planner produces a logical form, which is annotated and passed to the Lexical Chooser in the text generator and to the graphics generator, whose outputs are combined into a multimedia explanation.]

2. Lexical Choice and Architecture

COMET's architecture is shown in Figure 1. On receiving a request for an explanation via a menu interface, the content planner uses schemas [6] to determine which information should be included in the explanation from the underlying knowledge sources.
The explanation content, represented as a hierarchy of logical forms (LFs) [7], is passed to the media coordinator [3, 8], which adds annotations indicating which portions are to be produced by the text generator and which by the graphics generator [9].

The Lexical Chooser is part of the text generator [7]. Typically, it selects a word or phrase for each semantic concept in the input LF (i.e., the semantic constraints on word choice). In terms of coverage, the implementation can select words for 148 different semantic concepts using 253 mapping rules, thus yielding on average slightly less than two alternative word choices per concept (there are many concepts which are mapped to a single word, while others have more than two alternatives). The lexicon contains 159 open class words.

In this paper, we show how the user model and past discourse (pragmatic constraints) also influence word choice. But these are not the only constraints on word choice. Syntactic form of the sentence and lexical constraints are other demonstrated [10, 11] influences on lexical choice. For example, once the verb has been chosen, syntactic constraints on its arguments (e.g., whether the object is a clause, an adj, or np) will influence what words are chosen to realize the semantic concepts that fill these arguments. Conversely, if one of the verb roles can only be realized as a noun phrase, for example, and not as other syntactic categories, this restricts which verb is selected. Lexical constraints on word choice arise from the use of collocations [12]. For example, a verb like "stand" takes the preposition "on" for its location role, while the verb "turn" takes the preposition "onto." Lexical choice is thus influenced by a wide variety of constraints which interact in many ways.

Load the frequency in channel one. Step 3 of 4
Step 1: Set the FCTN knob to LD.
Figure 2: Accompanying Picture Clarifies Referent
Since syntactic and lexical constraints are only available within the text generator, lexical choice is delayed until this point. Thus COMET waits until a variety of semantic, pragmatic, syntactic and lexical constraints are accumulated before selecting words. This means that COMET can use syntactic and lexical constraints on word choice in conjunction with semantic and graphical constraints provided as input, plus the new pragmatic constraints we present. Previous work addressing pragmatic constraints on word usage folded lexical choice into the content planner (e.g., [13], [1]). This was possible since the work focused primarily on lexical side effects of content determination (e.g., what property to include in a reference as opposed to what linguistic form to use for a property). Such approaches do not allow a system to take syntactic and lexical constraints on word choice into account.

On receiving the hierarchy of logical forms, the Lexical Chooser determines the overall grammatical form of each sentence based on the semantic structure of the LFs (e.g., conditional sentences are generated for precondition-action structures) and selects the words and phrases realizing semantic concepts of the LF. It passes a specification of the sentence's grammatical form and open-class words to the general purpose surface sentence generator FUF [14, 15, 16]. The Lexical Chooser uses a rewriting system itself implemented on top of FUF. Its lexicon consists of a base of rules, where each rule rewrites a given set of semantic features into a corresponding set of lexical and syntactic features. Thus, each lexicon entry associates a semantic concept with words that can be used to realize it. Additional constraints from the user model, past discourse, and the underlying knowledge base determine which of the alternative words or phrases should be selected.³
The user model indicates the reading level of the current user,4 any individual words that COMET knows the user does not understand, and any wording preferences (e.g., the user knows abbreviations, the user is familiar with military terminology).3 We make no claims about which of these forms of user models is easier to acquire, but simply show how to use them when available.

If none of the alternative wordings for a given semantic concept of the LF are known to the user and the accompanying illustration cannot disambiguate these words, COMET reinvokes the content planner to replan portions of the sentence content or to include additional semantic information. Thus, COMET's architecture interleaves lexical choice and content planning in order to account for a wide variety of interacting constraints on word choice.

3When these constraints come from knowledge sources external to FUF, the Lexical Chooser uses FUF extensions to access such knowledge through the use of coroutines [17].
4We currently use two levels, for a poor and a good reader. At the beginning of the session, the reading level is either preset or COMET can ask the user.

Figure 3: Use of Cross References: Remove the holding battery, shown in the cutaway view (text: "Install the new holding battery. Step 2 of 6. Remove the old holding battery, shown in the cutaway view.")

3. Multimedia Disambiguation

An accompanying picture often makes clear what the referent of a referring expression is. If the user is unfamiliar with a term, the accompanying picture might define it. For example, Figure 2 shows one step of an explanation generated by COMET for loading frequency into the radio. The text refers to a "FCTN knob" and the accompanying picture clearly singles out the knob on the front panel of the radio [4]. COMET can also generate an explicit reference to the illustration itself (called a cross reference).
For example, the cross reference shown in Figure 3 is generated if the user does not understand the term "holding battery". In this case, the Lexical Chooser, on determining that "holding battery" is an unfamiliar term, reinvokes the content planner, which finds that no accompanying illustration is currently planned and invokes graphics to generate an accompanying illustration that depicts the holding battery and its location. For full details on cross referencing in COMET see [18].

4. Selecting a Familiar Word/phrase

Whenever possible, COMET simply selects a familiar word over an unknown word from the list of alternatives in the lexicon. Figure 4 shows some paired sentences that COMET generates which illustrate alternative wordings. The first italicized phrase is generated if the user's vocabulary level is above a certain reading level or if a word is not explicitly listed in the user model as unknown. Since the lexicon maintains a simple association between the semantic concept and alternative phrasings, COMET selects the first alternative which the user model indicates is familiar to the user. For example, Figure 5 shows that for any concept under the concept c-disconnect in the knowledge base taxonomy, COMET will use the word "disconnect" if the user's vocabulary level is high and the word "remove" otherwise. COMET also checks whether the user knows abbreviations and, if so, will use a referring expression such as "FCTN knob" as shown in Figure 2. If not, COMET uses the full name ("function knob"). If COMET has no information about the user, it generates the abbreviation and relies on the accompanying illustration to clarify the referent.

1. Screw the new manpack antenna onto the RT and tighten until the manpack antenna is snug/tight.
2. Disconnect/Remove the COMSEC cable from the KY57 audio connector.
3. This will cause the display to show an arbitrary/some number.
Figure 4: COMET-Generated Word Substitutions

(; semantic key
 ((concept #(under c-disconnect)))
 ; realization
 ((process ((cat verb-group)  ; will be a verb
    (alt (
      ; if level high select "disconnect"
      ((CONTROL (OK-Lex-UM 'c-disconnect high))
       (lex "disconnect"))
      ; else select "remove"
      ((lex "remove"))))))))

Figure 5: Lexicon Entry for Disconnect Concept

5. Rephrasing through Replanning

Selecting an alternative wording for a semantic concept is not always possible, since none of the alternatives may be known by the user. Instead, COMET can describe concepts at a more detailed semantic level of abstraction by retrieving additional definitional information from the knowledge base, and it can create referring descriptions when object names are not known, by retrieving object attributes.

5.1. Retrieving alternative concept definitions

Sometimes the original text uses a word or phrase that abstracts the details of a concept to allow generation of a very concise expression. If unfamiliar with the word or phrase, the user will be unable to infer the specifics needed to perform the task. Alternative wordings require choosing a less abstract level of semantic decomposition at which to describe the concept. In these cases, COMET's Lexical Chooser reinvokes the content planner to retrieve a finer grained definition of the concept from the knowledge base. For example, this strategy is used for rephrasing the request "Check the polarity," which COMET issues when providing instructions for installing a new holding battery. More detailed semantics of checking the polarity are stored as different tokens of the concept c-polarity in the knowledge base.5 For example, in Figure 6 polarity is represented as the equivalence between the two plusses on the two batteries.6 Now, if the plan calls for checking polarity, it can be represented in terms of a checking action on the equivalence of these two plusses (i.e., that they line up).
If the user is unfamiliar with the word "polarity," an alternate decomposition will be retrieved and will replace the phenomenon role filler in the original LF (Figure 7). Figure 8 shows the alternative LF with a new phenomenon role (the remainder of the LF is unchanged). The resulting rephrased sentence is "Make sure that the plus on the battery lines up with the plus on the battery compartment." "Lines up" is selected in the lexicon for the equivalence relation based on the semantics of its roles (i.e., that they are both plusses on the batteries). Here semantic selectional restrictions on the roles control lexical choice of the verb.

Since the object of the new sentence is an embedded sentence, COMET can use either the verb "check" or the collocation "make sure" as the verb realizing the mental process concept c-check. Note that, while these two verbs are listed as alternatives in the lexicon for c-check, "make sure" cannot be used in the original sentence due to a syntactic constraint: its object cannot be an NP, as one cannot say "Make sure the polarity." This is an example of interaction between syntactic and pragmatic constraints. Since syntax does not constrain the choice of verb in the modified sentence, COMET arbitrarily selects "make sure". The lexicon entry containing these two verbs is shown below in Figure 9. Note that the entry is indexed by the semantic concept c-check. There are two alternative verbs, only one of which is compatible with a clause as phenomenon role (ultimately the object). When the phenomenon is an NP, both verbs are valid and one is randomly selected.

; Instance definitions for polarity
(tellm (polarity polarity-1)
       (polarity polarity-2))
; More detail for one instance: polarity is
; represented as two plusses which should
; be equivalent.
; The roles of the equative relation are
; identified and identifier
(:about polarity-2
  (identified plus-1)
  (identifier plus-2))
; one is located on the battery
(:about plus-1 (on-loc battery-1))
; one is located on the battery compartment
(:about plus-2 (on-loc bc-1))

Figure 6: Knowledge base tokens for polarity

5The more detailed definition is stored with c-polarity and not with c-check since in our domain checking is carried out on many different objects, while few actions are carried out on polarity.
6The equative relation has two roles, identified and identifier. Since they are included here, the equative relation (i.e., that the two plusses "line up") is inferred to hold.

((Concept C-Check)          ; "check"
 (Process-Type Mental)
 (Roles ((Phenomenon ((Concept C-Polarity))))))  ; "the polarity"

Figure 7: Logical Form for Original Sentence

((Concept C-Check)          ; "make sure that"
 (Process-Type Mental)
 (Roles
  ((Phenomenon
    ((Concept C-Polarity)
     (Process-Type Equative)  ; "lines up with"
     (Roles
      ((Identified ((Concept C-Plus)  ; "the plus"
         (Roles ((On-Loc  ; "on the battery"
           ((Concept C-Battery)))))))
       (Identifier ((Concept C-Plus)  ; "the plus"
         (Roles ((On-Loc  ; "on the battery compartment"
           ((Concept C-BC)))))))))))))))

Figure 8: Logical Form of Rephrased Sentence

(; semantic key
 ((concept #(under c-check))
  ; realization
  (cat verb-group)  ; will be a verb
  (alt (
    ; if phenomenon realized by NP
    ((roles ((phenomenon ((cat #((under np)))))))
     ; then always choose "to check"
     (lex "check"))
    ; if phenomenon realized by clause
    ((roles ((phenomenon ((cat #((under clause)))))))
     ; then randomly pick "to check" or "to make sure"
     (lex ((Ralt ("check" "make sure")))))))))

Figure 9: Lexicon Entry for Check Concept

5.2. Generating New Referential Descriptions

If the user does not know an object name, the content planner is reinvoked to generate object attributes to build a referential description.
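The core of building such a referential description is finding an attribute relation that picks the object out uniquely among the objects available. The sketch below is a hypothetical illustration of that idea only (the scene representation, object names, and relation tuples are all invented; this is not COMET's planner): it returns the first relation of the target object that no other object shares.

```python
# Hypothetical sketch of selecting a uniquely identifying relation for a
# referring description. A "scene" maps object names to the set of
# relations they participate in; the first relation unique to the target
# can serve as its distinguishing description.

def unique_relation(target, scene):
    """Return a relation held only by `target`, or None if none exists."""
    others = [rels for name, rels in scene.items() if name != target]
    for relation in scene[target]:
        if not any(relation in rels for rels in others):
            return relation
    return None  # nothing distinguishes the object by these attributes

# Invented example: only the COMSEC cable connects the RT to the KY57,
# so that relation uniquely identifies it; a shared relation does not.
scene = {
    "comsec-cable": {("connects", "RT", "KY57"), ("from-loc", "KY57")},
    "handset-cable": {("from-loc", "KY57")},
}
```

This mirrors the paper's example: the connects relation between the RT and the KY57 is chosen precisely because the COMSEC cable is the only object that bears it.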
Although our selection algorithm is not as sophisticated as others [19, 5, 13], because we do not use a detailed model of user beliefs, we address a new issue: the interaction between the new description and other parts of the original sentence, which may require rephrasing. Two types of object attributes are used in a referring expression in COMET: object subpart relations and spatial relations to other objects in the accompanying illustration. COMET selects the relations that uniquely identify the object.

For example, suppose COMET's Lexical Chooser is provided with the LF for sentence 1, Figure 10, but the user does not know the term "COMSEC." Instead of generating sentence 1, COMET generates sentence 2. To do this, COMET first selects a unique relation between the cable and a known object. In this case, it selects the connects spatial relation between the Radio Transmitter (RT) and the KY57, since this cable is the only one that connects the radio and the KY57. Selecting this relation for the description and substituting it for "the COMSEC cable" would result in sentence 3, Fig. 10. However, COMET notes the redundant references to the audio connector and removes one from the cable modifier by selecting the verb "runs to" instead, which only requires one role in the generated sentence. This would result in sentence 4, Fig. 10. In this sentence, the attachment of the prepositional phrase "from the KY57 audio connector" is ambiguous. COMET detects this ambiguity when it removes the first from-location; since the two from-locations would have occurred side by side and both previous verbs of the sentence take it as a modifier, the generator must clarify that it is the from-location of the earlier verb "disconnect" and not "run to." To remove the ambiguity, COMET surrounds the modifier of the cable with commas in sentence 2, Fig. 10.7

Descriptions Generated by COMET:
1. "Disconnect the COMSEC cable from the KY57 audio connector."
2.
"Disconnect the cable, which runs to the RT, from the KY57 audio connector."

Descriptions Avoided by COMET:
3. "Disconnect the cable that connects the RT to the KY57 audio connector from the KY57 audio connector."
4. "Disconnect the cable that runs to the RT from the KY57 audio connector."

Figure 10: Generated Object Description

7Another possible way to avoid the ambiguity would be to generate two sentences, such as "Find the cable that runs from the RT to the KY57 audio connector. Disconnect the cable from the audio connector."

6. Using Past Discourse

For subsequent reference, the presence of a discursive context allows for a wider variety of strategies to get around gaps in the user's vocabulary. COMET takes advantage of this fact by maintaining a discourse history. The content planner records all descriptions into the discourse history, creating one record for the description as a whole and a separate record for each of its roles. The entry for a description has four fields:

• The name of the concept.
• The description used in the reference.
• The action in which the referring description plays a role.
• The list of roles that the description fills in that action (e.g., "COMSEC cable" is the medium of the action "disconnect").

For each subsequent reference, the concept name is used as the access key and the three other fields are updated; they thus always contain the information on the last reference. By looking up information in the discourse history, the content planner is able to construct object descriptions in terms of the last action they were involved in.

Sentences generated if the user knows "COMSEC":
1. "Disconnect the COMSEC cable from the KY57 audio-connector."
2. "Plug in the handset to the KY57 audio-connector."
3. "Test the COMSEC cable."
Sentences generated if not:
4. "Disconnect the cable, which runs to the RT, from the KY57 audio connector."
5. "Plug in the handset to the KY57 audio connector."
6.
"Test the cable that you just disconnected."

Figure 11: Use of Previous Discourse

As an example, consider the explanations COMET generates when instructing the user how to diagnose loss of side tone. When the user has no vocabulary gaps, COMET generates sentences 1-3, Figure 11. When the user is unfamiliar with the term "COMSEC," sentences 4-6 are generated instead. Here COMET uses past discourse to produce a descriptive reference for the second reference to the COMSEC cable. As in the previous examples, the gap is detected when the Lexical Chooser checks the user model. Since there is no alternative phrase for "COMSEC" in the lexicon, COMET calls the content planner to replan the reference. Since it is not the first reference to the cable, COMET uses the discourse history to plan a modifying description. A reference to the cable is discovered in the history (its entry is shown in Figure 12) and the action in this entry is selected as the modifier to build a referring expression.8 The role of the cable was medium, and thus COMET can generate the modifier as a relative clause. The LF for this referring expression is shown in Figure 13. This LF is sent back to the Lexical Chooser, which selects the words for the concepts within it, and continues with generation where it left off. On third and fourth reference to the same concept, COMET uses its anaphoric reference facility to generate either a bare head (e.g., "cable") or a pronoun (e.g., "it").

(; The concept name:
 ((Concept C-Comsec-Cable))
 ; The initial generated description:
 ; included where connected to and from.
 ((Concept C-Cable)
  (Roles ((To-Loc ((Concept C-RT)))
          (From-Loc ((Concept C-KY57))))))
 ; The role it plays in the action:
 ((Roles Medium))
 ; The action itself: "disconnect the cable".
 ((Process-Type Material)
  (Concept C-Disconnect)
  ; Rest of action description in discourse
  ; history but not shown here
 ))

Figure 12: Entry for COMSEC Cable in the Discourse History
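The discourse-history mechanism just described amounts to a keyed store that is overwritten on each reference and consulted when a replanned description is needed. The sketch below is a loose illustration of that bookkeeping, not COMET's implementation: the field names, the English templating, and the naive "+ed" morphology are all invented for the example.

```python
# Hypothetical sketch of the discourse history: one entry per concept,
# holding the last description, action, and roles; each new reference
# overwrites the entry, so it always reflects the most recent mention.

discourse_history = {}

def record_reference(concept, description, action, roles):
    discourse_history[concept] = {
        "description": description,
        "action": action,
        "roles": roles,
    }

def plan_modifier(concept):
    """Describe the object in terms of the last action it was involved in."""
    entry = discourse_history.get(concept)
    if entry is None:
        return None  # first reference: must describe by attributes instead
    # Naive morphology, purely for the example ("disconnect" -> "disconnected")
    return f"that you just {entry['action']}ed"

record_reference("c-comsec-cable",
                 description="the cable, which runs to the RT",
                 action="disconnect", roles=["medium"])
```

A lookup for the cable then yields a relative-clause modifier like "that you just disconnected", while a concept with no history entry signals that an attribute-based first description is needed instead.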
7. Conclusions and Related Work

COMET performs several lexical choice tasks. It can choose between alternative words or phrases for any part of speech. When generating a request to perform an action, it chooses a level of detail in the concept description appropriate to the user. When generating both initial and subsequent referring expressions, it selects a set of distinguishing properties of the referent and chooses words to express the selected properties. Finally, for subsequent references, COMET can use previous discourse to avoid unknown words.

COMET is thus using constraints from the user model, the accompanying illustration, and past discourse in addition to traditional constraints from semantics, syntax, and other word choices. Although other generation systems take into account some of these constraints, COMET is the first attempt to integrate such a variety of constraints and lexical choice strategies in a single system. In addition, because COMET is a multimedia system, it can use the accompanying illustrations advantageously for disambiguation.

8There is a limit to how far back COMET looks in the discourse to construct a new referring expression: the discourse history is cleared after each menu request for a new explanation.

((Concept C-Cable)
 (Roles
  ((Latest-Participation
    ((Process-Type Material)
     (Concept C-Disconnect)
     (Roles ((Agent ((Concept C-User)))
             (Medium ((Concept {^5 Concept}))))))))))

Figure 13: "the cable you just disconnected"

WIP [20] can also generate cross references but does not rely on a user model for either cross reference generation or lexical choice. EPICURE [19], KAMP [5], and FN [13] tailor references based on the situation, but they do not constrain this choice based on the user's lexical knowledge. EPICURE uses the user's domain knowledge, KAMP mutual beliefs about the domain, and FN the user's domain knowledge in conjunction with rules on implicatures.
They focus on the selection of appropriate properties to distinguish an object in generating references, but do not choose between alternative wordings for the selected properties. None of these systems reword action descriptions or use past discourse to avoid terms the user does not know. While Bateman and Paris' system [21] uses different dialects depending on which class of users it is addressing, through register mappings, in COMET different terms can be mixed and matched depending on the individual user model.

Acknowledgements

Research on language generation in COMET has been supported in part by Defense Advanced Research Projects Agency Contract N00039-84-C-0165, National Science Foundation Grants IRT-84-51438 and GER-90-2406, New York State Center for Advanced Technology Contracts NYSSTF-CAT(90)-053, (91)-053, and (92)-053, and Office of Naval Research Contracts N00014-82-K-0256 and N00014-89-J-1782. COMET's development is an ongoing group effort and has benefited from the contributions of Michael Elhadad (FUF), Doree Seligmann (graphics generator), Andrea Danyluk (diagnostic rule base), Yumiko Fukumoto (media coordinator), Jong Lim (static knowledge base and content planner), Christine Lombardi (media coordinator), Jacques Robin (lexical chooser), James Shaw (anaphoric reference facility), Michael Tanenblatt (knowledge base, content planner), Michelle Baker, Cliff Beshers, David Fox, Laura Gabbe, Frank Smadja, and Tony Weida.

REFERENCES

1. Swartout, W.R., "XPLAIN: a system for creating and explaining expert consulting systems", Artificial Intelligence, Vol. 21, No. 3, 1983, pp. 285-325.
2. Feiner, S. and K.R. McKeown, "Generating Coordinated Multimedia Explanations", Proceedings of the IEEE Conference on AI Applications, Santa Barbara, CA., March 1990.
3. Feiner, S. and K.R.
McKeown, "Coordinating Text and Graphics in Explanation Generation", Proceedings of the National Conference on Artificial Intelligence, Boston, Mass., August 1990.
4. Feiner, S. and McKeown, K.R., "Automating the Generation of Coordinated Multimedia Explanations", IEEE Computer, Vol. 24, No. 10, October 1991, pp. 33-41.
5. Appelt, D.E., Planning English Sentences, Cambridge University Press, Cambridge, England, 1985.
6. McKeown, K.R., Text Generation: Using Discourse Strategies and Focus Constraints to Generate Natural Language Text, Cambridge University Press, Cambridge, England, 1985.
7. McKeown, K.R., Elhadad, M., Fukumoto, Y., Lim, J., Lombardi, C., Robin, J., and Smadja, F., "Language Generation in COMET", in Current Research in Language Generation, Mellish, C., Dale, R., and Zock, M., eds., Academic Press, London, 1990.
8. Elhadad, M., Seligmann, D., Feiner, S., and McKeown, K., "A Common Intention Description Language for Interactive Multi-media Systems", A New Generation of Intelligent Interfaces: Proceedings of IJCAI89 Workshop on Intelligent Interfaces, Detroit, MI, August 22 1989, pp. 46-52.
9. Seligmann, D.D., and Feiner, S., "Specifying Composite Illustrations with Communicative Goals", Proc. ACM Symposium on User Interface Software and Technology, Williamsburg, VA, November 13-15 1989, pp. 1-9.
10. McDonald, D.D., "On the place of words in the generation process", in Natural Language Generation in Artificial Intelligence and Computational Linguistics, Paris, C., Swartout, W. and Mann, W.C., eds., Kluwer Academic Publishers, 1991.
11. Danlos, L., The Linguistic Basis of Text Generation, Cambridge University Press, Cambridge, England, 1987.
12. Smadja, F. and K.R. McKeown, "Automatically Extracting and Representing Collocations for Language Generation", Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics, Pittsburgh, Pa., June 1990, pp. 252-9.
13. Reiter, E.B., Generating appropriate natural language object descriptions, PhD dissertation, Center for Research in Computing Technology, Harvard University, 1990.
14. Elhadad, M., "The FUF Functional Unifier: User's Manual", Tech. report, Columbia University, 1988.
15. Elhadad, M., "Types in Functional Unification Grammars", Proceedings of the 28th Meeting of the Association for Computational Linguistics, Pittsburgh, Pa, June 1990.
16. Elhadad, M., Using argumentation to control lexical choice: a unification-based implementation, PhD dissertation, Computer Science Department, Columbia University, 1993.
17. Elhadad, M. and Robin, J., "Controlling Content Realization with Functional Unification Grammars", in Aspects of Automated Natural Language Generation, Dale, R., Hovy, E., Roesner, D., and Stock, O., eds., Springer Verlag, 1992, pp. 89-104.
18. McKeown, K.R., Feiner, S.K., Robin, J., Seligmann, D., and Tanenblatt, M., "Generating Cross References for Multimedia Explanations", Proceedings of AAAI-92, AAAI, July 1992.
19. Dale, R., Generating Referring Expressions, ACL-MIT Press Series in Natural Language Processing, Cambridge, Ma., 1992.
20. Wahlster, W., Andre, E., Hecking, M., and T. Rist, "WIP: Knowledge-based Presentation of Information", Tech. report WIP-1, German Research Center for Artificial Intelligence, May 1989.
21. Bateman, J.A. and Paris, C.L., "Phrasing a text in terms the user can understand", Proceedings of the 11th International Joint Conference on Artificial Intelligence, Detroit, MI, 1989, pp. 1511-1517.
AUTOMATIC ACQUISITION OF A LARGE SUBCATEGORIZATION DICTIONARY FROM CORPORA

Christopher D. Manning
Xerox PARC and Stanford University
Stanford University Dept. of Linguistics, Bldg. 100
Stanford, CA 94305-2150, USA
Internet: [email protected]

Abstract

This paper presents a new method for producing a dictionary of subcategorization frames from unlabelled text corpora. It is shown that statistical filtering of the results of a finite state parser running on the output of a stochastic tagger produces high quality results, despite the error rates of the tagger and the parser. Further, it is argued that this method can be used to learn all subcategorization frames, whereas previous methods are not extensible to a general solution to the problem.

INTRODUCTION

Rule-based parsers use subcategorization information to constrain the number of analyses that are generated. For example, from subcategorization alone, we can deduce that the PP in (1) must be an argument of the verb, not a noun phrase modifier:

(1) John put [NP the cactus] [PP on the table].

Knowledge of subcategorization also aids text generation programs and people learning a foreign language. A subcategorization frame is a statement of what types of syntactic arguments a verb (or adjective) takes, such as objects, infinitives, that-clauses, participial clauses, and subcategorized prepositional phrases. In general, verbs and adjectives each appear in only a small subset of all possible argument subcategorization frames. A major bottleneck in the production of high-coverage parsers is assembling lexical information,

*Thanks to Julian Kupiec for providing the tagger on which this work depends and for helpful discussions and comments along the way.
I am also indebted for comments on an earlier draft to Marti Hearst (whose comments were the most useful!), Hinrich Schütze, Penni Sibun, Mary Dalrymple, and others at Xerox PARC, where this research was completed during a summer internship; Stanley Peters; and the two anonymous ACL reviewers.

such as subcategorization information. In early and much continuing work in computational linguistics, this information has been coded laboriously by hand. More recently, on-line versions of dictionaries that provide subcategorization information have become available to researchers (Hornby 1989, Procter 1978, Sinclair 1987). But this is the same method of obtaining subcategorizations: painstaking work by hand. We have simply passed the need for tools that acquire lexical information from the computational linguist to the lexicographer.

Thus there is a need for a program that can acquire a subcategorization dictionary from on-line corpora of unrestricted text:

1. Dictionaries with subcategorization information are unavailable for most languages (only a few recent dictionaries, generally targeted at non-native speakers, list subcategorization frames).
2. No dictionary lists verbs from specialized subfields (as in I telneted to Princeton), but these could be obtained automatically from texts such as computer manuals.
3. Hand-coded lists are expensive to make, and invariably incomplete.
4. A subcategorization dictionary obtained automatically from corpora can be updated quickly and easily as different usages develop. Dictionaries produced by hand always substantially lag real language use.

The last two points do not argue against the use of existing dictionaries, but show that the incomplete information that they provide needs to be supplemented with further knowledge that is best collected automatically.1 The desire to combine hand-coded and automatically learned knowledge

1A point made by Church and Hanks (1989).
Arbitrary gaps in listing can be smoothed with a program such as the work presented here. For example, among the 27 verbs that most commonly cooccurred with from, Church and Hanks found 7 for which this

suggests that we should aim for a high precision learner (even at some cost in coverage), and that is the approach adopted here.

DEFINITIONS AND DIFFICULTIES

Both in traditional grammar and modern syntactic theory, a distinction is made between arguments and adjuncts. In sentence (2), John is an argument and in the bathroom is an adjunct:

(2) Mary berated John in the bathroom.

Arguments fill semantic slots licensed by a particular verb, while adjuncts provide information about sentential slots (such as time or place) that can be filled for any verb (of the appropriate aspectual type).

While much work has been done on the argument/adjunct distinction (see the survey of distinctions in Pollard and Sag (1987, pp. 134-139)), and much other work presupposes this distinction, in practice it gets murky (like many things in linguistics). I will adhere to a conventional notion of the distinction, but a tension arises in the work presented here when judgments of argument/adjunct status reflect something other than frequency of cooccurrence - since it is actually cooccurrence data that a simple learning program like mine uses. I will return to this issue later.

Different classifications of subcategorization frames can be found in each of the dictionaries mentioned above, and in other places in the linguistics literature.
I will assume without discussion a fairly standard categorization of subcategorization frames into 19 classes (some parameterized for a preposition), a selection of which are shown below:

IV           Intransitive verbs
TV           Transitive verbs
DTV          Ditransitive verbs
THAT         Takes a finite that complement
NPTHAT       Direct object and that complement
INF          Infinitive clause complement
NPINF        Direct object and infinitive clause
ING          Takes a participial VP complement
P(prep)      Prepositional phrase headed by prep
NP-P(prep)   Direct object and PP headed by prep

subcategorization frame was not listed in the Cobuild dictionary (Sinclair 1987). The learner presented here finds a subcategorization involving from for all but one of these 7 verbs (the exception being ferry, which was fairly rare in the training corpus).

PREVIOUS WORK

While work has been done on various sorts of collocation information that can be obtained from text corpora, the only research that I am aware of that has dealt directly with the problem of the automatic acquisition of subcategorization frames is a series of papers by Brent (Brent and Berwick 1991, Brent 1991, Brent 1992). Brent and Berwick (1991) took the approach of trying to generate very high precision data.2 The input was hand-tagged text from the Penn Treebank, and they used a very simple finite state parser which ignored nearly all the input, but tried to learn from the sentences which seemed least likely to contain false triggers - mainly sentences with pronouns and proper names.3 This was a consistent strategy which produced promising initial results. However, using hand-tagged text is clearly not a solution to the knowledge acquisition problem (as hand-tagging text is more laborious than collecting subcategorization frames), and so, in more recent papers, Brent has attempted learning subcategorizations from untagged text.
Brent (1991) used a procedure for identifying verbs that was still very accurate, but which resulted in extremely low yields (it garnered as little as 3% of the information gained by his subcategorization learner running on tagged text, which itself ignored a huge percentage of the information potentially available). More recently, Brent (1992) substituted a very simple heuristic method to detect verbs (anything that occurs both with and without the suffix -ing in the text is taken as a potential verb, and every potential verb token is taken as an actual verb unless it is preceded by a determiner or a preposition other than to).4 This is a rather simplistic and inadequate approach to verb detection, with a very high error rate. In this work I will use a stochastic part-of-speech tagger to detect verbs (and the part-of-speech of other words), and will suggest that this gives much better results.5 Leaving this aside, moving to either this last approach of Brent's or using a stochastic tagger undermines the consistency of the initial approach. Since the system now makes integral use of a high-error-rate component,6 it makes little sense

2That is, data with very few errors.
3A false trigger is a clause in the corpus that one wrongly takes as evidence that a verb can appear with a certain subcategorization frame.
4Actually, learning occurs only from verbs in the base or -ing forms; others are ignored (Brent 1992, p. 8).
5See Brent (1992, p. 9) for arguments against using a stochastic tagger; they do not seem very persuasive (in brief, there is a chance of spurious correlations, and it is difficult to evaluate composite systems).
6On the order of a 5% error rate on each token for
Rather, it would seem more desirable to extract as much information as possible out of the text (even if it is noisy), and then to use appropriate statistical techniques to handle the noise. There is a more fundamental reason to think that this is the right approach. Brent and Ber- wick's original program learned just five subcat- egorization frames (TV, THAT, NPTHAT, INF and NPINF). While at the time they suggested that "we foresee no impediment to detecting many more," this has apparently not proved to be the case (in Brent (1992) only six are learned: the above plus DTV). It seems that the reason for this is that their approach has depended upon finding cues that are very accurate predictors for a certain subcategori- zation (that is, there are very few false triggers), such as pronouns for NP objects and to plus a finite verb for infinitives. However, for many sub- categorizations there just are no highly accurate cues/ For example, some verbs subcategorize for the preposition in, such as the ones shown in (3): (3) a. Two women are assisting the police in their investigation. b. We chipped in to buy her a new TV. c. His letter was couched in conciliatory terms. But the majority of occurrences of in after a verb are NP modifiers or non-subcategorized locative phrases, such as those in (4). s (4) a. He gauged support for a change in the party leadership. b. He built a ranch in a new suburb. c. We were traveling along in a noisy heli- copter. There just is no high accuracy cue for verbs that subcategorize for in. Rather one must collect cooccurrence statistics, and use significance test- ing, a mutual information measure or some other form of statistic to try and judge whether a partic- ular verb subcategorizes for in or just sometimes the stochastic tagger (Kupiec 1992), and a presumably higher error rate on Brent's technique for detecting verbs, rThis inextensibility is also discussed by Hearst (1992). 
SA sample of 100 uses of /n from the New York Times suggests that about 70% of uses are in post- verbal contexts, but, of these, only about 15% are sub- categorized complements (the rest being fairly evenly split between NP modifiers and time or place adjunct PPs). appears with a locative phrase. 9 Thus, the strat- egy I will use is to collect as much (fairly accurate) information as possible from the text corpus, and then use statistical filtering to weed out false cues. METHOD One month (approximately 4 million words) of the New York Times newswire was tagged using a ver- sion of Julian Kupiec's stochastic part-of-speech tagger (Kupiec 1992). l° Subcategorization learn- ing was then performed by a program that pro- cessed the output of the tagger. The program had two parts: a finite state parser ran through the text, parsing auxiliary sequences and noting com- plements after verbs and collecting histogram-type statistics for the appearance of verbs in various contexts. A second process of statistical filtering then took the raw histograms and decided the best guess for what subcategorization frames each ob- served verb actually had. The finite state parser The finite state parser essentially works as follows: it scans through text until it hits a verb or auxil- iary, it parses any auxiliaries, noting whether the verb is active or passive, and then it parses com- plements following the verb until something recog- nized as a terminator of subcategorized arguments is reached) 1 Whatever has been found is entered in the histogram. The parser includes a simple NP recognizer (parsing determiners, possessives, ad- jectives, numbers and compound nouns) and vari- ous other rules to recognize certain cases that ap- peared frequently (such as direct quotations in ei- ther a normal or inverted, quotation first, order). The parser does not learn from participles since an NP after them may be the subject rather than the object (e.g., the yawning man). 
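The scanning loop just described can be sketched as follows. This is a toy illustration only, not the paper's parser (which has many more states and rules): it walks over (word, Penn-style POS tag) pairs and, after each verb, records the complement elements seen until a terminator of subcategorized arguments; the terminator set here is invented for the example.

```python
TERMINATORS = {".", "?", "!", ";", "because", "although"}  # assumed set

def collect_cues(tagged_tokens):
    cues = {}  # verb -> list of observed cues (candidate frames)
    i, n = 0, len(tagged_tokens)
    while i < n:
        word, tag = tagged_tokens[i]
        i += 1
        if not tag.startswith("VB"):
            continue
        frame = []
        while i < n and tagged_tokens[i][0] not in TERMINATORS:
            w, t = tagged_tokens[i]
            if t == "IN":
                frame.append("P(%s)" % w)  # a following PP
            elif t.startswith("NN"):
                # crude NP recognizer: nouns inside a PP, or runs of
                # nouns, collapse into the current element
                if not frame or not (frame[-1] == "NP"
                                     or frame[-1].startswith("P(")):
                    frame.append("NP")
            i += 1
        cues.setdefault(word, []).append(frame)
    return cues
```

For instance, on the tagged tokens for "John relied on the report.", the sketch records the single cue [P(on)] for relied — the kind of raw observation the filtering stage then assesses.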
The parser has 14 states and around 100 transitions. It outputs a list of elements occurring after the verb, and this list together with the record of whether the verb is passive yields the overall context in which the verb appears. The parser skips to the start of the next sentence in a few cases where things get complicated (such as on encountering a conjunction, the scope of which is ambiguous, or a relative clause, since there will be a gap somewhere within it which would give a wrong observation). However, there are many other things that the parser does wrong or does not notice (such as reduced relatives). One could continue to refine the parser (up to the limits of what can be recognized by a finite state device), but the strategy has been to stick with something simple that works a reasonable percentage of the time and then to filter its results to determine what subcategorizations verbs actually have.

Note that the parser does not distinguish between arguments and adjuncts.12 Thus the frame it reports will generally contain too many things. Indicative results of the parser can be observed in Fig. 1, where the first line under each line of text shows the frames that the parser found. Because of mistakes, skipping, and recording adjuncts, the finite state parser records nothing or the wrong thing in the majority of cases, but, nevertheless, enough good data are found that the final subcategorization dictionary describes the majority of the subcategorization frames in which the verbs are used in this sample.

9 One cannot just collect verbs that always appear with in because many verbs have multiple subcategorization frames. As well as (3b), chip can also just be a TV: John chipped his tooth.
10 Note that the input is very noisy text, including sports results, bestseller lists and all the other vagaries of a newswire.
11 As well as a period, things like subordinating conjunctions mark the end of subcategorized arguments. Additionally, clausal complements such as those introduced by that function both as an argument and as a marker that this is the final argument.
12 Except for the fact that it will only count the first of multiple PPs as an argument.

Filtering

Filtering assesses the frames that the parser found (called cues below). A cue may be a correct subcategorization for a verb, or it may contain spurious adjuncts, or it may simply be wrong due to a mistake of the tagger or the parser. The filtering process attempts to determine whether one can be highly confident that a cue which the parser noted is actually a subcategorization frame of the verb in question.

The method used for filtering is that suggested by Brent (1992). Let B_s be an estimated upper bound on the probability that a token of a verb that doesn't take the subcategorization frame s will nevertheless appear with a cue for s. If a verb appears m times in the corpus, and n of those times it cooccurs with a cue for s, then the probability that all the cues are false cues is bounded by the binomial distribution:

    \sum_{i=n}^{m} \frac{m!}{i!(m-i)!} B_s^i (1 - B_s)^{m-i}

Thus the null hypothesis that the verb does not have the subcategorization frame s can be rejected if the above sum is less than some confidence level C (C = 0.02 in the work reported here). Brent was able to use extremely low values for B_s (since his cues were sparse but unlikely to be false cues), and indeed found the best performance with values of the order of 2^{-8}. However, using my parser, false cues are common. For example, when the recorded subcategorization is __ NP PP(of), it is likely that the PP should actually be attached to the NP rather than the verb.
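The test can be transcribed directly into code as follows (the function name and interface here are mine, not the paper's implementation):

```python
from math import comb

def has_frame(m, n, B_s, C=0.02):
    """Reject the null hypothesis that the verb lacks frame s, given
    n cues for s among its m corpus occurrences, where B_s is the
    assumed upper bound on the per-token false-cue probability."""
    p_all_false = sum(comb(m, i) * B_s**i * (1 - B_s)**(m - i)
                      for i in range(n, m + 1))
    return p_all_false < C
```

For a noisy cue with B_s set as high as 0.25, a verb needs proportionally many cooccurrences before the frame is accepted; with a tight bound like 0.02, a handful of cues suffice.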
Hence I have used high bounds on the probability of cues being false cues for certain triggers (the used values range from 0.25 (for TV-P(of)) to 0.02). At the moment, the false cue rates B_s in my system have been set empirically. Brent (1992) discusses a method of determining values for the false cue rates automatically, and this technique or some similar form of automatic optimization could profitably be incorporated into my system.

RESULTS

The program acquired a dictionary of 4900 subcategorizations for 3104 verbs (an average of 1.6 per verb). Post-editing would reduce this slightly (a few repeated typos made it in, such as acknowlege, a few oddities such as the spelling garontee as a 'Cajun' pronunciation of guarantee, and a few cases of mistakes by the tagger which, for example, led it to regard lowlife as a verb several times by mistake). Nevertheless, this size already compares favorably with the size of some production MT systems (for example, the English dictionary for Siemens' METAL system lists about 2500 verbs (Adriaens and de Braekeleer 1992)). In general, all the verbs for which subcategorization frames were determined are in Webster's (Gove 1977) (the only noticed exceptions being certain instances of prefixing, such as overcook and repurchase), but a larger number of the verbs do not appear in the only dictionaries that list subcategorization frames (as their coverage of words tends to be more limited). Examples are fax, lambaste, skedaddle, sensationalize, and solemnize. Some idea of the growth of the subcategorization dictionary can be had from Table 1.

Table 1. Growth of subcategorization dictionary

Words processed   Verbs in subcat   Subcats   Subcats learned
(million)         dictionary        learned   per verb
1.2               1856              2661      1.43
2.9               2689              4129      1.53
4.1               3104              4900      1.58

The two basic measures of results are the information retrieval notions of recall and precision: How many of the subcategorization frames of the verbs were learned, and what percentage of the things in the induced dictionary are correct? I have done some preliminary work to answer these questions.

[Figure 1 (an annotated newswire sample) appears here.]
Figure 1. A randomly selected sample of text from the New York Times, with what the parser could extract from the text on the second line, and whether the resultant dictionary has the correct subcategorization for this occurrence shown on the third line (OK indicates that it does, while * indicates that it doesn't).

For recall, we might ask how many of the uses of verbs in a text are captured by our subcategorization dictionary.
For two randomly selected pieces of text from other parts of the New York Times newswire, a portion of which is shown in Fig. 1, out of 200 verbs, the acquired subcategorization dictionary listed 163 of the subcategorization frames that appeared. So the token recall rate is approximately 82%. This compares with a baseline accuracy of 32% that would result from always guessing TV (transitive verb) and a performance figure of 62% that would result from a system that correctly classified all TV and THAT verbs (the two most common types), but which got everything else wrong.

We can get a pessimistic lower bound on precision and recall by testing the acquired dictionary against some published dictionary.13 For this test, 40 verbs were selected (using a random number generator) from a list of 2000 common verbs.14 Table 2 gives the subcategorizations listed in the OALD (recoded where necessary according to my classification of subcategorizations) and those in the subcategorization dictionary acquired by my program in a compressed format. Next to each verb, listing just a subcategorization frame means that it appears in both the OALD and my subcategorization dictionary, a subcategorization frame preceded by a minus sign (-) means that the subcategorization frame only appears in the OALD, and a subcategorization frame preceded by a plus sign (+) indicates one listed only in my program's subcategorization dictionary (i.e., one that is probably wrong).15 The numbers are the number of cues that the program saw for each subcategorization frame (that is in the resulting subcategorization dictionary). Table 3 then summarizes the results from the previous table. Lower bounds for the precision and recall of my induced subcategorization dictionary are approximately 90% and 43% respectively (looking at types).

The aim in choosing error bounds for the filtering procedure was to get a highly accurate dictionary at the expense of recall, and the lower bound precision figure of 90% suggests that this goal was achieved. The lower bound for recall appears less satisfactory. There is room for further work here, but this does represent a pessimistic lower bound (recall the 82% token recall figure above). Many of the more obscure subcategorizations for less common verbs never appeared in the modest-sized learning corpus, so the model had no chance to master them.16 Further, the learned corpus may reflect language use more accurately than the dictionary.

The OALD lists retire to NP and retire from NP as subcategorized PP complements, but not retire in NP. However, in the training corpus, the collocation retire in is much more frequent than retire to (or retire from). In the absence of differential error bounds, the program is always going to take such more frequent collocations as subcategorized. Actually, in this case, this seems to be the right result. While in can also be used to introduce a locative or temporal adjunct:

(5) John retired from the army in 1945.

if in is being used similarly to to so that the two sentences in (6) are equivalent:

(6) a. John retired to Malibu.
    b. John retired in Malibu.

it seems that in should be regarded as a subcategorized complement of retire (and so the dictionary is incomplete).

As a final example of the results, let us discuss verbs that subcategorize for from (cf. fn. 1 and Church and Hanks 1989). The acquired subcategorization dictionary lists a subcategorization involving from for 97 verbs. Of these, 1 is an outright mistake, and 1 is a verb that does not appear in the Cobuild dictionary (reshape). Of the rest, 64 are listed as occurring with from in Cobuild and 31 are not. While in some of these latter cases it could be argued that the occurrences of from are adjuncts rather than arguments, there are also some unquestionable omissions from the dictionary. For example, Cobuild does not list that forbid takes from-marked participial complements, but this is very well attested in the New York Times newswire, as the examples in (7) show:

(7) a. The Constitution appears to forbid the general, as a former president who came to power through a coup, from taking office.
    b. Parents and teachers are forbidden from taking a lead in the project, and ...

Unfortunately, for several reasons the results presented here are not directly comparable with those of Brent's systems.17 However, they seem to represent at least a comparable level of performance.

13 The resulting figures will be considerably lower than the true precision and recall because the dictionary lists subcategorization frames that do not appear in the training corpus and vice versa. However, this is still a useful exercise to undertake, as one can attain a high token success rate by just being able to accurately detect the most common subcategorization frames.
14 The number 2000 is arbitrary, but was chosen following the intuition that one wanted to test the program's performance on verbs of at least moderate frequency.
15 The verb redesign does not appear in the OALD, so its subcategorization entry was determined by me, based on the entry in the OALD for design.
16 For example, agree about did not appear in the learning corpus (and only once in total in another two months of the New York Times newswire that I examined). While disagree about is common, agree about seems largely disused: people like to agree with people but disagree about topics.

Table 2. Subcategorizations for 40 randomly selected verbs in OALD and acquired subcategorization dictionary (see text for key).

agree: INF:386, THAT:187, P(to):101, IV:77, P(with):79, P(on):63, -P(about), -WH
ail: -TV
annoy: -TV
assign: TV-P(to):19, NPINF:11, -TV-P(for), -DTV, +TV:7
attribute: TV-P(to):67, +P(to):12
become: IV:406, XCOMP:142, -PP(of)
bridge: TV:6, +P(between):3
burden: TV:6, TV-P(with):5
calculate: THAT:11, TV:4, -WH, -NPINF, -PP(on)
chart: TV:4, +DTV:4
chop: TV:4, -TV-P(up), -TV-P(into)
depict: TV-P(as):10, TV:9, -NPING
dig: TV:12, P(out):8, P(up):7, -IV, -TV-P(in), -TV-P(out), -TV-P(over), -TV-P(up), -P(for)
drill: TV-P(in):14, TV:14, -IV, -P(for)
emanate: P(from):2
employ: TV:31, -TV-P(on), -TV-P(in), -TV-P(as), -NPINF
encourage: NPINF:108, TV:60, -TV-P(in)
exact: -TV, -TV-PP(from)
exclaim: THAT:10, -IV, -P()
exhaust: TV:12
exploit: TV:11
fascinate: TV:17
flavor: TV:8, -TV-PP(with)
heat: IV:12, TV:9, -TV-P(up), -P(up)
leak: P(out):7, -IV, -P(in), -TV, -TV-P(to)
lock: TV:16, TV-P(in):16, -IV, -P(), -TV-P(together), -TV-P(up), -TV-P(out), -TV-P(away)
mean: THAT:280, TV:73, NPINF:57, INF:41, ING:35, -TV-PP(to), -POSSING, -TV-PP(as), -DTV, -TV-PP(for)
occupy: TV:17, -TV-P(in), -TV-P(with)
prod: TV:4, TV-P(into):3, -IV, -P(at), -NPINF
redesign: TV:8, -TV-P(for), -TV-P(as), -NPINF
reiterate: THAT:13, -TV
remark: THAT:7, -P(on), -P(upon), -IV, +IV:3
retire: IV:30, TV:9, -P(from), -P(to), -XCOMP, +P(in):38
shed: TV:8, -TV-P(on)
sift: P(through):8, -TV, -TV-P(out)
strive: INF:14, P(for):9, -P(after), -P(against), -P(with), -IV
tour: TV:9, IV:6, -P(in)
troop: -IV, -P(), [TV: trooping the color]
wallow: P(in):2, -IV, -P(about), -P(around)
water: TV:13, -IV, -TV-P(down), +THAT:6

Table 3. Comparison of results with OALD

Word         Right  Wrong  Out of  Incorrect frame
agree:         6      0      8
ail:           0      0      1
annoy:         0      0      1
assign:        2      1      4     TV
attribute:     1      1      1     P(to)
become:        2      0      3
bridge:        1      1      1     TV-P(between)
burden:        2      0      2
calculate:     2      0      5
chart:         1      1      1     DTV
chop:          1      0      3
depict:        2      0      3
dig:           3      0      9
drill:         2      0      4
emanate:       1      0      1
employ:        1      0      5
encourage:     2      0      3
exact:         0      0      2
exclaim:       1      0      3
exhaust:       1      0      1
exploit:       1      0      1
fascinate:     1      0      1
flavor:        1      0      2
heat:          2      0      4
leak:          1      0      5
lock:          2      0      8
mean:          5      0     10
occupy:        1      0      3
prod:          2      0      5
redesign:      1      0      4
reiterate:     1      0      2
remark:        1      1      4     IV
retire:        2      1      5     P(in)
shed:          1      0      2
sift:          1      0      3
strive:        2      0      6
tour:          2      0      3
troop:         0      0      3
wallow:        1      0      4
water:         1      1      3     THAT
Total:        60      7    139

Precision (percent right of ones learned): 90%
Recall (percent of OALD ones learned): 43%
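The two summary percentages follow directly from the Table 3 totals; this is my own arithmetic check, not code from the paper:

```python
# Totals from Table 3: correctly learned frame types, incorrectly
# learned frame types, and total frame types listed in the OALD.
right, wrong, oald_total = 60, 7, 139

precision = right / (right + wrong)  # correct fraction of learned frames
recall = right / oald_total          # fraction of OALD frames learned

print(round(100 * precision), round(100 * recall))  # -> 90 43
```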
Another research direction would be making the parser stochastic as well, rather than it being a categorical finite state device that runs on the output of a stochastic tagger.

There are also some linguistic issues that remain. The most troublesome case for any English subcategorization learner is dealing with prepositional complements. As well as the issues discussed above, another question is how to represent the subcategorization frames of verbs that take a range of prepositional complements (but not all). For example, put can take virtually any locative or directional PP complement, while lean is more choosy (due to facts about the world):

(8) a. John leaned against the wall
    b. *John leaned under the table
    c. *John leaned up the chute

The program doesn't yet have a good way of representing classes of prepositions.

The applications of this system are fairly obvious. For a parsing system, the current subcategorization dictionary could probably be incorporated as is, since the utility of the increase in coverage would almost undoubtedly outweigh problems arising from the incorrect subcategorization frames in the dictionary. A lexicographer would want to review the results by hand. Nevertheless, the program clearly finds gaps in printed dictionaries (even ones prepared from machine-readable corpora, like Cobuild), as the above example with forbid showed. A lexicographer using this program might prefer it adjusted for higher recall, even at the expense of lower precision. When a seemingly incorrect subcategorization frame is listed, the lexicographer could then ask for the cues that led to the postulation of this frame, and proceed to verify or dismiss the examples presented.

A final question is the applicability of the methods presented here to other languages. Assuming the existence of a part-of-speech lexicon for another language, Kupiec's tagger can be trivially modified to tag other languages (Kupiec 1992). The finite state parser described here depends heavily on the fairly fixed word order of English, and so precisely the same technique could only be employed with other fixed word order languages. However, while it is quite unclear how Brent's methods could be applied to a free word order language, with the method presented here, there is a clear path forward. Languages that have free word order employ either case markers or agreement affixes on the head to mark arguments. Since the tagger provides this kind of morphological knowledge, it would be straightforward to write a similar program that determines the arguments of a verb using any combination of word order, case marking and head agreement markers, as appropriate for the language at hand. Indeed, since case-marking is in some ways more reliable than word order, the results for other languages might even be better than those reported here.

17 My system tries to learn many more subcategorization frames, most of which are more difficult to detect accurately than the ones considered in Brent's work, so overall figures are not comparable. The recall figures presented in Brent (1992) gave the rate of recall out of those verbs which generated at least one cue of a given subcategorization rather than out of all verbs that have that subcategorization (pp. 17-19), and are thus higher than the true recall rates from the corpus (observe in Table 3 that no cues were generated for infrequent verbs or subcategorization patterns). In Brent's earlier work (Brent 1991), the error rates reported were for learning from tagged text. No error rates for running the system on untagged text were given, and no recall figures were given for either system.

CONCLUSION

After establishing that it is desirable to be able to automatically induce the subcategorization frames of verbs, this paper examined a new technique for doing this. The paper showed that the technique of trying to learn from easily analyzable pieces of data is not extendable to all subcategorization frames, and, at any rate, the sparseness of appropriate cues in unrestricted texts suggests that a better strategy is to try and extract as much (noisy) information as possible from as much of the data as possible, and then to use statistical techniques to filter the results. Initial experiments suggest that this technique works at least as well as previously tried techniques, and yields a method that can learn all the possible subcategorization frames of verbs.

REFERENCES

Adriaens, Geert, and Gert de Braekeleer. 1992. Converting Large On-line Valency Dictionaries for NLP Applications: From PROTON Descriptions to METAL Frames. In Proceedings of COLING-92, 1182-1186.
Brent, Michael R. 1991. Automatic Acquisition of Subcategorization Frames from Untagged Text. In Proceedings of the 29th Annual Meeting of the ACL, 209-214.
Brent, Michael R. 1992. Robust Acquisition of Subcategorizations from Unrestricted Text: Unsupervised Learning with Syntactic Knowledge. MS, Johns Hopkins University, Baltimore, MD.
Brent, Michael R., and Robert Berwick. 1991. Automatic Acquisition of Subcategorization Frames from Free Text Corpora. In Proceedings of the 4th DARPA Speech and Natural Language Workshop. Arlington, VA: DARPA.
Church, Kenneth, and Patrick Hanks. 1989. Word Association Norms, Mutual Information, and Lexicography. In Proceedings of the 27th Annual Meeting of the ACL, 76-83.
Gove, Philip B. (ed.). 1977. Webster's Seventh New Collegiate Dictionary. Springfield, MA: G. & C. Merriam.
Hearst, Marti. 1992. Automatic Acquisition of Hyponyms from Large Text Corpora. In Proceedings of COLING-92, 539-545.
Hindle, Donald, and Mats Rooth. 1991. Structural Ambiguity and Lexical Relations. In Proceedings of the 29th Annual Meeting of the ACL, 229-236.
Hornby, A. S. 1989. Oxford Advanced Learner's Dictionary of Current English. Oxford: Oxford University Press. 4th edition.
Kupiec, Julian M. 1992. Robust Part-of-Speech Tagging Using a Hidden Markov Model. Computer Speech and Language 6:225-242.
Pollard, Carl, and Ivan A. Sag. 1987. Information-Based Syntax and Semantics. Stanford, CA: CSLI.
Procter, Paul (ed.). 1978. Longman Dictionary of Contemporary English. Burnt Mill, Harlow, Essex: Longman.
Sinclair, John M. (ed.). 1987. Collins Cobuild English Language Dictionary. London: Collins.
1993
32
AN EMPIRICAL STUDY ON THEMATIC KNOWLEDGE ACQUISITION BASED ON SYNTACTIC CLUES AND HEURISTICS

Rey-Long Liu* and Von-Wun Soo**
Department of Computer Science, National Tsing-Hua University, HsinChu, Taiwan, R.O.C.
Email: [email protected]* and [email protected]**

Abstract

Thematic knowledge is a basis of semantic interpretation. In this paper, we propose an acquisition method to acquire thematic knowledge by exploiting syntactic clues from training sentences. The syntactic clues, which may be easily collected by most existing syntactic processors, reduce the hypothesis space of the thematic roles. The ambiguities may be further resolved by the evidences either from a trainer or from a large corpus. A set of heuristics based on linguistic constraints is employed to guide the ambiguity resolution process. When a trainer is available, the system generates new sentences whose thematic validities can be justified by the trainer. When a large corpus is available, the thematic validity may be justified by observing the sentences in the corpus. Using this way, a syntactic processor may become a thematic recognizer by simply deriving its thematic knowledge from its own syntactic knowledge.

Keywords: Thematic Knowledge Acquisition, Syntactic Clues, Heuristics-guided Ambiguity Resolution, Corpus-based Acquisition, Interactive Acquisition

1. INTRODUCTION

Natural language processing (NLP) systems need various knowledge including syntactic, semantic, discourse, and pragmatic knowledge in different applications. Perhaps due to the relatively well-established syntactic theories and formalisms, there were many syntactic processing systems either manually constructed or automatically extended by various acquisition methods (Asker92, Berwick85, Brent91, Liu92b, Lytinen90, Samuelsson91, Simmons91, Sanfilippo92, Smadja91 and Sekine92).
However, the satisfactory representation and acquisition methods of domain-independent semantic, discourse, and pragmatic knowledge are not yet developed or computationally implemented. NLP systems often suffer the dilemma of semantic representation. Sophisticated representation of semantics has better expressive power but imposes difficulties on acquisition in practice. On the other hand, the poor adequacy of naive semantic representation may deteriorate the performance of NLP systems. Therefore, for plausible acquisition and processing, domain-dependent semantic bias was often employed in many previous acquisition systems (Grishman92b, Lang88, Lu89, and Velardi91).

In this paper, we present an implemented system that acquires domain-independent thematic knowledge using available syntactic resources (e.g. syntactic processing systems and syntactically processed corpora). Thematic knowledge can represent semantic or conceptual entities. For correct and efficient parsing, thematic expectation serves as a basis for conflict resolution (Taraban88). For natural language understanding and other applications (e.g. machine translation), thematic role recognition is a major step. Thematic relations may serve as the vocabulary shared by the parser, the discourse model, and the world knowledge (Tanenhaus89). More importantly, since thematic structures are perhaps most closely linked to syntactic structures (Jackendoff72), thematic knowledge acquisition may be more feasible when only syntactic resources are available. The consideration of the availability of the resources from which thematic knowledge may be derived promotes the practical feasibility of the acquisition method.

In general, lexical knowledge of a lexical head should (at least) include 1) the number of arguments of the lexical head, 2) syntactic properties of the arguments, and 3) thematic roles of the arguments (the argument structure).
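The three components of lexical knowledge just listed can be pictured with a small data structure. The encoding below is ours, for illustration only, populated for the verb "rob" (which takes an animate subject, an object, and an of-marked PP):

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    constituent: str   # syntactic property, e.g. "NP" or "PP(of)"
    grammatical: str   # "subject" or "object"
    roles: set = field(default_factory=set)  # thematic roles

@dataclass
class LexicalEntry:
    head: str
    arguments: list    # its length gives the number of arguments

rob = LexicalEntry("rob", [
    Argument("NP", "subject", {"Ag"}),
    Argument("NP", "object", {"So"}),
    Argument("PP(of)", "object", {"Th"}),
])
print(len(rob.arguments))  # -> 3
```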
The former two components may be eitt~er already constructed in available syntac- tic processors or acquired by many syntactic acquisi- tion system s . However, the acquisition of the thematic roles of th~ arguments deserves more exploration. A constituent~ay have different thematic roles for dif- ferent verbs in different uses. For example, "John" has different th,~matic roles in (1.1) - (1.4). (1.1) [Agenz John] turned on the light. (1.2) [Goal rohn] inherited a million dollars. (1.3) The magic wand turned [Theme John] into a frog. 243 Table 1. Syntactic clues for hypothesizing thematic roles Theta role Agent(Ag) Goal(Go) Source(So) Instrument(In) Theme(Th) Beneficiary(Be) Location(Lo) Time(Ti) Quantity(Qu) Proposition(Po) Manner(Ma) Cause(Ca) Result(Re) Constituent NP NP NP NP NP NP NP,ADJP NP(Ti) NP(Qu) Proposition ADVP,PP NP NP Animate Subject Y y(animate) y(animate) y(no Ag) Y n Y Y Object n n n Y Preposition in PP by till,untill,to,into,down from with,by of, about for at,in,on,under at,in,before,after,about,by,on,during for none in,with by,for,because of in ,into (1.4) The letter reached [Goal John] yesterday. To acquire thematic lexical knowledge, precise thematic roles of arguments in the sentences needs to be determined. In the next section, the thematic roles con- sidered in this paper are listed. The syntactic proper- ties of the thematic roles are also summarized. The syntactic properties serve as a preliminary filter to reduce the hypothesis space of possible thematic roles of arguments in training sentences. To further resolve the ambiguities, heuristics based on various linguistic phenomena and constraints are introduced in section 3. The heuristics serve as a general guidance for the system to collect valuable information to discriminate thematic roles. Current status of the experiment is reported in section 4. In section 5, the method is evaluated and related to previous methodologies. 
We conclude, in section 6, that by properly collecting discrimination information from available sources, thematic knowledge acquisition may be, more feasible in practice. 2. THEMATIC ROLES AND SYNTAC- TIC CLUES The thematic roles considered in this paper and the syntactic clues for identifying them are presented in Table 1. The syntactic clues include i) the possible syntactic constituents of the arguments, 2) whether animate or inanimate arguments, 3) grammatical functions (subject or object) of the a;guments when they are Noun Phrases (NPs), and 4) p:epositions of the prepositional phrase in which the aaguments may occur, The syntactic constituents inc!t:de NP, Propo- sition (Po), Adverbial Phrase (ADVP), Adjective Phrase (ADJP), and Prepositional phrase (PP). In addition to common animate nouns (e.g. he, she, and I), proper nguns are treated as animate NPs as well. In Table 1, "y", "n", "?", and "-" denote "yes", "no", "don't care", and "seldom" respectively. For example, an Agent should be an animate NP which may be at the subject (but not object) position, and if it is in a PP, the preposition of the PP should be "by" (e.g. "John" in "the light is turned on by John"). We consider the thematic roles to be well- known and referred, although slight differences might be found in various works. The intrinsic properties of the thematic roles had been discussed from various perspectivez in previous literatures (Jackendoff72 and Gruber76). Grimshaw88 and Levin86 discussed the problems o_ ~ thematic role marking in so-called light verbs and aJjectival passives. More detailed descrip- tion of the thematic roles may be found in the litera- tures. To illustrate the thematic roles, consider (2.1)- (2.9). (2.1) lag The robber] robbed [So the bank] of [Th the money]. (2.2) [Th The rock] rolled down [Go the hill]. (2.3) [In Tt,e key] can open [Th the door]. (2.4) [Go Will] inherited [Qua million dollars]. (2.5) [Th ~!e letter] finally reached [Go John]. 
(2.6) [Lo "121e restaurant] can dine [Th fifty people]. (2.7) [Ca A fire] burned down [Th the house]. (2.8) lAg John] bought [Be Mary] [Th a coat] [Ma reluctantly]. (2.9) lag John] promised [Go Mary] [Po to marry her]. - When a tr, lining sentence is entered, arguments of lexical verbs in the sentence need to be extracted before leart ing. This can be achieved by invoking a syntactic processor. 244 Table 2. Heuristics for discriminating ther atic roles • Volition Heuristic (VH): Purposive constructions (e.g. in order to) an0 purposive adverbials (e.g. deliberately and intentionally) may occur in sentences with Agent arguments (Gruber76). • Imperative Heuristic OH): Imperatives are permissible only for Agent subjects (Gruber76). • Thematic Hierarchy Heuristic (THH): Given a thematic hierarchy (from higher to lower) "Agent > Location, Source, Goal > Theme", the passive by-phrases must reside at a higher level than the derived subjects in the hierar- chy (i.e. the Thematic Hierarchy Condition in Jackendoff72). In this papzr, we set up the hierarchy: Agent > Loca- tion, Source, Goal, Instrument, Cause > Theme, Beneficiary, Time, Quantity, Proposition, Manner, Result. Subjects and objects cannot reside at the same level. • Preposition Heuristic (PH): The prepositions of the PPs in which the arguments occur often convey good discrimination information for resolving thematic roles ambiguities (see the "Preposition in PP" column in Table 1). • One-Theme Heuristic (OTH): An ~xgument is preferred to be Theme if itis the only possible Theme in the argu- ment structure. • Uniqueness Heuristic (UH): No twc, arguments may receive the sanle thematic role (exclusive of conjunctions and anaphora which co-relate two constituents assigned with the same thematic role). If the sentence is selected from a syntactically pro- cessed corpus (such as the PENN treebank) the argu- ments may be directly extracted from the corpus. 
To identify the thematic roles of the arguments, Table 1 is consulted. For example, consider (2.1) as the training sentence. Since "the robber" is an animate NP with the subject grammatical function, it can only qualify for Ag, Go, So, and Th. Similarly, since "the bank" is an inanimate NP with the object grammatical function, it can only satisfy the requirements of Go, So, Th, and Re. Because of the preposition "of", "the money" can only be Th. As a result, after consulting the constraints in Table 1, "the robber", "the bank", and "the money" can only be {Ag, Go, So, Th}, {Go, So, Th, Re}, and {Th} respectively. Therefore, although the clues in Table 1 may serve as a filter, many thematic role ambiguities still call for other discrimination information and resolution mechanisms.

3. FINDING EXTRA INFORMATION FOR RESOLVING THETA ROLE AMBIGUITIES

The remaining thematic role ambiguities should be resolved by evidence from other sources. Trainers and corpora are the two most commonly available sources of the extra information. Interactive acquisition has been applied in various systems in which the oracle from the trainer may reduce most ambiguities (e.g. Lang88, Liu93, Lu89, and Velardi91). Corpus-based acquisition systems may also converge to a satisfactory performance by collecting evidence from a large corpus (e.g. Brent91, Sekine92, Smadja91, and Zernik89). We are concerned with the kinds of information the available sources may contribute to thematic knowledge acquisition. The heuristics to discriminate thematic roles are proposed in Table 2. The heuristics suggest to the system ways of collecting useful information for resolving ambiguities. Volition Heuristic and Imperative Heuristic are for confirming the Agent role, One-Theme Heuristic is for Theme, while Thematic Hierarchy Heuristic, Preposition Heuristic and Uniqueness Heuristic may be used in a general way.
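The filtering step described above can be sketched as a simple constraint check. The role-constraint entries below are illustrative stand-ins for Table 1 (which is not reproduced here), and the function name is mine, not the system's:

```python
# Hypothetical encoding of Table-1-style syntactic clues as a filter.
# Each role lists the constituents, animacy values, grammatical functions,
# and prepositions it tolerates; these entries are toy stand-ins.
ROLE_CONSTRAINTS = {
    "Ag": {"constituent": {"NP"}, "animate": {True},        "function": {"subj"},        "prep": {"by"}},
    "Go": {"constituent": {"NP"}, "animate": {True, False}, "function": {"subj", "obj"}, "prep": {"to"}},
    "So": {"constituent": {"NP"}, "animate": {True, False}, "function": {"subj", "obj"}, "prep": {"from"}},
    "Th": {"constituent": {"NP"}, "animate": {True, False}, "function": {"subj", "obj"}, "prep": {"of"}},
    "Re": {"constituent": {"NP"}, "animate": {True, False}, "function": {"obj"},         "prep": {"into"}},
}

def candidate_roles(constituent, animate, function=None, prep=None):
    """Return the set of thematic roles compatible with an argument's syntax."""
    roles = set()
    for role, c in ROLE_CONSTRAINTS.items():
        if constituent not in c["constituent"]:
            continue
        if animate not in c["animate"]:
            continue
        if function is not None and function not in c["function"]:
            continue
        if prep is not None and prep not in c["prep"]:
            continue
        roles.add(role)
    return roles

# "the robber": animate NP subject, as in example (2.1)
print(sorted(candidate_roles("NP", True, function="subj")))   # ['Ag', 'Go', 'So', 'Th']
# "the money": inanimate NP in an "of" PP
print(sorted(candidate_roles("NP", False, prep="of")))        # ['Th']
```

The point of the sketch is only that the clues act as a pure filter: they shrink each argument's candidate set without consulting any external source.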
It should be noted that, for the purposes of efficient acquisition, not all of the heuristics are identical to the corresponding original linguistic postulations. For example, Thematic Hierarchy Heuristic was motivated by the Thematic Hierarchy Condition (Jackendoff72) but embedded with more constraints to filter out more hypotheses. One-Theme Heuristic is a relaxed version of the statement "every sentence has a theme", which might be too strong in many cases (Jackendoff87).

Because of the space limit, we only use an example to illustrate the idea. Consider (2.1) "The robber robbed the bank of the money" again. As mentioned above, after applying the preliminary syntactic clues, "the robber", "the bank", and "the money" may be {Ag, Go, So, Th}, {Go, So, Th, Re}, and {Th} respectively. By applying Uniqueness Heuristic to the Theme role, the argument structure of "rob" in the sentence can only be (AS1) "{Ag, Go, So}, {Go, So, Re}, {Th}", which means that the external argument is {Ag, Go, So} and the internal arguments are {Go, So, Re} and {Th}. Based on the intermediate result, Volition Heuristic, Imperative Heuristic, Thematic Hierarchy Heuristic, and Preposition Heuristic could be invoked to further resolve ambiguities. Volition Heuristic and Imperative Heuristic ask the learner to verify the validity of sentences such as "John intentionally robbed the bank" ("John" and "the robber" match because they have the same properties considered in Table 1 and Table 2). If the sentence is "accepted", an Agent is needed for "rob". Therefore, the argument structure becomes (AS2) "{Ag}, {Go, So, Re}, {Th}". Thematic Hierarchy Heuristic guides the learner to test the validity of the passive form of (2.1). Similarly, since sentences like "The bank is robbed by Mary" could be valid, "the robber" is higher than "the bank" in the Thematic Hierarchy.
Therefore, the learner may conclude that either AS3 or AS4 may be the argument structure of "rob": (AS3) "{Ag}, {Go, So, Re}, {Th}" (AS4) "{Go, So}, {Re}, {Th}". Preposition Heuristic suggests that the learner resolve ambiguities based on the prepositions of PPs. For example, it may suggest that the system confirm: "The money is from the bank?" If so, "the bank" is recognized as Source. The argument structure becomes (AS5) "{Ag, Go}, {So}, {Th}". Combining (AS5) with (AS3), or (AS5) with (AS2), the learner may conclude that the argument structure of "rob" is "{Ag}, {So}, {Th}".

In summary, as the arguments of lexical heads are entered into the acquisition system, the clues in Table 1 are consulted first to reduce the hypothesis space. The heuristics in Table 2 are then invoked to further resolve the ambiguities by collecting useful information from other sources. The information that the heuristics suggest the system collect is the thematic validity of sentences that may help to confirm the target thematic roles. The confirmation information required by Volition Heuristic, Imperative Heuristic, and Thematic Hierarchy Heuristic may come from corpora (and of course trainers as well), while Preposition Heuristic sometimes needs information only available from trainers. This is because the derivation of new PPs might generate ungrammatical sentences not available in general corpora. For example, (3.1) from (2.3) "The key can open the door" is grammatical, while (3.2) from (2.5) "The letter finally reached John" is ungrammatical.

(3.1) The door is opened by the key.
(3.2) *The letter finally reached to John.

Therefore, simple queries as above are preferred in the method. It should also be noted that since these heuristics only serve as guidelines for finding discrimination information, the sequence of their applications does not have significant effects on the result of learning.
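The combination step can be viewed as a slot-by-slot intersection of candidate-role sets. A minimal sketch, using the AS labels of the running example (the function name is mine):

```python
def combine(hyp_a, hyp_b):
    """Intersect two argument-structure hypotheses slot by slot; each
    hypothesis is a list of candidate-role sets, one per argument."""
    return [a & b for a, b in zip(hyp_a, hyp_b)]

# Running example for "rob" (external argument listed first):
AS2 = [{"Ag"}, {"Go", "So", "Re"}, {"Th"}]   # after Volition/Imperative Heuristic
AS5 = [{"Ag", "Go"}, {"So"}, {"Th"}]         # after Preposition Heuristic

print(combine(AS2, AS5))  # -> [{'Ag'}, {'So'}, {'Th'}]
```

Because each heuristic only ever removes candidates, the order in which the evidence arrives does not change the final intersection, which matches the remark above that the application sequence does not significantly affect the result.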
However, the number of queries may be minimized by applying the heuristics in the order: Volition Heuristic and Imperative Heuristic -> Thematic Hierarchy Heuristic -> Preposition Heuristic. One-Theme Heuristic and Uniqueness Heuristic are invoked each time the current hypotheses of thematic roles are changed by the application of the clues, Volition Heuristic, Imperative Heuristic, Thematic Hierarchy Heuristic, or Preposition Heuristic. This is because One-Theme Heuristic and Uniqueness Heuristic are constraint-based. Given a hypothesis of thematic roles, they may be employed to filter out impossible combinations of thematic roles without using any queries. Therefore, as a query is issued by the other heuristics and answered by the trainer or the corpus, the two heuristics may be used to "extend" the result by further reducing the hypothesis space.

4. EXPERIMENT

As described above, the proposed acquisition method requires syntactic information of arguments as input (recall Table 1). We believe that the syntactic information is one of the most commonly available resources; it may be collected from a syntactic processor or a syntactically processed corpus. To test the method with a public corpus as in Grishman92a, the PENN TreeBank was used as a syntactically processed corpus for learning. Argument packets (including VP packets and NP packets) were extracted from the ATIS corpus (including the JUN90, SRI_TB, and TI_TB tree files), the MARI corpus (including the AMBIG and WBUR tree files), the MUC1 corpus, and the MUC2 corpus of the treebank. VP packets and NP packets recorded syntactic properties of the arguments of verbs and nouns respectively.

Table 3. Argument extraction from TreeBank

Corpus | Sentences | Words | VP packets | Verbs | NP packets | Nouns
ATIS   | 1373      | 15286 | 1716       | 138   | 959        | 188
MARI   | 543       | 9897  | 1067       | 509   | 425        | 288
MUC1   | 1026      | 22662 | 1916       | 732   | 907        | 490
MUC2   | 3341      | 73548 | 6410       | 1556  | 3313       | 1177

Since not all constructions involving movement were tagged with trace information in the corpus, to derive the arguments the procedure needs to consider the constructions of passivization, interjection, and unbounded dependency (e.g. in relative clauses and wh-questions). That is, it needs to determine whether a constituent is an argument of a verb (or noun), whether an argument is moved, and if so, which constituent is the moved argument. Basically, Case Theory, Theta Theory (Chomsky81), and the Foot Feature Principle (Gazdar85) were employed to locate the arguments (Liu92a, Liu92b). Table 3 summarizes the results of the argument extraction. About 96% of the trees were extracted. Parse trees with too many words (60) or nodes (i.e. 50 subgoals of parsing) were discarded. The VP packets in the parse trees were derived, but only the NP packets having PPs as modifiers were extracted. These PPs could help the system to hypothesize argument structures of nouns. The extracted packets were assimilated into an acquisition system (called EBNLA, Liu92a) as syntactic subcategorization frames. Different morphologies of lexicons were not counted as different verbs and nouns.

As an example of the extracted argument packets, consider the following sentence from MUC1: "..., at la linea ..... where a FARC front ambushed an 11th brigade army patrol". The extraction procedure derived the following VP packet for "ambushed":

ambushed (NP: a FARC front) (WHADVP: where) (NP: an 11th brigade army patrol)

The first NP was the external argument of the verb. Other constituents were internal arguments of the verb. The procedure could not determine whether an argument was optional or not. In the corpora, most packets were for a small number of verbs (e.g. 296 packets for "show" were found in ATIS).
Only 1 to 2 packets could be found for most verbs. Therefore, although the parse trees could provide good quality argument packets, the information was too sparse to resolve thematic role ambiguities. This is a weakness embedded in most corpus-based acquisition methods, since the learner might finally fail to collect sufficient information after spending much effort processing the corpus. In that case, the ambiguities need to be temporarily suspended. To speed up learning and focus on the usage of the proposed method, a trainer was asked to check the thematic validities (yes/no) of the sentences generated by the learner. Excluding packets of some special verbs to be discussed later and erroneous packets (due to a small amount of inconsistency and incompleteness in the corpus and the extraction procedure), the packets were fed into the acquisition system (one packet per verb). The average accuracy rate of the acquired argument structures was 0.86. An argument structure was counted as correct if it was unambiguous and confirmed by the trainer. On average, for resolving ambiguities, 113 queries were generated for every 100 successfully acquired argument structures. The packets from ATIS caused fewer ambiguities, since in this corpus there were many imperative sentences to which Imperative Heuristic may be applied. Volition Heuristic, Thematic Hierarchy Heuristic, and Preposition Heuristic had almost equal frequencies of application in the experiment. As an example of how the clues and heuristics could successfully derive argument structures of verbs, consider the sentence from ATIS: "The flight going to San Francisco ...". Without issuing any queries, the learner concluded that an argument structure of "go" is "{Th}, {Go}". This was because, according to the clues, "San Francisco" could only be Goal, while according to One-Theme Heuristic, "the flight" was recognized as Theme. Most argument structures were acquired using only a few queries.
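The "go" example can be reproduced with a small sketch of the One-Theme Heuristic as a constraint pass; the role encoding and function name below are mine, not the system's:

```python
def one_theme(hypothesis):
    """One-Theme Heuristic: if exactly one argument can still be Theme,
    prefer Theme for that argument (a constraint pass, no query needed)."""
    slots = [k for k, roles in enumerate(hypothesis) if "Th" in roles]
    if len(slots) == 1:
        hypothesis = list(hypothesis)
        hypothesis[slots[0]] = {"Th"}
    return hypothesis

# "The flight going to San Francisco": the syntactic clues leave several
# roles open for "the flight", while the preposition "to" forces {Go} for
# "San Francisco"; only "the flight" can be Theme.
print(one_theme([{"Go", "So", "Th"}, {"Go"}]))  # -> [{'Th'}, {'Go'}]
```

This illustrates why the heuristic costs no queries: it inspects only the current hypothesis and never consults the trainer or the corpus.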
The result showed that, after (manually or automatically) acquiring an argument packet (i.e. a syntactic subcategorization frame plus the syntactic constituent of the external argument) of a verb, the acquisition method could be invoked to upgrade the syntactic knowledge to thematic knowledge by issuing only 113 queries for every 100 argument packets. Since checking the validity of the generated sentences is not a heavy burden for the trainer (answering 'yes' or 'no' only), the method may be attached to various systems for promoting incremental extensibility of thematic knowledge.

The way of counting the accuracy rate of the acquired argument structures deserves notice. Failed cases were mainly due to clues and heuristics that were too strong or overly committed. For example, the thematic role of "the man" in (4.1) from MARI could not be acquired using the clues and heuristics.

(4.1) Laura ran away with the man.

In the terminology of Gruber76, this is an expression of accompaniment, which is not considered in the clues and heuristics. As another example, consider (4.2), also from MARI.

(4.2) The greater Boston area ranked eight among major cities for incidence of AIDS.

The clues and heuristics could not draw any conclusions on the possible thematic roles of "eight". On the other hand, the cases counted as "failed" did not always lead to "erroneous" argument structures. For example, "Mary" in (2.9) "John promised Mary to marry her" was treated as Theme rather than Goal, because "Mary" is the only possible Theme. Although "Mary" may be Theme in this case as well, treating "Mary" as Goal is more fine-grained. The clues and heuristics may often lead to acceptable argument structures, even if the argument structures are inherently ambiguous. For example, an NP might function as more than one thematic role within a sentence (Jackendoff87). In (4.3), "John" may be Agent or Source.

(4.3) John sold Mary a coat.
Since Thematic Hierarchy Heuristic assumes that subjects and objects cannot reside at the same level, "John" must not be assigned as Source. Therefore, "John" and "Mary" are assigned as Agent and Goal respectively, and the ambiguity is resolved. In addition, some thematic roles may cause ambiguities if only syntactic evidence is available. Experiencer, such as "John" in (4.4), and Maleficiary, such as "Mary" in (4.5), are two examples.

(4.4) Mary surprised John.
(4.5) Mary suffers a headache.

There are difficulties in distinguishing Experiencer, Agent, Maleficiary and Theme. Fortunately, the verbs with Experiencer and Maleficiary may be enumerated before learning. Therefore, the argument structures of these verbs are manually constructed rather than learned by the proposed method.

5. RELATED WORK

To explore the acquisition of domain-independent semantic knowledge, the universal linguistic constraints postulated by many linguistic studies may provide general (and perhaps coarse-grained) hints. The hints may be integrated with domain-specific semantic bias for various applications as well. In this branch of the study, GB theory (Chomsky81) and universal feature instantiation principles (Gazdar85) have been shown to be applicable in syntactic knowledge acquisition (Berwick85, Liu92a, Liu92b). The proposed method is closely related to those methodologies. The major difference is that various thematic theories are selected and computationalized for thematic knowledge acquisition. The idea of structural patterns in Montemagni92 is similar to Preposition Heuristic in that the patterns suggest general guidance for information extraction. Extra information resources are needed for thematic knowledge acquisition. From the cognitive point of view, morphological, syntactic, semantic, contextual (Jacobs88), pragmatic, and world knowledge, and observations of the environment (Webster89, Siskind90), are all important resources.
However, the availability of the resources often deteriorates the feasibility of learning from a practical standpoint. The acquisition often becomes "circular" when relying on semantic information to acquire the target semantic information. Predefined domain linguistic knowledge is another important source of information for constraining the hypothesis space in learning (or for semantic bootstrapping). From this point of view, lexical categories (Zernik89, Zernik90) and the theory of lexical semantics (Pustejovsky87a, Pustejovsky87b) played similar roles to the clues and heuristics employed in this paper. The previous approaches demonstrated theoretical interest, but their performance on large-scale acquisition was not elaborated. We feel that requiring the system to use only available resources (i.e. syntactic processors and/or syntactically processed corpora) may make large-scale implementations more feasible. This research investigates the issue of to what extent an acquisition system may acquire thematic knowledge when only the syntactic resources are available.

McClelland86 showed a connectionist model for thematic role assignment. By manually encoding training assignments and semantic microfeatures for a limited number of verbs and nouns, the connectionist network learned how to assign roles. Stochastic approaches (Smadja91, Sekine92) also employed available corpora to acquire collocational data for resolving ambiguities in parsing. However, they acquired numerical values by observing the whole training corpus (non-incremental learning). Explanations for those numerical values are difficult to derive in those models. As far as large-scale thematic knowledge acquisition is concerned, the incremental extensibility of the models needs to be further improved.

6. CONCLUSION

Preliminary syntactic analysis can be achieved by many natural language processing systems. Toward semantic interpretation of input sentences, thematic lexical knowledge is needed.
Although each lexicon may have its own idiosyncratic thematic requirements on arguments, there exist syntactic clues for hypothesizing the thematic roles of the arguments. Therefore, exploiting the information derived from syntactic analysis to acquire thematic knowledge becomes a plausible way to build an extensible thematic dictionary. In this paper, various syntactic clues are integrated to hypothesize thematic roles of arguments in training sentences. Heuristics-guided ambiguity resolution is invoked to collect extra discrimination information from the trainer or the corpus. As more syntactic resources become available, the method could upgrade the acquired knowledge from the syntactic level to the thematic level.

Acknowledgement

This research is supported in part by NSC (National Science Council of R.O.C.) under the grants NSC82-0408-E-007-029 and NSC81-0408-E007-19, through which we obtained the PENN TreeBank from Dr. Hsien-Chin Liou. We would like to thank the anonymous reviewers for their helpful comments.

References

[Asker92] Asker L., Gamback B., Samuelsson C., EBL2: An Application to Automatic Lexical Acquisition, Proc. of COLING, pp. 1172-1176, 1992.
[Berwick85] Berwick R. C., The Acquisition of Syntactic Knowledge, The MIT Press, Cambridge, Massachusetts, London, England, 1985.
[Brent91] Brent M. R., Automatic Acquisition of Subcategorization Frames from Untagged Text, Proc. of the 29th annual meeting of the ACL, pp. 209-214, 1991.
[Chomsky81] Chomsky N., Lectures on Government and Binding, Foris Publications, Dordrecht, 1981.
[Gazdar85] Gazdar G., Klein E., Pullum G. K., and Sag I. A., Generalized Phrase Structure Grammar, Harvard University Press, Cambridge, Massachusetts, 1985.
[Grimshaw88] Grimshaw J. and Mester A., Light Verbs and Theta-Marking, Linguistic Inquiry, Vol. 19, No. 2, pp. 205-232, 1988.
[Grishman92a] Grishman R., Macleod C., and Sterling J., Evaluating Parsing Strategies Using Standardized Parse Files, Proc.
of the Third Applied NLP, pp. 156-161, 1992.
[Grishman92b] Grishman R. and Sterling J., Acquisition of Selectional Patterns, Proc. of COLING-92, pp. 658-664, 1992.
[Gruber76] Gruber J. S., Lexical Structures in Syntax and Semantics, North-Holland Publishing Company, 1976.
[Jackendoff72] Jackendoff R. S., Semantic Interpretation in Generative Grammar, The MIT Press, Cambridge, Massachusetts, 1972.
[Jackendoff87] Jackendoff R. S., The Status of Thematic Relations in Linguistic Theory, Linguistic Inquiry, Vol. 18, No. 3, pp. 369-411, 1987.
[Jacobs88] Jacobs P. and Zernik U., Acquiring Lexical Knowledge from Text: A Case Study, Proc. of AAAI, pp. 739-744, 1988.
[Lang88] Lang F.-M. and Hirschman L., Improved Portability and Parsing through Interactive Acquisition of Semantic Information, Proc. of the second conference on Applied Natural Language Processing, pp. 49-57, 1988.
[Levin86] Levin B. and Rappaport M., The Formation of Adjectival Passives, Linguistic Inquiry, Vol. 17, No. 4, pp. 623-661, 1986.
[Liu92a] Liu R.-L. and Soo V.-W., Augmenting and Efficiently Utilizing Domain Theory in Explanation-Based Natural Language Acquisition, Proc. of the Ninth International Machine Learning Conference, ML92, pp. 282-289, 1992.
[Liu92b] Liu R.-L. and Soo V.-W., Acquisition of Unbounded Dependency Using Explanation-Based Learning, Proc. of ROCLING V, 1992.
[Liu93] Liu R.-L. and Soo V.-W., Parsing-Driven Generalization for Natural Language Acquisition, International Journal of Pattern Recognition and Artificial Intelligence, Vol. 7, No. 3, 1993.
[Lu89] Lu R., Liu Y., and Li X., Computer-Aided Grammar Acquisition in the Chinese Understanding System CC!~AGA, Proc. of IJCAI, pp. 1550-1555, 1989.
[Lytinen90] Lytinen S. L. and Moon C. E., A Comparison of Learning Techniques in Second Language Learning, Proc. of the 7th Machine Learning Conference, pp. 317-383, 1990.
[McClelland86] McClelland J. L. and Kawamoto A.
H., Mechanisms of Sentence Processing: Assigning Roles to Constituents of Sentences, in Parallel Distributed Processing, Vol. 2, pp. 272-325, 1986.
[Montemagni92] Montemagni S. and Vanderwende L., Structural Patterns vs. String Patterns for Extracting Semantic Information from Dictionaries, Proc. of COLING-92, pp. 546-552, 1992.
[Pustejovsky87a] Pustejovsky J. and Berger S., The Acquisition of Conceptual Structure for the Lexicon, Proc. of AAAI, pp. 566-570, 1987.
[Pustejovsky87b] Pustejovsky J., On the Acquisition of Lexical Entries: The Perceptual Origin of Thematic Relations, Proc. of the 25th annual meeting of the ACL, pp. 172-178, 1987.
[Samuelsson91] Samuelsson C. and Rayner M., Quantitative Evaluation of Explanation-Based Learning as an Optimization Tool for a Large-Scale Natural Language System, Proc. of IJCAI, pp. 609-615, 1991.
[Sanfilippo92] Sanfilippo A. and Pozanski V., The Acquisition of Lexical Knowledge from Combined Machine-Readable Dictionary Sources, Proc. of the Third Conference on Applied NLP, pp. 80-87, 1992.
[Sekine92] Sekine S., Carroll J. J., Ananiadou S., and Tsujii J., Automatic Learning for Semantic Collocation, Proc. of the Third Conference on Applied NLP, pp. 104-110, 1992.
[Simmons91] Simmons R. F. and Yu Y.-H., The Acquisition and Application of Context Sensitive Grammar for English, Proc. of the 29th annual meeting of the ACL, pp. 122-129, 1991.
[Siskind90] Siskind J. M., Acquiring Core Meanings of Words, Represented as Jackendoff-style Conceptual Structures, from Correlated Streams of Linguistic and Non-linguistic Input, Proc. of the 28th annual meeting of the ACL, pp. 143-156, 1990.
[Smadja91] Smadja F. A., From N-Grams to Collocations: An Evaluation of EXTRACT, Proc. of the 29th annual meeting of the ACL, pp. 279-284, 1991.
[Tanenhaus89] Tanenhaus M. K. and Carlson G. N., Lexical Structure and Language Comprehension, in Lexical Representation and Process, William Marslen-Wilson (ed.), The MIT Press, 1989.
[Taraban88] Taraban R. and McClelland J. L., Constituent Attachment and Thematic Role Assignment in Sentence Processing: Influences of Content-Based Expectations, Journal of Memory and Language, 27, pp. 597-632, 1988.
[Velardi91] Velardi P., Pazienza M. T., and Fasolo M., How to Encode Semantic Knowledge: A Method for Meaning Representation and Computer-Aided Acquisition, Computational Linguistics, Vol. 17, No. 2, pp. 153-170, 1991.
[Webster89] Webster M. and Marcus M., Automatic Acquisition of the Lexical Semantics of Verbs from Sentence Frames, Proc. of the 27th annual meeting of the ACL, pp. 177-184, 1989.
[Zernik89] Zernik U., Lexicon Acquisition: Learning from Corpus by Capitalizing on Lexical Categories, Proc. of IJCAI, pp. 1556-1562, 1989.
[Zernik90] Zernik U. and Jacobs P., Tagging for Learning: Collecting Thematic Relations from Corpus, Proc. of COLING, pp. 34-39, 1990.
PART-OF-SPEECH INDUCTION FROM SCRATCH

Hinrich Schütze
Center for the Study of Language and Information
Ventura Hall
Stanford, CA 94305-4115
schuetze@csli.stanford.edu

Abstract

This paper presents a method for inducing the parts of speech of a language and part-of-speech labels for individual words from a large text corpus. Vector representations for the part of speech of a word are formed from entries of its near lexical neighbors. A dimensionality reduction creates a space representing the syntactic categories of unambiguous words. A neural net trained on these spatial representations classifies individual contexts of occurrence of ambiguous words. The method classifies both ambiguous and unambiguous words correctly with high accuracy.

INTRODUCTION

Part-of-speech information about individual words is necessary for any kind of syntactic and higher level processing of natural language. While it is easy to obtain lists with part-of-speech labels for frequent English words, such information is not available for less common languages. Even for English, a categorization of words that is tailored to a particular genre may be desired. Finally, there are rare words that need to be categorized even if frequent words are covered by an available electronic dictionary. This paper presents a method for inducing the parts of speech of a language and part-of-speech labels for individual words from a large text corpus. Little, if any, language-specific knowledge is used, so that it is applicable to any language in principle. Since the part-of-speech representations are derived from the corpus, the resulting categorization is highly text-specific and doesn't contain categories that are inappropriate for the genre in question. The method is efficient enough for vocabularies of tens of thousands of words, thus addressing the problem of coverage.
The problem of how syntactic categories can be induced is also of theoretical interest in language acquisition and learnability. Syntactic category information is part of the basic knowledge about language that children must learn before they can acquire more complicated structures. It has been claimed that "the properties that the child can detect in the input - such as the serial positions and adjacency and co-occurrence relations among words - are in general linguistically irrelevant." (Pinker 1984) It will be shown here that the relative position of words with respect to each other is sufficient for learning the major syntactic categories. In the first part of the derivation, two iterations of a massive linear approximation of cooccurrence counts categorize unambiguous words. Then a neural net trained on these words classifies individual contexts of occurrence of ambiguous words. An evaluation suggests that the method classifies both ambiguous and unambiguous words correctly. It differs from previous work in its efficiency and applicability to large vocabularies, and in that linguistic knowledge is only used in the very last step, so that theoretical assumptions that don't hold for a language or sublanguage have minimal influence on the classification. The next two sections describe the linear approximation and a birecurrent neural network for the classification of ambiguous words. The last section discusses the results.

CATEGORY SPACE

The goal of the first step of the induction is to compute a multidimensional real-valued space, called category space, in which the syntactic category of each word is represented by a vector. Proximity in the space is related to similarity of syntactic category. The vectors in this space will then be used as input and target vectors for the connectionist net. The vector space is bootstrapped by collecting relevant distributional information about words.
The 5,000 most frequent words in five months of the New York Times News Service (June through October 1990) were selected for the experiments. For each pair of these words <wi, wj>, the number of occurrences of wi immediately to the left of wj (bij), the number of occurrences of wi immediately to the right of wj (cij), the number of occurrences of wi at a distance of one word to the left of wj (aij), and the number of occurrences of wi at a distance of one word to the right of wj (dij) were counted. The four sets of 25,000,000 counts were collected in the 5,000-by-5,000 matrices B, C, A, and D, respectively. Finally these four matrices were combined into one large 5,000-by-20,000 matrix as shown in Figure 1. The figure also shows for two words where their four cooccurrence counts are located in the 5,000-by-20,000 matrix. In the experiments, w3000 was resistance and w4250 was theaters. The four marks in the figure, the positions of the counts a3000,4250, b3000,4250, c3000,4250, and d3000,4250, indicate how often resistance occurred at positions -2, -1, 1, and 2 with respect to theaters. These 20,000-element rows of the matrix could be used directly to compute the syntactic similarity between individual words: the cosine of the angle between the vectors of a pair of words is a measure of their similarity.1 However, computations with such large vectors are time-consuming. Therefore a singular value decomposition was performed on the matrix. Fifteen singular values were computed using a sparse matrix algorithm from SVDPACK (Berry 1992). As a result, each of the 5,000 words is represented by a vector of real numbers.
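The counting and reduction steps can be sketched in a few lines. The toy corpus, vocabulary, and 3-dimensional reduction below are illustrative stand-ins for the paper's 5,000-word, 15-dimensional setup, and the dense SVD replaces the sparse SVDPACK routine:

```python
import numpy as np

def category_vectors(tokens, vocab, dim=3):
    """Count, for each vocabulary word, its neighbors at relative positions
    -2, -1, +1, +2; concatenate the four count matrices side by side; and
    reduce the rows with a truncated SVD."""
    idx = {w: k for k, w in enumerate(vocab)}
    n = len(vocab)
    counts = np.zeros((n, 4 * n))
    for pos, w in enumerate(tokens):
        for block, off in enumerate((-2, -1, 1, 2)):
            p = pos + off
            if 0 <= p < len(tokens):
                counts[idx[w], block * n + idx[tokens[p]]] += 1
    u, s, _ = np.linalg.svd(counts, full_matrices=False)
    return u[:, :dim] * s[:dim]          # keep only the top `dim` dimensions

def cos(a, b):
    """Cosine of the angle between two category vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

vocab = ["the", "dog", "cat", "runs", "sleeps"]
tokens = "the dog runs the cat sleeps the dog sleeps the cat runs".split()
vecs = category_vectors(tokens, vocab)
i = {w: k for k, w in enumerate(vocab)}
# Words with interchangeable contexts (dog/cat) end up far closer in the
# reduced space than words of different categories (dog/runs).
print(cos(vecs[i["dog"]], vecs[i["cat"]]) > cos(vecs[i["dog"]], vecs[i["runs"]]))
```

Even on this tiny corpus, the two nouns share nearly identical neighbor profiles and therefore project onto the same singular direction, while the verbs occupy a different one, which is the property the 15-dimensional category space relies on at scale.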
Since the original 20,000-component vectors of two words (corresponding to rows in the matrix in Figure 1) are similar if their collocations are similar, the same holds for the reduced vectors, because the singular value decomposition finds the best least-squares approximation for the 5,000 original vectors in a 15-dimensional space that preserves similarity between vectors. See (Deerwester et al. 1990) for a definition of SVD and an application to a similar problem. Close neighbors in the 15-dimensional space generally have the same syntactic category, as can be seen in Table 1. However, the problem with this method is that it will not scale up to a very large number of words. The singular value decomposition has a time complexity quadratic in the rank of the matrix, so that one can only treat a small part of the total vocabulary of a large corpus. Therefore, an alternative set of features was considered: classes of words in the 15-dimensional space. Instead of counting the number of occurrences of individual words, we would now count

1The cosine between two vectors v and w corresponds to the normalized correlation coefficient: cos(alpha(v, w)) = sum_i v_i w_i / sqrt(sum_i v_i^2 * sum_i w_i^2).
There- fore, a list of 278 closed-class words, essentially the words with the highest frequency, was set aside. The remaining 4722 words were classified into 222 classes using Buckshot. The resulting 500 classes (278 high-frequency words, 222 clusters) were used as features in the matrix shown in Figure 2. Since the number of features has been greatly reduced, a larger num- ber of words can be considered. For the second matrix all 22,771 words that occurred at least 100 times in 18 months of the New York Times News Service (May 1989 - October 1990) were selected. Again, there are four submatrices, corresponding to four relative positions. For example, the entries aij in the A part of the matrix count how often a member of class i occurs at a distance of one word to the left of word j. Again, a singular value decomposition was performed on the matrix, this time 10 singular values were computed. (Note that in the first figure the 20,000-element rows of the matrix are reduced to 15 dimensions whereas in the second matrix the 2,000-element columns are reduced to 10 dimensions.) Table 2 shows 20 randomly selected words and their nearest neighbors in category space (in order of proximity to the head word). As can be seen from the table, proximity in the space is a good predictor of similar syntactic category. The near- est neighbors of athlete, clerk, declaration, and dome are singular nouns, the nearest neighbors of bowers and gibbs are family names, the near- est neighbors of desirable and sole are adjectives, and the nearest neighbors of financings are plu- ral nouns, in each case without exception. The neighborhoods of armaments, cliches and luxuries (nouns), and b'nai and northwestern (NP-initial modifiers) fail to respect finer grained syntactic 2Cf. (Brown et al. 1992) where the same idea of improving generalization and accuracy by looking at word classes instead of individual words is used. 
Figure 1: The setup of the matrix for the first singular value decomposition. (Four adjacent submatrices A, B, C, and D, one per relative position.)

Table 1: Ten random and three selected words and their nearest neighbors in category space 1.

word | nearest neighbors
accompanied | submitted banned financed developed authorized headed canceled awarded barred
almost | virtually merely formally fully quite officially just nearly only less
causing | reflecting forcing providing creating producing becoming carrying particularly
classes | elections courses payments losses computers performances violations levels pictures professionals
directors | investigations materials competitors agreements papers transactions
goal | mood roof eye image tool song pool scene gap voice
japanese | chinese iraqi american western arab foreign european federal soviet indian
represent | reveal attend deliver reflect choose contain impose manage establish retain
think | believe wish know realize wonder assume feel say mean bet
york | angeles francisco sox rouge kong diego zone vegas inning layer
on | must through in at over into with from for by across
we | you i he she nobody who it everybody there they
might | would could cannot will should can may does helps

Figure 2: The setup of the matrix for the second singular value decomposition. (Four submatrices A, B, C, and D with 500 features each; 22,771 word columns.)

Table 2: Twenty random and four selected words and their neighborhoods in category space 2.
word armaments athlete b'nai bowers clerk cliches cruz declaration desirable dome equally financings gibbs luxuries northwestern oh sole nearest neighbors turmoil weaponry landmarks coordination prejudices secrecy brutality unrest harassment | virus scenario | event audience disorder organism candidate procedure epidemic | suffolk sri allegheny cosmopolitan berkshire cuny broward multimedia bovine nytimes jacobs levine cart hahn schwartz adams buckley dershowitz fitzpatrick peterson | salesman psychologist photographer preacher mechanic dancer lawyer trooper trainer pests wrinkles outbursts streams icons endorsements | friction unease appraisals lifestyles antonio | clara pont saud monica paulo rosa mae attorney palma sequence mood profession marketplace concept facade populace downturn moratorium | recognizable | frightening loyal devastating exciting troublesome awkward palpable blackout furnace temblor quartet citation chain countdown thermometer shaft | somewhat progressively acutely enormously excessively unnecessarily largely scattered | endeavors monopolies raids patrols stalls offerings occupations philosophies religions adler reid webb jenkins stevens carr laurent dempsey hayes farrell | volatility insight hostility dissatisfaction stereotypes competence unease animosity residues | transports vividly walks | baja rancho harvard westchester ubs humboldt laguna guinness vero granada gee gosh ah hey | appleton ashton dolly boldface baskin lo | lengthy vast monumental rudimentary nonviolent extramarital lingering meager gruesome | spokesman copyboy staffer barrios comptroller alloy stalks spokeswoman dal spokesperson | skillfully frantically calmly confidently streaming relentlessly discreetly spontaneously floats | jumps collapsed sticks stares crumbled peaked disapproved runs crashed claims Oil must they credits promises | forecasts shifts searches trades practices processes supplements controls through from in | at by within with under against for
will | might would cannot could can should won't | doesn't may
we | i you who nobody he it she everybody there

distinctions, but are reasonable representations of syntactic category. The neighbors of cruz (second components of names), and equally and vividly (adverbs), include words of the wrong category, but are correct for the most part.
In order to give a rough idea of the density of the space in different locations, the symbol "|" is placed before the first neighbor in Table 2 that has a correlation of 0.978 or less with the head word. As can be seen from the table, the regions occupied by nouns and proper names are dense, whereas adverbs and adjectives have more distant nearest neighbors. One could attempt to find a fixed threshold that would separate neighbors of the same category from syntactically different ones. For instance, the neighbors of oh with a correlation higher than 0.978 are all interjections, and the neighbors of cliches within the threshold region are all plural nouns. However, since the density in the space is different for different regions, it is unlikely that a general threshold for all syntactic categories can be found.
The neighborhoods of transports and walks are not very homogeneous. These two words are ambiguous between third person singular present tense and plural noun. Ambiguity is a problem for the vector representation scheme used here, because the two components of an ambiguous vector can add up in a way that makes it by chance similar to an unambiguous word of a different syntactic category. If we call the distributional vector of words of category c the profile p_c of category c, and if a word w1 is used with frequency α in category c1 and with frequency β in category c2, then the weighted sum of the profiles (which corresponds to a column for word w1 in Figure 2) may turn out to be the same as the profile of an unrelated third category c3:

α p_c1 + β p_c2 = p_c3

This is probably what happened in the cases of transports and walks.
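This failure mode is easy to reproduce with toy profiles. The sketch below is illustrative only — the three-dimensional profiles are invented, not taken from the experiments — and uses the cosine (normalized correlation coefficient) of footnote 1 as the similarity measure. An equal mixture of two dissimilar category profiles can be identical, under cosine, to the profile of an unrelated third category:

```python
import numpy as np

def cosine(u, v):
    # normalized correlation coefficient (footnote 1)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# invented toy profiles for three syntactic categories
p_c1 = np.array([1.0, 0.0, 0.0])               # e.g. 3rd person sg. verbs
p_c2 = np.array([0.0, 1.0, 0.0])               # e.g. plural nouns
p_c3 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)  # an unrelated third category

alpha, beta = 0.5, 0.5             # frequencies of w1 in c1 and c2
w1 = alpha * p_c1 + beta * p_c2    # column vector of the ambiguous word w1

print(cosine(w1, p_c1), cosine(w1, p_c3))
```

Here cosine(w1, p_c3) is exactly 1 although w1 is never used in category c3, while its similarity to either of its true categories is only about 0.71 — so a nearest-neighbor label would pick the wrong category.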
The neighbors of claims demonstrate that there are homogeneous "ambiguous" regions in the space if there are enough words with the same ambiguity and the same frequency ratio of the categories. Transports and walks (together with floats, jumps, sticks, stares, and runs) seem to have frequency ratios α/β different from claims, so that they ended up in different regions.
The last three lines of Table 2 indicate that function words such as prepositions, auxiliaries, and nominative pronouns and quantifiers occupy their own regions, and are well separated from each other and from open classes.

A BIRECURRENT NETWORK FOR PART-OF-SPEECH PREDICTION

A straightforward way to take advantage of the vector representations for part of speech categorization is to cluster the space and to assign part-of-speech labels to the clusters. This was done with Buckshot. The resulting 200 clusters yielded good results for unambiguous words. However, for the reasons discussed above (linear combination of profiles of different categories), the clustering was not very successful for ambiguous words. Therefore, a different strategy was chosen for assigning category labels. In order to tease apart the different uses of ambiguous words, one has to go back to the individual contexts of use. The connectionist network in Figure 3 was used to analyze individual contexts.
The idea of the network is similar to Elman's recurrent networks (Elman 1990, Elman 1991): the network learns about the syntactic structure of the language by trying to predict the next word from its own context units in the previous step and the current word. The network in Figure 3 has two novel features. It uses the vectors from the second singular value decomposition as input and target. Note that distributed vector representations are ideal for connectionist nets, so that a connectionist model seems most appropriate for the prediction task. The second innovation is that the net is birecurrent.
It has recurrency to the left as well as to the right. In more detail, the network's input consists of the word to the left t_{n-1}, its own left context in the previous time step c-l_{n-1}, the word to the right t_{n+1}, and its own right context c-r_{n+1} in the next time step. The second layer has the context units of the current time step. These feed into thirty hidden units h_n, which in turn produce the output vector o_n. The target is the current word t_n. The output units are linear; hidden units are sigmoidal.
The network was trained stochastically with truncated backpropagation through time (BPTT; Rumelhart et al. 1986, Williams and Peng 1990). For this purpose, the left context units were unfolded four time steps to the left and the right context units four time steps to the right, as shown in Figure 4. The four blocks of weights on the connections to c-l_{n-3}, c-l_{n-2}, c-l_{n-1}, and c-l_n are linked to ensure identical mapping from one "time step" to the next. The connections on the right side are linked in the same way. The training set consisted of 8,000 words in the New York Times newswire (from June 1990). For each training step, four words to the left of the target word (t_{n-3}, t_{n-2}, t_{n-1}, and t_n) and four words to the right of the target word (t_n, t_{n+1}, t_{n+2}, and t_{n+3}) were the input to the unfolded network. The target was the word t_n.

Figure 4: Unfolded birecurrent network in training.

A modification of bp from the pdp package was used, with a learning rate of 0.01 for recurrent units, 0.001 for other units, and no momentum. After training, the network was applied to the category prediction tasks described below by choosing a part of the text without unknown words, computing all left contexts from left to right, computing all right contexts from right to left, and finally predicting the desired category of a word t_n by using the precomputed contexts c-l_n and c-r_n.
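A forward pass of such a birecurrent network can be sketched as follows. This is a minimal illustration, not the trained model: the class name and weight layout are my own, the weights are randomly initialized, and training by truncated BPTT is omitted. The dimensions follow Figure 3 (10-unit word and output vectors, 15-unit context vectors, 30 hidden units).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BirecurrentNet:
    # Forward pass only.  Left contexts are computed left to right, right
    # contexts right to left, and both feed the hidden layer that produces
    # the linear output o_n approximating the target word vector t_n.
    def __init__(self, d_word=10, d_ctx=15, d_hid=30, seed=0):
        rng = np.random.default_rng(seed)
        self.Wl = rng.normal(0, 0.1, (d_ctx, d_word + d_ctx))  # left recurrence
        self.Wr = rng.normal(0, 0.1, (d_ctx, d_word + d_ctx))  # right recurrence
        self.Wh = rng.normal(0, 0.1, (d_hid, 2 * d_ctx))       # contexts -> hidden
        self.Wo = rng.normal(0, 0.1, (d_word, d_hid))          # hidden -> linear output
        self.d_ctx = d_ctx

    def predict(self, words, n):
        cl = np.zeros(self.d_ctx)
        for t in words[:n]:                      # left context, left to right
            cl = sigmoid(self.Wl @ np.concatenate([t, cl]))
        cr = np.zeros(self.d_ctx)
        for t in reversed(words[n + 1:]):        # right context, right to left
            cr = sigmoid(self.Wr @ np.concatenate([t, cr]))
        h = sigmoid(self.Wh @ np.concatenate([cl, cr]))
        return self.Wo @ h                       # prediction for word n

rng = np.random.default_rng(1)
sentence = [rng.normal(size=10) for _ in range(7)]
net = BirecurrentNet()
out = net.predict(sentence, 3)   # predicted vector for the 4th word
```

Because the prediction for position n uses both directions, changing a word on either side of n changes the output — the property that lets the network disambiguate individual occurrences.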
In order to tag the occurrence of a word, one could retrieve the word in category space whose vector is closest to the output vector computed by the network. However, this would give rise to too much variety in category labels. To illustrate, consider the prediction of the category NOUN. If the network categorizes occurrences of nouns correctly as being in the region around declaration, then the slightest variation in the output will change the nearest neighbor of the output vector from declaration to its nearest neighbors sequence or mood (see Table 2). This would be confusing to the human user of the categorization program.
Therefore, the first 5,000 output vectors of the network (from the first day of June 1990) were clustered into 200 output clusters with Buckshot. Each output cluster was labeled by the two words closest to its centroid. Table 3 lists labels of some of the output clusters that occurred in the experiment described below. They are easily interpretable for someone with minimal linguistic knowledge, as the examples show. For some categories, such as HIS_THE, one needs to look at a couple of instances to get a "feel" for their meaning.

Figure 3: The architecture of the birecurrent network. (Word and output vectors: 10 units; context vectors: 15 units; hidden layer: 30 units.)

Table 3: The labels of 10 output clusters.

output cluster label | part of speech
excel_depart | intransitive verb (base form)
prompt_select | transitive verb (base form)
cares_sounds | 3rd person sg. present tense
office_staff | noun
promotion_trauma | noun
famous_talented | adjective
publicly_badly | adverb
his_the | NP-initial

The syntactic distribution of an individual word can now be more accurately determined by the following algorithm:
• compute an output vector for each position in the text at which the target word occurs.
• for each output vector j, do the following:
  - determine the centroid of the cluster i which is closest;
  - compute the correlation coefficient of the output vector j and the centroid of the output cluster i. This is the score s_{i,j} for cluster i and vector j. Assign zero to the scores of the other clusters for this vector: s_{k,j} := 0, k ≠ i.
• for each cluster i, compute the final score f_i as the sum of the scores s_{i,j}: f_i := Σ_j s_{i,j}
• normalize the vector of 200 final scores to unit length
This algorithm was applied to June 1990. If, for a given word, the sum of the unnormalized final scores was less than 30 (corresponding to roughly 100 occurrences in June), then this word was discarded. Table 4 lists the highest scoring categories for 10 random words and 11 selected ambiguous words. (Only categories with a score of at least 0.2 are listed.)
The network failed to learn the distinctions between adjectives, intransitive present participles, and past participles in the frame "to-be + [] + non-NP". For this reason, the adjective close, the present participle beginning, and the past participle shot are all classified as belonging to the category STRUGGLING_TRAVELING. (Present participles are successfully discriminated in the frame "to-be + [] + NP": see winning in the table, which is classified as the progressive form of a transitive verb: HOLDING_PROMISING.) This is the place where linguistic knowledge has to be injected, in the form of the following two rules:
• If a word in STRUGGLING_TRAVELING is a morphological present participle or past participle, assign it to that category, otherwise to the category ADJECTIVE_PREDICATIVE.
• If a word in a noun category is a morphological plural, assign it to NOUN_PLURAL, to NOUN_SINGULAR otherwise.
With these two rules, all major categories are among the first found by the algorithm; in particular, the major categories of the ambiguous words better (adjective/adverb), close (verb/adjective), work (noun/base form of verb), hopes (noun/third person singular), beginning (noun/present participle), shot (noun/past participle) and 's ('s/is). There are two clear errors: GIVEN_TAKING for contain, and RICAN_ADVISORY for 's, both of rank three in the table.

Table 4: The highest scoring categories for 10 random and 11 selected words.

word: adequate admit appoint consensus contain dodgers genes language legacy thirds good better close work hospital buy hopes beginning shot 's winning
highest scoring categories: universal_martial (0.50) excel_depart (0.88) prompt_select (0.72) office_staff (0.71) gather_propose (0.76) promotion_trauma (0.57) office_staff (0.43) promotion_trauma (0.65) promotion_trauma (0.95) hand_shooting (0.75) famous_talented (0.86) famous_talented (0.65) gather_propose (0.43) excel_depart (0.72) promotion_trauma (0.75) gather_propose (0.77) promotion_trauma (0.56) promotion_trauma (0.90) hand_shooting (0.54) 's_facto (0.54) famous_talented (0.71) struggling_traveling (0.33) gather_propose (0.30) gather_propose (0.65) promotion_trauma (0.43) prompt_select (0.43) yankees_paper (0.52) promotion_trauma (0.75) office_staff (0.57) office_staff (0.22) famous_talented (0.41) his_the (0.34) struggling_traveling (0.42) promotion_trauma (0.51) office_agent (0.40) prompt_select (0.47) cares_sounds (0.53) struggling_traveling (0.34) struggling_traveling (0.45) makes_is (0.40) holding_promising (0.33) several_numerous (0.33) prompt_select (0.20) hand_shooting (0.39) given_taking (0.24) fantasy_ticket (0.48) route_style (0.22) office_agent (0.21) iron_pickup (0.36) publicly_badly (0.27) famous_talented (0.36) remain_want (0.27) fantasy_ticket (0.24) remain_want (0.22) windows_pictures (0.21) promotion_trauma (0.40) rican_advisory (0.27) iron_pickup (0.29)

These
results seem promising given the fact that the context vectors consist of only 15 units. It seems naive to believe that all syntactic information of the sequence of words to the left (or to the right) can be expressed in such a small number of units. A larger experiment with more hidden units for each context vector will hopefully yield better results.

DISCUSSION AND CONCLUSION

Brill and Marcus describe an approach with similar goals in (Brill and Marcus 1992). Their method requires an initial consultation of a native speaker for a couple of hours. The method presented here makes a short consultation of a native speaker necessary; however, it occurs at the end, as the last step of category induction. This has the advantage of avoiding bias in an initial a priori classification.
Finch and Chater present an approach to category induction that also starts out with offset counts, proceeds by classifying words on the basis of these counts, and then goes back to the local context for better results (Finch and Chater 1992). But the mathematical and computational techniques used here seem to be more efficient and more accurate than Finch and Chater's, and hence applicable to vocabularies of a more realistic size.
An important feature of the last step of the procedure, the neural network, is that the lexicographer or linguist can browse the space of output vectors for a given word to get a sense of its syntactic distribution (for instance, uses of better as an adverb) or to improve the classification (for instance, by splitting an induced category that is too coarse). The algorithm can also be used for categorizing unseen words. This is possible as long as the words surrounding it are known.
The procedure for part-of-speech categorization introduced here may be of interest even for words whose part-of-speech labels are known.
The dimensionality reduction makes the global distributional pattern of a word available in a profile consisting of a dozen or so real numbers. Because of its compactness, this profile can be used efficiently as an additional source of information for improving the performance of natural language processing systems. For example, adverbs may be lumped into one category in the lexicon of a processing system. But the category vectors of adverbs that are used in different positions, such as completely (mainly pre-adjectival), normally (mainly pre-verbal) and differently (mainly post-verbal), are different because of their different distributional properties. This information can be exploited by a parser if the category vectors are available as an additional source of information.
The model also has implications for language acquisition. (Maratsos and Chalkley 1981) propose that the absolute position of words in sentences is important evidence in children's learning of categories. The results presented here show that relative position is sufficient for learning the major syntactic categories. This suggests that relative position could be important information for learning syntactic categories in child language acquisition.
The basic idea of this paper is to collect a large amount of distributional information consisting of word cooccurrence counts and to compute a compact, low-rank approximation. The same approach was applied in (Schütze, forthcoming) to the induction of vector representations for semantic information about words (a different source of distributional information was used there). Because of the graded information present in a multi-dimensional space, vector representations are particularly well-suited for integrating different sources of information for disambiguation.
In summary, the algorithm introduced here provides a language-independent, largely automatic method for inducing highly text-specific syntactic categories for a large vocabulary. It is to be hoped that the method for distributional analysis presented here will make it easier for computational and traditional lexicographers to build dictionaries that accurately reflect language use.

ACKNOWLEDGMENTS

I'm indebted to Mike Berry for SVDPACK and to Marti Hearst, Jan Pedersen and two anonymous reviewers for very helpful comments. This work was partially supported by the National Center for Supercomputing Applications under grant BNS930000N.

REFERENCES

Berry, Michael W. 1992. Large-scale sparse singular value computations. The International Journal of Supercomputer Applications 6(1):13-49.
Brill, Eric, and Mitch Marcus. 1992. Tagging an Unfamiliar Text with Minimal Human Supervision. In Working Notes of the AAAI Fall Symposium on Probabilistic Approaches to Natural Language, ed. Robert Goldman. AAAI Press.
Brown, Peter F., Vincent J. Della Pietra, Peter V. deSouza, Jenifer C. Lai, and Robert L. Mercer. 1992. Class-Based n-gram Models of Natural Language. Computational Linguistics 18(4):467-479.
Cutting, Douglas R., Jan O. Pedersen, David Karger, and John W. Tukey. 1992. Scatter/Gather: A Cluster-based Approach to Browsing Large Document Collections. In Proceedings of SIGIR '92.
Deerwester, Scott, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science 41(6):391-407.
Elman, Jeffrey L. 1990. Finding Structure in Time. Cognitive Science 14:179-211.
Elman, Jeffrey L. 1991. Distributed Representations, Simple Recurrent Networks, and Grammatical Structure. Machine Learning 7(2/3):195-225.
Finch, Steven, and Nick Chater. 1992. Bootstrapping Syntactic Categories Using Statistical Methods.
In Background and Experiments in Machine Learning of Natural Language, ed. Walter Daelemans and David Powers. Tilburg University, Institute for Language Technology and AI.
Maratsos, M. P., and M. Chalkley. 1981. The internal language of children's syntax: the ontogenesis and representation of syntactic categories. In Children's Language, ed. K. Nelson. New York: Gardner Press.
Pinker, Steven. 1984. Language Learnability and Language Development. Cambridge MA: Harvard University Press.
Rumelhart, D. E., G. E. Hinton, and R. J. Williams. 1986. Learning Internal Representations by Error Propagation. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume I: Foundations, ed. David E. Rumelhart, James L. McClelland, and the PDP Research Group. Cambridge MA: The MIT Press.
Schütze, Hinrich. Forthcoming. Word Space. In Advances in Neural Information Processing Systems 5, ed. Stephen J. Hanson, Jack D. Cowan, and C. Lee Giles. San Mateo CA: Morgan Kaufmann.
Williams, Ronald J., and Jing Peng. 1990. An Efficient Gradient-Based Algorithm for On-Line Training of Recurrent Network Trajectories. Neural Computation 2:490-501.
1993
Automatic Grammar Induction and Parsing Free Text: A Transformation-Based Approach

Eric Brill*
Department of Computer and Information Science
University of Pennsylvania
[email protected]

Abstract

In this paper we describe a new technique for parsing free text: a transformational grammar1 is automatically learned that is capable of accurately parsing text into binary-branching syntactic trees with nonterminals unlabelled. The algorithm works by beginning in a very naive state of knowledge about phrase structure. By repeatedly comparing the results of bracketing in the current state to proper bracketing provided in the training corpus, the system learns a set of simple structural transformations that can be applied to reduce error. After describing the algorithm, we present results and compare these results to other recent results in automatic grammar induction.

INTRODUCTION

There has been a great deal of interest of late in the automatic induction of natural language grammar. Given the difficulty inherent in manually building a robust parser, along with the availability of large amounts of training material, automatic grammar induction seems like a path worth pursuing. A number of systems have been built that can be trained automatically to bracket text into syntactic constituents. In (MM90), mutual information statistics are extracted from a corpus of text and this information is then used to parse new text. (Sam86) defines a function to score the quality of parse trees, and then uses simulated annealing to heuristically explore the entire space of possible parses for a given sentence. In (BM92a), distributional analysis techniques are applied to a large corpus to learn a context-free grammar. The most promising results to date have been

*The author would like to thank Mark Liberman, Melting Lu, David Magerman, Mitch Marcus, Rich Pito, Giorgio Satta, Yves Schabes and Tom Veatch. This work was supported by DARPA and AFOSR jointly under grant No.
AFOSR-90-0066, and by ARO grant No. DAAL 03-89-C0031 PRI.

1Not in the traditional sense of the term.

based on the inside-outside algorithm, which can be used to train stochastic context-free grammars. The inside-outside algorithm is an extension of the finite-state based Hidden Markov Model (by (Bak79)), which has been applied successfully in many areas, including speech recognition and part of speech tagging. A number of recent papers have explored the potential of using the inside-outside algorithm to automatically learn a grammar (LY90, SJM90, PS92, BW92, CC92, SRO93).
Below, we describe a new technique for grammar induction. The algorithm works by beginning in a very naive state of knowledge about phrase structure. By repeatedly comparing the results of parsing in the current state to the proper phrase structure for each sentence in the training corpus, the system learns a set of ordered transformations which can be applied to reduce parsing error. We believe this technique has advantages over other methods of phrase structure induction. Some of the advantages include: the system is very simple, it requires only a very small set of transformations, a high degree of accuracy is achieved, and only a very small training corpus is necessary. The trained transformational parser is completely symbolic and can bracket text in linear time with respect to sentence length. In addition, since some tokens in a sentence are not even considered in parsing, the method could prove to be considerably more robust than a CFG-based approach when faced with noise or unfamiliar input. After describing the algorithm, we present results and compare these results to other recent results in automatic phrase structure induction.

TRANSFORMATION-BASED ERROR-DRIVEN LEARNING

The phrase structure learning algorithm is a transformation-based error-driven learner.
This learning paradigm, illustrated in figure 1, has proven to be successful in a number of different natural language applications, including part of speech tagging (Bri92, BM92b), prepositional phrase attachment (BR93), and word classification (Bri93).

Figure 1: Transformation-Based Error-Driven Learning.

In its initial state, the learner is capable of annotating text but is not very good at doing so. The initial state is usually very easy to create. In part of speech tagging, the initial state annotator assigns every word its most likely tag. In prepositional phrase attachment, the initial state annotator always attaches prepositional phrases low. In word classification, all words are initially classified as nouns. The naively annotated text is compared to the true annotation as indicated by a small manually annotated corpus, and transformations are learned that can be applied to the output of the initial state annotator to make it better resemble the truth.

LEARNING PHRASE STRUCTURE

The phrase structure learning algorithm is trained on a small corpus of partially bracketed text which is also annotated with part of speech information. All of the experiments presented below were done using the Penn Treebank annotated corpus (MSM93). The learner begins in a naive initial state, knowing very little about the phrase structure of the target corpus. In particular, all that is initially known is that English tends to be right branching and that final punctuation is final punctuation. Transformations are then learned automatically which transform the output of the naive parser into output which better resembles the phrase structure found in the training corpus. Once a set of transformations has been learned, the system is capable of taking sentences tagged with parts of speech and returning a binary-branching structure with nonterminals unlabelled.2
2This is the same output given by systems described in (MM90, Bri92, PS92, SRO93).

The Initial State Of The Parser

Initially, the parser operates by assigning a right-linear structure to all sentences. The only exception is that final punctuation is attached high. So, the sentence "The dog and old cat ate ." would be incorrectly bracketed as:
( ( The ( dog ( and ( old ( cat ate ) ) ) ) ) . )
The parser in its initial state will obviously not bracket sentences with great accuracy. In some experiments below, we begin with an even more naive initial state of knowledge: sentences are parsed by assigning them a random binary-branching structure with final punctuation always attached high.

Structural Transformations

The next stage involves learning a set of transformations that can be applied to the output of the naive parser to make these sentences better conform to the proper structure specified in the training corpus. The list of possible transformation types is prespecified. Transformations involve making a simple change triggered by a simple environment. In the current implementation, there are twelve allowable transformation types:
• (1-8) (Add|delete) a (left|right) parenthesis to the (left|right) of part of speech tag X.
• (9-12) (Add|delete) a (left|right) parenthesis between tags X and Y.
To carry out a transformation by adding or deleting a parenthesis, a number of additional simple changes must take place to preserve balanced parentheses and binary branching. To give an example, to delete a left paren in a particular environment, the following operations take place (assuming, of course, that there is a left paren to delete):
1. Delete the left paren.
2. Delete the right paren that matches the just deleted paren.
3. Add a left paren to the left of the constituent immediately to the left of the deleted left paren.
4. Add a right paren to the right of the constituent immediately to the right of the deleted left paren.
5.
If there is no constituent immediately to the right, or none immediately to the left, then the transformation fails to apply.
Structurally, the transformation can be seen as follows. If we wish to delete a left paren to the right of constituent X3, where X appears in a subtree of the form:
( X ( YY Z ) )
carrying out these operations will transform this subtree into:4
( ( X YY ) Z )
Given the sentence:5
The dog barked .
this would initially be bracketed by the naive parser as:
( ( The ( dog barked ) ) . )
If the transformation delete a left paren to the right of a determiner is applied, the structure would be transformed to the correct bracketing:
( ( ( The dog ) barked ) . )
To add a right parenthesis to the right of YY, YY must once again be in a subtree of the form:
( X ( YY Z ) )
If it is, the following steps are carried out to add the right paren:
1. Add the right paren.
2. Delete the left paren that now matches the newly added paren.
3. Find the right paren that used to match the just deleted paren and delete it.
4. Add a left paren to match the added right paren.
This results in the same structural change as deleting a left paren to the right of X in this particular structure. Applying the transformation add a right paren to the right of a noun to the bracketing:
( ( The ( dog barked ) ) . )
will once again result in the correct bracketing:
( ( ( The dog ) barked ) . )

3To the right of the rightmost terminal dominated by X if X is a nonterminal.
4The twelve transformations can be decomposed into two structural transformations, that shown here and its converse, along with six triggering environments.
5Input sentences are also labelled with parts of speech.

Learning Transformations

Learning proceeds as follows. Sentences in the training set are first parsed using the naive parser, which assigns right-linear structure to all sentences, attaching final punctuation high.
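The naive initial-state parser and one instance of the structural transformation can be sketched as follows. The nested-list tree encoding and the helper names are mine, not the paper's, and the transformation is applied bottom-up here, which is one of several reasonable application orders:

```python
# Leaves are (word, tag) tuples; internal nodes are two-element lists.

def right_linear(tagged):
    # right-branching structure with final punctuation attached high
    *body, punct = tagged
    tree = body[-1]
    for leaf in reversed(body[:-1]):
        tree = [leaf, tree]
    return [tree, punct]

def tag_of(node):
    # tag of the rightmost terminal dominated by the node
    return node[1] if isinstance(node, tuple) else tag_of(node[1])

def delete_left_paren_after(tree, x):
    # rewrite subtrees  ( X ( YY Z ) )  ->  ( ( X YY ) Z )
    # wherever X's rightmost terminal has tag x
    if isinstance(tree, tuple):
        return tree
    left, right = (delete_left_paren_after(t, x) for t in tree)
    if tag_of(left) == x and isinstance(right, list):
        yy, z = right
        return [[left, yy], z]
    return [left, right]

sent = [("The", "DT"), ("dog", "NN"), ("barked", "VBD"), (".", ".")]
t0 = right_linear(sent)                  # ( ( The ( dog barked ) ) . )
t1 = delete_left_paren_after(t0, "DT")   # ( ( ( The dog ) barked ) . )
```

Running the "The dog barked ." example through these two steps reproduces the before/after bracketings shown above.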
Next, for each possible instantiation of the twelve transformation templates, that particular transformation is applied to the naively parsed sentences. The resulting structures are then scored using some measure of success that compares these parses to the correct structural descriptions for the sentences provided in the training corpus. The transformation resulting in the best scoring structures then becomes the first transformation of the ordered set of transformations that are to be learned. That transformation is applied to the right-linear structures, and then learning proceeds on the corpus of improved sentence bracketings. The following procedure is carried out repeatedly on the training corpus until no more transformations can be found whose application reduces the error in parsing the training corpus:
1. The best transformation is found for the structures output by the parser in its current state.6
2. The transformation is applied to the output resulting from bracketing the corpus using the parser in its current state.
3. This transformation is added to the end of the ordered list of transformations.
4. Go to 1.
After a set of transformations has been learned, it can be used to effectively parse fresh text. To parse fresh text, the text is first naively parsed and then every transformation is applied, in order, to the naively parsed text.
One nice feature of this method is that different measures of bracketing success can be used: learning can proceed in such a way as to try to optimize any specified measure of success. The measure we have chosen for our experiments is the same measure described in (PS92), which is one of the measures that arose out of a parser evaluation workshop (ea91).

6The state of the parser is defined as naive initial-state knowledge plus all transformations that currently have been learned.
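The greedy loop above can be written schematically as follows. This is a structural sketch only: `apply_t` and `score` stand in for the real transformation application and bracketing-accuracy measure, the function names are mine, and the toy illustration at the end uses numbers rather than bracketings purely to show the control flow.

```python
def learn_transformations(state, gold, templates, apply_t, score):
    # greedy error-driven loop: repeatedly pick the transformation whose
    # application most improves the score, commit it, and record it
    learned = []
    while True:
        base = score(state, gold)
        best_t, best_gain = None, 0.0
        for t in templates:                    # every instantiation
            gain = score(apply_t(t, state), gold) - base
            if gain > best_gain:
                best_t, best_gain = t, gain
        if best_t is None:                     # nothing reduces error: stop
            return learned
        state = apply_t(best_t, state)         # commit the best transformation
        learned.append(best_t)

# toy illustration: "bracketings" are numbers, transformations are +1/-1 shifts
toy_apply = lambda t, xs: [x + t for x in xs]
toy_score = lambda xs, gold: -sum(abs(x - g) for x, g in zip(xs, gold))
learned = learn_transformations([0, 0], [2, 2], [+1, -1], toy_apply, toy_score)
```

The loop terminates as soon as no candidate yields a positive net change, which is exactly the stopping condition that makes the overtraining behavior discussed later possible.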
The measure is the percentage of constituents (strings of words between matching parentheses) from sentences output by our system which do not cross any constituents in the Penn Treebank structural description of the sentence. For example, if our system outputs:

( ( ( The big ) ( dog ate ) ) . )

and the Penn Treebank bracketing for this sentence was:

( ( ( The big dog ) ate ) . )

then the constituent the big would be judged correct whereas the constituent dog ate would not. Below are the first seven transformations found from one run of training on the Wall Street Journal corpus, which was initially bracketed using the right-linear initial-state parser.

1. Delete a left paren to the left of a singular noun.
2. Delete a left paren to the left of a plural noun.
3. Delete a left paren between two proper nouns.
4. Delete a left paren to the right of a determiner.
5. Add a right paren to the left of a comma.
6. Add a right paren to the left of a period.
7. Delete a right paren to the left of a plural noun.

The first four transformations all extract noun phrases from the right-linear initial structure. The sentence "The cat meowed ." would initially be bracketed as: [7]

( ( The ( cat meowed ) ) . )

Applying the first transformation to this bracketing would result in:

( ( ( The cat ) meowed ) . )

[7] These examples are not actual sentences in the corpus. We have chosen simple sentences for clarity.

Applying the fifth transformation to the bracketing:

( ( We ( ran ( , ( and ( they walked ) ) ) ) ) . )

would result in:

( ( ( We ran ) ( , ( and ( they walked ) ) ) ) . )

RESULTS

In the first experiment we ran, training and testing were done on the Texas Instruments Air Travel Information System (ATIS) corpus (HGD90). [8] In table 1, we compare results we obtained to results cited in (PS92) using the inside-outside algorithm on the same corpus. Accuracy is measured in terms of the percentage of noncrossing constituents in the test corpus, as described above.
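The noncrossing-constituent measure can be sketched as below. This is a toy implementation over nested-tuple bracketings whose leaves are single words; it is illustrative only, not the evaluation code used in the experiments.

```python
def spans(tree, start=0):
    """Return (end, list of (start, end) constituent word spans)
    for a nested-tuple bracketing with single-word leaves."""
    if not isinstance(tree, tuple):
        return start + 1, []
    end, out = start, []
    for child in tree:
        end, sub = spans(child, end)
        out.extend(sub)
    out.append((start, end))
    return end, out

def noncrossing_accuracy(system, gold):
    """Fraction of system constituents that cross no gold constituent."""
    _, sys_spans = spans(system)
    _, gold_spans = spans(gold)
    def crosses(s, g):
        # Partial overlap in either direction counts as a crossing.
        return s[0] < g[0] < s[1] < g[1] or g[0] < s[0] < g[1] < s[1]
    ok = [s for s in sys_spans if not any(crosses(s, g) for g in gold_spans)]
    return len(ok) / len(sys_spans)

# "The big" does not cross the gold brackets, but "dog ate"
# crosses the gold constituent "The big dog".
system = ((("The", "big"), ("dog", "ate")), ".")
gold = ((("The", "big", "dog"), "ate"), ".")
print(noncrossing_accuracy(system, gold))
```

On the worked example from the text, three of the four system constituents are noncrossing.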
Our system was tested by using the training set to learn a set of transformations, and then applying these transformations to the test set and scoring the resulting output. In this experiment, 64 transformations were learned (compared with 4096 context-free rules and probabilities used in the inside-outside algorithm experiment). It is significant that we obtained comparable performance using a training corpus only 21% as large as that used to train the inside-outside algorithm.

Method                    # of Training       Accuracy
                          Corpus Sentences
Inside-Outside            700                 90.36%
Transformation Learner    150                 91.12%

Table 1: Comparing two learning methods on the ATIS corpus.

After applying all learned transformations to the test corpus, 60% of the sentences had no crossing constituents, 74% had fewer than two crossing constituents, and 85% had fewer than three. The mean sentence length of the test corpus was 11.3. In figure 2, we have graphed percentage correct as a function of the number of transformations that have been applied to the test corpus. As the transformation number increases, overtraining sometimes occurs. In the current implementation of the learner, a transformation is added to the list if it results in any positive net change in the training set. Toward the end of the learning procedure, transformations are found that only affect a very small percentage of training sentences. Since small counts are less reliable than large counts, we cannot reliably assume that these transformations will also improve performance in the test corpus. One way around this overtraining would be to set a threshold: specify a minimum level of improvement that must result for a transformation to be learned.

[8] In all experiments described in this paper, results are calculated on a test corpus which was not used in any way in either training the learning algorithm or in developing the system.
Another possibility is to use additional training material to prune the set of learned transformations.

Figure 2: Results From the ATIS Corpus, Starting With Right-Linear Structure. (Graph: percentage correct as a function of rule number, 0-60.)

We next ran an experiment to determine what performance could be achieved if we dropped the initial right-linear assumption. Using the same training and test sets as above, sentences were initially assigned a random binary-branching structure, with final punctuation always attached high. Since there was less regular structure in this case than in the right-linear case, many more transformations were found, 147 transformations in total. When these transformations were applied to the test set, a bracketing accuracy of 87.13% resulted.

The ATIS corpus is structurally fairly regular. To determine how well our algorithm performs on a more complex corpus, we ran experiments on the Wall Street Journal. Results from this experiment can be found in table 2. [9] Accuracy is again measured as the percentage of constituents in the test set which do not cross any Penn Treebank constituents. [10] As a point of comparison, in (SRO93) an experiment was done using the inside-outside algorithm on a corpus of WSJ sentences of length 1-15. Training was carried out on a corpus of 1,095 sentences, and an accuracy of 90.2% was obtained in bracketing a test set.

Sent.     # Training      # of               %
Length    Corpus Sents    Transformations    Accuracy
2-15      250             83                 88.1
2-15      500             163                89.3
2-15      1000            221                91.6
2-20      250             145                86.2
2-25      250             160                83.8

Table 2: WSJ Sentences

In the corpus we used for the experiments of sentence length 2-15, the mean sentence length was 10.80. In the corpus used for the experiment of sentence length 2-25, the mean length was 16.82. As would be expected, performance degrades somewhat as sentence length increases.

[9] For sentences of length 2-15, the initial right-linear parser achieves 69% accuracy. For sentences of length 2-20, 63% accuracy is achieved and for sentences of length 2-25, accuracy is 59%.
In table 3, we show the percentage of sentences in the test corpus that have no crossing constituents, and the percentage that have only a very small number of crossing constituents. [11]

Sent      # Training      % of 0-error    % of <=1-error    % of <=2-error
Length    Corpus Sents    Sents           Sents             Sents
2-15      500             53.7            72.3              84.6
2-15      1000            62.4            77.2              87.8
2-25      250             29.2            44.9              59.9

Table 3: WSJ Sentences.

In table 4, we show the standard deviation measured from three different randomly chosen training sets of each sample size and randomly chosen test sets of 500 sentences each, as well as the accuracy as a function of training corpus size for sentences of length 2 to 20.

# Training        % Correct    Std. Dev.
Corpus Sents
0                 63.0         0.69
10                75.8         2.95
50                82.1         1.94
100               84.7         0.56
250               86.2         0.46
750               87.3         0.61

Table 4: WSJ Sentences of Length 2 to 20.

We also ran an experiment on WSJ sentences of length 2-15 starting with random binary-branching structures with final punctuation attached high. In this experiment, 325 transformations were found using a 250-sentence training corpus, and the accuracy resulting from applying these transformations to a test set was 84.72%. Finally, in figure 3 we show the sentence length distribution in the Wall Street Journal corpus.

Figure 3: The Distribution of Sentence Lengths in the WSJ Corpus. (Histogram over sentence length, 0-100.)

[10] In all of our experiments carried out on the Wall Street Journal, the test set was a randomly selected set of 500 sentences.
[11] For sentences of length 2-15, the initial right-linear parser parses 17% of sentences with no crossing errors, 35% with one or fewer errors and 50% with two or fewer. For sentences of length 2-25, 7% of sentences are parsed with no crossing errors, 16% with one or fewer, and 24% with two or fewer.
While the numbers presented above allow us to compare the transformation learner with systems trained and tested on comparable corpora, these results are all based upon the assumption that the test data is tagged fairly reliably (manually tagged text was used in all of these experiments, as well as in the experiments of (PS92, SRO93)). When parsing free text, we cannot assume that the text will be tagged with the accuracy of a human annotator. Instead, an automatic tagger would have to be used to first tag the text before parsing. To address this issue, we ran one experiment where we randomly induced a 5% tagging error rate beyond the error rate of the human annotator. Errors were induced in such a way as to preserve the unigram part of speech tag probability distribution in the corpus. The experiment was run for sentences of length 2-15, with a training set of 1000 sentences and a test set of 500 sentences. The resulting bracketing accuracy was 90.1%, compared to 91.6% accuracy when using an unadulterated training corpus. Accuracy only degraded by a small amount when training on the corpus with adulterated part of speech tags, suggesting that high parsing accuracy rates could be achieved if tagging of the input were done automatically by a part of speech tagger.

CONCLUSIONS

In this paper, we have described a new approach for learning a grammar to automatically parse text. The method can be used to obtain high parsing accuracy with a very small training set. Instead of learning a traditional grammar, an ordered set of structural transformations is learned that can be applied to the output of a very naive parser to obtain binary-branching trees with unlabelled nonterminals. Experiments have shown that these parses conform with high accuracy to the structural descriptions specified in a manually annotated corpus.
Unlike other recent attempts at automatic grammar induction that rely heavily on statistics both in training and in the resulting grammar, our learner is only very weakly statistical. For training, only integers are needed and the only mathematical operations carried out are integer addition and integer comparison. The resulting grammar is completely symbolic. Unlike learners based on the inside-outside algorithm which attempt to find a grammar to maximize the probability of the training corpus in hope that this grammar will match the grammar that provides the most accurate structural descriptions, the transformation-based learner can readily use any desired success measure in learning.

We have already begun the next step in this project: automatically labelling the nonterminal nodes. The parser will first use the transformational grammar to output a parse tree without nonterminal labels, and then a separate algorithm will be applied to that tree to label the nonterminals. The nonterminal-node labelling algorithm makes use of ideas suggested in (Bri92), where nonterminals are labelled as a function of the labels of their daughters. In addition, we plan to experiment with other types of transformations. Currently, each transformation in the learned list is only applied once in each appropriate environment. For a transformation to be applied more than once in one environment, it must appear in the transformation list more than once. One possible extension to the set of transformation types would be to allow for transformations of the form: add/delete a paren as many times as is possible in a particular environment. We also plan to experiment with other scoring functions and control strategies for finding transformations and to use this system as a postprocessor to other grammar induction systems, learning transformations to improve their performance. We hope these future paths will lead to a trainable and very accurate parser for free text.
References

[Bak79] J. Baker. Trainable grammars for speech recognition. In Speech Communication Papers Presented at the 97th Meeting of the Acoustical Society of America, 1979.

[BM92a] E. Brill and M. Marcus. Automatically acquiring phrase structure using distributional analysis. In DARPA Workshop on Speech and Natural Language, Harriman, N.Y., 1992.

[BM92b] E. Brill and M. Marcus. Tagging an unfamiliar text with minimal human supervision. In Proceedings of the Fall Symposium on Probabilistic Approaches to Natural Language, AAAI Technical Report. American Association for Artificial Intelligence, 1992.

[BR93] E. Brill and P. Resnik. A transformation based approach to prepositional phrase attachment. Technical report, Department of Computer and Information Science, University of Pennsylvania, 1993.

[Bri92] E. Brill. A simple rule-based part of speech tagger. In Proceedings of the Third Conference on Applied Natural Language Processing, ACL, Trento, Italy, 1992.

[Bri93] E. Brill. A Corpus-Based Approach to Language Learning. PhD thesis, Department of Computer and Information Science, University of Pennsylvania, 1993. Forthcoming.

[BW92] T. Briscoe and N. Waegner. Robust stochastic parsing using the inside-outside algorithm. In Workshop Notes from the AAAI Statistically-Based NLP Techniques Workshop, 1992.

[CC92] G. Carroll and E. Charniak. Learning probabilistic dependency grammars from labelled text. In Proceedings of the Fall Symposium on Probabilistic Approaches to Natural Language, AAAI Technical Report. American Association for Artificial Intelligence, 1992.

[ea91] E. Black et al. A procedure for quantitatively comparing the syntactic coverage of English grammars. In Proceedings of the Fourth DARPA Speech and Natural Language Workshop, pages 306-311, 1991.

[HGD90] C. Hemphill, J. Godfrey, and G. Doddington. The ATIS spoken language systems pilot corpus. In Proceedings of the DARPA Speech and Natural Language Workshop, 1990.

[LY90] K. Lari and S. Young. The estimation of stochastic context-free grammars using the inside-outside algorithm. Computer Speech and Language, 4, 1990.

[MM90] D. Magerman and M. Marcus. Parsing a natural language using mutual information statistics. In Proceedings, Eighth National Conference on Artificial Intelligence (AAAI 90), 1990.

[MSM93] M. Marcus, B. Santorini, and M. Marcinkiewicz. Building a large annotated corpus of English: the Penn Treebank. To appear in Computational Linguistics, 1993.

[PS92] F. Pereira and Y. Schabes. Inside-outside reestimation from partially bracketed corpora. In Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics, Newark, De., 1992.

[Sam86] G. Sampson. A stochastic approach to parsing. In Proceedings of COLING 1986, Bonn, 1986.

[SJM90] R. Sharman, F. Jelinek, and R. Mercer. Generating a grammar for statistical training. In Proceedings of the 1990 DARPA Speech and Natural Language Workshop, 1990.

[SRO93] Y. Schabes, M. Roth, and R. Osborne. Parsing the Wall Street Journal with the inside-outside algorithm. In Proceedings of the 1993 European ACL, Utrecht, The Netherlands, 1993.
A Competition-Based Explanation of Syntactic Attachment Preferences and Garden Path Phenomena

Suzanne Stevenson
Department of Computer Science
University of Toronto
Toronto, Ontario M5S 1A4 Canada
[email protected]

Abstract

This paper presents a massively parallel parser that predicts critical attachment behaviors of the human sentence processor, without the use of explicit preference heuristics or revision strategies. The processing of a syntactic ambiguity is modeled as an active, distributed competition among the potential attachments for a phrase. Computationally motivated constraints on the competitive mechanism provide a principled and uniform account of a range of human attachment preferences and garden path phenomena.

1 A Competition-Based Parser

A model of the human parser must explain, among other factors, the following two aspects of the processing of a syntactic ambiguity: the initial attachment preferences that people exhibit, and their ability or inability to later revise an incorrect attachment. This paper presents a competition-based parser, CAPERS, that predicts critical attachment behaviors of the human sentence processor, without the use of explicit preference heuristics or revision strategies. CAPERS is a massively parallel network of processing nodes that represent syntactic phrases and their attachments within a parse tree. A syntactic ambiguity leads to a network of alternative attachments that compete in parallel for numeric activation; an attachment wins over its competitors when it amasses activation above a certain threshold. The competition among attachments is achieved solely through a technique called competition-based spreading activation (CBSA) (Reggia 87). The effective use of CBSA requires restrictions on the syntactic attachments that are allowed to compete simultaneously.
Ensuring these network restrictions necessitates the further constraint that a stable state of the network can only represent a single valid parse state. The resulting network structure defines a limited set of competing attachments that simultaneously define the initial attachments for the current input phrase, along with the reanalysis possibilities for phrases previously structured within the parse tree.

The competitive mechanism and its ensuing restrictions have profound consequences for the modeling of the human sentence processor. Whereas other models must impose explicit conditions on the parser's attachment behavior (Abney 89; Gibson 91; McRoy & Hirst 90; Pritchett 88), in CAPERS both initial attachment preferences and reanalyzability are a side effect of independently motivated computational assumptions. Furthermore, parsing models generally employ two different computational mechanisms in determining syntactic attachments: a general parser to establish the attachment possibilities, and additional strategies for choosing among them (Abney 89; Frazier 78; Gibson 91; McRoy & Hirst 90; Shieber 83). By contrast, CAPERS provides a more restrictive account, in which a single competitive mechanism imposes constraints on the parser that determine the potential attachments, as well as choosing the preferred attachment from among those.

The competitive mechanism of CAPERS also leads to an advantageous integration of serialism and parallelism. In order to conform to human memory limitations, other parallel models must be augmented with a scheme for reducing the number of structures that are maintained (Gibson 91; Gorrell 87). Such pruning schemes are unnecessary in CAPERS, since inherent properties of the competitive mechanism lead to a restriction to maintain a single parse state. However, in spite of this serial aspect, CAPERS is not a simple serial model.
The network incorporates each input phrase through a parallel atomic operation that determines both the initial attachment for the current phrase and any revision of earlier attachments. Thus, CAPERS avoids the problems of purely serial or race-based models that rely on backtracking, which is cognitively implausible, or explicit revision strategies, which can be unrestrictive (Abney 89; Frazier 78; Inoue & Fodor 92; McRoy & Hirst 90; Pritchett 88).

Other work (Stevenson 93b, 90) describes the detailed motivation for the CAPERS model, its explanation of serial and parallel effects in human parsing, and its predictions of a broad range of human attachment preferences. This paper focuses on the competitive mechanism described above. Section 2 briefly describes the implementation of the parser. [1] Section 3 discusses the constraints on the network structure, and Section 4 demonstrates the consequences of these constraints for the processing of attachment ambiguities. Section 5 summarizes how the competitive mechanism provides a principled and uniform account of the example human attachment preferences and garden path phenomena.

2 The Parsing Network

CAPERS dynamically creates the parsing network by allocating processing nodes in response to the input. Control of the parse is distributed among these nodes, which make attachment decisions solely on the basis of the local communication of simple symbolic features and numeric activation. The symbolic information determines the grammaticality of potential attachments, while numeric activation weighs the relative strengths of the valid alternatives. The spreading activation process allows the network to gradually settle on a set of winning attachments that form a globally consistent parse tree.
Building the Network

When an input token is read, the parser activates a set of phrasal nodes, or p-nodes, from a pool of X-bar templates; their symbolic features are initialized based on the input token's lexical entry. Figure 1 shows a sample X-bar template and its instantiation. Syntactic phrases are only allocated in response to explicit evidence in the input; top-down hypothesizing of phrases is disallowed because it greatly increases the complexity of the network. Next, the parser allocates processing nodes to represent the potential attachments between the current input phrase and the existing parse tree. Attachment nodes, or a-nodes, are established between potential sisters in the parse tree; each a-node connects to exactly two p-nodes, as shown in Figure 2. (In all figures, a-nodes are shown as squares, which are black when the a-node is fully activated.) Once the current phrase is connected to the existing network, each processing node iteratively updates its symbolic features and numeric activation, and outputs them to its neighbors. This network processing loop continues until the activation level of each a-node is either above a certain threshold Θ, or is zero. [2] The set of active a-nodes in this stable state represents the current parse tree structure. At this point, the next input token is read and the process is repeated.

Figure 1: An X-bar template and sample instantiation (shown for the verb expect, with features such as has_Case, has_category, selects_category, assigns_Case: Acc, and assigns_theta: theme).

Figure 2: (a) The basic configuration of a phrase in X-bar theory. (b) Representation of these attachments as sister relations in CAPERS.

[1] CAPERS is implemented in Common Lisp, serially simulating the parallel processing of the network.
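The processing loop just described can be sketched generically. This is an illustrative relaxation loop only, not the actual CAPERS update rules; the winner-take-all update below is a stand-in for the real symbolic-plus-numeric update.

```python
def settle(acts, update, theta=0.5, max_iters=100, eps=1e-6):
    """Iterate local updates until every a-node's activation is
    either above the threshold theta or (effectively) zero --
    the stable state that encodes the current parse structure."""
    for _ in range(max_iters):
        # Each node updates from the previous snapshot of activations.
        acts = {n: update(n, acts) for n in acts}
        if all(a >= theta or a <= eps for a in acts.values()):
            break
    return acts

# Stand-in update rule: the currently strongest a-node gains
# activation while its competitors decay toward zero.
def winner_take_all(n, acts):
    step = 0.2
    if acts[n] == max(acts.values()):
        return min(1.0, acts[n] + step)
    return max(0.0, acts[n] - step)

print(settle({"a1": 0.6, "a2": 0.4}, winner_take_all))
```

The fixed iteration cap mirrors the footnoted observation that the network stabilizes within a bounded number of iterations.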
Grammaticality of Attachments

Unlike other connectionist parsers (Cottrell 89; Fanty 85; Selman & Hirst 85), CAPERS is a hybrid model whose limited symbolic processing abilities support the direct representation of the grammar of a current linguistic theory. In Government-Binding theory (GB) (Chomsky 81, 86; Rizzi 90), the validity of syntactic structures is achieved by locally satisfying the grammatical constraints among neighboring syntactic phrases. CAPERS directly encodes this formulation of linguistic knowledge as a set of simultaneous local constraints. Symbolic features are simple attribute/value pairs, with the attributes corresponding to grammatical entities such as Case and theta roles. The values that these attributes can assume are taken from a pre-defined list of atoms. GB constraints are implemented as equality tests on the values of certain attributes. For example, the Case Filter in GB states that every NP argument must receive Case. In CAPERS, this is stated as a condition that the attribute Case must receive a value when the attribute Category equals Noun and the attribute IsArgument equals True. An a-node receives symbolic features from its p-nodes, which are used to determine the grammaticality of the attachment. If an a-node receives incompatible features from its two p-nodes, then it is an invalid attachment and it becomes inactive. Otherwise, it tests the equality conditions that were developed to encode the following subset of GB constraints: the Theta Criterion, the Case Filter, categorial selection, and the binding of traces.

Figure 3: The NP can attach as a sister to the V or the I'. The attachment to the V has a higher grammatical state value, and thus a higher initial activation level.

[2] The network always stabilizes in less than 100 iterations.
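The encoding of GB constraints as equality tests on feature attributes can be sketched as follows. This is an illustrative fragment covering only the Case Filter condition quoted in the text; the attribute names follow the paper, while the dictionary representation is an assumption of this sketch.

```python
def case_filter_ok(features):
    """Case Filter as a local equality test: when an attachment's
    features say Category = Noun and IsArgument = True, the Case
    attribute must have received some value."""
    if features.get("Category") == "Noun" and features.get("IsArgument") is True:
        return features.get("Case") is not None
    return True  # the filter does not apply to other categories

# An NP argument that has been assigned accusative Case passes;
# the same NP with no Case value fails the test.
assert case_filter_ok({"Category": "Noun", "IsArgument": True, "Case": "Acc"})
assert not case_filter_ok({"Category": "Noun", "IsArgument": True, "Case": None})
```

The other constraints (Theta Criterion, categorial selection, trace binding) would be written as further equality tests of the same shape.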
The algorithm outputs a numeric representation of the degree to which these grammatical constraints are satisfied; this state value is used in determining the a-node's activation level.

Choosing Preferred Attachments

Multiple grammatical attachments may exist for a phrase, as in Figure 3. The network's task is to focus activation onto a subset of the grammatical attachments that form a consistent parse tree for the input processed thus far. Attachment alternatives must be made to effectively compete with each other for numeric activation, in order to ensure that some a-nodes become highly activated and others have their activation suppressed. There are two techniques for producing competitive behavior in a connectionist network. The traditional method is to insert inhibitory links between pairs of competing nodes. Competition-based spreading activation (CBSA) is a newer technique that achieves competitive behavior indirectly: competing nodes vie for output from a common neighbor, which allocates its activation between the competitors. In a CBSA function, the output of a node is based on the activation levels of its neighbors, as in equation 1:

    o_ji = (a_j / Σ_k a_k) a_i    (1)

where: o_ji is the output from node n_i to node n_j; a_i is the activation of node n_i; k ranges over all nodes connected to node n_i. For reasons of space efficiency, flexibility, and cognitive plausibility (Reggia et al. 88), CBSA was adopted as the means for producing competitive behavior among the a-nodes in CAPERS. Each p-node uses a CBSA function to allocate output activation among its a-nodes, proportional to their current activation level. For example, the NP node in Figure 3 will send more of its output to the attachment to the V node than to the I' node. The CBSA function is designed so that in a stable state of the network, each p-node activates a number of a-nodes in accordance with its grammatical properties. Since every XP must have a parent in the parse tree, all XP nodes must activate exactly one a-node. An X or X' node must activate a number of a-nodes equal to the number of complements or specifiers, respectively, that it licenses. The a-nodes enforce consistency among the p-nodes' individual attachment decisions: each a-node numerically ANDs together the input from its two p-nodes to ensure that they agree to activate the attachment.

A p-node that has obligatory attachments must at all times activate the appropriate number of a-nodes in order for the network to stabilize. However, since the phrase(s) that the p-node will attach to may occur later in the input, the parser needs a way to represent a "null" attachment to act as a placeholder for the p-node's eventual sister(s). For this purpose, the model uses processing nodes called phi-nodes to represent a "dummy" phrase in the tree. [3] Every X and X' node has an a-node that connects to a phi-node, allowing the possibility of a null attachment. A phi-node communicates default symbolic information to its a-node, with two side effects. The a-node is always grammatically valid, and therefore represents a default attachment for the p-node it connects to. But, the default information does not fully satisfy the grammatical constraints of the a-node, thereby lowering its activation level and making it a less preferred attachment alternative.

[3] Phi-nodes also represent the traces of displaced phrases in the parse tree; see (Stevenson 93a, 93b).

3 Restrictions on the Network

The competitive mechanism presented thus far is incomplete. If all possible attachments are established between the current phrase and the existing network, CBSA cannot ensure that the set of active a-nodes forms a consistent parse tree. CBSA can weed out locally incompatible a-nodes by requiring that each p-node activate the grammatically appropriate number of a-nodes, but it cannot rule out the simultaneous activation of certain incompatible attachments that are farther apart in the tree. Figure 4 shows the types of structures in which CBSA is an insufficient
Figure 4 shows the types of structures in which CBSA is an insufficient 3 Phi-nodes also represent the traces of displaced phrases ill the parse tree; see (Stevenson 93a, 93b). 268 Figure 4: Example pairs of incompatible attachments that CBSA alone cannot prevent from being active simultaneously. competitive mechanism. Both cases involve violations of the proper nesting structure of a parse tree. Since CBSA cannot rule out these invalid structures, the parsing network must be restricted to prevent these attachment configurations. The parser could insert inhibitory links between all pairs of incompatible a- nodes, but this increases the complexity of the net- work dramatically. The decision was made to instead reduce the size and connectedness of the network, si- multaneously solving the tree structuring problems, by only allowing attachments between the current phrase and the right edge of the existing parse tree. Limiting the attachment of the current phrase to the right edge of the parse tree rules out all of the problematic cases represented by Figure 4(a). In- terestingly, the restriction leads to a solution for the cases of Figure 4(b) as well. Since there is no global controller, each syntactic phrase that is activated must be connected to the existing network so that it can participate in the parse. However, sometimes a phrase cannot attach to the existing parse tree; for example, a subject in English attaches to an inflec- tion phrase (IP) that follows it. The network con- nections between these unattached phrases must be maintained as a stack; this ensures that the current phrase can only establish attachments to the right edge of an immediately preceding subtree. The stack mechanism in CAPERS is implemented as shown in Figure 5: a phrase pushes itself onto the stack when its XP node activates an a-node between it and a spe- cially designated stack node. 
Because the stack can- not satisfy grammatical constraints, stack node at- tachments are only activated if no other attachment is available for the XP. The flexibility of CBSA al- lows the stack to activate more than one a-node, so that multiple phrases can be pushed onto it. The sur- prising result is that, by having the stack establish a- nodes that compete for activation like normal attach- ments, the indirect competitive relationships within the network effectively suppress all inconsistent at- tachment possibilities, including those of Figure 4(b). This result relies on the fact that any incompatible a-nodes that are created either directly or indirectly stack of partial parse trees .... ::t y ~treeon (x3 top of stack Figure 5: The stack is implemented as a degenerate p-node that can activate attachments to XP nodes. current a 1 phase of Figure 6: Attachments al-a4 were previously acti- vated. To attach the current phrase to the tree on the stack, the following must occur: exactly one of the prior attachments, al, must become inactive, and the corresponding pair of attachments, pi, must become active. This relationship holds for a tree of arbitrary depth on the stack. compete with each other through CBSA. To guaran- tee this condition, all inactive a-nodes must be deleted after the network settles on the attachments for each phrase. Otherwise, losing a-nodes could become acti- vated later in the parse, when the network is no longer in a configuration in which they compete with their incompatible alternatives. Since losing a-nodes are deleted, CAPERS maintains only a single valid parse state at any time. The use of CBSA, and the adoption of a stack mech- anism to support this, strongly restrict the attach- ments that can be considered by the parser. The only a-nodes that can compete simultaneously are those in the set of attachments between the current phrase and the tree on top of the stack. 
The competitive 269 current s t a c k ~ (~ past (~/ al V Sara expect Figure 7: The network after attaching the NP Sara. current top/ expect ( ( ) Sara Figure 8: A-nodes a2 and a 3 define the necessary at- tachments for the current phrase. relationships among the allowed a-nodes completely define the sets of a-nodes that can be simultaneously active in a stable state of the network. These logi- cal attachment possibilities, shown in Figure 6, fol- low directly from the propagation of local competi- tions among the a-nodes due to CBSA. In over 98% of the approximately 1400 simulations of attachment decisions in CAPERS, the network stabilized on one of these attachment sets (Stevenson 93b). The com- petitive mechanism of CAPERS thus determines a circumscribed set of attachment possibilities for both initial and revised attachments in the parser. 4 Parsing Attachment Ambiguities This section demonstrates the processing of CAPERS on example attachment ambiguities from the sentence processing literature. 4 In sentence (1), the parser is 4 A more complete presentation of CAPERS' explanation of expect ,op/ 2; Sara Figure 9: The misattachment of the NP to the V has been revised. faced with a noun phrase/sentential complement am- biguity at the post-verbal NP Sara: (1) Mary expected Sara to leave. People show a Minimal Attachment preference to at- tach the NP as the complement of the verb, but have no conscious difficulty in processing the continuation of the sentence (Frazier & Rayner 82; Gorrell 87). The CAPERS network after attaching Sara is shown in Figure 7. 5 The NP has valid attachments to the stack (a0) and to the V (al). Since the default stack attachment is less competitive, a-node al is highly activated. This initial attachment accounts for the observed Minimal Attachment preferences. Next, the word to projects an IP; its initial connections to the network are shown in Figure 8. 
these and related psycholinguistic data can be found in (Stevenson 93b).

The same set of a-nodes that define the initial attachment possibilities for the current IP phrase, a2 and a3, simultaneously define the revised attachment necessary for the NP Sara. A-node a1 competes with a2 and a3 for the activation from the V and NP nodes, respectively; this competition draws activation away from a1. When the network stabilizes, a2 and a3 are highly active and a1 has become inactive, resulting in the tree of Figure 9. In a single atomic operation, the network has revised its earlier attachment hypothesis for the NP and incorporated the new IP phrase into the parse tree.

Footnote 5: Note that a tensed verb such as expected projects a full sentential structure (that is, CP/IP/VP) as in (Abney 86), although the figures here are simplified by omitting display of the CP of root clauses.
Footnote 6: In this and the remaining figures, grammatically invalid a-nodes and irrelevant phi-nodes are not shown.

[Figure 10: The NP food has a single valid attachment to the parse tree.]

Sentence (2), an example of Late Closure effects, is initially processed in a similar fashion:

(2) When Kiva eats food gets thrown.

After attaching food, the network has the configuration shown in Figure 10. As in sentence (1), the post-verbal NP makes the best attachment available to it, as the complement of the verb. This behavior is again consistent with the initial preferences of the human sentence processor (Frazier & Rayner 82). Since the initial attachment in these cases of Late Closure is determined in exactly the same manner as the Minimal Attachment cases illustrated by sentence (1), these two classic preferences receive a uniform account in the CAPERS model. Additional processing of the input distinguishes the sentence types. At gets, a sentential phrase is projected, and the network settles on the attachments shown in Figure 11.
As in Figure 8, the revision necessary for a valid parse involves the current phrase and the right edge of the tree. However, in this case, the misattached NP cannot break its attachment to the verb and reattach as the specifier of the IP. The difference from the prior example is that here the V node has no other a-node to redirect its output to, and so it continues to activate the NP attachment. The attachment of the NP to the I' is not strong enough by itself to draw activation away from the attachment of the NP to the V. The current I' thus activates the default phi-node attachment, leading to a clause with an empty (and unbound) subject. Since the network settles on an irrecoverably ungrammatical analysis, CAPERS correctly predicts a garden path.

[Figure 11: The attachment of the NP food to the I' is not strong enough to break the attachment of the NP to the V.]

The next two examples, adapted from (Pritchett 88), involve double object verbs; both types of sentences clearly garden path the human sentence processor. In each case, the second post-verbal NP is the focus of attention. In sentence (3), this NP is the subject of a relative clause modifying the first NP, but the parser misinterprets it as the verb's second complement:

(3) Jamie gave the child the dog bit a bandaid.

The initial connections of the NP the dog to the network are shown in Figure 12. The NP can either push itself onto the stack, or replace the null attachment of the verb to the phi-node. Since both stack attachments and phi-node attachments are relatively weak, the NP attachment to the V wins the a-node competition, and the network settles on the tree in Figure 13. In accordance with human preferences, the NP is attached as the second object of the verb. When bit is processed, the network settles on the configuration in Figure 14.
As in the earlier examples, the misattached NP needs to attach as the subject of the current clause; however, this would leave the V node with only one a-node to activate instead of its required two attachments. CAPERS again settles on an ungrammatical analysis in which the current clause has an empty (unbound) subject, consistent with the garden path effect of this sentence.

[Figure 12: The initial connections of the NP the dog to the network.]
[Figure 13: The NP the dog attaches as the verb's second complement.]
[Figure 14: If the NP the dog activates the attachment to the V, the V node would be left with only one active attachment.]

The second example with a double object verb involves the opposite problem. In sentence (4), the second post-verbal NP is mistakenly interpreted as part of the first object; in a complete parse, it is part of the second object:

(4) I convinced her children are noisy.

Initially, the parser attaches her as the NP object of convinced. The structure of the network after attachment of children is shown in Figure 15. The NP children cannot replace the phi-node attachment to the verb, since the second object of convince must be sentential. In order to maximally satisfy the attachment preferences, her is reanalyzed as the specifier of children, with her children replacing her as the first object of convinced. This reanalysis is structurally the same as that required in Figure 8; the relevant a-nodes have been numbered the same in each figure to highlight the similarity. Problems arise when the network attaches the next input word, are; see Figure 16. Once again, the misattached NP needs to attach as the specifier of the following sentential phrase, but a V node would be left with only one active a-node when it requires two.
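The recovery condition these examples illustrate can be restated as a simple predicate. This is a symbolic paraphrase for illustration only; in CAPERS the outcome emerges from the a-node competition itself, and the function and argument names are assumptions.

```python
# Hedged paraphrase of the recovery condition suggested by these
# examples: revision succeeds only when the misattached phrase can
# reattach to the current phrase AND the p-node it vacates can
# redirect its output to another a-node.

def reanalysis_possible(phrase_cat, current_hosts, site_alternatives):
    """phrase_cat: category of the misattached phrase (e.g. 'NP').
    current_hosts: categories the current phrase can attach (e.g. its
    specifier/complement slots). site_alternatives: number of other
    a-nodes available to the p-node the phrase would vacate."""
    return phrase_cat in current_hosts and site_alternatives > 0

# Sentence (1): "Mary expected Sara to leave" -- V(expect) can redirect
# its output to the IP complement, so the NP reattaches: no garden path.
ok = reanalysis_possible("NP", {"NP"}, site_alternatives=1)   # True

# Sentence (2): "When Kiva eats food gets thrown" -- V(eat) has no
# alternative a-node, so the NP stays put and a garden path results.
gp = reanalysis_possible("NP", {"NP"}, site_alternatives=0)   # False
```

The double object cases (3) and (4) fail the second conjunct in the same way: revising the NP would leave the verb's p-node without the second attachment it requires.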
A garden path once more results from the network settling on an ungrammatical analysis.

This example highlights another aspect of the competitive mechanism of CAPERS in driving the attachment behavior of the parser: the only way a previous attachment can be broken is if it participates in a competition with an attachment to the current phrase. A correct parse requires her to break its attachment to children and re-attach directly to the verb. Because the a-node attaching her to children has no competitor, there is no mechanism for changing the problematic attachment.

[Figure 15: Attaching the NP children requires reanalysis of the NP her.]
[Figure 16: If the NP headed by children activates the attachment to the I', the V node would be left without an NP complement.]

5 Summary

In each of the examples of Section 4, the initial attachment of a phrase was incompatible with the remainder of the sentence. CAPERS can recover from an attachment error of this type exactly when the misattached phrase can reattach to the current phrase, with the current phrase "replacing" the misattached phrase in its original attachment site. If the p-node to which the misattached phrase was originally attached does not have an alternative a-node to activate, reanalysis cannot take place and a garden path results. The allowable attachment configurations are a direct consequence of the restrictions imposed by the competitive mechanism of CAPERS. The resulting initial attachment preferences, and the parser's ability or inability to revise the incorrect structure, account for the preferred readings of these temporarily ambiguous sentences, as well as the garden path results.

References

Abney, S. (1986). "Functional elements and licensing." GLOW Conference, Gerona, Spain.
Abney, S. (1989). "A computational model of human parsing." Journal of Psycholinguistic Research 18:1, 129-144.
Chomsky, N.
(1981). Lectures on Government and Binding: The Pisa Lectures. Dordrecht: Foris Publications.
Chomsky, N. (1986). Barriers. Cambridge: MIT Press.
Cottrell, G.W. (1989). A Connectionist Approach to Word Sense Disambiguation. Los Altos, CA: Morgan Kaufmann.
Fanty, M. (1985). "Context-free parsing in connectionist networks." Technical Report TR174, University of Rochester.
Frazier, L. (1978). On Comprehending Sentences: Syntactic Parsing Strategies. Doctoral dissertation, University of Connecticut. Bloomington, IN: Indiana University Linguistics Club.
Frazier, L., and K. Rayner (1982). "Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences." Cognitive Psychology 14, 178-210.
Gibson, E. (1991). "A Computational Theory of Human Linguistic Processing: Memory Limitations and Processing Breakdown." Doctoral dissertation, Carnegie-Mellon University.
Gorrell, P. (1987). "Studies of Human Syntactic Processing: Ranked-Parallel versus Serial Models." Unpublished doctoral dissertation, University of Connecticut, Storrs, CT.
Inoue, A. and J. Fodor (1992). "Information-paced parsing of Japanese." Presented at the Fifth Annual CUNY Conference on Human Sentence Processing, New York.
McRoy, S. and G. Hirst (1990). "Race-Based Parsing and Syntactic Disambiguation." Cognitive Science 14, 313-353.
Pritchett, B. (1988). "Garden Path Phenomena and the Grammatical Basis of Language Processing." Language 64:3, 539-576.
Rizzi, L. (1990). Relativized Minimality. Cambridge: MIT Press.
Reggia, J. (1987). "Properties of a Competition-Based Activation Mechanism in Neuromimetic Network Models." Proceedings of the First International Conference on Neural Networks, San Diego, II-131-II-138.
Reggia, J., P. Marsland, and R. Berndt (1988). "Competitive Dynamics in a Dual-Route Connectionist Model of Print-to-Sound Transformation." Complex Systems.
Selman, B., and G. Hirst (1985).
"A Rule-Based Connectionist Parsing Scheme." Proceedings of the Seventh Annual Conference of the Cognitive Science Society, 212-219.
Shieber, S. (1983). "Sentence Disambiguation by a Shift-Reduce Parsing Technique." Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, 113-118.
Stevenson, S. (1993a). "Establishing Long-Distance Dependencies in a Hybrid Network Model of Human Parsing." Proceedings of the 15th Annual Conference of the Cognitive Science Society.
Stevenson, S. (1993b). "A Constrained Active Attachment Model for Resolving Syntactic Ambiguities in Natural Language Parsing." Doctoral dissertation, Computer Science Department, University of Maryland, College Park.
Stevenson, S. (1990). "A Parallel Constraint Satisfaction and Spreading Activation Model for Resolving Syntactic Ambiguity." Proceedings of the Twelfth Annual Conference of the Cognitive Science Society, 396-403.
1993
36
A FLEXIBLE APPROACH TO COOPERATIVE RESPONSE GENERATION IN INFORMATION-SEEKING DIALOGUES

Liliana Ardissono, Alessandro Lombardo, Dario Sestero
Dipartimento di Informatica - Universita' di Torino
C.so Svizzera 185 - 10149 - Torino - Italy
E-Mail: [email protected]

Abstract

This paper presents a cooperative consultation system on a restricted domain. The system builds hypotheses on the user's plan and avoids misunderstandings (with consequent repair dialogues) through clarification dialogues in case of ambiguity. The role played by constraints in the generation of the answer is characterized in order to limit the cases of ambiguities requiring a clarification dialogue. The answers of the system are generated at different levels of detail, according to the user's competence in the domain.

INTRODUCTION

This paper presents a plan-based consultation system for getting information on how to achieve a goal in a restricted domain.1 The main purpose of the system is to recognize the user's plans and goals to build cooperative answers in a flexible way [Allen, 83], [Carberry, 90]. The system is composed of two parts: hypotheses construction and response generation.

The construction of hypotheses is based on Context Models (CMs) [Carberry, 90]. Carberry uses default inferences [Carberry, 90b] to select a single hypothesis for building the final answer of the system and, in case the choice is incorrect, a repair dialogue is started. Instead, in our system, we consider all plausible hypotheses and if the ambiguity among them is relevant for the generation of the response, we try to solve it by starting a clarification dialogue. According to [van Beek and Cohen, 91], clarification dialogues are simpler for the user than repair ones, because they only involve yes/no questions on the selected ambiguous plans. Furthermore, repair dialogues generally require a stronger participation of the user.
Finally, if the misunderstanding is not discovered, the system delivers information that is not proper to the user's case. For these reasons, it is preferable to solve the ambiguity a priori, by asking the user information on his intentions. In van Beek and Cohen's approach clarification dialogues are started even in case the answers associated with the plausible hypotheses are distinguished by features that could directly be managed in the answer. We avoid this by identifying the constraints relevant for a clarification dialogue and those which can be mentioned in the answer. In this way, the friendliness of the system is improved and the number and the length of the clarification dialogues are reduced.

In the perspective of generating flexible cooperative answers, it is important to differentiate their detail level by adapting them to the user's competence in the domain. In our work, we want to study how to embed information obtained from a user model component in the system. As a first step in this direction, we introduce a preliminary classification of users in three standard levels of competence corresponding to the major users' prototypes the system is devoted to. Then, in order to produce differentiated answers, the hypotheses are expanded according to the user's competence level.

The knowledge about actions and plans is stored in a plan library structured on the basis of two main hierarchies: the Decomposition Hierarchy (DH) and the Generalization Hierarchy (GH) [Kautz and Allen, 86]. The first one describes the plans associated with the actions and is used for explaining how to execute a complex action. The second one expresses the relation among general and specific actions (the major specificity is due to additional restrictions on parameters). It supports an inheritance mechanism and a top-down form of clarification dialogue.

1 The system is concerned with information about a CS Department.
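The role of the GH in falling back to a more general action can be pictured with a tiny sketch. The hierarchy entries and function names below are illustrative assumptions, not the system's actual encoding.

```python
# Minimal sketch of a Generalization Hierarchy (GH) lookup: when
# several specific actions remain possible, fall back to their common,
# more general ancestor. The entries are illustrative assumptions.

GH_PARENT = {"Talk-by-phone": "Talk-prof",
             "Talk-face-to-face": "Talk-prof",
             "Talk-prof": None}

def common_general_action(actions):
    """Return the single action to mention: the action itself if it is
    unambiguous, otherwise the shared GH parent (if any)."""
    if len(set(actions)) == 1:
        return actions[0]
    parents = {GH_PARENT.get(a) for a in actions}
    return parents.pop() if len(parents) == 1 else None

# Topic unknown: both specializations are still possible, so the more
# general Talk-prof action is the one to use.
chosen = common_general_action(["Talk-by-phone", "Talk-face-to-face"])
```

This mirrors the example given later in the paper, where the general action of talking is selected when the actual topic of the conversation is unknown.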
THE ALGORITHM

The algorithm consists of two parts: a hypotheses construction and a response generation phase.

• In the hypotheses construction phase the following steps are repeated for each sentence of the user:

1- Action identification: on the basis of the user's utterance, a set of candidate actions is selected.

2- Focusing: CMs built after the analysis of the previous sentences are analyzed to find a connection with any candidate action identified in step 1 and, for each established connection, a new CM is built. (At the beginning of the dialogue, from each candidate action a CM is created.)

3- Upward expansion along the DH: each CM is expanded (when possible) by appending it to the more complex action having the root of the CM itself in its decomposition. In this way we get a higher level description of the action that the user wants to pursue.

4- Weighted expansion along the DH: for each CM, its actions are repeatedly decomposed in more elementary ones, until all the steps of the CM are sufficiently simple for the user's competence level in the domain. In this way, the information necessary to generate an answer suitable to the user is collected.

5- Weighted expansion backward through enablement links: each CM is expanded in order to include the actions necessary for satisfying the preconditions which the user is supposed not to be able to plan by himself (according to his competence level).

• In the response generation phase, the ambiguity among the hypotheses is evaluated. If it is relevant, a top-down clarification dialogue guided by the GH is started up. Finally, the answer is extracted from the CMs selected through the clarification dialogue.

THE REPRESENTATION OF GOALS, PLANS AND ACTIONS

The basic elements of representation of the domain knowledge are goals and actions. Actions can be elementary or complex and in the second case one or more plans (decompositions) can be associated with them.
All these plans share the same main effect. Each action is characterized by the preconditions, constraints, restrictions on the action parameters, effects, associated plans and position in the GH. The restrictions specify the relationship among the parameters of the main action and those of the action substeps. During the response generation phase, if the value of some parameters is still unknown, their referent can be substituted in the answer by a linguistic description extracted from the restrictions, so avoiding further questions to the user. For example, if the user says that he wants to talk to the advisor for a course plan, but he does not specify which (so it is not possible to determine the name of the advisor), still the system may suggest: "talk with the advisor for the course plan you are interested in".

The GH supports an inheritance mechanism in the plan library. Moreover, it allows to describe the decomposition of an action by means of a more abstract specification of some of its substeps when no specific information is available. For example, a step of the action of getting information on a course plan is to talk with the curriculum advisor, that can be specialized in different ways according to the topic of the conversation (talking by phone and talking face to face). If in a specific situation the actual topic is unknown, it is not possible to select one possibility. So, the more general action of talking is considered.

In order to support the two phases of weighted expansion, information about the difficulty degree of the actions is embedded in the plan library by labelling them with a weight that is a requested competence threshold (if the user is expert for an action, it is taken as elementary for him, otherwise its steps must be specified). Preconditions are labelled in an analogous way, so as to specify which users know how to plan them by themselves and which need an explanation.
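The characterization of an action just listed can be pictured as a record. The field names, the example entry, and the (condition, threshold) encoding of preconditions are assumptions made for this sketch, not the system's actual representation.

```python
# Hypothetical sketch of a plan-library entry with the components
# described above: preconditions, constraints, effects, plans
# (decompositions), GH position, and a competence-threshold weight.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    preconditions: list = field(default_factory=list)  # (condition, threshold) pairs
    constraints: list = field(default_factory=list)
    effects: list = field(default_factory=list)
    decompositions: list = field(default_factory=list)  # alternative plans
    parent: str = None       # more general action in the GH
    weight: int = 0          # competence threshold: users at or above this
                             # level treat the action as elementary

talk_by_phone = Action(
    name="Talk-by-phone",
    preconditions=[("Know(IS, phone-number)", 1)],
    constraints=["professor-in-office"],
    effects=["talked-to-professor"],
    parent="Talk-prof",
    weight=2,
)
```

The per-action and per-precondition thresholds are what the two weighted expansion phases consult when deciding how far to unfold a CM for a given user.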
THE CONSTRUCTION OF THE HYPOTHESES

In the action identification phase a set of actions is selected from the plan library, each of them possibly representing the aspect of the task on which the user's attention is currently focused. The action identification is accomplished by means of partially ordered rules (a rule is more specific than another one if it imposes greater constraints on the structure of the logical form of the user's utterance). Restrictions on the parameters of conditions and actions are used to select the most specific action from the plan library that is supported by the user's utterance.

In the focusing phase the set of CMs produced by the analysis of the previous sentences and the set of candidate actions selected in the action identification phase are considered. A new set of CMs is built, all of which are obtained by expanding one of the given CMs so as to include a candidate action. CMs for which no links with the candidate actions have been found are discarded. The expansion of the CMs is similar to that of Carberry. However, because of our approach to the response generation, when a focusing rule fires, the expansion is tried backward through the enablement links and along the DH and the GH, so to find all the connections with the candidate actions without preferring any possibility. If a heuristic rule suggests more than one connection, a new CM is generated for each one. After the focusing phase, a further expansion up through the DH is provided for each CM whose root is part of only one higher-level plan.

In the weighted expansion along the DH, for each CM, every action to be included in the answer is expanded with its decomposition if it is not elementary for the user's competence level. Actually, only actions with a single decomposition are expanded.2 The expansion is performed until the actions to be mentioned in the answer are not decomposable or they suit the user's competence level.
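The weighted expansion along the DH can be sketched as a small recursive procedure. The (name, weight, substeps) encoding and the example values are illustrative assumptions; only the control flow mirrors the description above.

```python
# Illustrative sketch of the weighted expansion along the DH: an action
# is reported as-is if it is elementary for the user's competence level,
# and otherwise replaced by its (single) decomposition, recursively.

def weighted_expand(action, user_level):
    """action: (name, weight, substeps); substeps is a list of actions
    in the same format, empty if the action is not decomposable."""
    name, weight, substeps = action
    if user_level >= weight or not substeps:
        return [name]                      # elementary for this user
    expanded = []
    for step in substeps:
        expanded.extend(weighted_expand(step, user_level))
    return expanded

phone = ("Talk-by-phone", 2,
         [("Go-to-internal-phone", 0, []), ("Dial-number", 1, [])])

expert_view = weighted_expand(phone, user_level=2)   # ['Talk-by-phone']
novice_view = weighted_expand(phone, user_level=0)
# ['Go-to-internal-phone', 'Dial-number']
```

This matches the paper's own example: an expert is simply told to phone, while a less competent user is additionally told to go to the internal phone.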
In the weighted expansion backward through enablement links, for each CM, preconditions whose planning is not immediate for the user are expanded by attaching to their CMs the actions having them as effects. When a precondition to be expanded is of the form "Know(IS, x)" and the system knows the value of "x", it includes such information in the response; so, the expansion is avoided. While in the previous phase the expansion is performed recursively, here it is not, because expanding along the enablement chain extends the CM far from the current focus.

2 In the last two expansion phases we did not want to extend the set of alternative hypotheses. In particular, in the weighted expansion along the DH, the choice does not reduce the generality of our approach because this kind of ambiguity lies at a more detailed level than that of the user's expressions. Anyway, the specificity of the actions mentioned in the answer can be considered a matter of trade-off between the need of being cooperative and the risk of generating too complex answers.

THE RESPONSE GENERATION

In the relevance evaluation phase, the ambiguity among candidate hypotheses filtered out in the focusing phase is considered. The notion of relevance defined by van Beek and Cohen is principally based on the conditions (corresponding to our constraints) associated with the selected plans. We further specify this notion in two ways, in order to avoid a clarification dialogue when it is not necessary because a more structured answer is sufficient for dealing with the ambiguity. First we classify constraints into three categories: those with a value known to the system, which are the only ones to be used in order to evaluate the relevance of ambiguity; those that involve information peculiar to the user (e.g.
if he is a student), that can be mentioned in the answer as assumptions for its validity; finally, those with a value unknown to both the user and the system, but that the user can verify by himself (e.g. the availability of books in the library). Also constraints of the last category should be included in the answer providing a recommendation to check them. Second, clarification dialogues can be avoided even when the ambiguity is relevant, but all the selected hypotheses are invalidated by some false constraints whose truth value does not change in the considered situation; hence, a definitive negative answer can be provided. Clarification dialogues are organized in a top-down way, along the GH.

In our approach, answers should include not only information about the involved constraints, but also about the specific description of how the user should accomplish his task. For this reason, we consider a clarification dialogue based on constraints as a first step towards a more complex one, that takes into account the ambiguity among sequences of steps as well. In the future work, we are going to complete the answer generation phase by developing this part, as well as the proper answer generation part.

AN EXAMPLE

Let us suppose that a CM produced in the previous analysis is composed by the action Get-info-on-course-plan (one of whose steps is the Talk-prof action) and the user asks if Prof. Smith is in his office. The action identification phase selects the Talk-by-phone and Meet actions, that share the constraint that the professor is in his office. Since the two actions are decompositions of the Talk-prof action, the focusing phase produces two CMs from the previous one. If the user is expert on the domain, no further expansion of the CMs is needed for the generation of the answer, that could be "Yes, he is; you can phone him to number 64 or meet him in office 42".
On the other hand, if the user has a lower degree of competence, the steps difficult for him are expanded. For example, the Talk-by-phone action is detailed by specifying: "To phone him go to the internal phone in the entrance".

In order to show one of the cases that differentiate van Beek and Cohen's approach from ours, suppose to add to the action Meet the constraint Is-meeting-time and that the user asks his question when the professor is not in the office and it is not his meeting time. In this case, the false constraint Is-meeting-time causes the ambiguity to be relevant for van Beek and Cohen; on the other hand, our system provides the user with a unique negative answer, so avoiding any clarification dialogue.

CONCLUSIONS

The paper presented a plan-based consultation system whose main purpose is to generate cooperative answers on the basis of recognition of the user's plans and goals. In the system, repair dialogues due to misunderstandings of the user's intentions are prevented through a possible clarification dialogue.

In order to enhance the flexibility of the system, different detail levels have been provided for the answers, according to the competence of the various users. This has been done by specifying the difficulty degree of the various components of the plan library and by expanding the CMs until the information provided for the generation of an answer is suitable for the user. Van Beek and Cohen's notion of the relevance of ambiguity has been refined on the basis of the characteristics of the constraints present in the plans. In the future work, we are going to refine the notion of relevance of ambiguity in order to deal with the presence of different sequences of actions in the possible answers. Finally we are going to complete the proper answer generation.

ACKNOWLEDGEMENTS

The authors are indebted to Leonardo Lesmo for many useful discussions on the topic presented in the paper.
The authors are also grateful to the four anonymous referees for their useful comments. This research has been supported by CNR in the project Pianificazione Automatica.

REFERENCES

[Allen, 83] J.F. Allen. Recognizing intentions from natural language utterances. In M. Brady and R.C. Berwick, editors, Computational Models of Discourse, 107-166. MIT Press, 1983.
[Carberry, 90] S. Carberry. Plan Recognition in Natural Language Dialogue. ACL-MIT Press, 1990.
[Carberry, 90b] S. Carberry. Incorporating Default Inferences into Plan Recognition. Proc. 8th Conf. AAAI, 471-478, Boston, 1990.
[Kautz and Allen, 86] H.A. Kautz, J.F. Allen. Generalized Plan Recognition. Proc. 5th Conf. AAAI, 32-37, Philadelphia, 1986.
[van Beek and Cohen, 91] P. van Beek, R. Cohen. Resolving Plan Ambiguity for Cooperative Response Generation. Proc. 12th Int. Joint Conf. on Artificial Intelligence, 938-944, Sydney, 1991.
1993
37
IDENTIFYING RELEVANT PRIOR EXPLANATIONS

James A. Rosenblum
Department of Computer Science
University of Pittsburgh
Pittsburgh, PA 15260, USA
Internet: [email protected]

Abstract

When human tutors engage in dialogue, they freely exploit all aspects of the mutually known context, including the previous discourse. Utterances that do not draw on previous discourse seem awkward, unnatural, or even incoherent. Previous discourse must be taken into account in order to relate new information effectively to recently conveyed material, and to avoid repeating old material that would distract the student from what is new. Producing a system that displays such behavior involves finding an efficient way to identify which previous explanations (if any) are relevant to the current explanation task. Thus, we are implementing a system that uses a case-based reasoning approach to identify previous situations and explanations that could potentially affect the explanation being constructed. We have identified heuristics for constructing explanations that exploit this information in ways similar to what we have observed in human-human tutorial dialogues.

Introduction and Motivation

We are building an explanation component for an existing intelligent training system, SHERLOCK (Lesgold et al., 1992), which trains avionics technicians to troubleshoot electronic equipment. Using SHERLOCK, trainees solve problems with minimal tutor interaction and then review their troubleshooting in a post-problem reflective follow-up (RFU) session where the tutor replays each student action and assesses it as "good" (<+>) or as "could be improved" (<->). After a step is replayed, the student can ask the tutor to justify its assessment. As an example of the way in which human tutors exploit previous discourse, consider the dialogue in Figure 1, taken from our data. Even though the student has made the same mistake twice, the second explanation looks quite different from the first.
Yet the two explanations are related to one another in an important way. In the second explanation the tutor simply reminds the student that she has not determined the status of the main control data signals and that she should do so before testing the secondary control data signals. The tutor expects the student to be able to make use of the previous explanation once he has indicated that it is relevant to the current situation ("for the same reasons given ..." serves this purpose). Accordingly, the tutor does not repeat the detailed explanation of why the main control data signals should be tested first. By generating the second explanation in such a way that it 'meshes' with the first, not only has the tutor corrected the testing mistake of the student, but has forced the student to consider how the two situations are similar. In pointing out this similarity, he has given the student a better understanding of the domain. We call an explanation that is later referred to (explicitly or implicitly) or is integrated into a subsequent explanation the anchor.

Clearly it is desirable for a system to produce text that is sensitive to what has been said previously. In order to do this, however, a system must first be able to decide what previous explanation (or part thereof) to use as an anchor. This involves deciding, in an efficient way, whether there exist suitable candidates to act as anchor, and if so, which amongst them would be best to use. This paper concentrates on this task.

The Text Planner

For this work, we are extending the text planner built by Moore and Paris (1989). Briefly, it works in the following way. A communicative goal (e.g., "achieve the state where the hearer believes that an action could be improved") is formed based upon the student's question. Using its library of plan operators that encode knowledge about tutorial explanations, the system employs a linear planning mechanism to synthesize a response to achieve this goal.
The result is a text plan for the explanation. The system then presents the explanation to the user, retaining the plan that produced it in a dialogue history. The dialogue history is a record of the conversation that has occurred thus far and includes the user's utterances as well as the text plans that led to the system's responses. In this system, a text plan represents the effect that each part of the text is intended to have on the hearer's mental state, the linguistic strategies that were used to achieve these effects, and how the complete text achieves the overall communicative goal.

TUTOR: <-> You tested pin 28 of the A1A3A15. [1]
STUDENT: Why is testing pin 28 not a good step? [2]
TUTOR: As explained before, the main data inputs to relay B24 are highly suspect at this time. [3] You have tested the low input to B24 but not the high one. Since it is more likely that there may be a problem with the main data signals to relay B24 than with the secondary data signals to relays B25-B30, the main data signal should be tested first. Testing pin 28 is not a bad step, it is just more likely (on the average) that the main data signals are causing the problem.

One exchange and one action later:

TUTOR: <-> You tested pin 36 of the A1A3A15. [4]
STUDENT: Don't I need to test pin 36? [5]
TUTOR: You might have to, but for the same reasons given when you tested pin 28, it is generally [6] more efficient to test the main control data signals first, and then test the secondary control data signals if necessary.

Figure 1: Human-Human Advisory Interaction Displaying Contextual Effects

Knowledge Sources for Finding Relevant Prior Explanations

The most straightforward way to find relevant prior explanations is to exhaustively search the system's dialogue history looking for explanations that have certain features.
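The straightforward strategy just described can be sketched as a linear scan. The record format, field names, and the toy history below are assumptions made for the illustration; the next paragraph explains why this approach is problematic in practice.

```python
# Sketch of the naive strategy: exhaustively scan the dialogue history
# for prior explanation plans that carry the desired features.

def find_anchor_candidates(dialogue_history, goal, facets):
    """Return prior explanations that pursued the same communicative
    goal for an action sharing facets with the current one."""
    matches = []
    for entry in dialogue_history:       # one pass over every stored plan
        if entry["goal"] == goal and set(facets) & set(entry["facets"]):
            matches.append(entry)
    return matches

history = [
    {"goal": "justify-assessment", "facets": ["untested-main-signal"],
     "action": "test pin 28"},
    {"goal": "describe-step", "facets": [], "action": "replay step"},
]
cands = find_anchor_candidates(history, "justify-assessment",
                               ["untested-main-signal"])
# cands contains only the pin-28 explanation
```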
For example, when explaining why a step was assessed as "could be improved," the system could look for previous explanations that justified this type of assessment, and in which the two actions being assessed were similar (i.e., had the same features). However, this approach is problematic. Explanation plans are large, complex structures, and they accumulate rapidly as the dialogue progresses. Exhaustively searching the discourse history for relevant prior explanations is computationally prohibitive. Thus, we require an indexing strategy that allows the system to find possibly relevant prior explanations in an efficient manner. To satisfy this requirement, we use case-based reasoning (CBR) to provide a framework in which previous student actions can be efficiently examined to determine which, if any, are relevant when producing an explanation. This approach has the additional advantage of allowing the system to consider what was said as well as what was not said when planning an explanation. For example, the student may have previously performed an action that displayed some characteristic that the tutor decided not to mention at the time and which would now be appropriate to discuss.

A Case-Based Algorithm

The following aspect of SHERLOCK's reasoning is extremely important to our work. SHERLOCK evaluates each student action by determining which facets apply to that action. The facets represent factors that expert avionics tutors use in assessing students' troubleshooting actions (Pokorny and Gott, 1990). To evaluate an action, SHERLOCK finds each facet that applies to it and determines whether that facet should be considered good (g), bad (b), or neutral (n) given the current problem-solving context. For example, the facet "Making a measurement that is off the active circuit path" is considered a b-facet.
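The facet bookkeeping just described can be sketched in a few lines. This is an illustrative toy, not SHERLOCK's implementation; the function and variable names are our own.

```python
# Toy sketch of SHERLOCK-style action evaluation: an action is characterized
# by the facets that apply to it, each assessed good/bad/neutral in the
# current problem-solving context.
def assess_action(applicable_facets, context_assessment):
    """applicable_facets: facet ids that apply to the action.
    context_assessment: facet id -> 'g' | 'b' | 'n' for the current context."""
    return {f: context_assessment[f] for f in applicable_facets}

# Facets F100/F101 are the ones discussed later in the text; F7 is a made-up
# facet that does not apply to this action.
assessment = assess_action(["F100", "F101"],
                           {"F100": "b", "F101": "b", "F7": "g"})
b_facets = [f for f, v in assessment.items() if v == "b"]
```

Restricting attention to one assessment class (here the b-facets) is exactly the filtering step the similarity DAG below is built on.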
The representation of a student action includes the list of facets characterizing the action and an assessment (g, b, or n) for each of those facets. Case-based reasoning generalizes from cases to support indexing and relevance assessment, and can be used to evaluate a case by comparing it to past cases (Ashley, 1992). This seems to describe our task when we treat each student action as a "case". Influenced by the work of Aleven and Ashley (1992), we noted certain similarities between their domain and ours that led us to believe that we could use CBR techniques to identify similar actions as described below. Our algorithm builds a data structure called a similarity DAG (Directed Acyclic Graph) which indicates the previous student actions that are similar to a given action. By similar, we mean similar with respect to a certain class of facets (some combination of g, b, or n). For example, when answering a question about why the current action was assessed as "could be improved," the similarity DAG is built so that it indicates which previous actions were similar to the current action with respect to the b-facets. The root of the DAG represents the current action and the facets of interest (b-facets in our example) that apply to it. Each node in the DAG, including the root, represents a set of student actions that share the same set of interesting facets. The more facets that a node has in common with the current action (in the root), the closer it will be to the root node. Proximity in the DAG corresponds to similarity in facet sets. Basically, the similarity DAG is a partial ordering of the student's actions based on their facet lists.
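The node-building step of this partial ordering can be sketched as follows. This is our own minimal reconstruction of the idea, not the system's code; action and facet names follow the running example.

```python
# Hypothetical sketch of similarity-DAG node construction: past actions are
# grouped by the set of facets they share with the current action. Nodes
# whose facet sets overlap more with the root's facets are "closer" to it.
def build_similarity_nodes(current_facets, past_actions):
    """past_actions: list of (action_id, facet_set).
    Returns a dict mapping each non-empty shared facet set to the list of
    actions exhibiting exactly that overlap with the current action."""
    nodes = {}
    for action_id, facets in past_actions:
        shared = frozenset(facets) & frozenset(current_facets)
        if shared:                 # actions with no overlap are left out
            nodes.setdefault(shared, []).append(action_id)
    return nodes

current = {"F100", "F101"}         # facets of the action in question
past = [("action8", {"F100"}),
        ("action9", {"F100", "F101"}),
        ("action3", {"F7"})]       # no overlap, so it never enters the DAG
nodes = build_similarity_nodes(current, past)
```

Ordering the resulting node keys by the subset relation yields the DAG edges: a node sits below another when its shared facet set is a proper subset of the other's.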
[Figure 2: Data structures when considering how to answer turn 5, Figure 1. The similarity DAG's root holds the current action (Action 12: VDC test, pin 36 to ground on A1A3A15) with its facets F100, "Allowed main data signal relay to remain partially tested" (b), and F101, "Tested secondary data signal before main data signal" (b). A node for previous Action 9 (VDC test, pin 28 to ground on A1A3A15, sharing both facets) and a node for Action 8 (VDC test, pin 38 to ground on A1A3A15, sharing only F100) point into the text plans recorded in the dialogue history.]

Figure 2 shows the similarity DAG that is constructed when the system considers how to answer the question, "Don't I need to test pin 36?" (turn 5 of Figure 1). The facets relevant to the action in question are F100 and F101. The structure indicates that two previous actions, 9 and to a lesser degree 8, are similar to the current situation. Pointers index the dialogue history's record of what was said at those times. At this point, the system has identified candidate situations that are relevant for planning the current explanation. It can now consider these retrieved situations more closely to determine any other facets that they may possess, and can examine the related explanations in the dialogue history to determine what was said about each of the two previous situations. The fact that there are no other nodes in the DAG indicates that there are no other suitable prior situations. Initial results using this algorithm seem promising. In an analysis of 8 student-tutor protocols involving 154 actions and 22 opportunities for integrating a previous explanation into an answer, the algorithm correctly identified the same previous situations that were used by the human tutor in the actual interactions. In all but 3 cases, when the human tutor did not make a reference to a previous explanation, our algorithm reported no similar prior situation.
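The retrieval step, picking the prior actions closest to the root and following their pointers into the dialogue history, can be sketched like this. Again a hedged illustration under our own naming, not the system's code.

```python
# Sketch of retrieval from the similarity structure: among prior actions,
# return those whose facet overlap with the current action is maximal,
# together with their index into the dialogue history.
def most_similar(current_facets, past_actions):
    """past_actions: list of (action_id, facet_set, history_index)."""
    scored = [(len(set(f) & set(current_facets)), aid, idx)
              for aid, f, idx in past_actions]
    best = max((s for s, _, _ in scored), default=0)
    # Keep only maximal, non-zero overlaps.
    return [(aid, idx) for s, aid, idx in scored if s == best and s > 0]

past = [("action9", {"F100", "F101"}, 3),   # shares both facets
        ("action8", {"F100"}, 2)]           # shares only one
hits = most_similar({"F100", "F101"}, past)
```

An empty result corresponds to the "no similar prior situation" outcome reported in the evaluation above.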
In the 3 situations where our algorithm identified a similarity not exploited by the tutor, our expert agreed that they would have been useful to incorporate into his explanations. Lastly, this technique will be useful in answering students' direct questions about the similarities of situations, e.g., "Why is testing 30 good? Isn't it like 36 and 28?" By constructing and consulting a similarity DAG, the system is able to plan responses such as: "Yes, but now you know the main control data signals on pins 33 and 22 are good so you need to test the secondary data signals." It is important to note that this approach is successful, in part, because the facets are based on a tutor's evaluation of a student's actions, and we are currently addressing only questions that justify these evaluations. We focused on this type of question because 48% of students' queries during RFU are of this type. To answer additional questions in a context-sensitive fashion, we will need to extend our indexing scheme to take the intentions behind an explanation into account as well as the domain content discussed.

Conclusions and Future Work

We have indicated that in order to produce text that is sensitive to the previous discourse, a system must first be able to identify relevant previous explanations and situations. To achieve this first step, a CBR algorithm was introduced that indexes the dialogue history and supplies the explanations with a context in which to be considered. We are devising techniques that use this information to plan subsequent explanations.

References

Aleven, V. and Ashley, K. 1992. Automated generation of examples for a tutorial in case-based argumentation. In Proc. of the 2nd Int'l Conference on Intelligent Tutoring Systems, Montreal, Canada.

Ashley, K. 1992. Case-based reasoning and its implications for legal expert systems. Artificial Intelligence and Law 2(1).

Lesgold, A.; Lajoie, S.; Bunzo, M.; and Eggan, G. 1992.
Sherlock: A coached practice environment for an electronics troubleshooting job. In Computer Assisted Instruction and Intelligent Tutoring Systems: Shared Goals and Complementary Approaches. Lawrence Erlbaum Assoc., NJ.

Moore, J. D. and Paris, C. L. 1989. Planning text for advisory dialogues. In Proc. of the 27th Annual Meeting of the ACL, Vancouver, B.C., Canada. 203-211.

Pokorny, R. and Gott, S. 1990. The evaluation of a real-world instructional system: Using technical experts as raters. Technical report, Armstrong Laboratories, Brooks AFB.
1993
38
RESPONDING TO USER QUERIES IN A COLLABORATIVE ENVIRONMENT*

Jennifer Chu
Department of Computer and Information Sciences
University of Delaware
Newark, DE 19716, USA
Internet: jchu@cis.udel.edu

Abstract

We propose a plan-based approach for responding to user queries in a collaborative environment. We argue that in such an environment, the system should not accept the user's query automatically, but should consider it a proposal open for negotiation. In this paper we concentrate on cases in which the system and user disagree, and discuss how this disagreement can be detected and negotiated, and how final modifications should be made to the existing plan.

1 Introduction

In task-oriented consultation dialogues, the user and expert jointly construct a plan for achieving the user's goal. In such an environment, it is important that the agents agree on the domain plan being constructed and on the problem-solving actions being taken to develop it. This suggests that the participants communicate their disagreements when they arise, lest the agents work on developing different plans. We are extending the dialogue understanding system in [6] to include a system that responds to the user's utterances in a collaborative manner. Each utterance by a participant constitutes a proposal intended to affect the agents' shared plan. One component of our architecture, the evaluator, examines the user's proposal and decides whether to accept or reject it. Since the user has knowledge about his/her particular circumstances and preferences that influence the domain plan and how it is constructed, the evaluator must be a reactive planner that interacts with the user to obtain information used in building the evaluation meta-plan. Depending on the evaluation, the system can accept or reject the proposal, or suggest what it considers to be a better alternative, leading to an embedded negotiation subdialogue.
In addition to the evaluator, our architecture consists of a goal selector, an intentional planner, and a discourse realizer. The goal selector, based on the result of the evaluation and the current dialogue model, selects an appropriate intentional goal for the system to pursue. The intentional planner builds a plan to achieve the intentional goal, and the discourse realizer generates utterances to convey information based on the intentional plan. This paper describes the evaluator, concentrating on cases in which the system and user disagree. We show how the system determines that the user's proposed additions are erroneous and, instead of directly responding to the user's utterances, conveys the disagreement. Thus, our work contributes to an overall dialogue system by 1) extending the model in [6] to eliminate the assumption that the system will automatically answer the user's questions or follow the user's proposals, and 2) capturing the notion of cooperative responses within an overall collaborative framework that allows for negotiation. (*This material is based upon work supported by the National Science Foundation under Grant No. IRI-9122026.)

2 The Tripartite Model

Lambert and Carberry proposed a plan-based tripartite model of expert/novice consultation dialogue which includes a domain level, a problem-solving level, and a discourse level [6]. The domain level represents the system's beliefs about the user's plan for achieving some goal in the application domain. The problem-solving level encodes the system's beliefs about how both agents are going about constructing the domain plan. The discourse level represents the system's beliefs about both agents' communicative actions. Lambert developed a plan recognition algorithm that uses contextual knowledge, world knowledge, linguistic clues, and a library of generic recipes for actions to analyze utterances and construct a dialogue model [6].
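The tripartite model with the existing/proposed split described below can be encoded compactly. The following is a toy rendering under our own names, not the paper's implementation.

```python
# Toy encoding of the tripartite dialogue model: three levels of action
# terms, with proposed additions kept apart from the shared (existing)
# plan until the evaluator accepts them.
from dataclasses import dataclass, field

@dataclass
class DialogueModel:
    domain: list = field(default_factory=list)
    problem_solving: list = field(default_factory=list)
    discourse: list = field(default_factory=list)

@dataclass
class ModelPair:
    existing: DialogueModel = field(default_factory=DialogueModel)
    proposed: DialogueModel = field(default_factory=DialogueModel)

    def accept(self):
        # Fold the proposal into the shared plan once evaluation succeeds.
        for level in ("domain", "problem_solving", "discourse"):
            getattr(self.existing, level).extend(getattr(self.proposed, level))
            getattr(self.proposed, level).clear()

m = ModelPair()
m.proposed.domain.append("Take-Course(U, CS689)")
m.accept()
```

Keeping the proposal separate is what later allows the system to reject or modify it instead of silently absorbing it.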
Lambert's system automatically adds to the dialogue model all actions inferred from an utterance. However, we argue that in a collaborative environment, the system should only accept the proposed additions if the system believes that they are appropriate. Hence, we separate the dialogue model into an existing dialogue model and a proposed model, where the former constitutes the shared plan agreed upon by both agents, and the latter the newly proposed actions that have not yet been confirmed. Suppose earlier dialogue suggests that the user has the goal of getting a Master's degree in CS (Get-Masters(U, CS)). Figure 1 illustrates the dialogue model that would be built after the following utterances by Lambert's plan recognition algorithm, modified to accommodate the separation of the existing and proposed dialogue models, and augmented with a relaxation algorithm to recognize ill-formed plans [2].

U: I want to satisfy my seminar course requirement. Who's teaching CS689?

3 The Evaluator

A collaborative system should only incorporate proposed actions into an existing plan if they are considered appropriate. This decision is made by the evaluator, which will be discussed in this section. This paper only considers cases in which the user's proposal contains an infeasible action (one that cannot be performed) or would result in an ill-formed plan (one whose actions do not contribute to one another as intended) [9]. We argue that the evaluator, in order to check for erroneous plans/goals, only needs to examine actions in the proposed model, since actions in the existing model would have been checked when they were proposed. When a chain of actions is proposed, the evaluator starts examining from the top-most action so that the most general action that is inappropriate will be addressed.

[Figure 1: The Structure of the User's Utterances. The domain level links the existing goal Get-Masters(U, CS) to the proposed actions Satisfy-Seminar-Course(U, CS) and Take-Course(U, CS689). The problem-solving level contains actions such as Build-Plan(U, S, Take-Course(U, CS689)) and Instantiate-Single-Var for the variable _fac in Teaches(_fac, CS689). The discourse level contains Obtain-Info-Ref, Ask-Ref(U, S, _fac, Teaches(_fac, CS689)), Inform(U, S, want(U, Satisfy-Seminar-Course(U, CS))), and the surface forms realizing "I want to satisfy my seminar course requirements" and "Who's teaching CS689?".]

The evaluator checks whether the existing and proposed actions together constitute a well-formed plan, one in which the children of each action contribute to their parent action. Therefore, for each pair of actions, the evaluator checks against its recipe library to determine if their parent-child relationship holds. The evaluator also checks whether each additional action is feasible by examining whether its applicability conditions are satisfied and its preconditions can be satisfied. We contend that well-formedness should be checked before feasibility, since the feasibility of an action that does not contribute to its parent action is irrelevant. Similarly, the well-formedness of a plan that attempts to achieve an infeasible goal is also irrelevant. Therefore, we argue that the processes of checking well-formedness and feasibility should be interleaved in order to address the most general action that is inappropriate. We show how this interleaved process works by referring back to Figure 1.
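The interleaved top-down check just argued for can be sketched directly. This is our own illustrative reconstruction, with the contributes/feasible predicates supplied by the caller.

```python
# Sketch of the interleaved evaluation order: walk the proposed chain from
# the most general action down; at each step first check that the child
# contributes to its parent (well-formedness), then that the action itself
# is feasible, stopping at the most general inappropriate action.
def evaluate_chain(chain, contributes, feasible):
    """chain: actions from most general to most specific.
    contributes(child, parent) and feasible(action) are domain predicates."""
    for i, action in enumerate(chain):
        if i > 0 and not contributes(action, chain[i - 1]):
            return ("ill-formed", action, chain[i - 1])
        if not feasible(action):
            return ("infeasible", action, None)
    return ("ok", None, None)

chain = ["Get-Masters(U,CS)", "Satisfy-Seminar-Course(U,CS)",
         "Take-Course(U,CS689)"]
def contributes(child, parent):
    # Mirrors the running example: CS689 is not a seminar course, so the
    # last link of the chain fails.
    return not ("CS689" in child and parent.startswith("Satisfy-Seminar"))
result = evaluate_chain(chain, contributes, lambda a: True)
```

Because the walk is top-down, the ill-formedness is reported at the most general offending pair, which is what the correction subdialogue then addresses.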
Suppose the system believes that CS689 is not a seminar course. The evaluation process starts from Satisfy-Seminar-Course(U, CS), the top-most action in the proposed domain model. The system's knowledge indicates that Satisfy-Seminar-Course(U, CS) contributes to Get-Masters(U, CS). The system also believes that the applicability conditions and the preconditions for the Satisfy-Seminar-Course domain plan are satisfied, indicating that the action is feasible. However, the system's recipe library gives no reason to believe that Take-Course(U, CS689) contributes to Satisfy-Seminar-Course(U, CS), since CS689 is not a seminar course. The evaluator then decides that this pair of proposed actions would make the domain plan ill-formed.

4 When the Proposal is Erroneous

The goal selector's task is to determine, based on the current dialogue model, an intentional goal [8] that is most appropriate for the system to pursue. An intentional goal could be to directly respond to the user's utterance, to correct a user's misconception, to provide a better alternative, etc. (Both applicability conditions and preconditions are prerequisites for executing a recipe. However, it is unreasonable to attempt to satisfy an applicability condition, whereas preconditions can be planned for.) In this paper we only discuss the goal selector's task when the user has an erroneous plan/goal. In a collaborative environment, if the system decides that the proposed model is infeasible/ill-formed, it should refuse to accept the additions and suggest modifications to the proposal by entering a negotiation subdialogue. For this purpose, we developed recipes for two problem-solving actions, Correct-Goal and Correct-Inference, each a specialization of a Modify-Proposal action. We illustrate the Correct-Inference action in more detail. We show two problem-solving recipes, Correct-Inference and Modify-Acts, in Figure 2. The Correct-Inference recipe is applicable when _s2 believes that _act1 contributes to achieving _act2, while _s1 believes that such a relationship does not hold. The goal is to make the resultant plan a well-formed plan; therefore, its body consists of an action Modify-Acts, which deletes the problematic components of the plan, and Insert-Correction, which inserts new actions/variables into the plan.

Action: Correct-Inference(_s1,_s2,_proposed)
Recipe-Type: Decomposition
Appl Cond: believe(_s1, ¬contributes(_act1,_act2))
           believe(_s2, contributes(_act1,_act2))
Constraints: in-plan(_act1,_proposed) ∨ in-plan(_act2,_proposed)
Body: Modify-Acts(_s1,_s2,_proposed,_act1,_act2)
      Insert-Correction(_s1,_s2,_proposed)
Effects: modified(_proposed)
         well-formed(_proposed)

Action: Modify-Acts(_s1,_s2,_proposed,_act1,_act2)
Recipe-Type: Specialization
Appl Cond: believe(_s1, ¬contributes(_act1,_act2))
Preconditions: believe(_s2, ¬contributes(_act1,_act2))
Body: Remove-Act(_s1,_s2,_proposed,_act1)
      Alter-Act(_s1,_s2,_proposed,_act1)
Effects: modified(_proposed)
Goal: modified(_proposed)

Figure 2: Two Problem-Solving Recipes
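Recipes of the kind shown in Figure 2 are naturally represented as data. The following sketch (our own toy encoding, not the system's representation) shows one way to hold a recipe and test its applicability against a belief set.

```python
# Toy rendering of a problem-solving recipe: applicability conditions (must
# already hold), preconditions (may be planned for), a body, and effects.
from dataclasses import dataclass, field

@dataclass
class Recipe:
    action: str
    appl_cond: list = field(default_factory=list)
    preconds: list = field(default_factory=list)
    body: list = field(default_factory=list)
    effects: list = field(default_factory=list)

def applicable(recipe, beliefs):
    # Applicability conditions cannot be planned for, so all must hold now.
    return all(c in beliefs for c in recipe.appl_cond)

correct_inference = Recipe(
    action="Correct-Inference(_s1,_s2,_proposed)",
    appl_cond=["believe(s1, not contributes(act1, act2))",
               "believe(s2, contributes(act1, act2))"],
    body=["Modify-Acts(...)", "Insert-Correction(...)"],
    effects=["modified(proposed)", "well-formed(proposed)"])

beliefs = {"believe(s1, not contributes(act1, act2))",
           "believe(s2, contributes(act1, act2))"}
```

The distinction between the two condition slots matters: an unmet precondition (like _s2's belief in Modify-Acts) triggers a subsidiary Inform act, whereas an unmet applicability condition simply rules the recipe out.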
One precondition in Modify-Acts is believe(_s2, ¬contributes(_act1,_act2)) (note that in Correct-Inference, _s2 believes contributes(_act1,_act2)), and the change in _s2's belief can be accomplished by invoking the discourse level action Inform so that _s1 can convey the ill-formedness to _s2. This Inform act may lead to further negotiation about whether _act1 contributes to _act2. Only when _s1 receives positive feedback from _s2, indicating that _s2 accepts _s1's belief, can _s1 assume that the proposed actions can be modified. Earlier discussion shows that the proposed actions in Figure 1 would make the domain plan ill-formed. Therefore, the goal selector posts a goal to modify the proposal, which causes the Correct-Inference recipe in Figure 2 to be selected. The variables _act1 and _act2 are bound to Take-Course(U, CS689) and Satisfy-Seminar-Course(U, CS), respectively, since the system believes that the former does not contribute to the latter. Figure 3 shows how we envision the planner expanding the Correct-Inference recipe, which results in the generation of the following two utterances:

(1) S: Taking CS689 does not contribute to satisfying the seminar course requirement.
(2) S: CS689 is not a seminar course.

[Figure 3: The Dialogue Model for the System's Response. At the problem-solving level, Generate-Response(S, U, Proposed-Model) expands into Evaluate-Proposal and Modify-Proposal, which is specialized to Correct-Inference and then Modify-Acts over Take-Course(U, CS689) and Satisfy-Seminar-Course(U, CS). At the discourse level, Inform(S, U, ¬inference(Take-Course(U, CS689), Satisfy-Seminar-Course(U, CS))) and its Address-Believability subaction, supported by Inform(S, U, ¬isa(CS689, seminar-course)), are realized as utterances (1) and (2).]

The action Inform(_s1,_s2,_prop) has the goal believe(_s2,_prop); therefore, utterance (1) is generated by executing the Inform action as an attempt to satisfy the precondition of the Modify-Acts recipe. Utterance (2) results from the Address-Believability action, which is a subaction of Inform, to support the claim in (1). The problem-solving and discourse levels in Figure 3 operate on the entire dialogue model shown in Figure 1, since the evaluation process acts upon this model. Due to this nature, the evaluation process can be viewed as a meta-planning process, and when the goal of this process is achieved, the modified dialogue model is returned to. Now consider the case in which the user continues by accepting utterances (1) and (2), which satisfies the precondition of Modify-Acts. Modify-Acts has two specializations: Remove-Act, which removes the incorrect action (and all of its children), and Alter-Act, which generalizes the proposed action so that the plan will be well-formed. Since Take-Course contributes to Satisfy-Seminar-Course as long as the course is a seminar course, the system generalizes the user's proposed action by replacing CS689 with a variable. This variable may be instantiated by the Insert-Correction subaction of Correct-Inference when the dialogue continues.
Note that our model accounts for why the user's original question about the instructor of CS689 is never answered: a conflict was detected that made the question superfluous.

5 Related Work

Several researchers have studied collaboration [1, 3, 10], and Allen proposed different plan modalities depending on whether a plan fragment is shared, proposed and acknowledged, or merely private [1]. However, they have emphasized discourse analysis, and none has provided a plan-based framework for proposal negotiation, specified appropriate system responses during collaboration, or accounted for why a question might never be answered. Litman and Allen used discourse meta-plans to handle a class of correction subdialogues [7]. However, their Correct-Plan only addressed cases in which an agent adds a repair step to a pre-existing plan that does not execute as expected. Thus their meta-plans do not handle correction of proposed additions to the dialogue model (since this generally does not involve adding a step to the proposal). Furthermore, they were only concerned with understanding utterances, not with generating appropriate responses. The work in [5, 11, 9] addressed generating cooperative responses and responding to plan-based misconceptions, but did not capture these within an overall collaborative system that must negotiate proposals with the user. Heeman [4] used meta-plans to account for collaboration on referring expressions. We have addressed collaboration in constructing the user's task-related plan, captured cooperative responses and negotiation of how the plan should be constructed, and provided an accounting for why a user's question may never be answered.

6 Conclusions and Future Work

We have presented a plan-based framework for generating responses in a collaborative environment.
Our framework improves upon previous ones in that 1) it captures cooperative responses as a part of collaboration, 2) it is capable of initiating negotiation subdialogues to determine what actions should be added to the shared plan, 3) the correction process, instead of merely pointing out problematic plans/goals to the user, modifies the plan into its most specific form accepted by both participants, and 4) the evaluation/correction process operates at a meta-level which keeps the negotiation subdialogue separate from the original dialogue model, while allowing the same plan-inference mechanism to be used at both levels. We intend to enhance our evaluator so that it also recognizes sub-optimal solutions and can suggest better alternatives. We will also study the goal selector's task when the user's plan/goal is well-formed/feasible. This includes identifying a set of intentional goals and a strategy for the goal selector to choose amongst them. Furthermore, we need to develop the intentional planner, which constructs a plan to achieve the posted goal, and a discourse realizer to generate natural language text.

References

[1] James Allen. Discourse structure in the TRAINS project. In Darpa Speech and Natural Language Workshop, 1991.

[2] Rhonda Eller and Sandra Carberry. A meta-rule approach to flexible plan recognition in dialogue. User Modeling and User-Adapted Interaction, 2:27-53, 1992.

[3] Barbara Grosz and Candace Sidner. Plans for discourse. In Cohen et al., editor, Intentions in Communication, pages 417-444. 1990.

[4] Peter Heeman. A computational model of collaboration on referring expressions. Master's thesis, University of Toronto, 1991.

[5] Aravind Joshi, Bonnie Webber, and Ralph Weischedel. Living up to expectations: Computing expert responses. In Proc. AAAI, pages 169-175, 1984.

[6] Lynn Lambert and Sandra Carberry. A tripartite plan-based model of dialogue. In Proc. ACL, pages 47-54, 1991.

[7] Diane Litman and James Allen.
A plan recognition model for subdialogues in conversation. Cognitive Science, 11:163-200, 1987.

[8] Johanna Moore and Cecile Paris. Planning text for advisory dialogues. In Proc. ACL, pages 203-211, 1989.

[9] Martha Pollack. A model of plan inference that distinguishes between the beliefs of actors and observers. In Proc. ACL, pages 207-214, 1986.

[10] Candace Sidner. Using discourse to negotiate in collaborative activity: An artificial language. In Workshop Notes: AAAI-92 Cooperation Among Heterogeneous Intelligent Systems, pages 121-128, 1992.

[11] Peter van Beek. A model for generating better explanations. In Proc. ACL, pages 215-220, 1987.
1993
39
STRUCTURAL MATCHING OF PARALLEL TEXTS

Yuji Matsumoto
Graduate School of Information Science
Advanced Institute of Science and Technology, Nara
Takayama-cho, Ikoma-shi, Nara 630-01 Japan
[email protected] ist-na ra.ac.jp

Hiroyuki Ishimoto
Takehito Utsuro
Department of Electrical Engineering
Kyoto University
Sakyo-ku, Kyoto 606 Japan
{ishimoto, utsuro}@pine.kuee.kyoto-u.ac.jp

Abstract

This paper describes a method for finding structural matching between parallel sentences of two languages (such as Japanese and English). Parallel sentences are analyzed based on unification grammars, and structural matching is performed by making use of a similarity measure of word pairs in the two languages. Syntactic ambiguities are resolved simultaneously in the matching process. The results serve as a useful source for extracting linguistic and lexical knowledge.

INTRODUCTION

Bilingual (or parallel) texts are useful resources for the acquisition of linguistic knowledge as well as for applications such as machine translation. Intensive research has been done on aligning bilingual texts at the sentence level using statistical techniques that measure sentence lengths in words or in characters (Brown 91), (Gale 91a). Those works are quite successful in that far more than 90% of sentences in bilingual corpora are aligned correctly. Although such parallel texts have been shown to be useful in real applications such as machine translation (Brown 90) and word sense disambiguation (Dagan 91), structured bilingual sentences are undoubtedly more informative and important for future natural language research. Structured bilingual or multilingual corpora serve as richer sources for extracting linguistic knowledge (Kaji 92), (Klavans 90), (Sadler 91), (Utsuro 92). Phrase level or word level alignment has also been done by several researchers.
The Textual Knowledge Bank Project (Sadler 91) is building monolingual and multilingual text bases structured by linking the elements with grammatical (dependency), referential, and bilingual relations. (Kaji 92) reports a method to obtain phrase level correspondence of parallel texts by coupling phrases of the two languages obtained in CKY parsing processes. This paper presents another method to obtain structural matching of bilingual texts. Sentences in both languages are parsed to produce (disjunctive) feature structures, from which dependency structures are extracted. Ambiguities are represented as disjunction. Then, the two structures are matched to establish a one-to-one correspondence between their substructures. The result of the match is obtained as a set of pairs of minimal corresponding substructures of the dependency structures. Examples of the results are shown in Figures 1, 2 and 3. A dependency structure is represented as a tree, in which ambiguity is specified by a disjunctive node (OR node). Circles in the figures show substructures and bidirectional arrows show corresponding substructures. Our technique and the results are different from those of the other methods mentioned above. (Kaji 92) identifies corresponding phrases and aims at producing translation templates by abstracting those corresponding phrases. In the Bilingual Knowledge Bank (Sadler 91), the correspondence is shown by links between words in two sentences, equating two whole subtrees headed by the words. We prefer the minimal substructure correspondence and the relationship between substructures. Such a minimal substructure stands for the minimal meaningful component in the sentence, which we believe is very useful for our target application of extracting lexical knowledge from bilingual corpora.
SPECIFICATION OF STRUCTURAL MATCHING PROBLEM

Although the structural matching method shown in this paper is language independent, we deal with parallel texts of Japanese and English. We assume that alignment at the sentence level has already been done manually or by other methods such as those in (Brown 91), (Gale 91a). Throughout this paper, we assume that simple sentences are matched. (Matching of compound sentences is done by cutting them up into simple sentence fragments.)

DEFINITIONS OF DATA STRUCTURES

A pair of Japanese and English sentences is parsed independently into (disjunctive) feature structures. For our present purpose, a part of a feature structure is taken out as a dependency structure consisting of the content words that appear in the original sentence. (In the present system, nouns, pronouns, verbs, adjectives, and adverbs are regarded as content words.) Ambiguity is represented by disjunctive feature structures (Kasper 87). Since no relation other than modifier-modifyee dependencies is considered here, path equivalence is not taken into consideration. Both value disjunction and general disjunction are allowed. We are currently using LFG-like grammars for both Japanese and English, where the value of the 'pred' label in an f-structure is the content word that is the head of the corresponding c-structure. We start with the definitions of simplified disjunctive feature structures, and then disjunctive dependency structures, which are extracted from the disjunctive feature structures obtained by the parsing process.

Definition 1 Simple feature structures (FS) (L is the set of feature labels, and A is the set of atomic values) are defined recursively:
- NIL
- a, where a ∈ A
- l : φ, where l ∈ L and φ ∈ FS
- φ ∧ ψ, where φ, ψ ∈ FS
- φ ∨ ψ, where φ, ψ ∈ FS
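Definition 1 maps directly onto a recursive datatype. The following is a sketch in Python with constructor names of our own choosing, not the authors' implementation.

```python
# Sketch of Definition 1 as a recursive datatype: an FS is NIL, an atomic
# value, a labeled FS, a conjunction, or a disjunction.
from dataclasses import dataclass

class FS:
    pass

@dataclass
class Nil(FS):
    pass

@dataclass
class Atom(FS):
    value: str          # a, where a is an atomic value

@dataclass
class Feat(FS):
    label: str          # l : phi
    arg: FS

@dataclass
class Conj(FS):
    left: FS            # phi AND psi
    right: FS

@dataclass
class Disj(FS):
    left: FS            # phi OR psi
    right: FS

# A 'pred' feature carrying a content word, conjoined with a modifier whose
# own 'pred' carries another content word (as in the LFG-like grammars above).
fs = Conj(Feat("pred", Atom("hair")),
          Feat("mod", Feat("pred", Atom("long"))))
```

Each disjunction node here is exactly the OR node that appears in the dependency trees of Figures 1-3.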
If φ = l : φ₁, then tl(φ) = {l}.
2. If φ = φ₁ ∧ φ₂ or φ = φ₁ ∨ φ₂, then tl(φ) = tl(φ₁) ∪ tl(φ₂).

Definition 3 A relation 'sibling' between feature labels in φ is defined:
1. If φ = l : φ₁, then l and the labels in φ₁ are not sibling, and the sibling relations holding in φ₁ also hold in φ.
2. If φ = φ₁ ∧ φ₂, then the labels in tl(φ₁) and the labels in tl(φ₂) are sibling.
3. If φ = φ₁ ∨ φ₂, then the labels in φ₁ and the labels in φ₂ are not sibling.

Note that the sibling relation is not an equivalence relation. We refer to a set of feature labels in φ that are mutually sibling as a sibling label set of φ. Now we are ready to define a dependency structure (DS).

Definition 4 A dependency structure ψ is an FS that satisfies the following condition:
Condition: Every sibling label set of ψ includes exactly one 'pred' label.

The idea behind these definitions is that the value of a 'pred' label is a content word appearing in the original sentence, and that a sibling label set defines the dependency relation between content words. Among the labels in a sibling label set, the values of the labels other than 'pred' are dependent on (i.e., modify) the value of the 'pred' label. A DS can be drawn as a tree structure where the nodes are either a content word or a disjunction operator and the edges represent the dependency relation.

Definition 5 A substructure of an FS φ is defined (sub(φ) stands for the set of all substructures of φ):
1. NIL and φ itself are substructures of φ.
2. If φ = a (a ∈ A), then a is a substructure of φ.

[Figure 1: Example of structural matching, No.1. English: She has long hair. Japanese gloss: she-GEN hair-TOP long.]

[Figure 2: Example of structural matching, No.2. English: This child is starving for parental love. Japanese gloss: this child-TOP parent-GEN love-DAT be-starving.]

English: Japan benefits from free trade.
Japanese gloss: Japan-TOP free-trade-GEN benefit-ACC receive. [Figure 3: Example of structural matching, No.3.]

3. If φ = l : φ₁, then the elements of sub(φ₁) are substructures of φ.
4. If φ = φ₁ ∧ φ₂, then for any ψ₁ ∈ sub(φ₁) and for any ψ₂ ∈ sub(φ₂), ψ₁ ∧ ψ₂ is a substructure of φ.
5. If φ = φ₁ ∨ φ₂, then for any ψ₁ ∈ sub(φ₁) and for any ψ₂ ∈ sub(φ₂), ψ₁ ∨ ψ₂ is a substructure of φ.

The DS derived from an FS is the maximum substructure of the FS that satisfies the condition in Definition 4. The DS is uniquely determined from an FS.

Definition 6 A disjunction-free maximal substructure of an FS φ is called a complete FS of φ.

An FS does not usually have a unique complete FS. This concept is important since the selection of a complete FS corresponds to ambiguity resolution. Naturally, a maximal disjunction-free substructure of a DS ψ is again a DS and is called a complete DS of ψ.

Definition 7 A semi-complete DS of a DS ψ is a substructure of a complete DS of ψ that satisfies the condition in Definition 4.

Note that a substructure of a DS is not necessarily a DS. This is why the definition requires the condition in Definition 4. A complete DS ψ can be decomposed into a set of non-overlapping semi-complete DSs. Such a decomposition defines the units of structural matching and plays the key role in our problem.

Definition 8 A set of semi-complete DSs of a DS ψ, D = {ψ₁, …, ψₙ}, is called a decomposition of ψ iff every ψᵢ in the set contains at least one occurrence of the 'pred' feature label, and every content word at a 'pred' feature label appearing in ψ is contained in exactly one ψᵢ.
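As a concrete illustration of Definitions 4-8, a disjunction-free dependency tree can be represented as nested nodes whose 'pred' values are content words. This sketch is our own, not the authors' code; the class and field names are assumptions, and the analysis of the example sentence is only illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class DSNode:
    """A node of a disjunction-free dependency structure (a complete DS):
    a content word (the value of 'pred') plus its dependents, each reached
    via a feature label."""
    pred: str
    deps: list = field(default_factory=list)  # list of (label, DSNode) pairs

    def content_words(self):
        """All content words in this semi-complete DS, head first."""
        words = [self.pred]
        for _, child in self.deps:
            words.extend(child.content_words())
        return words

# An illustrative complete DS for "She has long hair":
hair = DSNode("hair", [("mod", DSNode("long"))])
ds = DSNode("have", [("subj", DSNode("she")), ("obj", hair)])
print(ds.content_words())  # ['have', 'she', 'hair', 'long']
```

A decomposition (Definition 8) then simply partitions these nodes into connected groups, e.g. {have}, {she}, {hair, long}.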
Definition 9 The reduced DS of a DS ψ with respect to a decomposition D = {ψ₁, …, ψₙ} is constructed as follows:
1. Each ψᵢ is transformed into a DS, 'pred : Sᵢ', where Sᵢ is the set of all content words appearing in ψᵢ. This DS is referred to as red(ψᵢ).
2. If there is a direct dependency relation between two content words w₁ and w₂ that are in ψᵢ and ψⱼ (i ≠ j), then the dependency relation is allotted between ψᵢ and ψⱼ.

Although this definition could be described more precisely, we leave it with this more intuitive description. Examples of dependency structures and reduced dependency structures are found in Figures 1, 2 and 3, where the decompositions are indicated by circles. It is not difficult to show that the reduced DS satisfies the condition of Definition 4.

STRUCTURAL MATCHING OF BILINGUAL DEPENDENCY STRUCTURES

The structural matching problem of bilingual sentences is now defined formally.

Parsing parallel English and Japanese sentences results in feature structures, from which dependency structures are derived by removing unrelated features.

Assume that ψE and ψJ are dependency structures of English and Japanese sentences. The structural matching is to find the most plausible one-to-one mapping between a decomposition of a complete DS of ψE and a decomposition of a complete DS of ψJ, provided that the reduced DSs of ψE and ψJ w.r.t. the decompositions are isomorphic over the dependency relation. The isomorphism imposes a natural one-to-one correspondence on the dependency relations between the reduced DSs.

Generally, the mapping need not always be one-to-one, i.e., all elements in a decomposition need not map into another decomposition. When the mapping is not one-to-one, we assume that dummy nodes are inserted in the dependency structures so that the mapping naturally extends to be one-to-one.
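The construction in Definition 9 can be sketched operationally. The code below is our own illustration (names and the tree encoding are assumptions): a complete DS is a nested pair (word, children), a decomposition is given by assigning each content word to a block, and each block collapses into one reduced node labelled by its word set.

```python
def reduce_ds(tree, block_of):
    """Build the reduced DS of a disjunction-free dependency tree with
    respect to the decomposition induced by `block_of` (word -> block id).
    Assumes each block forms a connected subtree."""
    word, children = tree
    # Step 1 of Definition 9: collapse the block containing `word` into a
    # single node labelled by the set of its content words.
    node = frozenset(w for w, b in block_of.items() if b == block_of[word])
    reduced = []
    for child in children:
        sub = reduce_ds(child, block_of)
        if sub[0] == node:
            # Same block: splice the child's reduced dependents upward.
            reduced.extend(sub[1])
        else:
            # Step 2: a dependency across blocks becomes a dependency
            # between the corresponding reduced nodes.
            reduced.append(sub)
    return (node, reduced)

# "Japan benefits from free trade", with blocks {benefit}, {japan},
# {trade, free} as an illustrative decomposition:
tree = ("benefit", [("japan", []), ("trade", [("free", [])])])
blocks = {"benefit": 0, "japan": 1, "trade": 2, "free": 2}
reduced = reduce_ds(tree, blocks)
```

Here `reduced` is a three-node tree whose root is labelled {benefit} and whose dependents are labelled {japan} and {trade, free}.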
When the decompositions of parallel sentences have such an isomorphic one-to-one mapping, we assume that there are systematic methods to compute the similarity between corresponding elements in the decompositions and to compute the similarity between the corresponding dependency relations.³

We write the function defining the former similarity as f, and that of the latter as g. Then, f is a function over semi-complete DSs derived from English and Japanese parallel sentences into a real number, and g is a function over feature label sets of English and Japanese into a real number.

³ In the case of similarity between dependency relations, the original feature labels are taken into account.

Definition 10 Given dependency structures DS₁ and DS₂ of two languages, the structural matching problem is to find an isomorphic one-to-one mapping m between decompositions of DS₁ and DS₂ that maximizes the sum of the values of the similarity functions f and g. That is, the problem is to find the function m that maximizes

  Σ_d f(d, m(d)) + Σ_l g(l, m(l))

where d varies over the semi-complete DSs of DS₁ and l varies over the feature labels in DS₁.

The similarity functions can be defined in various ways. We assume some similarity measure between Japanese and English words. For instance, we assume that the similarity function f satisfies the following principles:
1. f is a simple function defined by the similarity measure between content words of the two languages.
2. Fine-grained decompositions get a larger similarity measure than coarse-grained decompositions.
3. Dummy nodes should give some negative value to f.

The first principle is to limit the complexity of the structural matching algorithm. The second is to obtain detailed structural matching between parallel sentences and to avoid trivial results, e.g., matching the whole DSs to each other. The third is to avoid the introduction of dummy nodes whenever possible.
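The objective of Definition 10 can be written down directly. This sketch is ours, with assumed names; the stand-in similarity functions are placeholders, and g is constantly zero because the paper leaves the relation similarity unused at its current stage.

```python
def matching_score(node_map, edge_map, f, g):
    """Objective of Definition 10 for a candidate isomorphic one-to-one
    mapping: the sum of f over corresponding semi-complete DSs plus the
    sum of g over corresponding dependency relations."""
    return (sum(f(d, node_map[d]) for d in node_map)
            + sum(g(l, edge_map[l]) for l in edge_map))

# Stand-in similarity functions for two abstract decompositions:
f = lambda d1, d2: 6 if (d1, d2) in {("a", "x"), ("b", "y")} else 0
g = lambda l1, l2: 0  # relation similarity unused at the current stage

print(matching_score({"a": "x", "b": "y"}, {}, f, g))  # 6 + 6 = 12
```

The matching algorithm then searches over all candidate mappings m for the one maximizing this score.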
The function g should be defined according to the language pair. Although feature labels represent the grammatical relation between content words or phrases and may provide useful information for measuring similarity, we do not use this information at the current stage. The reason is that we found it difficult to have a clear view on the relationship between feature labels of English and Japanese, and on the meaning of feature labels between semi-complete dependency structures.

STRUCTURAL MATCHING ALGORITHM

The structural matching of two dependency structures is a combinatorially difficult problem. We apply the branch-and-bound method to solve the problem.

The branch-and-bound algorithm is a top-down depth-first backtracking algorithm for search problems. It looks for the answers with the best score. In each new step, it estimates the maximum value of the expected scores along the current path and compares it with the currently known best score. The maximum expected score is usually calculated by a simplified problem that is guaranteed to give a value not less than the best score attainable along the current path. If the maximum expectation is less than the currently known best score, there is no chance of finding better answers by pursuing the path. The algorithm then gives up the current path and backtracks to try the remaining paths.

We regard a dependency structure as a tree structure that includes disjunction (OR nodes), and call a content word a node and a dependency relation an edge. Then a semi-complete dependency structure corresponds to a connected subgraph in the tree.

The matching of two dependency trees starts from the top nodes and the matching process proceeds along the edges of the trees. During the matching process, three types of nondeterminism arise:
1. Selection of top-most subgraphs in both of the trees (i.e., selection of a semi-complete DS)
2.
Selection of edges in both of the trees to decide the correspondence of dependency relations
3. Selection of one of the disjuncts at an 'OR' node

While the matching is done top-down, the exact score of the matched subgraphs is calculated using the similarity function f.⁴ When the matching process reaches a selection of the second type, it selects an edge in each of the dependency trees. The maximum expected score of matching the subtrees under the selected edges is calculated from the sets of content words in the subtrees. The calculation method of the maximum expected score is defined in relation with the similarity function f.

⁴ We do not take into account the similarity measure between dependency relations, as stated in the preceding section.

Suppose h is the function that gives the maximum expected score of two subgraphs. Also, suppose B and P are the currently known best score and the total score of the already matched subgraphs, respectively. If s and t are the subgraphs under the selected edges and s' and t' are the whole remaining subgraphs, the matching under s and t will be pursued further only when the following inequality holds:

  P + h(s, t) + h(s', t') > B

Any selection of edges that does not satisfy this inequality cannot provide a better matching than the currently known best ones.

All three types of nondeterminism are simply treated as nondeterminism in the algorithm. The syntactic ambiguities in the dependency structures are resolved spontaneously when the matching with the best score is obtained.

EXPERIMENTS

We have tested the structural matching algorithm with 82 pairs of sample sentences randomly selected from a Japanese-English dictionary. We used a machine-readable Japanese-English dictionary (Shimizu 79) and Roget's thesaurus (Roget 11) to measure the similarity of pairs of content words, which is used to define the function f.
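The admissible pruning test from the algorithm section (continue only while P + h(s, t) + h(s', t') > B) can be sketched as follows. This is our own minimal illustration, not the authors' implementation; the optimistic bound used for h here (every word pairs off at the maximum word-pair similarity) is an assumption.

```python
def upper_bound(words1, words2, best_pair_score=6):
    # Optimistic bound h on the score attainable by matching two
    # subtrees, given only their content-word sets: assume every word
    # can pair off at the maximum word-pair similarity.
    return min(len(words1), len(words2)) * best_pair_score

def should_pursue(P, s, t, s_rest, t_rest, B):
    # Branch-and-bound test: pursue the current edge selection only if
    # the score P earned so far, plus the bounds for the selected
    # subtrees (s, t) and for the whole remaining subtrees, can still
    # exceed the best known score B.
    return P + upper_bound(s, t) + upper_bound(s_rest, t_rest) > B

# 0 + 6 + 12 = 18 > 10, so this path is worth pursuing:
print(should_pursue(0, {"japan"}, {"j1"}, {"benefit", "trade"}, {"j2", "j3"}, B=10))
```

Because the bound never underestimates the true attainable score, discarding a path that fails the test can never lose the best matching.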
Similarity of word pairs

Given a pair of Japanese and English sentences, we use two methods to measure the similarity between the Japanese and English content words appearing in the sentences.

For each Japanese content word wJ appearing in the Japanese sentence, we can find a set of translatable English words from the Japanese-English dictionary. When the Japanese word is polysemous, we select an English word from each polysemous entry. Let CEJ be the set of such translatable English words of wJ. Suppose CE is the set of content words in the English sentence. The set of translatable pairs of wJ, Tp(wJ), is defined as follows:

  Tp(wJ) = {(wJ, wE) | wE ∈ CEJ ∩ CE}

We use Roget's thesaurus to measure the similarity of other word pairs. Roget's thesaurus is regarded as a tree structure where words are allocated at the leaves of the tree. For each Japanese content word wJ appearing in the Japanese sentence, we can define the set of translatable English words of wJ, CEJ. From each English word in the set, the minimum distance to each of the English content words appearing in the English sentence is measured.⁵ This minimum distance defines the similarity between pairs of Japanese and English words.

We decided to use this similarity only for estimating dissimilarity between Japanese and English word pairs. We set a predetermined threshold distance. If the minimal distance exceeds the threshold, the exceeded distance is counted as negative similarity.

The similarity of two words w₁ and w₂ appearing in the given pair of sentences, sim((w₁, w₂)), is defined as follows:

  sim((w₁, w₂)) =
    6    if (w₁, w₂) ∈ Tp(w₁) or (w₂, w₁) ∈ Tp(w₂)
    −k   if (w₁, w₂) ∉ Tp(w₁) and (w₂, w₁) ∉ Tp(w₂), and the distance between w₁ and w₂ exceeds the threshold by k
    0    otherwise

Similarity of semi-complete DSs

The similarity between corresponding semi-complete DSs is defined based on the similarity between the content words.
Suppose that s and t are the semi-complete DSs to be matched, and that Vs and Vt are the sets of content words in s and t. Let A be the smaller of Vs and Vt and B be the other (|A| ≤ |B|). For each injection p from A into B, the set of word pairs D derived from p is D = {(w, p(w)) | w ∈ A}. Now, we define the similarity function f over Japanese and English semi-complete DSs to give the maximum value of the following expression over all possible injections:

  f(s, t) = max_p ( Σ_{(w₁,w₂)∈D} sim((w₁, w₂)) ) × 0.95^(|Vs|+|Vt|−2)

The summation gives the maximum sum of the similarities of the content words in s and t. The factor 0.95 is the penalty applied when semi-complete DSs with more than one content word are used in the matching.

Figures 1, 2 and 3 show the results of the structural matching algorithm, in which the translatable pairs obtained from the Japanese-English dictionary are shown by the equations.

⁵ The distance between words is the length of the shortest path in the thesaurus tree.

Table 1: Results of experiments

  Parsing Japanese and English sentences
    Number of sentences: 82
    Parse failure: 23
    Parsable: 59

  Correct parsability
    Correct parse: 53 (89.8%, 53/59)
    Incorrect parse: 6 (10.2%, 6/59)

  The match with the best score includes
    Correct matching: 47 (89%, 47/53)
    No correct matching: 6 (11%, 6/53)
    Single correct matching: 34 (64%, 34/53)

Results of the experiments

We used 82 pairs of Japanese and English sentences appearing in a Japanese-English dictionary. The results were checked and examined in detail by hand. Some of the sentences are not parsable because of the limited coverage of our current grammars. Although 59 of the pairs are parsable, 6 of them do not include correct parse results.

The structural matching algorithm with the setting described above was applied to the remaining 53 pairs. The cases where the correct matching is not included in the best rated answers number 6.
The remaining 47 pairs include the correct matching, of which 34 pairs result in the correct matching uniquely. Table 1 summarizes the results.

EVALUATION AND DISCUSSION

Although the number of sentences used in the experiments is small, the results show that about two thirds of the pairs give a unique matching, in which every syntactic ambiguity is resolved.

The cases where no correct matching was obtained need to be examined. Some sentences contain an idiomatic expression that has a completely different syntactic structure from the sentence structure of the other language. Such an expression will in no way be matched correctly, except when the whole structures are matched intact. Other cases are caused by complex sentences that include an embedded sentence. When the verbs at the roots of the dependency trees are irrelevant, extraordinary matchings are produced. We intend not to use our method to match complex or compound sentences as a whole. We will rather use our method to find structural matchings between simple sentences or verb phrases of the two languages.

The matching problem of complex sentences is regarded as a different problem, though a similar technique is usable. We think that the scores of matched phrases will help to identify the corresponding phrases when we match complex sentences.

Taking the sources of other errors into consideration, possible improvements are:
1. Enhancement of the English and Japanese grammars for wider coverage and a lower error rate.
2. Introduction of more precise similarity measurement of content words.
3. Utilization of grammatical information:
   • Feature labels, for estimating the matching plausibility of dependency relations
   • Part of speech, for measuring the matching plausibility of content words
   • Other grammatical information: mood, voice, etc.

The first two improvements are undoubtedly important.
As for the similarity measurement of content words, completely different approaches such as statistical methods may be useful for obtaining good translatable pairs (Brown 90), (Gale 91).

Various grammatical information is kept in the feature descriptions produced in the parsing process. However, we should be very prudent in using it. Since English and Japanese are grammatically quite different, some grammatical relations may not be preserved between them. In Figure 3, solid arrows and circles show the correct matching. While 'benefit' matches with the structure consisting of the Japanese words glossed 'benefit' and 'receive', their dependent words, 'trade' and the Japanese word glossed 'free-trade', modify them as a verb modifier and as a noun modifier respectively, grammatical relations which are quite different.

This example highlights another interesting point. Dotted arrows and circles show another matching with the same highest score. In this case, 'japan' is taken as a verb. This rather strange interpretation insists that 'japan' matches with the Japanese words glossed 'Japan' and 'receive'. Since 'japan' as a verb has little semantic relation with 'Japan' as a country, discrimination of part of speech seems to be useful. On the other hand, the correspondence between 'benefit' and its Japanese counterpart is found in their noun entries in the dictionary. Since 'benefit' is used as a verb in the sentence, taking part of speech into consideration may jeopardize the correct matching as well. The fact that the verb and noun usages of 'benefit' bear a common concept implies that more precise similarity measurement would solve this particular problem. Since the interpretations of the sample English sentences are in different moods, imperative and declarative, the mood of a sentence is also useful for removing irrelevant interpretations.

CONCLUSIONS

The structural matching problem of parallel texts has been formally defined, and our current implementation and experiments have been introduced. Although the research is at a preliminary stage and has a
very simple setting, the experiments have shown a number of interesting results. The method is easily enhanced by improving the grammars and by incorporating more accurate similarity measurement. A number of other studies on building translation dictionaries and on determining similarity relationships between words are useful for improving our method.

To extract useful information from bilingual corpora, structural matching is inevitable for language pairs like English and Japanese that have quite different linguistic structures. Incidentally, we have found that this dissimilarity plays an important role in resolving syntactic ambiguities, since the sources of ambiguities in English and Japanese sentences in many cases do not coincide (Utsuro 92).

We are currently working on extracting verbal case frames of Japanese from the results of structural matching of a Japanese-English corpus (Utsuro 93). The same technique is naturally applicable to acquiring verbal case frames of English as well. Another application we are envisaging is to extract translation patterns from the results of structural matching.

We plan to work on the possible improvements discussed in the preceding section, and will make large scale experiments using translated newspaper articles, based on the phrase matching strategy.

ACKNOWLEDGMENTS

This work is partly supported by Grants from the Ministry of Education, "Knowledge Science" (#03245103).

REFERENCES

Brown, P.F., et al., A Statistical Approach to Machine Translation, Computational Linguistics, Vol.16, No.2, pp.79-85, 1990.
Brown, P.F., Lai, J.C. and Mercer, R.L., Aligning Sentences in Parallel Corpora, ACL-91, pp.169-176, 1991.
Dagan, I., Itai, A. and Schwall, U., Two Languages are More Informative than One, ACL-91, pp.130-137, 1991a.
Gale, W.A. and Church, K.W., A Program for Aligning Sentences in Bilingual Corpora, ACL-91, pp.177-184, 1991b.
Gale, W.A.
and Church, K.W., Identifying Word Correspondences in Parallel Texts, '91 DARPA Speech and Natural Language Workshop, pp.152-157, 1991.
Kaji, H., Kida, Y., and Morimoto, Y., Learning Translation Templates from Bilingual Text, COLING-92, pp.672-678, 1992.
Kasper, R., A Unification Method for Disjunctive Feature Descriptions, ACL-87, pp.235-242, 1987.
Klavans, J. and Tzoukermann, E., The BICORD System: Combining Lexical Information from Bilingual Corpora and Machine Readable Dictionaries, COLING-90, pp.174-179, 1990.
Miller, G.A., et al., Five Papers on WordNet, Cognitive Science Laboratory, Princeton University, CSL Report 43, July 1990.
Roget, S.R., Roget's Thesaurus, Crowell Co., 1911.
Sadler, V., The Textual Knowledge Bank: Design, Construction, Applications, Proc. International Workshop on Fundamental Research for the Future Generation of Natural Language Processing (FGNLP), pp.17-32, Kyoto, Japan, 1991.
Shimizu, M., et al. (ed.), Japanese-English Dictionary, Kodansha, 1979.
Utsuro, T., Matsumoto, Y., and Nagao, M., Lexical Knowledge Acquisition from Bilingual Corpora, COLING-92, pp.581-587, 1992.
Utsuro, T., Matsumoto, Y., and Nagao, M., Verbal Case Frame Acquisition from Bilingual Corpora, to appear in IJCAI-93, 1993.
THE IMPERFECTIVE PARADOX AND TRAJECTORY-OF-MOTION EVENTS*

Michael White
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA, USA
mwhite@linc.cis.upenn.edu

Abstract

In the first part of the paper, I present a new treatment of THE IMPERFECTIVE PARADOX (Dowty 1979) for the restricted case of trajectory-of-motion events. This treatment extends and refines those of Moens and Steedman (1988) and Jackendoff (1991). In the second part, I describe an implemented algorithm based on this treatment which determines whether a specified sequence of such events is or is not possible under certain situationally supplied constraints and restrictive assumptions.

Introduction

Bach (1986:12) summarizes THE IMPERFECTIVE PARADOX (Dowty 1979) as follows: "...how can we characterize the meaning of a progressive sentence like (1a) [17] on the basis of the meaning of a simple sentence like (1b) [18] when (1a) can be true of a history without (1b) ever being true?"

(1a) John was crossing the street.
(1b) John crossed the street.

Citing parallels in the nominal domain, Bach goes on to point out that this puzzle is seemingly much more general, insofar as it appears whenever any sort of partitive is employed. In support of this view, we may observe that the start v-ing construction exhibits the same behavior:

(2a) John started jogging to the museum.
(2b) John jogged to the museum.

Here we see that (2a) does not entail (2b): while (2b) asserts the occurrence of an entire event of John jogging to the museum, (2a) only asserts the occurrence of the beginning of such an event, leaving open the existential status of its completion.

* The author gratefully acknowledges the helpful comments of Mark Steedman, Jeff Siskind, Christy Doran, Matthew Stone, and the anonymous referees, as well as the support of DARPA N00014-90-J-1863, ARO DAAL03-89-C-0031, NSF IRI 90-16592, Ben Franklin 91S.3078C-1.
Capitalizing on Bach's insight, I present in the first part of the paper a new treatment of the imperfective paradox which relies on the possibility of having actual events standing in the part-of relation to hypothetical super-events. This treatment extends and refines those of Moens and Steedman (1988) and Jackendoff (1991), at least for the restricted case of trajectory-of-motion events.¹ In particular, the present treatment correctly accounts not only for what (2a) fails to entail, namely that John eventually reaches the museum, but also for what (2a) does in fact entail, namely that John follows (by jogging) at least an initial part of a path that leads to the museum. In the second part of the paper, I briefly describe an implemented algorithm based on this theoretical treatment which determines whether a specified sequence of trajectory-of-motion events is or is not possible under certain situationally supplied constraints and restrictive assumptions.

Theory

The present treatment builds upon the approach to aspectual composition developed in White (1993), a brief sketch of which follows. White (1993) argues that substances, processes and other such entities should be modeled as abstract kinds whose realizations (things, events, etc.) vary in amount.² This is accomplished formally through the use of an order-sorted logic with an axiomatized collection of binary relations. The intended sort hierarchy is much like those of Eberle (1990) and Jackendoff (1991); in particular, both substances and things are taken to be subsorts of the material entities, and similarly

¹ These are elsewhere called 'directed-motion' events.
² This move is intended to resolve certain empirical and computational problems with the view of referential homogeneity espoused by Krifka (1992) and his predecessors.
What is new is the axiomatization of Jackendoff's composed-of relation (comp) -- which effects the aforemen- tioned kind-to-realization mapping -- in terms of Krifka's (1992) part-of relation (_U). Of particular interest is the following subpart closure property: (3) Vxyly2[comp(x)(yx) A y2C_yl --~ comp(x)(y2)] Postulate (3) states that all subparts of a realiza- tion of a given kind are also realizations of that kind. 3 From this postulate it follows, for example, that if e is a process of John running along the river which has a realization el lasting ten min- utes, and if e2 is a subevent of el -- the first half, say -- then e2 is also a realization of e. As such, this postulate may be used to make John ran along the river for ten minutes entail John ran along the river for five minutes, in contrast to the pair John ran to lhe museum in ten minutes and John ran to lhe museum in five minules. In order to resolve the imperfective paradox, we may extend White (1993) by adding a mapping from events to processes (whose realizations need not terminate in the same way), as well as a means for distinguishing actual and hypothetical events. To do the former, we may axiomatize comp's in- verse mapping -- Jackendoff's ground-from (gr) again in terms of Krifka's part-of relation. This is shown below: (4) VxylY2[gr(yl)(X ) A comp(x)(y2) ---* y2C_yl] Postulate (4) simply requires that all the realiza- tions e2 of a process e which is 'ground from' an event el must be subevents of el (and likewise, mutatis mutandis, for substances and things). As the realizations e2 of e may be proper subevents of el, the relation gr provides a means for accessing subevents of el with alternate terminations. 
To distinguish those events which actually occur from those that are merely hypothetical, we may simply introduce a special predicate Actual, which we require to preserve the part-of relation only in the downwards direction:

(5) ∀x y [Actual(x) ∧ y ⊑ x → Actual(y)]

Postulate (5) is necessary to get John stopped running to the museum after ten minutes to entail John ran for ten minutes as well as John ran for nine minutes, but not John ran for eleven minutes.

At this point we are ready to examine in some detail how the above machinery may be used in resolving the imperfective paradox. Let us assume that sentences such as (6) receive compositional translations as in (7):

(6a) John ran to the bridge.
(6b) John stopped running to the bridge.

(7a) ∃e₁. run′(j)(e₁) ∧ to′(the′(bridge′))(τₛ(e₁)) ∧ Actual(e₁)
(7b) ∃e e₁ e₂ e₃. run′(j)(e₁) ∧ to′(the′(bridge′))(τₛ(e₁)) ∧ gr(e₁)(e) ∧ comp(e)(e₂) ∧ stop′(e₂)(e₃) ∧ Actual(e₃)

In (7), e₁ is an event of John running to the bridge.⁴ In (7a), this event is asserted to be actual; in (7b), in contrast, the progressive morphology on run triggers the introduction of gr, which maps e₁ to the process e.⁵ It is this process which e₃ is an event of stopping: following Jackendoff (1991), this is represented here by introducing an event e₂ composed of e which has e₃ as its stopping point. Naturally enough, we may expect the actuality of e₃ to entail the actuality of e₂, and thus of all subevents of e₂. Nevertheless, the actuality of e₁ does not follow, as Postulate (4) permits e₂ to be a proper subpart of e₁ (which is pragmatically the most likely case).

To make the semantics developed so far more concrete, we may now impose a particular interpretation on trajectory-of-motion events, namely one in which these are modeled as continuous functions from times to locations of the object in motion.
Depending on how we model objects and locations, we of course arrive at interpretations of varying complexity. In what follows we focus only on the simplest such interpretation, which takes both to be points.

Note that by assuming the preceding interpretation of trajectory-of-motion events, we may interpret the relation ⊑ as the relation continuous-subset. Furthermore, we may also interpret processes as sets of events closed under the ⊑ relation; this then permits comp to be interpreted as element-of, and gr (for events) as mapping an event to the smallest process containing it. Before continuing, we may observe that this interpretation does indeed satisfy Postulates (3) and (4).

Application

While the above interpretation of trajectory-of-motion events forces one to abstract away from

⁴ The spatial trace function τₛ maps eventualities to their trajectories (cf. White 1993).
⁵ Much as in Moens and Steedman (1988) and Jackendoff (1991), the introduction of gr is necessary to avoid having an ill-sorted formula.
A special-purpose procedure is then used to in- stantiate the described sequence of events as a con- straint optimization problem; note that although this procedure is quite ad-hoc, the constraints are represented in a declarative, hierarchical fashion (cf. White 1993). If the constraint optimiza- tion problem has a solution, it is found using a slightly modified version of the constraint satis- faction procedure built into SCaEAMER, Siskind and McAllester's (1993) portable, efficient version of nondeterministic Common Lisp. 6 As an example of an impossible description, let us consider the sequence of events described below: (8) Guy started jogging eastwards Mong the river. 25 minutes later he reached {the cafe / the museum}. If we assume that the user specifies the cafe and the museum to be 5 and 10 km, respectively, from the implicit starting point, and that the rates spec- ified for Guy are those of a serious but not super- human athlete, then the algorithm will only find a solution for the first case (10 km in 25 minutes is too much to expect.) Now, by reasoning about subevents -- here, subsegments of lines in space- time -- the program exhibits the same behavior with the pair in (9): (9) Guy started jogging to the bar. 25 minutes later he reached {the cafe / the museum}. Since "Guy jogging to the cafe is accepted as a possible proper subevent of Guy jogging to the 6The constraint optimization problem is split into two constraint satisfaction problems, namely find- ing the smallest consistent value of a cost variable and then finding consistent values for the rest of the variables. 
bar (assuming the bar is further east than the other landmarks), example (9) shows how the present approach successfully avoids the imperfec- tire paradox; since Guy jogging to the museum (in 25 minutes) is not accepted as a possible subevent, example (9) likewise shows how the present ap- proach extends and refines those of Moens and Steedman and 3ackendoff vis-a-vis the subevent relation.7 Future Work The algorithm as implemented functions only un- der a number of quite restrictive assumptions, and suffers from a rather ad-hoc use of the derived logi- cal forms. In future work I intend to extend the al- gorithm beyond the unidimensional and constant velocity cases considered so far, and to investigate incorporating the present treatment into the In- terpretation as Abduction approach advocated by Hobbs et. al. (1993). References [1] Emmon Bach. The algebra of events. Linguistics and Philosophy, 1986. [2] David R. Dowty. Word Meaning and Montague Gram- mar. Reidel, 1979. [3] Kurt Eberle. Eventualities in natural language under- standing systems. In Sorts and Types in Artificial Intel- ligence. Springer Verlag, 1990. [4] Christopher Habel. Propositional and depictorial rep- resentations of spatial knowledge: The case of path- concepts. In Natural Language and Logic. Springer Ver- lag, 1990. Lecture Notes in Artificial Intelligence. [5] Erhard Hinrichs. A Compositional Semantics for Ak- tionsarten and NP Reference in English. PhD thesis, The Ohio State University, 1985. [61 Jerry Hobbs, Mark Stickel, Douglas Appelt, and Paul Martin. Interpretation as abduction, 1993. To appear in Artificial Intelligence Journal. [7] Ray Jackendoff. Parts and boundaries. Cognition, 41:9- 45, 1991. [g] Manfred Krifka. Thematic relations as links between nom- inal reference and temporal constitution. In Ivan A, Sag and Anna Szabolesi, editors, Lexical Matters. CSLI, 1992. [9] Marc Moens and Mark Steedman. Temporal ontology and temporal reference. Computational Linguistics, June 1988. 
[10] Jeffrey Mark Siskind and David Allen McAllester. Non- deterministic lisp as a substrate for constraint logic pro- gramming. To appear in AAAI-93, 1993. [11] H. J. Verkuyl. Aspectual classes and aspectual composi- tion. Linguistics and Philosophy, 12(1), 1989. [12] Michael White. Delimitedness and trajectory-of-motion events. In Proceedings of the Sixth Conference of the European Chapter of the Association for Computational Linguistics (EACL '93), 1993. 7It is worth noting that the constant velocity re- strictive assumption makes start running to and start running towards synonymous, which is not the case in general (cf. Habel 1990). 285
1993
TEXT SEGMENTATION BASED ON SIMILARITY BETWEEN WORDS

Hideki Kozima
Course in Computer Science and Information Mathematics, Graduate School, University of Electro-Communications
1-5-1, Chofugaoka, Chofu, Tokyo 182, Japan
([email protected])

Abstract

This paper proposes a new indicator of text structure, called the lexical cohesion profile (LCP), which locates segment boundaries in a text. A text segment is a coherent scene; the words in a segment are linked together via lexical cohesion relations. LCP records mutual similarity of words in a sequence of text. The similarity of words, which represents their cohesiveness, is computed using a semantic network. Comparison with the text segments marked by a number of subjects shows that LCP closely correlates with the human judgments. LCP may provide valuable information for resolving anaphora and ellipsis.

INTRODUCTION

A text is not just a sequence of words, but it has coherent structure. The meaning of each word can not be determined until it is placed in the structure of the text. Recognizing the structure of text is an essential task in text understanding, especially in resolving anaphora and ellipsis.

One of the constituents of the text structure is a text segment. A text segment, whether or not it is explicitly marked, as are sentences and paragraphs, is defined as a sequence of clauses or sentences that display local coherence. It resembles a scene in a movie, which describes the same objects in the same situation.

This paper proposes an indicator, called the lexical cohesion profile (LCP), which locates segment boundaries in a narrative text. LCP is a record of lexical cohesiveness of words in a sequence of text. Lexical cohesiveness is defined as word similarity (Kozima and Furugori, 1993) computed by spreading activation on a semantic network. Hills and valleys of LCP closely correlate with changing of segments.
SEGMENTS AND COHERENCE

Several methods to capture segment boundaries have been proposed in the studies of text structure. For example, cue phrases play an important role in signaling segment changes (Grosz and Sidner, 1986). However, such clues are not directly based on the coherence which forms the clauses or sentences into a segment.

Youmans (1991) proposed VMP (vocabulary management profile) as an indicator of segment boundaries. VMP is a record of the number of new vocabulary terms introduced in an interval of text. However, VMP does not work well on a high-density text. The reason is that coherence of a segment should be determined not only by reiteration of words but also by lexical cohesion.

Morris and Hirst (1991) used Roget's thesaurus to determine whether or not two words have lexical cohesion. Their method can capture almost all the types of lexical cohesion, e.g. systematic and non-systematic semantic relations. However it does not deal with strength of cohesiveness, which suggests the degree of contribution to the coherence of the segment.

Computing Lexical Cohesion

Kozima and Furugori (1993) defined lexical cohesiveness as semantic similarity between words, and proposed a method for measuring it. Similarity between words is computed by spreading activation on a semantic network which is systematically constructed from an English dictionary (LDOCE).

The similarity σ(w, w′) ∈ [0, 1] between words w, w′ is computed in the following way: (1) produce an activated pattern by activating the node w; (2) observe the activity of the node w′ in the activated pattern. The following examples suggest the feature of the similarity σ:

σ(cat, pet) = 0.133722,
σ(cat, hat) = 0.001784,
σ(waiter, restaurant) = 0.175699,
σ(painter, restaurant) = 0.006260.

The similarity σ depends on the significance s(w) ∈ [0, 1], i.e. the normalized information of the word w in West's corpus (1953). For example:

s(red) = 0.500955,
s(and) = 0.254294.
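A toy sketch of the similarity computation just described: activate a node, let activation spread for a fixed number of steps, and read off the activity of the second node. The miniature network and its link weights below are invented for illustration; the real system spreads activation over a network constructed from LDOCE.

```python
# Hypothetical mini-network: node -> {neighbour: link strength}.
NET = {
    "cat": {"pet": 0.5, "animal": 0.5},
    "pet": {"cat": 0.5, "animal": 0.5},
    "animal": {"cat": 0.5, "pet": 0.5},
    "hat": {"clothes": 1.0},
    "clothes": {"hat": 1.0},
}

def activated_pattern(seed, steps=10, keep=0.5):
    """Spread activation from `seed`; return node -> activity."""
    act = {n: (1.0 if n == seed else 0.0) for n in NET}
    for _ in range(steps):
        new = {n: a * keep for n, a in act.items()}  # retained activity
        for n, a in act.items():                     # spread the rest to neighbours
            out = sum(NET[n].values())
            for m, w in NET[n].items():
                new[m] += a * (1 - keep) * w / out
        act = new
    return act

def sigma(w1, w2):
    """Similarity = activity of w2 in the pattern produced from w1."""
    return activated_pattern(w1)[w2]

print(sigma("cat", "pet") > sigma("cat", "hat"))  # related words score higher
```

Because cat and hat lie in unconnected parts of this toy network, σ(cat, hat) stays at zero, while σ(cat, pet) is positive, reproducing the qualitative ordering of the figures above.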
[Figure 1 plots node activity (roughly 0.1-0.4) against spreading steps (2-10) for nodes including alcohol, drink_1, drink_2, red_1, bottle_1, wine_1, poison_1, swallow_1, and spirit_1.]
Figure 1. An activated pattern of a word list (produced from {red, alcoholic, drink}).

The following examples show the relationship between the word significance and the similarity:

σ(waiter, waiter) = 0.596803,
σ(red, blood) = 0.111443,
σ(of, blood) = 0.001041.

LEXICAL COHESION PROFILE

LCP of the text T = {w1, ..., wN} is a sequence {c(S1), ..., c(SN)} of lexical cohesiveness c(Si). Si is the word list which can be seen through a fixed-width window centered on the i-th word of T:

Si = {wl, wl+1, ..., wi-1, wi, wi+1, ..., wr-1, wr},
l = i - Δ (if i ≤ Δ, then l = 1),
r = i + Δ (if i > N - Δ, then r = N).

LCP treats the text T as a word list without any punctuation or paragraph boundaries.

Cohesiveness of a Word List

Lexical cohesiveness c(Si) of the word list Si is defined as follows:

c(Si) = Σ w∈Si a(P(Si), w),

where a(P(Si), w) is the activity value of the node w in the activated pattern P(Si). P(Si) is produced by activating each node w ∈ Si with strength s(w)²/Σ s(w). Figure 1 shows a sample pattern of {red, alcoholic, drink}. (Note that it has highly activated nodes like bottle and wine.)

The definition of c(Si) above expresses that c(Si) represents the semantic homogeneity of Si, since P(Si) represents the average meaning of w ∈ Si. For example:

c("Molly saw a cat. It was her family pet. She wished to keep a lion.") = 0.403239 (cohesive),
c("There is no one but me. Put on your clothes. I can not walk more.") = 0.235462 (not cohesive).

[Figure 2 schematically aligns an LCP curve with the underlying sequence of words and their segments.]
Figure 2. Correlation between LCP and text segments.

[Figure 3 plots c(Si) (roughly 0.3-0.6) against text position i over about 400 words.]
Figure 3. An example of LCP (using a rectangular window of Δ = 25).

LCP and Its Feature

A graph of LCP, which plots c(Si) at the text position i, indicates changing of segments:

• If Si is inside a segment, it tends to be cohesive and makes c(Si) high.
• If Si is crossing a segment boundary, it tends to semantically vary and makes c(Si) low.

As shown in Figure 2, the segment boundaries can be detected by the valleys (minimum points) of LCP. The LCP, shown in Figure 3, has large hills and valleys, and also meaningless noise. The graph is so complicated that one can not easily determine which valley should be considered as a segment boundary.

The shape of the window, which defines the weight of words in it for pattern production, makes LCP smooth. Experiments on several window shapes (e.g. triangle window, etc.) show that the Hanning window is best for clarifying the macroscopic features of LCP.

The width of the window also has an effect on the macroscopic features of LCP, especially on the separability of segments. Experiments on several window widths (Δ = 5 to 60) reveal that the Hanning window of Δ = 25 gives the best correlation between LCP and segments.

[Figure 4 plots the LCP (roughly 0.3-0.7) against text position i (0-700 words), together with a histogram of the subjects' segmentations.]
Figure 4. Correlation between LCP and segment boundaries.

VERIFICATION OF LCP

This section inspects the correlation between LCP and segment boundaries perceived by the human judgments. The curve of Figure 4 shows the LCP of the simplified version of O. Henry's "Springtime à la Carte" (Thornley, 1960). The solid bars represent the histogram of segment boundaries reported by 16 subjects who read the text without paragraph structure.

It is clear that the valleys of the LCP correspond mostly to the dominant segment boundaries. For example, the clear valley at i = 110 exactly corresponds to the dominant segment boundary (and also to the paragraph boundary shown as a dotted line).
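The sliding-window and valley-finding machinery described above can be sketched as follows. The cohesion scorer is a placeholder argument (the real c(Si) comes from spreading activation), so only the Hanning-weighted window and the local-minimum detection are shown; the names and the stand-in cohesion function are illustrative.

```python
import math

def hanning(width):
    # Hanning window weights: largest at the centre, tapering to 0 at the edges.
    return [0.5 - 0.5 * math.cos(2 * math.pi * k / (width - 1))
            for k in range(width)]

def lcp(tokens, cohesion, delta=25):
    """Profile of cohesion scores; cohesion(words, weights) stands in
    for the spreading-activation c(Si) of the paper."""
    win = hanning(2 * delta + 1)
    profile = []
    for i in range(len(tokens)):
        l, r = max(0, i - delta), min(len(tokens) - 1, i + delta)
        words = tokens[l:r + 1]
        weights = [win[j - (i - delta)] for j in range(l, r + 1)]
        profile.append(cohesion(words, weights))
    return profile

def valleys(profile):
    # Local minima of the profile = candidate segment boundaries.
    return [i for i in range(1, len(profile) - 1)
            if profile[i - 1] > profile[i] < profile[i + 1]]

# Demo stand-in for c(Si): weighted share of the window's most frequent word.
def share_cohesion(words, weights):
    best = max(set(words), key=words.count)
    total = sum(weights) or 1.0
    return sum(w for t, w in zip(words, weights) if t == best) / total

profile = lcp(["a"] * 30 + ["b"] * 30, share_cohesion, delta=5)
```

With this stand-in scorer the profile dips where the vocabulary changes (around position 30 in the demo), and valleys() returns such dips as boundary candidates, mirroring the valley-based segmentation described above.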
Note that LCP can detect segment changing of a text regardless of its paragraph structure. For example, i = 156 is a paragraph boundary, but neither a valley of the LCP nor a segment boundary; i = 236 is both a segment boundary and approximately a valley of the LCP, but not a paragraph boundary.

However, some valleys of the LCP do not exactly correspond to segment boundaries. For example, the valley near i = 450 disagrees with the segment boundary at i = 465. The reason is that lexical cohesion can not cover every aspect of the coherence of a segment; an incoherent piece of text can be lexically cohesive.

CONCLUSION

This paper proposed LCP, an indicator of segment changing, which concentrates on lexical cohesion of a text segment. The experiment proved that LCP closely correlates with the segment boundaries captured by the human judgments, and that lexical cohesion plays the main role in forming a sequence of words into segments.

Text segmentation described here provides basic information for text understanding:

• Resolving anaphora and ellipsis: Segment boundaries provide valuable restriction for the determination of the referents.
• Analyzing text structure: Segment boundaries can be considered as segment switching (push and pop) in the hierarchical structure of text.

The segmentation can be applied also to text summarizing. (Consider a list of the average meanings of segments.)

In future research, the author needs to examine the validity of LCP for other genres -- Hearst (1993) segments expository texts. Incorporating other clues (e.g. cue phrases, tense and aspect, etc.) is also needed to make this segmentation method more robust.

ACKNOWLEDGMENTS

The author is very grateful to Dr. Teiji Furugori, University of Electro-Communications, for his insightful suggestions and comments on this work.

REFERENCES

Grosz, Barbara J., and Sidner, Candace L. (1986). "Attention, intentions, and the structure of discourse." Computational Linguistics, 12, 175-204.
Halliday, Michael A. K., and Hasan, Ruqaiya (1976). Cohesion in English. Longman.
Hearst, Marti, and Plaunt, Christian (1993). "Subtopic structuring for full-length document access," to appear in SIGIR 1993, Pittsburgh, PA.
Kozima, Hideki, and Furugori, Teiji (1993). "Similarity between words computed by spreading activation on an English dictionary." To appear in Proceedings of EACL-93.
Morris, Jane, and Hirst, Graeme (1991). "Lexical cohesion computed by thesaural relations as an indicator of the structure of text." Computational Linguistics, 17, 21-48.
Thornley, G. C., editor (1960). British and American Short Stories (Longman Simplified English Series). Longman.
West, Michael (1953). A General Service List of English Words. Longman.
Youmans, Gilbert (1991). "A new tool for discourse analysis: The vocabulary-management profile." Language, 67, 763-789.
HOW DO WE COUNT? THE PROBLEM OF TAGGING PHRASAL VERBS IN PARTS

Nava A. Shaked
The Graduate School and University Center
The City University of New York
33 West 42nd Street, New York, NY 10036
[email protected]

ABSTRACT

This paper examines the current performance of the stochastic tagger PARTS (Church 88) in handling phrasal verbs, describes a problem that arises from the statistical model used, and suggests a way to improve the tagger's performance. The solution involves a change in the definition of what counts as a word for the purpose of tagging phrasal verbs.

1. INTRODUCTION

Statistical taggers are commonly used to preprocess natural language. Operations like parsing, information retrieval, machine translation, and so on, are facilitated by having as input a text tagged with a part of speech label for each lexical item. In order to be useful, a tagger must be accurate as well as efficient. The claim among researchers advocating the use of statistics for NLP (e.g. Marcus et al. 92) is that taggers are routinely correct about 95% of the time. The 5% error rate is not perceived as a problem mainly because human taggers disagree or make mistakes at approximately the same rate. On the other hand, even a 5% error rate can cause a much higher rate of mistakes later in processing if the mistake falls on a key element that is crucial to the correct analysis of the whole sentence. One example is the phrasal verb construction (e.g. gun down, back off). An error in tagging this two element sequence will cause the analysis of the entire sentence to be faulty. An analysis of the errors made by the stochastic tagger PARTS (Church 88) reveals that phrasal verbs do indeed constitute a problem for the model.

2. PHRASAL VERBS

The basic assumption underlying the stochastic process is the notion of independence. Words are defined as units separated by spaces and then undergo statistical approximations.
As a result the elements of a phrasal verb are treated as two individual words, each with its own lexical probability (i.e. the probability of observing part of speech i given word j). An interesting pattern emerges when we examine the errors involving phrasal verbs. A phrasal verb such as sum up will be tagged by PARTS as noun + preposition instead of verb + particle. This error influences the tagging of other words in the sentence as well. One typical error is found in infinitive constructions, where a phrase like to gun down is tagged as IN NOUN IN (a prepositional 'to' followed by a noun followed by another preposition). Words like gun, back, and sum, in isolation, have a very high probability of being nouns as opposed to verbs, which results in the misclassification described above. However, when these words are followed by a particle, they are usually verbs, and in the infinitive construction, always verbs.

2.1. THE HYPOTHESIS

The error appears to follow from the operation of the stochastic process itself. In a trigram model the probability of each word is calculated by taking into consideration two elements: the lexical probability (probability of the word bearing a certain tag) and the contextual probability (probability of a word bearing a certain tag given two previous parts of speech). As a result, if an element has a very high lexical probability of being a noun (gun is a noun in 99 out of 102 occurrences in the Brown Corpus), it will not only influence but will actually override the contextual probability, which might suggest a different assignment. In the case of to gun down the ambiguity of to is enhanced by the ambiguity of gun, and a mistake in tagging gun will automatically lead to an incorrect tagging of to as a preposition.
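The override effect can be made concrete with a two-term score. The numbers below are illustrative (only the 99-of-102 count for gun comes from the text; the contextual figures are invented), but they show how a lopsided lexical probability swamps a contextual preference for the verb reading after infinitival to.

```python
# P(tag | word): gun is a noun in 99 of 102 Brown Corpus occurrences (from the text).
lexical = {("gun", "NN"): 99 / 102, ("gun", "VB"): 3 / 102}

# P(tag | previous tag): assumed values favouring a verb after infinitival "to".
contextual = {("NN", "TO"): 0.10, ("VB", "TO"): 0.45}

def score(word, tag, prev_tag):
    # Trigram-style score: lexical probability times contextual probability.
    return lexical[(word, tag)] * contextual[(tag, prev_tag)]

best = max(["NN", "VB"], key=lambda t: score("gun", t, "TO"))
print(best)  # NN -- the noun reading wins despite the verbal context
```

Here 0.97 × 0.10 still beats 0.03 × 0.45, so the noun tag wins even though the context prefers a verb, which is exactly the failure mode described above.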
It follows that the tagger should perform poorly on phrasal verbs in those cases where the ambiguous element occurs much more frequently as a noun (or any other element that is not a verb). The tagger will experience fewer problems handling this construction when the ambiguous element is a verb in the vast majority of instances. If this is true, the model should be changed to take into consideration the dependency between the verb and the particle in order to optimize the performance of the tagger.

3. THE EXPERIMENT

3.1. DATA

The first step in testing this hypothesis was to evaluate the current performance of PARTS in handling the phrasal verb construction. To do this a set of 94 pairs of Verb + Particle/Preposition was chosen to represent a range of dominant frequencies from overwhelmingly noun to overwhelmingly verb. 20 example sentences were randomly selected for each pair using an on-line corpus called MODERN, which is a collection of several corpora (Brown, WSJ, AP88-92, HANSE, HAROW, WAVER, DOE, NSF, TREEBANK, and DISS) totaling more than 400 million words. These sentences were first tagged manually to provide a baseline and then tagged automatically using PARTS. The a priori option of assuming only a verbal tag for all the pairs in question was also explored in order to test if this simple solution will be appropriate in all cases. The accuracy of the 3 tagging approaches was evaluated.

3.2. RESULTS

Table 2 presents a sample of the pairs examined in the first column, PARTS performance for each pair in the second, and the results of assuming a verbal tag in the third. (The "choice" column is explained below.) The average performance of PARTS for this task is 89%, which is lower than the general average performance of the tagger as claimed in Church 88. Yet we notice that simply assigning a verbal tag to all pairs actually degrades performance because in some cases the content word is almost always a noun rather than a verb. For example, a phrasal verb like box in generally appears with an intervening object (to box something in), and thus when box and in are adjacent (except for those rare cases involving heavy NP shift) box is a noun. Thus we see that there is a need to distinguish between the cases where the two element sequence should be considered as one word for the purpose of assigning the Lexical Probability (i.e., phrasal verb) and cases where we have a Noun + Preposition combination where PARTS' analyses will be preferred.

PHRASAL VERB   IMP.   VERB     FREQ. DIST. (BROWN)
date-from      1.00   date     NN/98 VB/6
flesh-out      0.59   flesh    NN/53 VB/1
bottle-up      0.55   bottle   NN/77 VB/1
hand-over      0.35   hand     NN/411 VB/8
narrow-down    0.23   narrow   JJ/61 NN/1 VB/1
close-down     0.22   close    JJ/81 NN/16 QL/1 RB/95 VB/40
drive-off      0.22   drive    NN/49 VB/46
cry-out        0.21   cry      NN/31 VB/19
average-out    0.20   average  JJ/64 NN/60 VB/6
head-for       0.18   head     JJ/4 NN/404 VB/14
end-up         0.16   end      NN/359 VB/41
account-for    0.15   account  NN/89 VB/28
double-up      0.14   double   JJ/37 NN/11 RB/4 VB/6
back-off       0.13   back     JJ/27 NN/177 RB/720 VB/26
cool-off       0.13   cool     JJ/49 NN/3 RB/1 VB/5
clear-out      0.12   clear    JJ/197 NN/1 RB/10 VB/15
calm-down      0.10   calm     JJ/22 NN/8 VB/7

Table 1: 10% or more improvement for elements of non-verbal frequency.

The "choice" in Table 2 shows that allowing a choice between PARTS' analysis and one verbal tag to the phrase, by taking the higher performance score, improves the performance of PARTS from 89% to 96% for this task, and reduces the errors in other constructions involving phrasal verbs. When is this alternative needed? In the cases where PARTS had 10% or more errors, most of the verbs occur much more often as nouns or adjectives. This confirms my hypothesis that PARTS will have a problem solving the N/V ambiguity in cases where the lexical probability of the word points to a noun. These are the very cases that should be treated as one unit in the system. The lexical probability should be assigned to the pair as a whole rather than considering the two elements separately. Table 1 lists the cases where tagging improves 10% or more when PARTS is given the additional choice of assigning a verbal tag to the whole expression. Frequency distributions of these tokens in the Brown Corpus are presented as well, which reflect why statistical probabilities err in these cases. In order to tag these expressions correctly, we will have to capture additional information about the pair which is not available from the PARTS statistical model.

PAIR           PARTS  ALL VERB  CHOICE
account-for    0.84   1         1
aim-at         0.90   0.30      0.90
average-out    0.7    0.9       0.9
back-off       0.86   1         1
balance-out    0.92   0.84      0.92
bargain-for    0.87   0.58      0.87
block-in       0.97   0.02      0.97
book-in        1      0         1
bottle-up      0.36   0.90      0.90
bottom-out     0.8    0.85      0.85
box-in         1      0.02      1
break-away     1      1         1
call-back      0.96   0.84      0.96
calm-down      0.85   0.95      0.95
care-for       0.9    0.48      0.93
cash-in        0.95   0.25      0.95
change-into    0.85   0.89      0.89
check-in       0.96   0.48      0.96
clear-out      0.87   1         1
close-down     0.77   1         1
contract-in    1      0.02      1
cool-off       0.86   1         1
credit-with    1      0         1
cry-out        0.79   1         1
date-from      0      1         1
deal-with      0.96   0.92      0.96
demand-of      1      0.04      1
double-up      0.80   0.95      0.95
end-up         0.83   1         1
fall-in        0.92   0.29      0.92
feel-for       0.93   0.33      0.93
flesh-out      0.41   1         1
flow-from      0.94   0.42      0.94
fool-around    0.91   1         1
force-upon     0.84   0.61      0.84
gun-down       0.60   0.62      0.62
hand-over      0.65   1         1
head-for       0.63   0.81      0.81
heat-up        0.94   1         1
hold-down      0.92   1         1
lead-on        1      0.07      1
let-down       0.57   0.57      0.57
live-for       0.91   1         1
move-in        0.96   0.60      0.96
narrow-down    0.77   1         1
part-with      0.79   0.43      0.79
phone-in       0.91   0.12      0.91
TOTAL AVERAGE  0.89   0.79      0.96

Table 2: A Sample of Performance Evaluation

4.
CONCLUSION: LINGUISTIC INTUITIONS

This paper shows that for some cases of phrasal verbs it is not enough to rely on lexical probability alone: we must take into consideration the dependency between the verb and the particle in order to improve the performance of the tagger. The relationship between verbs and particles is deeply rooted in linguistics. Smith (1943) introduced the term phrasal verb, arguing that it should be regarded as a type of idiom because the elements behave as a unit. He claimed that phrasal verbs express a single concept that often has a one word counterpart in other languages, yet does not always have compositional meaning. Some particles are syntactically more adverbial in nature and some more prepositional, but it is generally agreed that the phrasal verb constitutes a kind of integral functional unit. Perhaps linguistic knowledge can help solve the tagging problem described here and force a redefinition of the boundaries of phrasal verbs. For now we can redefine the word boundaries for the problematic cases that PARTS doesn't handle well. Future research should concentrate on the linguistic characteristics of this problematic construction to determine if there are other cases where the current assumption that one word equals one unit interferes with successful processing.

5. ACKNOWLEDGEMENT

I wish to thank my committee members Virginia Teller, Judith Klavans and John Moyne for their helpful comments and support. I am also indebted to Ken Church and Don Hindle for their guidance and help all along.

6. REFERENCES

K. W. Church. A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text. Proc. Conf. on Applied Natural Language Processing, 136-143, 1988.
K. W. Church & R. Mercer. Introduction to the Special Issue on Computational Linguistics Using Large Corpora. To appear in Computational Linguistics, 1993.
C. le Roux. On The Interface of Morphology & Syntax: Evidence from Verb-Particle Combinations in Afrikaans.
SPIL 18, November 1988. MA Thesis.
M. Marcus, B. Santorini & D. Magerman. First steps towards an annotated database of American English. Dept. of Computer and Information Science, University of Pennsylvania, 1992. MS.
L. P. Smith. Words & Idioms: Studies in The English Language. 5th ed. London, 1943.
RAISINS, SULTANAS, AND CURRANTS: LEXICAL CLASSIFICATION AND ABSTRACTION VIA CONTEXT PRIMING

David J. Hutches
Department of Computer Science and Engineering, Mail Code 0114
University of California, San Diego
La Jolla, CA 92093-0114
[email protected]

Abstract

In this paper we discuss the results of experiments which use a context, essentially an ordered set of lexical items, as the seed from which to build a network representing statistically important relationships among lexical items in some corpus. A metric is then applied to the nodes in the network in order to discover those pairs of items related by high indices of similarity. The goal of this research is to instantiate a class of items corresponding to each item in the priming context. We believe that this instantiation process is ultimately a special case of abstraction over the entire network; in this abstraction, similar nodes are collapsed into meta-nodes which may then function as if they were single lexical items.

I. Motivation and Background

With respect to the processing of language, one of the tasks at which human beings seem relatively adept is the ability to determine when it is appropriate to make generalizations and when it is appropriate to preserve distinctions. The process of abstraction and knowing when it might reasonably be used is a necessary tool in reducing the complexity of the task of processing natural language. Part of our current research is an investigation into how the process of abstraction might be realized using relatively low-level statistical information extracted from large textual corpora.

Our experiments are an attempt to discover a method by which class information about the members of some sequence of lexical items may be obtained using strictly statistical methods. For our purposes, the class to which a lexical item belongs is defined by its instantiation.
Given some context such as he walked across the room, we would like to be able to instantiate classes of items corresponding to each item in the context (e.g., the class associated with walked might include items such as paced, stepped, or sauntered).

The corpora used in our experiments are the Lancaster-Oslo-Bergen (LOB) corpus and a subset of the ACL/DCI Wall Street Journal (WSJ) corpus. The LOB corpus consists of a total of 1,008,035 words, composed of 49,174 unique words. The subset of the WSJ corpus that we use has been pre-processed such that all letters are folded to lower case, and numbers have been collapsed to a single token; the subset consists of 18,188,548 total words and 159,713 unique words.

II. Context Priming

It is not an uncommon notion that a word may be defined not rigorously as by the assignment of static syntactic and semantic classes, but dynamically as a function of its usage (Firth 1957, 11). Such usage may be derived from co-occurrence information over the course of a large body of text. For each unique lexical item in a corpus, there exists an "association neighbourhood" in which that item lives; such a neighbourhood is the probability distribution of the words with which the item has co-occurred. If one posits that similar lexical items will have similar neighbourhoods, one possible method of instantiating a class of lexical items would be to examine all unique items in a corpus and find those whose neighbourhoods are most similar to the neighbourhood of the item whose class is being instantiated. However, the potential computational problems of such an approach are clear. In the context of our approach to this problem, most lexical items in the search space are not even remotely similar to the item for which a class is being instantiated. Furthermore, a substantial part of a lexical item's association neighbourhood provides only superficial information about that item.

What is required is a process whereby the search space is reduced dramatically. One method of accomplishing this pruning is via context priming. In context priming, we view a context as the seed upon which to build a network describing that part of the corpus which is, in some sense, close to the context. Thus, just as an individual lexical item has associated with it a unique neighbourhood, so too does a context have such a neighbourhood. The basic process of building a network is straightforward. Each item in the priming context has associated with it a unique neighbourhood defined in terms of those lexical items with which it has co-occurred. Similarly, each of these latter items also has a unique association neighbourhood. Generating a network based on some context consists in simply expanding nodes (lexical items) further and further away from the context until some threshold, called the depth of the network, is reached.

Just as we prune the total set of unique lexical items by context priming, we also prune the neighbourhood of each node in the network by using a statistical metric which provides some indication of how important the relationship is between each lexical item and the items in its neighbourhood. In the results we describe here, we use mutual information (Fano 1961, 27-28; Church and Hanks 1990) as the metric for neighbourhood pruning, pruning which occurs as the network is being generated. Yet another parameter controlling the topology of the network is the extent of the "window" which defines the neighbourhood of a lexical item (e.g., does the neighbourhood of a lexical item consist of only those items which have co-occurred at a distance of up to 3, 5, 10, or 1000 words from the item).

III. Operations on the Network

The network primed by a context consists merely of those lexical items which are closely reachable via co-occurrence from the priming context.
Nodes in the network are lexical items; arcs represent co-occurrence relations and carry the value of the statistical metric mentioned above and the distance of co-occurrence. With such a network we attempt to approximate the statisti- cally relevant neighbourhood in which a particular context might be found. In the tests performed on the network thus far we use the similarity metric S(x, y) - IA n BI 2 IA u BI where x and y are two nodes representing lexical items, the neighbourhoods of which are expressed as the sets of arcs A and B respectively. The met- ric S is thus defined in terms of the cardinalities of sets of arcs. Two arcs are said to be equal if they reference (point to) the same lexical item at the same offset distance. Our metric is a modification of the Tanimoto coefficient (Bensch and Savitch 1992); the numerator is squared in order to assign a higher index of similarity to those nodes which have a higher percentage of arcs in common. Our first set of tests concentrated directly on items in the seed context. Using the metric above, we attempted to instantiate classes of lexical items for each item in the context. In those cases where there were matches, the results were often encour- aging. For example, in the LOB corpus, using the seed context John walked across the room, a net- work depth of 6, a mutual information threshold of 6.0 for neighbourhood pruning, and a window of 5, for the item John, we instantiated the class {Edward, David, Charles, Thomas}. A similar test on the WSJ corpus yielded the following class for john r ichard,paul,thomas,edward,david, donald,daniel,f rank,michael,dennis, j oseph,j im,alan,dan,roger Recall that the subset of the WSJ corpus we use has had all items folded to lower case as part of the pre-processing phase, thus all items in an in- stantiated class will also be folded to lower case. 
In other tests, the instantiated classes were less satisfying, such as the following class generated for wife using the parameters above, the LOB, and the context his wife walked across the room:

{mouth, father, uncle, lordship, fingers, mother, husband, father's, shoulder, mother's, brother}

In still other cases, a class could not be instantiated at all, typically for items whose neighbourhoods were too small to provide meaningful matching information.

IV. Abstraction

It is clear that even the most perfectly derived lexical classes will have members in common. The different senses of bank are often given as the classic example of a lexically ambiguous word. From our own data, we observed this problem because of our preprocessing of the WSJ corpus; the instantiation of the class associated with mark included some proper names, but also included items such as marks, currencies, yen, and dollar, a confounding of class information that would not have occurred had not case folding taken place. Ideally, it would be useful if a context could be made to exert a more constraining influence during the course of instantiating classes. For example, if it is reasonably clear from a context, such as mark loves mary, that the "mark" in question is the human rather than the financial variety, how may we ensure that the context provides the proper constraining information if loves has never co-occurred with mark in the original corpus? In the case of the ambiguous mark above, while this item does not appear in the neighbourhood of loves, other lexical items do (e.g., everyone, who, him, mr), items which may be members of a class associated with mark. What is proposed, then, is to construct incrementally classes of items over the network, such that these classes may then function as a single item for the purpose of deriving indices of similarity.
In this way, we would not be looking for a specific match between mark and loves, but rather a match among items in the same class as mark, items in the same class as loves, and items in the same class as mary. With this in mind, our second set of experiments concentrated not specifically on items in the priming context, but on the entire network, searching for candidate items to be collapsed into meta-nodes representing classes of items. Our initial experiments in the generation of pairs of items which could be collapsed into meta-nodes were more successful than the tests based on items in the priming context. Using the LOB corpus, the same parameters as before, and the priming context John walked across the room, the following set of pairs represents some of the good matches over the generated network:

(minutes, days), (three, five), (few, five), (2, 3), (fig, table), (days, years), (40, 50), (me, him), (three, few), (4, 5), (50, 100), (currants, sultanas), (sultanas, raisins), (currants, raisins), ...

Using the WSJ corpus, again the same parameters, and the context john walked across the room, part of the set of good matches generated was:

(months, weeks), (rose, fell), (days, weeks), (single-a-plus, triple-b-plus), (single-a-minus, triple-b-plus), (lawsuit, complaint), (analyst, economist), (john, robert), (next, past), (six, five), (lower, higher), (goodyear, firestone), (profit, loss), (billion, million), (june, march), (concedes, acknowledges), (days, weeks), (months, years), ...

It should be noted that the sets given above represent the best good matches. Empirically, we found that a value of S > 1.0 tends to produce the most meaningful pairings. At S < 1.0, the amount of "noisy" pairings increases dramatically. This is not an absolute threshold, however, as apparently unacceptable pairings do occur at S > 1.0, such as, for example, the pairs (catching, teamed), (accumulating, rebuffed), and (father, mind).

V.
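The search for collapsible pairs can be sketched as an exhaustive pairwise comparison over the primed network, keeping only pairs above the empirically observed S > 1.0 threshold. The neighbourhoods below are invented toy data.

```python
from itertools import combinations

def s_metric(a, b):
    # S(x, y) = |A intersect B|**2 / |A union B| over arc sets (Section III).
    union = a | b
    return len(a & b) ** 2 / len(union) if union else 0.0

def meta_node_candidates(network, threshold=1.0):
    """Propose node pairs whose similarity exceeds the threshold as
    candidates to collapse into a meta-node, best matches first."""
    pairs = [(x, y, s_metric(network[x], network[y]))
             for x, y in combinations(sorted(network), 2)]
    good = [p for p in pairs if p[2] > threshold]
    return sorted(good, key=lambda p: -p[2])

# Invented toy neighbourhoods, each a set of (target, offset) arcs.
net = {
    "months": {("six", 1), ("ago", 1), ("past", 2), ("few", 2)},
    "weeks":  {("six", 1), ("ago", 1), ("few", 2)},
    "rose":   {("prices", -1), ("sharply", 1)},
}
candidates = meta_node_candidates(net, threshold=1.0)
```

Here (months, weeks) share 3 of 4 arcs, giving S = 9/4 = 2.25 and clearing the threshold, while the disjoint neighbourhood of rose pairs with nothing.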
Future Research

The results of our initial experiments in generating classes of lexical items are encouraging, though not conclusive. We believe that by incrementally collapsing pairs of very similar items into meta-nodes, we may accomplish a kind of abstraction over the network which will ultimately allow the more accurate instantiation of classes for the priming context. The notion of incrementally merging classes of lexical items is intuitively satisfying and is explored in detail in (Brown, et al. 1992). The approach taken in the cited work is somewhat different from ours, and while our method is no less computationally complex than that of Brown, et al., we believe that it is somewhat more manageable because of the pruning effect provided by context priming. On the other hand, unlike the work described by Brown, et al., we as yet have no clear criterion for stopping the merging process, save an arbitrary threshold. Finally, it should be noted that our goal is not, strictly speaking, to generate classes over an entire vocabulary, but only that portion of the vocabulary relevant for a particular context. It is hoped that, by priming with a context, we may be able to effect some manner of word sense disambiguation in those cases where the meaning of a potentially ambiguous item may be resolved by hints in the context.

VI. References

Bensch, Peter A. and Walter J. Savitch. 1992. "An Occurrence-Based Model of Word Categorization". Third Meeting on Mathematics of Language. Austin, Texas: Association for Computational Linguistics, Special Interest Group on the Mathematics of Language.

Brown, Peter F., et al. 1992. "Class-Based n-gram Models of Natural Language". Computational Linguistics 18.4: 467-479.

Church, Kenneth Ward, and Patrick Hanks. 1990. "Word Association Norms, Mutual Information, and Lexicography". Computational Linguistics 16.1: 22-29.

Fano, Robert M. 1961. Transmission of Information: A Statistical Theory of Communications.
New York: MIT Press.

Firth, J[ohn] R[upert]. 1957. "A Synopsis of Linguistic Theory, 1930-55." Studies in Linguistic Analysis. Philological Society, London. Oxford, England: Basil Blackwell. 1-32.
1993
43
Guiding an HPSG Parser using Semantic and Pragmatic Expectations

Jim Skon
Computer and Information Science Department
The Ohio State University
Columbus, OH 43210, USA
Internet: [email protected]

Abstract 1

Efficient natural language generation has been successfully demonstrated using highly compiled knowledge about speech acts and their related social actions. A design and prototype implementation of a parser which utilizes this same pragmatic knowledge to efficiently guide parsing is presented. Such guidance is shown to prune the search space and thus avoid needless processing of pragmatically unlikely constituent structures.

INTRODUCTION

The use of purely syntactic knowledge during the parse phase of natural language understanding yields considerable local ambiguity (consideration of impossible subconstituents) as well as global ambiguity (construction of syntactically valid parses not applicable to the socio-pragmatic context). This research investigates bringing socio-pragmatic knowledge to bear during the parse, while maintaining a domain-independent grammar and parser. The particular technique explored uses knowledge about the pragmatic context to order the consideration of proposed parse constituents, thus guiding the parser to consider the best (wrt the expectations) solutions first. Such a search may be classified as a best-first search. The theoretical models used to represent the pragmatic knowledge in this study are based on Halliday's Systemic Grammar and a model of the pragmatics of conversation. The model used to represent the syntax and domain-independent semantic knowledge is HPSG - Head-driven Phrase Structure Grammar.

BACKGROUND

Patten, Geis and Becker (1992) demonstrate the application of knowledge compilation to achieve the rapid generation of natural language. Their mechanism is based on Halliday's systemic networks, and on Geis' theory of the pragmatics of conversation.
A model of conversation using principled compilation of pragmatic knowledge and other linguistic knowledge is used to permit the application of pragmatic inference without expensive computation. A pragmatic component is used to model social action, including speech acts, and utilize conventions of usage involving such features of context as politeness, register, and stylistic features. These politeness features are critical to the account of indirect speech acts. This pragmatic knowledge is compiled into coarse-grained knowledge in the form of a classification hierarchy. A planner component uses knowledge about conditions which need to be satisfied (discourse goals) to produce a set of pragmatic features which characterize a desired utterance. These features are mapped into the systemic grammar (using compiled knowledge) which is then used to realize the actual utterance.

The syntactic/semantic component used in this study is a parser based on the HPSG (Head-Driven Phrase Structure Grammar) theory of grammar (Pollard and Sag, 1992). HPSG models all linguistic constituents in terms of partial information structures called feature structures. Linguistic signs incorporate simultaneous representation of phonological, syntactic, and semantic attributes of grammatical constituents. HPSG is a lexicalized theory, with the lexical definitions, rather than phrase structure rules, specifying most configurational constraints. Control (such as subcategorization, for example) is asserted by the use of HPSG constraints - partially filled-in feature structures called feature descriptions, which constrain possible HPSG feature structures by asserting specific attributes and/or labels. An HPSG-based chart parser, under development at the author's university, was used for the implementation part of this study.

1 Research funded by The Ohio State Center for Cognitive Science and The Ohio State Departments of Computer and Information Science and Linguistics.
FEATURE MAPPING

Planning and generation of coherent "speech" in a conversation requires some understanding of the "hearer's" perspective. Thus the speaker naturally has some (limited) knowledge about possible responses from the hearer. This knowledge can be given to the same planner used for generation, producing a partial set of pragmatic features or expectations. These pragmatic expectations can then be mapped into the systemic grammar, producing a set of semantic and syntactic expectations about what other participants in the conversation will say. The technique explored here is to bring such expectations to bear during the parse process, guiding the parser to the most likely solution in a best-first manner. It is thus necessary that the generated expectations be mapped into a form which can be directly compared with constituents proposed within the HPSG parse. Consider the sentence "Robin promised to come at noon", with the following context:

Sandy: "I guess we should get started, what time did they say they would be here?"
Kim: "Robin promised to come at noon"

A set of plausible partial expectations generated by the pragmatic and systemic components in anticipation of Kim's response might be:

((S) (UNMARKED-DECLARATIVE))
((S SUBJECT) (PROPER))
((S BETA) (NONFINITEPRED))
((S PREDICATOR) (PROMISED))
((S BETA TEMPORAL) (PP))
((S BETA PREDICATOR) (ARRIVAL))

In these expectations the first list of each pair (e.g., (S BETA)) represents a functional role within the expected sentence. The second list in each pair is a set (in this case singleton) of expected features for the associated functional role. These expected features assert expectations which are both semantic (e.g., PROMISED) and syntactic (e.g., ((S BETA TEMPORAL) (PP)) asserts both the existence and location of a temporal adjunct PP). Note that in these expectations the temporal adjunct "at noon" should modify the embedded clause "to come", as would be expected in the specified context.
Next consider the possible HPSG parses of the example sentence. Figures 1 and 2 below illustrate two semantically distinct parses generated by our HPSG parser.

[Figure 1: an HPSG parse tree of "Robin promised to come at noon".]

[Figure 2: an alternative HPSG parse tree of the same sentence, with the PP "at noon" attached differently.]

Mapping expected features into HPSG constraints: Features generated from pragmatic expectations can be mapped into constraints on HPSG structures, stated in terms of feature descriptions. Below are the HPSG feature descriptions corresponding to the pragmatically generated features PP and UNMARKED-DECLARATIVE.

[Figure 3: feature description for PP, a phrase whose SYNSEM|LOC|CAT has HEAD prep and MARKING unmarked.]

[Figure 4: feature description for UNMARKED-DECLARATIVE, a phrase whose head daughter's SYNSEM is verbal and whose subject daughter's SYNSEM|LOC|CAT|HEAD is noun.]

Mapping expected functional roles into HPSG constituent structure: Pragmatic expectations are expected within certain functional roles, such as SUBJECT, PREDICATOR, BETA (the embedded clause), etc. This structural information must be used to assert the constraints into the relevant HPSG substructures. This mapping is not as straightforward as the feature mapping technique, as the structure induced by the systemic grammar is "flatter" than the structure produced by HPSG. Consider the following pragmatically generated expectation:

((S TEMPORAL) (PP))

Such an expectation may be realized by a great variety of HPSG structural realizations, e.g.:

1. Kim ran at noon
2. Kim could run home at noon
3. Kim could have been running home at noon
4. Kim ran east at noon

In these examples modal verb operators (1-3) and multiple adjuncts (4) vary the actual structural depth of the temporal PP within the HPSG model. Thus a given systemic role path may have numerous HPSG constituent path realizations. One possible mapping technique is to generate constraints expressing all possible HPSG structural variants. This, however, would lead in many cases to a combinatorial explosion of constraints.
The technique employed by this study was to add a new clause attribute to verbal HPSG signs, and use this attribute to embed within the signs "clausally flattened" structures. Each HPSG verbal sign in the same clause structure shares the same clausal value. The clause value is a structure with labels for each systemic role, where each label points to the constituent which fills that role in the given verbal clause. A clausal boundary is said to exist between distinct clausal domains. A clausal structure is illustrated in figure 5:

[Figure 5: parse of "Robin promised to come at noon" in which the matrix verb's CLAUSE value has the roles PREDICATOR V[promised], SUBJECT NP[Robin], and BETA VP[to come at noon], and the embedded verb's CLAUSE value has the roles PREDICATOR V[come] and TEMPORAL PP[at noon].]

The current mapping only considers the mapping of roles within verbal signs. Similar role structures may exist for other constituent types, such as for noun phrases. Thus far the verbal clause boundary definition has been adequate for other phrasal structures.

GUIDING THE HPSG PARSE

The guidance strategy employed is to evaluate all proposed edges (i.e., complete and partially complete constituents) against the expectations, ranking each based on the relative similarity with the expectations. These edges are then placed in an agenda (a list of priority queues) and removed from the agenda and included in the partial parse in a best-first order. Critical to the success of a best-first algorithm is the heuristic evaluation function used to order the proposed constituents.

The heuristic evaluation function: The heuristic evaluation function is based on three specific types of tests:

I. Role match - does a constituent match a role's set of expected features?
II. Role path match - is a constituent role path compatible with the roles of its children?
III. Clausal completeness - are all clausal roles expected for this constituent present?
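The agenda mechanism just described can be sketched with a priority queue: each proposed edge is scored against the expectations and popped best-first. The feature sets and the scoring function below are hypothetical stand-ins for the three tests above.

```python
import heapq

# Hypothetical expectation features (cf. (PP), (PROPER), ... in the text).
EXPECTED = {"PP", "PROPER", "UNMARKED-DECLARATIVE"}

def expectation_score(edge):
    """Toy stand-in for the heuristic evaluation function: one point per
    expected feature the edge carries (the real function also checks
    role-path compatibility and clausal completeness)."""
    return len(EXPECTED & edge["features"])

def best_first(edges, score):
    """Place proposed edges on an agenda ordered by the heuristic and
    yield them best-first."""
    agenda = []
    for i, edge in enumerate(edges):
        # heapq is a min-heap, so negate the score; i keeps ties stable.
        heapq.heappush(agenda, (-score(edge), i, edge))
    while agenda:
        _, _, edge = heapq.heappop(agenda)
        yield edge

proposed = [
    {"span": "at noon", "features": {"PP"}},
    {"span": "noon",    "features": set()},
    {"span": "Robin",   "features": {"PROPER"}},
]
order = [e["span"] for e in best_first(proposed, expectation_score)]
```

Edges matching expectations are considered first, so a pragmatically unlikely constituent like the bare "noon" edge is only tried after the expected ones.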
Tests II and III above require that constituents under consideration have roles already assigned to them. For example, in the case of II, the test requires roles for both the new constituent and the proposed daughters of the constituent. But since the parse strategy employed is bottom-up, role paths cannot be anchored to a root, and thus fully known, until parse completion. The solution to this dilemma is to hypothesize a constituent's role using a process similar to abduction. Two types of knowledge are exploited in this process. First, roles with features which subsume or are consistent with a proposed constituent are considered good candidate roles. Also, roles may be inferred by projecting up from the roles already hypothesized for the children. By intersecting these two sources of role evidence, the list of hypothesized roles can be refined (by ruling out roles without both types of evidence). In this manner the hypothesized roles of later constituents can be refined from descendant constituents. In the case of roles projected from daughters, clausal boundary knowledge must be applied to correctly infer the parent role.

EVALUATION & TESTING

The techniques described here have been used successfully to guide the parsing of several sentences taken from real conversations. The pragmatic and semantic knowledge already existed from Patten's research (Patten, 1992) to generate these sentences. A subset of this knowledge, judged to represent the partial knowledge available to a listener, was used to generate expectations in the form described above. The parser used in this study by default produced all possible parses. The modified version attempts to converge on the "expected" parse first, and terminate. For each sentence tested the parser converges on the correct parse first. When the expectations are modified to expect a different parse, a different (and correct) parse is found first.
The results in terms of speedup vary considerably depending on the level of ambiguity present in the sentence. The most complex sentence parsed thus far exhibits considerable speedup. When unguided, the parser produces 24 parses, and considers a total of 252 distinct constituents. In the guided case, the parser only considers 39 constituents, and converges on the one "correct" parse first. Within the current testing environment, this guidance results in a greater than ten-fold speedup in terms of CPU time.

SUMMARY

Pragmatic knowledge about language usage in routine conversational contexts can be highly compiled. This knowledge can be used to produce semantic and syntactic expectations about next turns in conversation, especially of next turns that are second members of adjacency pairs (Schegloff & Sacks 1973). By mapping expected features into HPSG constraints, and by augmenting HPSG sign structures to model the role structure of systemic grammar, these expectations can be used as constraints on possible constituent structures of an HPSG constituent. Given this mapping, the expectations may then be used to order the parse process, guiding the parse and avoiding the consideration of pragmatically unlikely constructions. This process reduces the number of constituents considered during parsing, reducing parse time and permitting the parser to correctly select the parse most like the pragmatic expectations. This solution closely follows a classical A.I. search technique called best-first search. The heuristic evaluation function used to classify the proposed constituents for best-first ordering uses inference similar to abductive reasoning. One benefit of this solution is that it retains the modularity of the syntactic and semantic components, not requiring a specialized grammar for each contextual domain. In addition, as the coverage of the grammar increases, the search space will also increase, and thus the possible benefits increase.
Work is continuing on this study. Currently the heuristic is being enhanced to consider the specificity of an expectation match, ordering those edges which match the most specific features first. In addition, work is in progress to extend the coverage of the grammar and mapping to include the conversation domain utilized in Patten, Geis & Becker 1992.

References

Geis, Mike L. and Harlow, L. "Politeness Strategies in French and English: Implications for Second Language Acquisition".

Mac Gregor, R. "LOOM Users Manual". University of Southern California, Information Sciences Institute, 1991.

Patten, Terry; Geis, Mike; and Becker, Barbara. "Toward a Theory of Compilation for Natural-Language Generation". Computational Intelligence 8(1), 1992, pp. 77-101.

Pollard, Carl and Sag, Ivan A. "Head-Driven Phrase Structure Grammar". Unpublished manuscript draft, 1992.

Pollard, Carl and Sag, Ivan A. "Information-Based Syntax and Semantics: Volume 1, Fundamentals". Center for the Study of Language and Information, 1987.

Schegloff, E. A. and Sacks, H. "Opening up closings". Semiotica, 7,4: 289-387, 1973.

Winograd, Terry. 1983. "Language as a Cognitive Process". Addison-Wesley, Menlo Park, CA.
1993
44
THE FORMAL CONSEQUENCES OF USING VARIABLES IN CCG CATEGORIES

Beryl Hoffman*
Dept. of Computer and Information Sciences
University of Pennsylvania
Philadelphia, PA 19104
(hoffman@linc.cis.upenn.edu)

Abstract

Combinatory Categorial Grammars, CCGs, (Steedman 1985) have been shown by Weir and Joshi (1988) to generate the same class of languages as Tree-Adjoining Grammars (TAG), Head Grammars (HG), and Linear Indexed Grammars (LIG). In this paper, I will discuss the effect of using variables in lexical category assignments in CCGs. It will be shown that using variables in lexical categories can increase the weak generative capacity of CCGs beyond the class of grammars listed above.

A Formal Definition for CCGs

In categorial grammars, grammatical entities are of two types: basic categories and functions. A basic category such as NP serves as a shorthand for a set of syntactic and semantic features. A category such as S\NP is a function representing an intransitive verb; the function looks for an argument of type NP on its left and results in the category S. A small set of combinatory rules serve to combine these categories while preserving a transparent relation between syntax and semantics. Application rules allow functions to combine with their arguments, while composition rules allow two functions to combine together. Based on the formal definition of CCGs in (Weir-Joshi 1988), a CCG, G, is denoted by (VT, VN, S, f, R), where

- VT is a finite set of terminals,
- VN is a finite set of nonterminals,
- S is a distinguished member of VN,
- f is a function that maps elements of VT ∪ {ε} to finite subsets of C(VN), the set of categories, where
  - VN ⊆ C(VN), and
  - if c1 and c2 ∈ C(VN), then (c1\c2) and (c1/c2) ∈ C(VN),

*I would like to thank Mark Steedman, Libby Levison, Owen Rambow, and the anonymous referees for their valuable advice. This work was partially supported by DARPA N00014-90-J-1863, ARO DAAL03-89-C-0031, NSF IRI90-16592, Ben Franklin 91S.3078C-1.
- R is a finite set of combinatory rules, where X, Y, Z1, ..., Zn are variables over the set of categories C(VN), and the slash variable |i can bind to \ or /. Certain restrictions may be placed on the possible instantiations of the variables in the rules.

  - Forward Application (>): X/Y Y → X
  - Backward Application (<): Y X\Y → X
  - Generalized Forward Composition (>B(n) or >Bx(n)): for some n ≥ 1, X/Y Y|1Z1|2...|nZn → X|1Z1|2...|nZn
  - Generalized Backward Composition (<B(n) or <Bx(n)): for some n ≥ 1, Y|1Z1|2...|nZn X\Y → X|1Z1|2...|nZn

The derives relation in a CCG is defined as αcβ ⇒ αc1c2β if R contains the rule c1 c2 → c. The language generated by this grammar is defined as

L(G) = {a1,...,an | S ⇒* c1,...,cn, ci ∈ f(ai), ai ∈ VT ∪ {ε}, 1 ≤ i ≤ n}

Under these assumptions, Weir and Joshi (1988) prove that CCGs are weakly equivalent to TAGs, HGs, and LIGs. Their conversion of a CCG to a LIG¹ relies on the fact that the combinatory rules in the CCG are linear. To preserve linearity in CCGs, only the category X in the combinatory rules can be unbounded in size; the variables Y and Z must be bounded in their possible instantiations. In other words, only a finite number of categories can fill the secondary constituent of each combinatory rule. The secondary constituent is the second of the pair of categories being combined in the forward rules and the first of the pair in the backward rules (e.g., Y|Z1...|Zn). Weir and Joshi do not restrict the size of the secondary constituents in the formal definition of the CCG rules, but they prove that the following lemma holds of the grammar.

Lemma: There is a bound (determined by the grammar G) on the number of useful categories that can match the secondary constituent of a rule.

¹ Linear Indexed Grammars are a restricted version of Indexed Grammars in which no rule can copy a stack of unbounded size to more than one daughter (Gazdar 1985).
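The application and composition rules above can be made concrete with a tiny category encoding. This is an illustrative sketch (forward rules only, composition for n = 1), not a full CCG implementation.

```python
# A basic category is a string; a function category is a tuple
# (result, slash, argument): e.g. S\NP is ("S", "\\", "NP").
def forward_apply(x, y):
    """Forward Application (>): X/Y  Y  ->  X."""
    if isinstance(x, tuple) and x[1] == "/" and x[2] == y:
        return x[0]
    return None

def forward_compose(x, y):
    """Forward Composition (>B) for n = 1: X/Y  Y|Z  ->  X|Z."""
    if (isinstance(x, tuple) and x[1] == "/" and
            isinstance(y, tuple) and y[0] == x[2]):
        return (x[0], y[1], y[2])
    return None

S, NP = "S", "NP"
iv = (S, "\\", NP)        # S\NP, an intransitive verb
raised = (S, "/", iv)     # S/(S\NP), a type-raised subject NP
tv = (iv, "/", NP)        # (S\NP)/NP, a transitive verb

composed = forward_compose(raised, tv)   # S/(S\NP) (S\NP)/NP -> S/NP
result = forward_apply(composed, NP)     # S/NP NP -> S
```

Note that application demands an exact match on the argument category, whereas composition only matches the result of the secondary function, which is what lets a type-raised NP combine with a verb still seeking further arguments.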
There are an infinite number of derivable categories in CCGs; however, Weir and Joshi show that the number of components that derivable categories have is bounded. The components of a category c = (c0|1c1|2...|ncn) are its immediate components c0, ..., cn and the components of these immediate components. A finite set Dc(G) can be defined that contains all derivable components of every useful category, where a category c is useful if c ⇒* w for some w in VT*: c ∈ Dc(G) if c is a component of c′ where c′ ∈ f(a) for some a ∈ VT ∪ {ε}. Given that every useful category matching the secondary constituents Y and Y|Z1...|Zn in the combinatory rules has components which are in Dc(G), the lemma given above holds. However, this lemma does not hold if there are variables in the lexical categories in VT. Variables can bind a category of any size, and thus useful categories containing variables do not necessarily have all of their derivable components in the finite set Dc(G).

The Use of Variables

Linguistic Use

In CCGs, a type-raising rule can be used in the lexicon to convert basic elements into functions; for example, an NP category can be type-raised to the category S/(S\NP) representing a function looking for an intransitive verb on its right. Steedman uses type-raising of NPs to capture syntactic coordination and extraction facts. In Steedman's Dutch grammar (1985), variables are used in the lexical category for type-raised NPs, i.e., the variable v in the category v/(v\NP) generalizes across all possible verbal categories. The use of variables allows the type-raised NPs in the following coordinated sentence to easily combine together, using the forward composition rule, even though they are arguments of different verbs.

(1) ...dat [Jan Piet] en [Cecilia Henk] zag zwemmen.
    ...that [Jan Piet] and [Cecilia Henk] saw swim.
    ...that Jan saw Piet and Cecilia saw Henk swim.
Jan          Piet
v/(v\NP)   v′/(v′\NP)
---------------------- >B  (v′ = (v\NP))
    v/(v\NP\NP)

Formal Power

I will show that the use of variables in assigned lexical categories increases the weak generative capacity of CCGs. VAR-CCGs, CCGs using variables, can generate languages that are known not to be Tree-Adjoining Languages; therefore VAR-CCGs are more powerful than the weakly equivalent TAG and CCG formalisms. The following language is known not to be a TAL:

L = {a^n b^n c^n d^n e^n | n ≥ 0}

The following VAR-CCG, G′, generates a language L′ which is very similar to L: f(ε) = S, f(a) = A, f(b) = v\A/(v\B), f(c) = v\B/(v\C), f(d) = v\C/(v\D), f(e) = S\D/S. The rules allowed in this grammar are forward and backward application and forward crossing composition with n ≤ 2. The variable v can bind an arbitrarily large category in the infinite set of categories C(VN) defined for the grammar. In the language generated by this grammar, two characters of the same type can combine together using the forward crossing composition rule >Bx(2). The composition of the types for the character e is shown below. A string of e's can be constructed by allowing the result of this composition to combine with another e category.

e          e
S\D/S   S\D/S
--------------- >Bx(2)
   S\D\D/S

The types for the characters b, c, and d can combine using the same composition rule; these types contain variables (e.g., v and v′ below) which can bind a category of unbounded size.

b             b
v\A/(v\B)   v′\A/(v′\B)
------------------------ >Bx(2)  (v′ = (v\B))
     v\A\A/(v\B\B)

By applying the forward crossing composition rule to a string of n b's, we can form the complex category v\A1...An/(v\B1...Bn) representing this string. Thus, during the derivation of a^n b^n c^n d^n e^n for n > 0, the following complex categories are created:

v\A1...Ai/(v\B1...Bi)
v\B1...Bj/(v\C1...Cj)
v\C1...Ck/(v\D1...Dk)
S\D1...Dl

Once the complex categories for a string of b's, a string of c's, a string of d's, and a string of e's are constructed, we can link one string of a particular character to another using the forward application rule. This rule can only apply to these categories if i = j, j = k, k = l, and l = m, where m is the number of A's generated and i, j, k, l are as in the complex categories listed above. For example,

v\C1...Ck/(v\D1...Dk)   S\D1...Dl
---------------------------------- >  (v = S, k = l)
           S\C1...Ck

With each successful forward application, we ensure that there are equal numbers of two characters: the E's are linked to the D's, the D's are linked to the C's, etc., so that we have the exact same number of all five characters. In fact, the grammar can be easily extended to generate a language such as {a1^n a2^n ... ak^n | n ≥ 0} for any k. The language L′ generated by G′ intersected with the regular language a*b*c*d*e* gives the language L above. If we assume that L′ is a Tree-Adjoining Language (TAL), then L would be a TAL as well, since TALs are closed under intersection with regular languages. However, since we know that L is not a TAL, L′ cannot be a TAL either. Thus, G′ generates a language that TAGs and CCGs cannot.

Conclusions

We have seen that using variables in the lexical categories of a CCG can increase its weak generative capacity. However, there is some linguistic motivation for looking at the more powerful formalism of VAR-CCGs. As argued by Gazdar (1985), this extra power may be necessary in order to capture coordination in natural languages. We have seen that type-raised categories with variables in CCGs can be used to capture syntactic coordination and extraction facts in Dutch (Steedman 1985). Further research is needed to decide whether this linguistic motivation warrants the move to a more powerful formalism.
Although VAR-CCGs have a greater weak generative capacity than the class including TAGs, HGs, CCGs, and LIGs, we conjecture that it is still a mildly context-sensitive grammar as defined by Joshi (1985). The language discussed above is a mildly context-sensitive language since it observes the constant growth and semilinearity properties. It is an open question whether VAR-CCGs can generate languages which are beyond mildly context-sensitive. Note that MC-TAGs, which are a more powerful extension of TAGs, can also generate languages like L, and they are known to be mildly context-sensitive formalisms (Weir 1988). In future research, we will investigate exactly what the resulting generative capacity of VAR-CCGs is.

Future Research on Word Order

My current research also involves extending the CCG formalism to handle free word order languages. By representing NPs as type-raised categories, we can derive a scrambled sentence in which the NPs do not occur in the order that the verb specifies:

v/(v\NP2)   v/(v\NP1)   S\NP1\NP2
            ---------------------- >B
                 S\NP2
---------------------------------- >
                 S

In many free word order languages, an NP can be scrambled an unbounded distance away from its verb, i.e., long distance scrambling. If we allow unrestricted composition rules for any n arguments as well as the use of variables in type-raised categories in a CCG, a string of any number of scrambled NPs followed by a string of verbs can be derived. We first combine any number of verbs together, using backward composition, to get a complex verb category looking for all of the NPs; next, we combine each NP with this complex verb category. Any type-raised NPi can combine with the complex verb regardless of the order specified by the complex verb. The variable in the type-raised category can bind a verbal category of unbounded size, e.g., (v = S\NP1\...\NPi-1).
    v/(v\NPi)   S\NP1\NP2...\NPi...\NPn
    ----------------------------------- >Bx(n)
     S\NP1\...\NP(i-1)\NP(i+1)...\NPn

Although we can capture scrambling by using variables in type-raised categories, this analysis is not consistent with incremental processing and cannot account for coordination in scrambled sentences; for instance, in the first example given above, NP2 and NP1 cannot combine together before combining with the verb. In future research, I will investigate whether VAR-CCG is an adequate linguistic formalism for capturing all aspects of free word order languages, or whether a formalism such as {}-CCG (Hoffman 1992), which allows sets of arguments in function categories, is better suited.

References

[1] Gazdar, G. 1985. Applicability of Indexed Grammars to Natural Languages. Technical Report CSLI-85-34, Center for the Study of Language and Information.
[2] Hoffman, Beryl. 1992. A CCG Approach to Free Word Order Languages. Proceedings of the 30th Annual Meeting of the ACL, Student Session.
[3] Joshi, A. K. 1985. How much context-sensitivity is required to provide reasonable structural descriptions: Tree adjoining grammars. In D. Dowty, L. Karttunen, and A. Zwicky, editors, Natural Language Parsing: Psycholinguistic, Computational and Theoretical Perspectives. Cambridge University Press.
[4] Steedman, Mark. 1985. Dependency and Coordination in the Grammar of Dutch and English. Language, 61:523-568.
[5] Weir, David. 1988. Characterising Mildly Context-sensitive Grammar Formalisms. Ph.D. dissertation, University of Pennsylvania.
[6] Weir, David and Aravind Joshi. 1988. Combinatory Categorial Grammars: Generative Power and Relationship to Linear Context-Free Rewriting Systems. Proceedings of the 26th Annual Meeting of the ACL.
1993
INTEGRATING WORD BOUNDARY IDENTIFICATION WITH SENTENCE UNDERSTANDING

Kok Wee Gan
Department of Information Systems & Computer Science
National University of Singapore
Kent Ridge Crescent, Singapore 0511
Internet: [email protected]

Abstract

Chinese sentences are written with no special delimiters such as space to indicate word boundaries. Existing Chinese NLP systems therefore employ preprocessors to segment sentences into words. Contrary to the conventional wisdom of separating this issue from the task of sentence understanding, we propose an integrated model that performs word boundary identification in lockstep with sentence understanding. In this approach, there is no distinction between rules for word boundary identification and rules for sentence understanding. These two functions are combined. Word boundary ambiguities are detected, especially the fallacious ones, when they block the primary task of discovering the inter-relationships among the various constituents of a sentence, which essentially is the essence of the understanding process. In this approach, statistical information is also incorporated, providing the system a quick and fairly reliable starting ground to carry out the primary task of relationship-building.

1 THE PROBLEM

Chinese sentences are written with no special delimiters such as space to indicate word boundaries. Existing Chinese NLP systems therefore employ preprocessors to segment sentences into words. Many techniques have been developed for this task, from simple pattern matching methods (e.g., maximum matching, reverse maximum matching) (Wang, et al., 1990; Kang & Zheng, 1991), to statistical methods (e.g., word association, relaxation) (Sproat & Shih, 1990; Fan & Tsai, 1988), to rule-based approaches (Huang, 1989; Yeh & Lee, 1991; He, et al., 1991).
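The maximum matching technique mentioned above can be sketched as a greedy longest-first dictionary lookup. The toy lexicon of romanized syllables below is invented for illustration; real systems operate on hanzi strings:

```python
# Greedy maximum matching: at each position, take the longest lexicon
# entry that matches, falling back to a single unit. Toy lexicon of
# romanized syllables; invented for illustration only.

def max_match(units, lexicon, max_len=4):
    words, i = [], 0
    while i < len(units):
        for l in range(min(max_len, len(units) - i), 0, -1):
            cand = tuple(units[i:i + l])
            if l == 1 or cand in lexicon:
                words.append(cand)
                i += l
                break
    return words

lexicon = {('ren', 'sheng'), ('hai', 'zi')}
# On a fragment like example (1) below, greedy matching wrongly
# merges ren + sheng into rensheng 'life':
print(max_match(['ta', 'ben', 'ren', 'sheng'], lexicon))
```

This kind of context-free greediness is exactly the failure mode discussed next.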
However, it is observed that simple pattern matching methods and stochastic methods perform poorly in sentences such as (1), (2), and (3), where word boundary ambiguities exist.^1

(1) ta benren sheng le san ge haizi
    she alone give-birth-to ASP three CL child
    'She alone gives birth to three children.'

(2) ta zhi kao duo shi fen
    he/she only score up-to ten mark
    'He/She scores only ten marks.'

(3) zhongguo yi kaifa he shang wei kaifa de shiyou ziyuan hen duo
    China already develop and yet not develop ASSOC oil resource very many
    'There are many developed and not yet developed oil resources in China.'

^1 The ambiguous fragments in italics in (1), (2), and (3), benren sheng, shi fen, and he shang, will be wrongly identified as ben rensheng, shifen, and heshang, respectively, by statistical approaches.

This problem can be dealt with in a more systematic and effective way if syntactic and semantic analyses are also incorporated. The frequency with which this problem occurs justifies the additional effort needed. However, contemporary approaches of constructing a standalone, rule-based word segmentor do not offer the solution, as this would mean duplicating the effort of syntactic and semantic analyses twice: first in the preprocessing phase, and later in the understanding phase. Moreover, separating the issue of word boundary identification from sentence understanding often leads to devising word segmentation rules which are arbitrary and word-specific,^2 and hence not useful at all for sentence understanding. Most importantly, the rules devised always face the problem of over-generalization.

Contrary to conventional wisdom, we do not view the task of word boundary identification as separate from the task of sentence understanding. Rather, the former is regarded as one of the tasks an NLP system must handle within the understanding phase.
This perspective allows us to devise a more systematic and natural solution to the problem, at the same time avoiding the duplication of morphological, syntactic, and semantic analyses in two separate stages: the preprocessing stage and the understanding stage. The basic principle underlying this approach is: every constituent in a sentence must be meaningfully related (syntactically and/or semantically) to some other constituent. Understanding a sentence is simply a process to discover this network of relations. A violation of this principle signifies the presence of abnormal groupings (fallacious word boundaries), which must be removed.^3

For example, the fallacious grouping rensheng 'life', if it exists in (1), can be detected by observing a violation of the syntactic relation between this group and le, which is an aspect marker that cannot be a nominal modifier. In (2), selectional restrictions on the RANGE of the verb kao, which must be either pedagogical (e.g., kao shuxue 'test Mathematics'), resultative (e.g., kao shibai le 'test fail ASPECT'), or time (e.g., kao le yi ge xingqi 'test ASPECT one week'), rule out the grouping shifen 'very', which is a degree marker.^4 Sentence (3) also requires thematic role interpretation to resolve the ambiguous fragment. Selectional restrictions on the PATIENT of the verb kaifa 'develop', which must be either a concrete material (e.g., kaifa meikuang 'develop coal mine') or a location (e.g., kaifa sanqu 'develop rural area'), rule out interpreting the ambiguous fragment he shang as heshang 'monk'.^5

^2 For example, a heuristic rule to resolve the ambiguous fragment shi fen in (2): the adverb shifen 'very' cannot occur at the end of a sentence. This rule rules out the grouping shifen in sentence (2).
^3 This principle, in its present form, is too tight for handling metonymic usage of language, as well as ill-formed sentences. We will leave this for future work.
This approach, however, does not totally discard the use of statistical information. On the contrary, we use statistical information^6 to give our system a quick and fairly reliable initial guess of the likely word boundaries in a sentence. Based on these suggested word boundaries, the system proceeds to the primary task of determining the syntactic and semantic relations that may exist in the sentence (i.e., the understanding process). Any violation encountered in this process signals the presence of abnormal groupings, which must be removed.

Our approach will not lead to an exceedingly complex system, mainly because we have made use of statistical information to provide the initial guide. It does not generate all possible word boundary combinations in order to select the best one. Rather, alternative paths are explored only when the current one leads to some violation. This feature makes its complexity no more than that of a two-stage system where syntax and semantics at the later stage of processing signal to the preprocessor that certain lexemes have been wrongly identified.

2 THE PROPOSED MODEL

The approach we propose takes as input a stream of characters of a sentence rather than a collection of correctly pre-segmented words. It performs word boundary disambiguation concurrently with sentence understanding. In our investigation, we focus on sentences with clearly ambiguous word boundaries, as they constitute an appropriate testbed for us to investigate the deeply interwoven relationships between these two tasks.

Since we are proposing an integrated approach to word boundary identification and sentence understanding, conventional sequential architectures are not appropriate. A suitable computational model should have at least the following features: (i) linguistic information such as morphology, syntax, and semantics should be available simultaneously so that it can be drawn upon whenever necessary; (ii) the architecture should allow competing interpretations to coexist and give each one a chance to develop; (iii) partial solutions should be flexible enough that they can be easily modified and regrouped; (iv) the architecture can support localized inferencing which will eventually evolve into a global, coherent interpretation of a sentence.

We are using the Copycat model (Hofstadter, 1984; Mitchell, 1990), which has been developed and tested in the domain of analogy-making. There are four components in this architecture: the conceptual network (which encodes linguistic concepts), the workspace (the working area), the coderack (a pool of codelets waiting to run), and the temperature (which controls the rate of understanding). Our model will differ from NLP systems with a similar approach (Goldman, 1990; Hirst, 1988; Small, 1980) primarily through the incorporation of statistical methods and the nondeterministic control mechanism used.^7 For a detailed discussion, see (Gan, et al., 1992).

^4 Notice the difference between this knowledge and the one mentioned in footnote 2. Both are used to disambiguate the fragment shi fen. The former is more ad hoc, while ours comes in naturally as part and parcel of thematic role interpretation.
^5 We would like to stress that rules in this approach are not distinguished into two separate classes, one for resolving word boundary ambiguities and the other for sentence understanding. Ours combine these two functions, performing word boundary identification alongside sentence understanding. We will give a detailed description of the effectiveness of the various kinds of information after we have completed our implementation.
^6 See Section 3 for an example.
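The statistical initial guess mentioned above can be based on an associative-strength score between adjacent characters, such as pointwise mutual information estimated from corpus counts. A minimal sketch; all counts here are invented for illustration:

```python
import math

# Pointwise mutual information of an adjacent character pair:
#   PMI(x, y) = log2( P(x y) / (P(x) * P(y)) )
# estimated from raw counts. All counts are invented for illustration.

def pmi(pair_count, x_count, y_count, n_pairs, n_chars):
    p_xy = pair_count / n_pairs
    p_x, p_y = x_count / n_chars, y_count / n_chars
    return math.log2(p_xy / (p_x * p_y))

# A pair like hai + zi that co-occurs far more often than chance
# receives a high score, licensing a word-structure grouping.
score = pmi(pair_count=90, x_count=100, y_count=100,
            n_pairs=100_000, n_chars=100_000)
print(round(score, 2))
```

Pairs that co-occur less often than chance score below zero, so a single threshold separates likely words from accidental neighbours.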
In essence, this model simulates the understanding process as a crystallization process, in which high-level linguistic structures (e.g., words; analogous to crystals) are formed and hooked up in a proper way as characters (ions) of a sentence are gradually cooled down.

3 AN EXAMPLE

We will use sentence (1) to briefly outline how the model works.^8

(1) ta benren sheng le san ge haizi^9

* bottom-up structure building

The system starts with bottom-up, character-based codelets in the coderack whose task is to evaluate the associative strength between two neighboring characters.^10 One of the codelets will be chosen probabilistically to run.^11 The executing codelet selects an object from the workspace and tries to build some structures on it. For example, it may select the last two characters hai and zi in (1) and evaluate their associative strength as equal to 13.34. This association is so strong that another codelet will be called upon to group these two characters into a word-structure, which forms the word haizi 'children'.

* top-down influences

The formation of the word-structure haizi activates the WORD^12 node in the network of linguistic concepts. This network is a dynamic controller to ensure that bottom-up processes do not proceed independently of the system's understanding of the global situation. The activation of the WORD node in turn causes the posting of top-down codelets scouting for other would-be word-structures. Thus, single-character words such as ta 'she', le (aspect marker), san 'three', and ge (a classifier) may be discovered.

* radical restructuring

The characters ren and sheng will be grouped as a word rensheng 'life' by bottom-up, character-based codelets, as the associative strength between them is strong (3.75). This is incorrect in (1). It will be detected when an ASPECT-relation builder, spawned after identifying le as an aspect marker, tries to construct a syntactic relation between the word-structure rensheng 'life' and the word-structure le (ASPECT). Since this relation can only be established with a verb, a violation occurs, which causes the temperature to be set to its maximal value. The problematic structure rensheng will be dissolved, and the system proceeds in its search for an alternative, recording in its memory that the structure rensheng should not be tried again in future.^13

4 SUMMARY

In this model, there is an implicit order in which codelets are executed. At the initial stage, the system is more concerned with identifying words. After some word-structures have been built, other types of codelets begin to decipher the syntactic and semantic relations between these structures.

^7 See also footnote 11.
^8 Our description here is oversimplified. Many important issues, such as the representation of linguistic knowledge and the treatment of ambiguous fragments that have multiple equally plausible word boundaries, are omitted. The example discussed in this section is a hand-worked test case which is currently being implemented.
^9 The English glosses and translation are omitted here, as they have been shown in Section 1.
^10 The association between two characters is measured based on mutual information (Fano, 1961). It is derived from the frequency that the two characters occur together versus the frequency that they are independent. Here, we find that statistical techniques can be nicely incorporated into the model. We will derive this information from a corpus of 46,520 words of total usage frequency of 13,019,814 given to us by Liang Nanyuan of the Beijing University of Aeronautics and Astronautics.
^11 This is another way statistics is used. The selection of which codelet to run, and the selection of which object to work on, are decided probabilistically depending on the system temperature. This is the nondeterministic control mechanism mentioned in Section 2.
From then on, the word identification and higher-level analyses proceed hand-in-hand. In short, the main ideas in our model are: (i) a parallel architecture in which hierarchical linguistic structures are built up in a piecemeal fashion by competing and cooperating chains of simple, independently acting codelets; (ii) a notion of fluid reconformability of structures built up by the system; (iii) a parallel terraced scan (Hofstadter, 1984) of possible courses of action; (iv) a temperature variable that dynamically adjusts the amount of randomness in response to how happy the system is with its currently built structures.

ACKNOWLEDGMENTS

This paper would not be in its present form without the invaluable input from Dr. Martha Palmer. I would like to express my greatest thanks to her. I would also like to thank Guojin, Wu Jianhua, Paul Wu, and Wu Zhibiao for their feedback on an earlier draft.

^12 This is a node in the conceptual network, which is activated when the system finds that the word concept is relevant to the task it is currently investigating.
^13 We will skip the implementation details here.

REFERENCES

Fan, C. K. and Tsai, W. H. (1988) Automatic word identification in Chinese sentences by the relaxation technique. Computer Processing of Chinese and Oriental Languages, 4(1):33-56.
Fano, R. (1961) Transmission of Information. MIT Press, Cambridge, MA.
Goldman, R. (1990) A probabilistic approach to language understanding. PhD thesis, Department of Computer Science, Brown University.
Gan, K. W., Lua, K. T. and Palmer, M. (1992) Modeling language understanding as a crystallization process: an application to ambiguous Chinese word boundaries identification. Technical Report TR50/92, Department of Information Systems and Computer Science, National University of Singapore.
He, K. K., Xu, H. and Sun, B. (1991) Design principle of expert system for automatic words segmentation in written Chinese. Journal of Chinese Information Processing, 5(2):1-14 (in Chinese).
Hirst, G. (1988) Resolving lexical ambiguity computationally with spreading activation and polaroid words. In S. L. Small, G. W. Cottrell, M. K. Tanenhaus (Eds.), Lexical Ambiguity Resolution: Perspectives from Psycholinguistics, Neuropsychology and Artificial Intelligence. Morgan Kaufmann Publishers, San Mateo, California, 73-107.
Hofstadter, D. R. (1984) The Copycat project: an experiment in non-determinism and creative analogies. AI Memo No. 755, Massachusetts Institute of Technology, Cambridge, MA.
Huang, X. X. (1989) A "produce-test" approach to automatic segmentation of written Chinese. Journal of Chinese Information Processing, 3(4):42-48 (in Chinese).
Kang, L. S. and Zheng, J. H. (1991) An algorithm for word segmentation based on mark. In Proceedings of the 10th anniversary of the Chinese Information Processing Society, Beijing, 222-226 (in Chinese).
Mitchell, M. (1990) COPYCAT: a computer model of high-level perception and conceptual slippage in analogy-making. PhD dissertation, University of Michigan.
Small, S. L. (1980) Word expert parsing: a theory of distributed word-based natural language understanding. PhD dissertation, University of Maryland.
Sproat, R. and Shih, C. L. (1990) A statistical method for finding word boundaries in Chinese text. Computer Processing of Chinese and Oriental Languages, 4(4):336-351.
Wang, Y. C., Su, H. J. and Mo, Y. (1990) Automatic processing Chinese word. Journal of Chinese Information Processing, 4(4):1-11 (in Chinese).
Yeh, C. L. and Lee, H. J. (1991) Rule-based word identification for Mandarin Chinese sentences - a unification approach. Computer Processing of Chinese and Oriental Languages, 5(2):97-118.
EXTENDING KIMMO'S TWO-LEVEL MODEL OF MORPHOLOGY*

Anoop Sarkar
Centre for Development of Advanced Computing
Pune University Campus, Pune 411007, India
anoop@parcom.ernet.in

Abstract

This paper describes the problems faced while using Kimmo's two-level model to describe certain Indian languages such as Tamil and Hindi. The two-level model is shown to be descriptively inadequate to address these problems. A simple extension to the basic two-level model is introduced which allows conflicting phonological rules to coexist. The computational complexity of the extension is the same as that of Kimmo's two-level model.

INTRODUCTION

Kimmo Koskenniemi's two-level model (Koskenniemi, 1983; Koskenniemi, 1984) uses finite-state transducers to implement phonological rules. This paper presents the experience of attempting a two-level phonology for certain Indian languages, the problems faced in this attempt, and their resolution. The languages we consider are Tamil and Hindi. For the languages considered we want to show that practical descriptions of their morphology can be achieved by a simple generalization of the two-level model. Although the basic two-level model has been generalized in this paper, the extensions do not affect the complexity or the basic tenets of the two-level model.

SOME PROBLEMS FOR THE TWO-LEVEL MODEL

The two-level model is descriptively adequate for most morphological processes occurring in Indian languages. However, there are some cases where the basic two-level model fails to give an adequate description. One problem is caused by the large number of words imported from Sanskrit in languages such as Hindi, Tamil and Tibetan. The other problem occurs in Tamil, where phonology disambiguates between different senses of a morpheme. The cases where these occur are common and productive. They cannot be considered as exceptional.

* I would like to thank P. Ramanujan and R. Doctor for their help, and Dr. Darbari for his support.
For example, in Tamil the verb tulai (to be similar) is derived from the Sanskrit base word tula (similarity). The past participle of tulai exhibits the following property (LR and SR refer to the lexical and surface environments respectively):

(1) LR: tulai+0ta
    SR: tolai0tta
    (adj. who resembles [something])

In this example, the consonant insertion at the morpheme boundary is consistent with Tamil phonology, but the realization of u as o in the environment of tu follows a morphology that originates in Sanskrit and which causes inconsistency when used as a general rule in Tamil. The following examples illustrate how regular Tamil phonology works.

(2) LR: kudi+0ta
    SR: kudi0tta
    (adj. drunk)

(3) LR: tolai+0ta
    SR: tolai0tta
    (adj. who has lost [something])

From examples (1) through (3) we see that the same environment gives differing surface realizations. Phonological rules formulated within the two-level model to describe this data have to be mutually exclusive. As all phonological rules are applied simultaneously, the two-level model can describe the above data only with the use of arbitrary diacritics in the lexical representation. The same problem occurs in Hindi. In Table 1, (6) and (7) follow regular Hindi phonology, while (4) and (5), which have descended from Sanskrit, display the use of Sanskrit phonology. All these examples show that any model of this phonological behaviour will have to allow access for a certain class of words to the phonology of another language whose rules might conflict with its own.

        Nom. Sing.  Ob. Sing.  Nom. Plu.  Ob. Plu.
(4)     pita        pita       pita       pitao
(5)     data        data       data       datao
(6)     phita       phite      phite      phito
(7)     ladka       ladke      ladke      ladko

Table 1: Behaviour of certain Hindi words that use Sanskrit phonology

There is one other problem that comes up in Tamil, where the phonology disambiguates between two senses of a stem. For instance, for the word padi, which means either 1. to read, or 2. to settle, differing phonological rules apply to the two senses of the word. If, as in (8), gemination is applied, the continuous participial of padi means reading, whereas, if nasalized, as in (9), it means settling (e.g. of dust).

(8) LR: padi+0tu+0kondu
    SR: padi0ttu0kkondu
    (reading)

(9) LR: padi+0tu+kondu
    SR: padi0ntu0kondu
    (settling)

The two-level model could conceivably be used to handle the cases given above by positing arbitrary lexical environments for classes of words that do not follow the regular phonology of the language; e.g. in (1) we could have the lexical representation tUlai, with rules transforming it to the surface form. To handle (8) and (9) we could have lexical forms padiI and padiY tagged with the appropriate sense and with duplicated phonological rules. But introducing artificial lexical representations has the disadvantage that two-level rules that assume the same lexical environment across classes of words have to be duplicated, leading to an inefficient set of rules. A more adequate method, which increases notational felicity without affecting the computational complexity of the two-level model, is described in the next section.

EXTENDING THE TWO-LEVEL MODEL

The extended two-level model presented allows each lexical entity to choose a set of phonological rules that can be applied for its recognition and generation. Consider the two-level rules^1 that apply to example (1). Rule R1a transforms u to o in the proper environment, while rule R1b geminates t.^2

R1a: u:o <= C V* +:0 t:t
R1b: 0:t <= {B,NAS} C +:0 t:t

where
  C   - consonants
  V   - vowels
  B   - voiced stops
  NAS - nasals

We cannot allow rule R1a to apply to (2), and so we need some method to restrict its application to a certain set (in this case all words like (1) borrowed from Sanskrit). To overcome this, each lexical entry is associated with a subset of two-level rules chosen from the complete set of possible rules. Each morpheme applies its respective subset in word recognition and generation.

Consider a fictional example, (11) below, to illustrate how the extended model works.

          1     2     3
(11) LR: haX + mel + lek
     SR: hom 0 mel 0 0ek

R11a: a:o <= C X: (+:0)
R11b: X:{m,0} <= a: (+:0) {m, m}
R11c: l:0 <= l:l (+:0)

R11a transforms a to o in the proper environment, R11b geminates m, and R11c degeminates l.^3 Assume rule R11a, which applies to a in morpheme 1 (haX), cannot be used in a general way without conflicts with the complete set of two-level rules applicable. To avoid conflict we assign a subset of two-level rules, say P1, to morpheme 1, which it applies between its morpheme boundaries. Morphemes 2 and 3 both apply rule subset P2 between their respective boundaries. For instance, P1 here will be the rule set {R11a, R11b, R11c} and P2 will be {R11b, R11c}. Note that we have to supply each morpheme enough rules within its subset to allow for the left context and right context of the rules that realize other surrounding morphemes. All the rules are still applied in parallel. At any time in the recognition or generation process there is still only one complete set of two-level rules being used. Any rule (finite-state transducer) that fails and which does not belong to the subset claimed by the morpheme being realized is set back to the start state. This mechanism allows mutually conflicting phonological rules to co-exist in the two-level rulebase and allows them to apply in their appropriate environments. For instance, if we have a lexical entry laX in addition to the morphemes introduced in (11), then we can have realizations such as (12) by adding R12 to the above rules.

(12) LR: laX+mel+lek
     SR: lim0mel00ek

R12: a:i <= C X: (+:0)

Thus laX uses a rule subset P3 which consists of rules {R12, R11b, R11c}. Notice that R12 and R11a are potentially in conflict with each other.

In the method detailed above we ignore certain rule failures by resetting the failed rule to its start state. Can this be justified within the two-level model? Each rule has a lexical-to-surface realization which it applies when it finds that the left context and the right context specified in the rule are satisfied. In the extended model, if a rule fails and it does not belong to the rule set associated with the current morpheme, then by resetting it to its start state we are assuming that the rule's left context has not yet begun. The left context of the rule can begin with the next character in the same morpheme. This property means that we can have conflicting rules that apply within the same word.

In practice it is better to use an equivalent method where the set of two-level rules that cannot apply between its boundaries is stored with a morpheme. If one or more of these rules fail and they belong to the set associated with that morpheme, then the rule is simply reset to the start state; otherwise we try another path towards the analysis of the word.

^1 The notations used are: * indicates zero or more instances of an element, parentheses are optional elements, - stands for negation, and curly braces indicate sets of elements that match respectively. 0 stands for the null character in both the lexical and surface representations.
^2 The description presented here is simplified somewhat, as the purpose of presenting it is illustrative rather than exhaustive.
^3 In rule R11b, a: means lexical a can be realized as any surface character.
The model presented handles both additive and mutually exclusive rules, whereas in a system in which a few morphs specify additional rules and inherit the rest, mutually exclusive rules have to be handled with the additional complexity of the defeasible inheritance of two-level rules.

It is easy to see that the extensions do not increase the computational complexity of the basic two-level model. We have one additional lexical tag per morpheme and one check for set membership at every failure of a rule.

CONCLUSION

We have shown that some examples from languages such as Tamil and Hindi cannot be effectively described under Kimmo's two-level model. An extension to the basic two-level model is discussed which allows morphemes to associate with them rule subsets which correspond to a certain phonology that gives the morpheme a valid description. The extension to Kimmo's two-level model gives us the following advantages:

* rules that conflict in surface realization can be used,
* it gives more descriptive power,
* the number of rules is reduced,
* there is no increase in computational complexity over Kimmo's two-level model.

We have implemented the extended two-level model using the standard method of representing phonological rules by deterministic finite-state automata (Antworth, 1990; Karttunen, 1983) and using PATRICIA (Knuth, 1973) for the storage of lexical entries.

REFERENCES

Antworth, Evan L., 1990. PC-KIMMO: a two-level processor for morphological analysis. Occasional Publications in Academic Computing No. 16. Dallas, TX: Summer Institute of Linguistics.
Karttunen, Lauri, 1983. KIMMO: a general morphological processor. Texas Linguistic Forum 22:163-186.
Knuth, Donald E., 1973. The Art of Computer Programming. Vol. 3 / Sorting and Searching. Addison-Wesley, Reading, MA.
Koskenniemi, Kimmo, 1983. A Two-Level Model for Morphological Analysis. In Proc. 8th Int'l Joint Conf. on AI (IJCAI'83), Karlsruhe.
Koskenniemi, Kimmo, 1984. A General Computational Model for Word-Form Recognition and Production. In Proc. 10th Int'l Conf. on Comp. Ling. (COLING'84), pp. 178-181, Stanford University.
Towards History-based Grammars: Using Richer Models for Probabilistic Parsing*

Ezra Black  Fred Jelinek  John Lafferty  David M. Magerman  Robert Mercer  Salim Roukos
IBM T. J. Watson Research Center

Abstract

We describe a generative probabilistic model of natural language, which we call HBG, that takes advantage of detailed linguistic information to resolve ambiguity. HBG incorporates lexical, syntactic, semantic, and structural information from the parse tree into the disambiguation process in a novel way. We use a corpus of bracketed sentences, called a Treebank, in combination with decision tree building to tease out the relevant aspects of a parse tree that will determine the correct parse of a sentence. This stands in contrast to the usual approach of further grammar tailoring via the usual linguistic introspection in the hope of generating the correct parse. In head-to-head tests against one of the best existing robust probabilistic parsing models, which we call P-CFG, the HBG model significantly outperforms P-CFG, increasing the parsing accuracy rate from 60% to 75%, a 37% reduction in error.

Introduction

Almost any natural language sentence is ambiguous in structure, reference, or nuance of meaning. Humans overcome these apparent ambiguities by examining the context of the sentence. But what exactly is context? Frequently, the correct interpretation is apparent from the words or constituents immediately surrounding the phrase in question. This observation begs the following question: How much information about the context of a sentence or phrase is necessary and sufficient to determine its meaning? This question is at the crux of the debate among computational linguists about the application and implementation of statistical methods in natural language understanding.

Previous work on disambiguation and probabilistic parsing has offered partial answers to this question.
*Thanks to Philip Resnik and Stanley Chen for their valued input.

Hidden Markov models of words and their tags, introduced in (5) and (5) and popularized in the natural language community by Church (5), demonstrate the power of short-term n-gram statistics to deal with lexical ambiguity. Hindle and Rooth (5) use a statistical measure of lexical associations to resolve structural ambiguities. Brent (5) acquires likely verb subcategorization patterns using the frequencies of verb-object-preposition triples. Magerman and Marcus (5) propose a model of context that combines the n-gram model with information from dominating constituents. All of these aspects of context are necessary for disambiguation, yet none is sufficient.

We propose a probabilistic model of context for disambiguation in parsing, HBG, which incorporates the intuitions of these previous works into one unified framework. Let p(T, w) be the joint probability of generating the word string w and the parse tree T. Given w, our parser chooses as its parse tree that tree T* for which

T* = arg max_{T ∈ P(w)} p(T, w)    (1)

where P(w) is the set of all parses produced by the grammar for the sentence w. Many aspects of the input sentence that might be relevant to the decision-making process participate in the probabilistic model, providing a very rich if not the richest model of context ever attempted in a probabilistic parsing model.

In this paper, we will motivate and define the HBG model, describe the task domain, give an overview of the grammar, describe the proposed HBG model, and present the results of experiments comparing HBG with an existing state-of-the-art model.

Motivation for History-based Grammars

One goal of a parser is to produce a grammatical interpretation of a sentence which represents the syntactic and semantic intent of the sentence.
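Equation (1) above amounts to a simple maximization over the grammar's candidate parses. As a minimal sketch (the function name, variable names, and candidate parses are our own, not from the paper):

```python
# Hypothetical sketch of equation (1): choose the parse tree T* that
# maximizes the joint probability p(T, w) over all parses the grammar
# proposes for the sentence w. Candidates are (tree, probability) pairs.

def select_parse(candidates):
    """Return the tree with the highest joint probability, or None."""
    if not candidates:
        return None
    best_tree, _best_p = max(candidates, key=lambda pair: pair[1])
    return best_tree

# Two invented candidate parses for an ambiguous sentence.
candidates = [
    ("(S (NP he) (VP saw (NP the man) (PP with the telescope)))", 3e-9),
    ("(S (NP he) (VP saw (NP the man (PP with the telescope))))", 7e-9),
]
```

The whole modeling question in the paper is how the probabilities attached to the candidates are computed; the maximization itself is this simple.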
To achieve this goal, the parser must have a mechanism for estimating the coherence of an interpretation, both in isolation and in context. Probabilistic language models provide such a mechanism. A probabilistic language model attempts to estimate the probability of a sequence of sentences and their respective interpretations (parse trees) occurring in the language, P(S_1 T_1 S_2 T_2 ... S_n T_n).

The difficulty in applying probabilistic models to natural language is deciding what aspects of the sentence and the discourse are relevant to the model. Most previous probabilistic models of parsing assume the probabilities of sentences in a discourse are independent of other sentences. In fact, previous works have made much stronger independence assumptions. The P-CFG model considers the probability of each constituent rule independent of all other constituents in the sentence. The Pearl (5) model includes a slightly richer model of context, allowing the probability of a constituent rule to depend upon the immediate parent of the rule and a part-of-speech trigram from the input sentence. But none of these models come close to incorporating enough context to disambiguate many cases of ambiguity.

A significant reason researchers have limited the contextual information used by their models is because of the difficulty in estimating very rich probabilistic models of context. In this work, we present a model, the history-based grammar model, which incorporates a very rich model of context, and we describe a technique for estimating the parameters for this model using decision trees. The history-based grammar model provides a mechanism for taking advantage of contextual information from anywhere in the discourse history. Using decision tree technology, any question which can be asked of the history (i.e., Is the subject of the previous sentence animate? Was the previous sentence a question? etc.) can be incorporated into the language model.
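The idea of turning arbitrary yes/no questions about the history into decision-tree features can be illustrated with a small sketch (the particular questions, dictionary keys, and names here are invented for illustration):

```python
# Illustrative only: decision-tree features as questions asked of the
# history. A history is a dict of facts; the equivalence class E[t] is
# the tuple of answers to a fixed list of questions.

QUESTIONS = [
    ("prev-subject-animate", lambda h: bool(h.get("prev_subject_animate"))),
    ("prev-was-question", lambda h: h.get("prev_sentence_type") == "question"),
]

def equivalence_class(history):
    """E[t]: the tuple of answers a decision tree can condition on."""
    return tuple(ask(history) for _name, ask in QUESTIONS)

h1 = {"prev_subject_animate": True, "prev_sentence_type": "question"}
h2 = {"prev_subject_animate": True, "prev_sentence_type": "declarative"}
```

Two histories that answer every question the same way fall into the same equivalence class and therefore receive the same conditional distribution.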
The History-based Grammar Model

The history-based grammar model defines context of a parse tree in terms of the leftmost derivation of the tree. Following (5), we show in Figure 1 a context-free grammar (CFG) for a^n b^n and the parse tree for the sentence aabb. The leftmost derivation of the tree T in Figure 1 is:

S →(r1) ASB →(r2) aSB →(r3) aABB →(r4) aaBB →(r5) aabB →(r6) aabb    (2)

where the rule used to expand the i-th node of the tree is denoted by r_i.

S → ASB | AB
A → a
B → b

Figure 1: Grammar and parse tree for aabb.

Note that we have indexed the non-terminal (NT) nodes of the tree with this leftmost order. We denote by t_i^- the sentential form obtained just before we expand node i. Hence, t_3^- corresponds to the sentential form aSB or equivalently to the string r_1 r_2. In a leftmost derivation we produce the words in left-to-right order. Using the one-to-one correspondence between leftmost derivations and parse trees, we can rewrite the joint probability in (1) as:

p(T, w) = ∏_{i=1}^{m} p(r_i | t_i^-)

In a probabilistic context-free grammar (P-CFG), the probability of an expansion at node i depends only on the identity of the non-terminal N_i, i.e.,

p(r_i | t_i^-) = p(r_i)

Thus

p(T, w) = ∏_{i=1}^{m} p(r_i)

So in P-CFG the derivation order does not affect the probabilistic model.¹ A less crude approximation than the usual P-CFG is to use a decision tree to determine which aspects of the leftmost derivation have a bearing on the probability of how node i will be expanded. In other words, the probability distribution p(r_i | t_i^-) will be modeled by p(r_i | E[t_i^-]), where E[t] is the equivalence class of the history t as determined by the decision tree. This allows our probabilistic model to use any information anywhere in the partial derivation tree to determine the probability of different expansions of the i-th non-terminal.

1. Note the abuse of notation since we denote by p(r_i) the conditional probability of rewriting the non-terminal N_i.
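Under the P-CFG assumption just described, the probability of a tree is simply the product of its rule probabilities taken in leftmost-derivation order. A minimal sketch using the a^n b^n grammar above (the rule probabilities are invented):

```python
from functools import reduce

# Minimal P-CFG sketch over the a^n b^n grammar; probabilities invented.
# Under P-CFG the probability of a tree is the product, in leftmost-
# derivation order, of the probabilities of the rules it uses.
RULE_PROB = {
    ("S", ("A", "S", "B")): 0.4,
    ("S", ("A", "B")): 0.6,
    ("A", ("a",)): 1.0,
    ("B", ("b",)): 1.0,
}

def tree_probability(derivation):
    """derivation: the rules (lhs, rhs) of a tree in leftmost order."""
    return reduce(lambda p, rule: p * RULE_PROB[rule], derivation, 1.0)

# Leftmost derivation of aabb: S => ASB => aSB => aABB => aaBB => aabB => aabb
aabb_derivation = [
    ("S", ("A", "S", "B")), ("A", ("a",)), ("S", ("A", "B")),
    ("A", ("a",)), ("B", ("b",)), ("B", ("b",)),
]
```

Because each factor depends only on the rule's left-hand non-terminal, reordering the derivation would not change the product, which is exactly the context-blindness HBG sets out to repair.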
The use of decision trees and a large bracketed corpus may shift some of the burden of identifying the intended parse from the grammarian to the statistical estimation methods. We refer to probabilistic methods based on the derivation as History-based Grammars (HBG). In this paper, we explored a restricted implementation of this model in which only the path from the current node to the root of the derivation, along with the index of a branch (index of the child of a parent), are examined in the decision tree model to build equivalence classes of histories. Other parts of the subtree are not examined in the implementation of HBG.

[N It_PPH1 N] [V indicates_VVZ [Fn [Fn& whether_CSW [N a_AT1 call_NN1 N] [V completed_VVD successfully_RR V]Fn&] or_CC [Fn+ if_CSW [N some_DD error_NN1 N]@ [V was_VBDZ detected_VVN V] @[Fr that_CST [V caused_VVD [N the_AT call_NN1 N] [Ti to_TO fail_VVI Ti]V]Fr]Fn+] Fn]V]._.

Figure 2: Sample bracketed sentence from Lancaster Treebank.

Task Domain

We have chosen computer manuals as a task domain. We picked the most frequent 3000 words in a corpus of 600,000 words from 10 manuals as our vocabulary. We then extracted a few million words of sentences that are completely covered by this vocabulary from 40,000,000 words of computer manuals. A randomly chosen sentence from a sample of 5000 sentences from this corpus is:

396. It indicates whether a call completed successfully or if some error was detected that caused the call to fail.

To define what we mean by a correct parse, we use a corpus of manually bracketed sentences at the University of Lancaster called the Treebank. The Treebank uses 17 non-terminal labels and 240 tags. The bracketing of the above sentence is shown in Figure 2. A parse produced by the grammar is judged to be correct if it agrees with the Treebank parse structurally and the NT labels agree.
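The correctness criterion above (structural agreement plus NT-label agreement) can be sketched by comparing labeled constituent spans; this formulation and all the names are ours, not the paper's:

```python
# Our formulation of the correctness criterion: a grammar parse counts
# as correct if its labeled constituent spans match the Treebank parse.
# A parse is represented as a set of (label, start, end) spans.

def consistent(proposed, treebank):
    """Structural agreement and NT-label agreement."""
    return proposed == treebank

def unlabeled_consistent(proposed, treebank):
    """Agreement on the bracketing alone, ignoring the labels."""
    spans = lambda parse: {(start, end) for _label, start, end in parse}
    return spans(proposed) == spans(treebank)

gold = {("NP", 0, 2), ("VP", 2, 4), ("S", 0, 4)}
relabeled = {("N", 0, 2), ("VP", 2, 4), ("S", 0, 4)}
```

The two functions correspond to the labeled and unlabeled variants of the rates reported later in the paper.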
The grammar has a significantly richer NT label set (more than 10000) than the Treebank, but we have defined an equivalence mapping between the grammar NT labels and the Treebank NT labels. In this paper, we do not include the tags in the measure of a correct parse. We have used about 25,000 sentences to help the grammarian develop the grammar with the goal that the correct (as defined above) parse is among the proposed (by the grammar) parses for a sentence. Our most common test set consists of 1600 sentences that are never seen by the grammarian.

The Grammar

The grammar used in this experiment is a broad-coverage, feature-based unification grammar. The grammar is context-free but uses unification to express rule templates for the context-free productions. For example, the rule template:

(3)    : n → unspec : n

corresponds to three CFG productions, where the second feature : n is either s, p, or unspec. This rule template may elicit up to 7 non-terminals. The grammar has 21 features whose range of values may be from 2 to about 100 with a median of 8. There are 672 rule templates, of which 400 are actually exercised when we parse a corpus of 15,000 sentences. The number of productions that are realized in this training corpus is several hundred thousand.

P-CFG

While a NT in the above grammar is a feature vector, we group several NTs into one class we call a mnemonic, represented by the one NT that is the least specified in that class. For example, the mnemonic VBOPASTSG* corresponds to all NTs that unify with:

(4)    [ pos = v, v-type = be, tense-aspect = past ]

We use these mnemonics to label a parse tree and we also use them to estimate a P-CFG, where the probability of rewriting a NT is given by the probability of rewriting the mnemonic. So from a training set we induce a CFG from the actual mnemonic productions that are elicited in parsing the training corpus. Using the Inside-Outside algorithm, we can estimate P-CFG from a large corpus of text.
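The mnemonic grouping described above, in which feature-vector NTs that agree on a designated subset of features share one class, can be sketched as a simple projection (the feature names follow the example in (4); the function name and extra features are invented):

```python
# Illustrative sketch: feature-vector NTs collapse into one mnemonic
# class when they agree on a designated subset of features, as in (4).
# Everything not shown in (4) ("number" below) is an invented feature.

MNEMONIC_FEATURES = ("pos", "v-type", "tense-aspect")

def mnemonic(nt):
    """Project an NT (a feature dict) onto the class-defining features."""
    return tuple((f, nt[f]) for f in MNEMONIC_FEATURES if f in nt)

nt_sg = {"pos": "v", "v-type": "be", "tense-aspect": "past", "number": "sg"}
nt_pl = {"pos": "v", "v-type": "be", "tense-aspect": "past", "number": "pl"}
```

Two NTs that differ only in features outside the chosen subset map to the same mnemonic, so their rewrite statistics are pooled when the P-CFG is estimated.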
But since we also have a large corpus of bracketed sentences, we can adapt the Inside-Outside algorithm to reestimate the probability parameters subject to the constraint that only parses consistent with the Treebank (where consistency is as defined earlier) contribute to the reestimation. From a training run of 15,000 sentences we observed 87,704 mnemonic productions, with 23,341 NT mnemonics of which 10,302 were lexical. Running on a test set of 760 sentences, 32% of the rule templates were used, 7% of the lexical mnemonics, 10% of the constituent mnemonics, and 5% of the mnemonic productions actually contributed to parses of test sentences.

Grammar and Model Performance Metrics

To evaluate the performance of a grammar and an accompanying model, we use two types of measurements:

• the any-consistent rate, defined as the percentage of sentences for which the correct parse is proposed among the many parses that the grammar provides for a sentence. We also measure the parse base, which is defined as the geometric mean of the number of proposed parses on a per-word basis, to quantify the ambiguity of the grammar.

• the Viterbi rate, defined as the percentage of sentences for which the most likely parse is consistent.

The any-consistent rate is a measure of the grammar's coverage of linguistic phenomena. The Viterbi rate evaluates the grammar's coverage with the statistical model imposed on the grammar. The goal of probabilistic modelling is to produce a Viterbi rate close to the any-consistent rate. The any-consistent rate is 90% when we require the structure and the labels to agree and 96% when unlabeled bracketing is required. These results are obtained on 760 sentences from 7 to 17 words long from test material that has never been seen by the grammarian. The parse base is 1.35 parses/word. This translates to about 23 parses for a 12-word sentence. The unlabeled Viterbi rate stands at 64% and the labeled Viterbi rate is 60%.
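The measurements above can be sketched as follows (function names are ours; the parse base is the geometric mean of the number of proposed parses on a per-word basis):

```python
# Sketch of the evaluation quantities above. The parse base is
# (n_1 * n_2 * ... * n_k) ** (1 / total_words) over the test sentences.

def any_consistent_rate(sentences):
    """sentences: (proposed_parses, correct_parse) pairs."""
    hits = sum(1 for parses, gold in sentences if gold in parses)
    return hits / len(sentences)

def parse_base(parse_counts, word_counts):
    """Geometric mean of proposed parses on a per-word basis."""
    product = 1.0
    for n in parse_counts:
        product *= n
    return product ** (1.0 / sum(word_counts))
```

The Viterbi rate is the same computation as any_consistent_rate, but with each sentence's proposed-parse list replaced by the single most likely parse under the model.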
While we believe that the above Viterbi rate is close to if not the state-of-the-art performance, there is room for improvement by using a more refined statistical model to achieve the labeled any-consistent rate of 90% with this grammar. There is a significant gap between the labeled Viterbi and any-consistent rates: 30 percentage points. Instead of the usual approach, where a grammarian tries to fine tune the grammar in the hope of improving the Viterbi rate, we use the combination of a large Treebank and the resulting derivation histories with a decision tree building algorithm to extract statistical parameters that would improve the Viterbi rate. The grammarian's task remains that of improving the any-consistent rate.

The history-based grammar model is distinguished from the context-free grammar model in that each constituent structure depends not only on the input string, but also the entire history up to that point in the sentence. In HBGs, history is interpreted as any element of the output structure, or the parse tree, which has already been determined, including previous words, non-terminal categories, constituent structure, and any other linguistic information which is generated as part of the parse structure.

The HBG Model

Unlike P-CFG, which assigns a probability to a mnemonic production, the HBG model assigns a probability to a rule template. Because of this, the HBG formulation allows one to handle any grammar formalism that has a derivation process. For the HBG model, we have defined about 50 syntactic categories, referred to as Syn, and about 50 semantic categories, referred to as Sem. Each NT (and therefore mnemonic) of the grammar has been assigned a syntactic (Syn) and a semantic (Sem) category. We also associate with a non-terminal a primary lexical head, denoted by H1, and a secondary lexical head, denoted by H2.²
When a rule is applied to a non-terminal, it indicates which child will generate the lexical primary head and which child will generate the secondary lexical head. The proposed generative model associates for each constituent in the parse tree the probability:

p(Syn, Sem, R, H1, H2 | Synp, Semp, Rp, Ipc, H1p, H2p)

In HBG, we predict the syntactic and semantic labels of a constituent, its rewrite rule, and its two lexical heads using the labels of the parent constituent, the parent's lexical heads, the parent's rule Rp that led to the constituent, and the constituent's index Ipc as a child of Rp. As we discuss in a later section, we have also used with success more information about the derivation tree than the immediate parent in conditioning the probability of expanding a constituent.

2. The primary lexical head H1 corresponds (roughly) to the linguistic notion of a lexical head. The secondary lexical head H2 has no linguistic parallel. It merely represents a word in the constituent besides the head which contains predictive information about the constituent.

We have approximated the above probability by the following five factors:

1. p(Syn | Rp, Ipc, H1p, H2p, Synp, Semp)
2. p(Sem | Syn, Rp, Ipc, H1p, H2p, Synp, Semp)
3. p(R | Syn, Sem, Rp, Ipc, H1p, H2p, Synp, Semp)
4. p(H1 | R, Syn, Sem, Ipc, H1p, H2p)
5. p(H2 | H1, R, Syn, Sem, Ipc, Synp)

While a different order for these predictions is possible, we only experimented with this one.

Parameter Estimation

We only have built a decision tree for the rule probability component (3) of the model. For the moment, we are using n-gram models with the usual deleted interpolation for smoothing for the other four components of the model. We have assigned bit strings to the syntactic and semantic categories and to the rules manually. Our intention is that bit strings differing in the least significant bit positions correspond to categories of non-terminals or rules that are similar.
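The five-factor decomposition above can be sketched as a product of conditional estimates. In the paper these factors are n-gram models (with deleted interpolation) and a decision tree; in this sketch each factor is stubbed as a lookup table with a uniform backoff probability, and every name is illustrative:

```python
# Illustrative stub of the five-factor approximation. "models" maps a
# (factor, prediction, conditioning...) key to an estimated probability;
# missing entries fall back to an invented uniform BACKOFF value.

BACKOFF = 0.01

def hbg_constituent_prob(models, syn, sem, rule, h1, h2, parent_ctx):
    """Multiply the five conditional factors for one constituent."""
    factors = [
        models.get(("Syn", syn, parent_ctx), BACKOFF),
        models.get(("Sem", sem, syn, parent_ctx), BACKOFF),
        models.get(("R", rule, syn, sem, parent_ctx), BACKOFF),
        models.get(("H1", h1, rule, syn, sem, parent_ctx), BACKOFF),
        models.get(("H2", h2, h1, rule, syn, sem, parent_ctx), BACKOFF),
    ]
    p = 1.0
    for f in factors:
        p *= f
    return p
```

The point of the factorization is only the chain-rule ordering: each prediction conditions on the parent context plus everything predicted before it.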
We also have assigned bitstrings for the words in the vocabulary (the lexical heads) using automatic clustering algorithms, using the bigram mutual information clustering algorithm (see (5)). Given the bitstring of a history, we then designed a decision tree for modeling the probability that a rule will be used for rewriting a node in the parse tree.

Since the grammar produces parses which may be more detailed than the Treebank, the decision tree was built using a training set constructed in the following manner. Using the grammar with the P-CFG model, we determined the most likely parse that is consistent with the Treebank and considered the resulting sentence-tree pair as an event. Note that the grammar parse will also provide the lexical head structure of the parse. Then, we extracted, using leftmost derivation order, tuples of a history (truncated to the definition of a history in the HBG model) and the corresponding rule used in expanding a node. Using the resulting data set we built a decision tree by classifying histories to locally minimize the entropy of the rule template. With a training set of about 9000 sentence-tree pairs, we had about 240,000 tuples and we grew a tree with about 40,000 nodes. This required 18 hours on a 25 MIPS RISC-based machine and the resulting decision tree was nearly 100 megabytes.

Immediate vs. Functional Parents

The HBG model employs two types of parents, the immediate parent and the functional parent.

Figure 3: Sample representation of "with a list" in the HBG model.

The immediate parent is the constituent that immediately dominates the constituent being predicted.
If the immediate parent of a constituent has a different syntactic type from that of the constituent, then the immediate parent is also the functional parent; otherwise, the functional parent is the functional parent of the immediate parent. The distinction between functional parents and immediate parents arises primarily to cope with unit productions. When unit productions of the form XP2 → XP1 occur, the immediate parent of XP1 is XP2. But, in general, the constituent XP2 does not contain enough useful information for ambiguity resolution. In particular, when considering only immediate parents, unit rules such as NP2 → NP1 prevent the probabilistic model from allowing the NP1 constituent to interact with the VP rule which is the functional parent of NP1. When the two parents are identical, as often happens, the duplicate information will be ignored. However, when they differ, the decision tree will select that parental context which best resolves ambiguities.

Figure 3 shows an example of the representation of a history in HBG for the prepositional phrase "with a list." In this example, the immediate parent of the N1 node is the NBAR4 node and the functional parent of N1 is the PP1 node.

Results

We compared the performance of HBG to the "broad-coverage" probabilistic context-free grammar, P-CFG. The any-consistent rate of the grammar is 90% on test sentences of 7 to 17 words. The Viterbi rate of P-CFG is 60% on the same test corpus of 760 sentences used in our experiments. On the same test sentences, the HBG model has a Viterbi rate of 75%. This is a reduction of 37% in error rate.

                  Accuracy
P-CFG             59.8%
HBG               74.6%
Error Reduction   36.8%

Figure 4: Parsing accuracy: P-CFG vs. HBG

In developing HBG, we experimented with similar models of varying complexity. One discovery made during this experimentation is that models which incorporated more context than HBG performed slightly worse than HBG.
This suggests that the current training corpus may not contain enough sentences to estimate richer models. Based on the results of these experiments, it appears likely that significantly increasing the size of the training corpus should result in a corresponding improvement in the accuracy of HBG and richer HBG-like models.

To check the value of the above detailed history, we tried the simpler model:

1. p(H1 | H1p, H2p, Rp, Ipc)
2. p(H2 | H1, H1p, H2p, Rp, Ipc)
3. p(Syn | H1, Ipc)
4. p(Sem | Syn, H1, Ipc)
5. p(R | Syn, Sem, H1, H2)

This model corresponds to a P-CFG with NTs that are the crude syntax and semantic categories annotated with the lexical heads. The Viterbi rate in this case was 66%, a small improvement over the P-CFG model, indicating the value of using more context from the derivation tree.

Conclusions

The success of the HBG model encourages future development of general history-based grammars as a more promising approach than the usual P-CFG. More experimentation is needed with a larger Treebank than was used in this study and with different aspects of the derivation history. In addition, this paper illustrates a new approach to grammar development where the parsing problem is divided (and hopefully conquered) into two subproblems: one of grammar coverage for the grammarian to address, and the other of statistical modeling to increase the probability of picking the correct parse of a sentence.

REFERENCES

Baker, J. K., 1975. Stochastic Modeling for Automatic Speech Understanding. In Speech Recognition, edited by Raj Reddy, Academic Press, pp. 521-542.

Brent, M. R. 1991. Automatic Acquisition of Subcategorization Frames from Untagged Free-text Corpora. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics. Berkeley, California.

Brill, E., Magerman, D., Marcus, M., and Santorini, B. 1990. Deducing Linguistic Structure from the Statistics of Large Corpora.
In Proceedings of the June 1990 DARPA Speech and Natural Language Workshop. Hidden Valley, Pennsylvania.

Brown, P. F., Della Pietra, V. J., deSouza, P. V., Lai, J. C., and Mercer, R. L. Class-based n-gram Models of Natural Language. In Proceedings of the IBM Natural Language ITL, March, 1990. Paris, France.

Church, K. 1988. A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text. In Proceedings of the Second Conference on Applied Natural Language Processing. Austin, Texas.

Gale, W. A. and Church, K. 1990. Poor Estimates of Context are Worse than None. In Proceedings of the June 1990 DARPA Speech and Natural Language Workshop. Hidden Valley, Pennsylvania.

Harrison, M. A. 1978. Introduction to Formal Language Theory. Addison-Wesley Publishing Company.

Hindle, D. and Rooth, M. 1990. Structural Ambiguity and Lexical Relations. In Proceedings of the June 1990 DARPA Speech and Natural Language Workshop. Hidden Valley, Pennsylvania.

Jelinek, F. 1985. Self-organizing Language Modeling for Speech Recognition. IBM Report.

Magerman, D. M. and Marcus, M. P. 1991. Pearl: A Probabilistic Chart Parser. In Proceedings of the February 1991 DARPA Speech and Natural Language Workshop. Asilomar, California.

Derouault, A., and Merialdo, B., 1985. Probabilistic Grammar for Phonetic to French Transcription. ICASSP 85 Proceedings. Tampa, Florida, pp. 1577-1580.

Sharman, R. A., Jelinek, F., and Mercer, R. 1990. Generating a Grammar for Statistical Training. In Proceedings of the June 1990 DARPA Speech and Natural Language Workshop. Hidden Valley, Pennsylvania.
USING BRACKETED PARSES TO EVALUATE A GRAMMAR CHECKING APPLICATION

Richard H. Wojcik, Philip Harrison, John Bremer
Boeing Computer Services, Research and Technology Division
P.O. Box 24346, MS 7L--43, Seattle, WA 98124-2964
Internet: [email protected], [email protected], [email protected]

Abstract

We describe a method for evaluating a grammar checking application with hand-bracketed parses. A randomly-selected set of sentences was submitted to a grammar checker in both bracketed and unbracketed formats. A comparison of the resulting error reports illuminates the relationship between the underlying performance of the parser-grammar system and the error critiques presented to the user.

INTRODUCTION

The recent development of broad-coverage natural language processing systems has stimulated work on the evaluation of the syntactic component of such systems, for purposes of basic evaluation and improvement of system performance. Methods utilizing hand-bracketed corpora (such as the University of Pennsylvania Treebank) as a basis for evaluation metrics have been discussed in Black et al. (1991), Harrison et al. (1991), and Black et al. (1992). Three metrics discussed in those works were the Crossing Parenthesis Score (a count of the number of phrases in the machine-produced parse which cross with one or more phrases in the hand parse), Recall (the percentage of phrases in the hand parse that are also in the machine parse), and Precision (the percentage of phrases in the machine parse that are in the hand parse).

We have developed a methodology for using hand-bracketed parses to examine both the internal and external performance of a grammar checker. The internal performance refers to the behavior of the underlying system--i.e. the tokenizer, parser, lexicon, and grammar. The external performance refers to the error critiques generated by the system.¹
Our evaluation methodology relies on three separate error reports generated from a corpus of randomly selected sentences: 1) a report based on unbracketed sentences, 2) a report based on optimally bracketed sentences with our current system, and 3) a report based on the optimal bracketings with the system modified to insure the same coverage as the unbracketed corpus. The bracketed report from the unmodified system tells us something about the coverage of our underlying system in its current state. The bracketed report from the modified system tells us something about the external accuracy of the error reports presented to the user.

Our underlying system uses a bottom-up, full-ambiguity parser. Our error detection method relies on including grammar rules for parsing errorful sentences, with error critiques being generated from the occurrence of an error rule in the parse. Error critiques are based on just one of all the possible parse trees that the system can find for a given sentence. Our major concern about the underlying system is whether the system has a correct parse for the sentence in question. We are also concerned about the accuracy of the selected parse, but our current methodology does not directly address that issue, because correct error reports do not depend on having precisely the correct parse. Consequently, our evaluation of the underlying grammatical coverage is based on a simple metric, namely the parser success rate for satisfying sentence bracketings (i.e. correct parses). Either the parser can produce the optimal parse or it can't. We have a more complex approach to evaluating the performance of the system's ability to detect errors. Here, we need to look at both the overgeneration and undergeneration of individual error critiques.

1. We use the term critique to represent an instance of an error detected. Each sentence may have zero or more critiques reported for it.
What is the rate of spurious critiques, or critiques incorrectly reported, and what is the rate of missed critiques, or critiques not reported? Therefore we define two additional metrics, which illuminate the spurious and missed critique rates, respectively:

Precision: the percentage of correct critiques from the unbracketed corpus.

Recall: the percentage of critiques generated from an ideal bracketed corpus that are also present among those in the unbracketed corpus.

Precision tells us what percentage of reported critiques are reliable, and Recall tells us what percentage of correct critiques have been reported (modulo the coverage).

OVERVIEW OF THE APPLICATION

The Boeing Simplified English Checker (a.k.a. the BSEC, cf. Hoard, Wojcik, and Holzhauser 1992) is a type of grammar and style checker, but it is more accurately described as a 'controlled English checker' (cf. Adriaens 1992). That is, it reports to users on where a text fails to comply with the aerospace standard for maintenance documentation known as Simplified English (AECMA 1989). If the system cannot produce a parse, it prints the message "Can't do SE check." At present, the Checker achieves parses for about 90 percent of the input strings submitted to it.² The accuracy of the error critiques over that 90 percent varies, but our subjective experience suggests that most sentence reports contain critiques that are useful in that they flag some bona fide failure to comply with Simplified English.

The NLP methodology underlying the BSEC does not rely on the type of pattern matching techniques used to flag errors in more conventional checkers. It cannot afford simply to ignore sentences that are too complex to handle. As a controlled sublanguage, Simplified English requires

2. The 90 percent figure is based on random samplings taken from maintenance documents submitted to the BSEC over the past two years.
This figure has remained relatively consistent for maintenance documentation, although it varies with other text domains.

that every word conform to specified usage. That is, each word must be marked as 'allowed' in the lexicon, or it will trigger an error critique. Since the standard generally requires that words be used in only one part of speech, the BSEC produces a parse tree on which to judge vocabulary usage as well as other types of grammatical violations.³
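The critique Precision and Recall metrics defined earlier reduce to simple set comparisons over reported critiques; this framing and all the names are ours:

```python
# Our set-based framing of the two critique metrics. A critique is
# identified here by an invented "sentence:TYPE" key.

def critique_precision(unbracketed_critiques, correct_critiques):
    """Percentage of reported critiques that are correct."""
    reported = set(unbracketed_critiques)
    return len(reported & set(correct_critiques)) / len(reported)

def critique_recall(unbracketed_critiques, bracketed_critiques):
    """Percentage of ideal (bracketed-run) critiques also reported."""
    ideal = set(bracketed_critiques)
    return len(ideal & set(unbracketed_critiques)) / len(ideal)
```

High precision with low recall would mean the checker's reports are reliable but incomplete; the reverse would mean noisy but thorough reporting.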
Would that make our system more ac- curate by reducing incorrect critiques, or would it degrade performance by reducing the overall number of correct critiques reported? We knew that the system was capable of producing good er- ror reports from relatively bad parses, but how many of those error reports even had a reasonably correct parse available to them? 3. The Simplified English (SE) standard allows some exceptions to the 'single part of speech' rule in its core vocabulary of about a thousand words. The BSEC currently does little to guarantee that writers have used a word in the 'Simplified Eng- lish' meaning, only that they have selected the cor- rect part of speech. 39 OVERVIEW OF SIMPLIFIED ENGLISH The SE standard consists of a set of grammar, style, format, and vocabulary restrictions, not all of which lend themselves to computational analy- sis. A computer program cannot yet support those aspects of the standard that require deep under- standing, e.g. the stricture against using a word in any sense other than the approved one, or the re- quirement to begin paragraphs with the topic sen- tence. What a program can do is count the number of words in sentences and compound nouns, detect violations of parts of speech, flag the omission of required words (such as articles) orthe presence of banned words (such as auxiliary have and be, etc.). The overall function of such a program is to pres- ent the writer with an independent check on a fair range of Simplified English requirements. For further details on Simplified English and the BSEC, see Hoard et al. (1992) and Wojcik et al. (1990). Although the BSEC detects a wide variety of Simplified English and general writing violations, only the error categories in Table 1 are relevant to this study: Except for illegal comma usage, which is rather uncommon, the above errors are among the most frequent types of errors detected by the BSEC. 
To date, The Boeing Company is the only aerospace manufacturer to produce a program that detects such a wide range of Simplified English violations. In the past, Boeing and other companies have created checkers that report on all words that are potential violations of SE, but such 'word checkers' have no way of avoiding critiques for word usage that is correct. For example, if the word test is used legally as a noun, the word-checking program will still flag the word as a potential verb-usage error. The BSEC is the only Simplified English checker in existence that manages to avoid this.4 As Richardson and Braden-Harder (p. 88) pointed out: "We have found...that professionals seem much more forgiving of wrong critiques, as long as the time required to disregard them is minimal." In fact, the chief complaint of Boeing technical writers who use the BSEC is when it produces too many nuisance errors. So word-checking programs, while inexpensive and easy to produce, do not address the needs of Simplified English writers.

4. Oracle's recently released CoAuthor product, which is designed to be used with the Interleaf word processor, has the potential to produce grammatical analyses of sentences, but it only works as a Simplified English word checker at present.

POS                A known word is used in incorrect part of speech.
NON-SE             An unapproved word is used.
MISSING ARTICLE    Articles must be used wherever possible in SE.
PASSIVE            Passives are usually illegal.
TWO-COMMAND        Commands may not be conjoined when they represent sequential activities. Simultaneous commands may be conjoined.
ING                Progressive participles may not be used in SE.
COMMA ERROR        A violation of comma usage.
WARNING/CAUTION    Warnings and cautions must appear in a special format. Usually, an error arises when a declarative sentence has been used where an imperative one is required.

Table 1.
Error Types Detected By The BSEC

THE PARSER UNDERLYING THE BSEC

The parser underlying the Checker (cf. Harrison 1988) is loosely based on GPSG. The grammar contains over 350 rules, and it has been implemented in Lucid Common Lisp running on Sun workstations.5

5. The production version of the BSEC is actually a C program that emulates the lisp development version. The C version accepts the same rules as the lisp version, but there are some minor differences between it and the lisp version. This paper is based solely on the lisp version of the BSEC.

Our approach to error critiquing differs from that used by Critique (Jensen, Heidorn, Miller, and Ravin 1993). Critique uses a two-pass approach that assigns an initial canonical parse in so-called 'Chomsky-normal' form. The second pass produces an altered tree that is annotated for style violations. No-parses cause the system to attempt a 'fitted parse', as a means of producing some information on more serious grammar violations. As mentioned earlier, the BSEC generates parse forests that represent all possible ambiguities vis-a-vis the grammar. There is no 'canonical' parse, nor have we yet implemented a 'fitted parse' strategy to reclaim information available in no-parses.6 Our problem has been the classic one of selecting the best parse from a number of alternatives. Before the SE Checker was implemented, Boeing's parser had been designed to arrive at a preferred or 'fronted' parse tree by weighting grammatical rules and word entries according to whether we deemed them more or less desirable. This strategy is quite similar to the one described in Heidorn 1993 and other works that he cites. In the maintenance manual domain, we simply observed the behavior of the BSEC over many sentences and adjusted the weights of rules and words as needed. To get a better idea of how our approach to fronting works, consider the ambiguity in the following two sentences:

(1) The door was closed.
(2) The damage was repaired.

In the Simplified English domain, it is more likely that (2) will be an example of passive usage, thus calling for an error report. To parse (1) as a passive would likely be incorrect in most cases. We therefore assigned the adjective reading of closed a low weight in order to prefer an adjectival over a verb reading. Sentence (2) reports a likely event rather than a state, and we therefore weight repaired to be preferred as a passive verb. Although this method for selecting fronted parse trees sometimes leads to false error critiques, it works well for most cases in our domain.

BRACKETED INPUT STRINGS

In order to coerce our system into accepting only the desired parse tree, we modified it to accept only parses that satisfied bracketed forms.6

6. The BSEC has the capability to report on potential word usage violations in no-parses, but the end-users seem to prefer not to use it. It is often difficult to say whether information will be viewed as help or as clutter in error reports.

For example, the following sentence produces five separate parses because our grammar attaches prepositional phrases to preceding noun phrases and verb phrases in several ways. The structural ambiguity corresponds to five different interpretations, depending on whether the boy uses a telescope, the hill has a telescope on it, the girl on the hill has a telescope, and so on.

(3) The boy saw the girl on the hill with a telescope.

We created a lisp operation called spe, for "string, parse, and evaluate," which takes an input string and a template. It returns all possible parse trees that fit the template. Here is an example of an spe form for (3):

(SPE "The boy saw the girl on the hill with a telescope."
     (S (NP the boy)
        (VP (V saw)
            (NP (NP the girl)
                (PP on (NP (NP the hill)
                           (PP with a telescope)))))))

The above bracketing restricts the parses to just the parse tree that corresponds to the sense in which the boy saw the girl who is identified as being on the hill that has a telescope. If run through the BSEC, this tree will produce an error message that is identical to the unbracketed report--viz. that boy, girl, hill, and telescope are NON-SE words. In this case, it does not matter which tree is fronted. As with many sentences checked, the inherent ambiguity in the input string does not affect the error critique.

Recall that some types of ambiguity do affect the error reports--e.g., passive vs. adjectival participial forms. Here is how the spe operation was used to disambiguate a sentence from our data:

(SPE "Cracks in the impeller blades are not permitted"
     (S (NP Cracks in the impeller blades)
        (VP are not (A permitted))))

We judged the word permitted to have roughly the same meaning as stative 'permissible' here, and that led us to coerce an adjectival reading in the bracketed input. If the unbracketed input had resulted in the verb reading, then it would have flagged the sentence as an illegal passive. It turned out that the BSEC selected the adjective reading in the unbracketed sentence, and there was no difference between the bracketed and unbracketed error critiques in this instance.

METHODOLOGY

We followed this procedure in gathering and analyzing our data: First, we collected a set of data from nightly BSEC batch runs extending over a three month period from August through October 1991. The data set consisted of approximately 20,000 sentences from 183 documents. Not all of the documents were intended to be in Simplified English when they were originally written. We wrote a shell program to extract a percentage-stratified sample from this data.
After extracting a test set, we ended up culling the data for duplicates, tables, and other spurious data that had made it past our initial filter.7 We ended up with 297 sentences in our data set. We submitted the 297 sentences to the current system and obtained an error report, which we call the unbracketed report. We then created spe forms for each sentence. By observing the parse trees with our graphical interface, we verified that the parse tree we wanted was the one produced by the spe operation. For 49 sentences, our system could not produce the desired tree. We ran the current system, using the bracketed sentences to produce the unmodified bracketed report. Next we examined the 24 sentences which did not have parses satisfying their bracketings but did, nevertheless, have parses in the unbracketed report. We added the lexical information and new grammar rules needed to enable the system to parse these sentences. Running the resulting system produced the modified bracketed report. These new parses produced critiques that we used to evaluate the critiques previously produced from the unbracketed corpus. The comparison of the unbracketed report and the modified bracketed report produced the estimates of Precision and Recall for this sample.

7. The BSEC filters out tables and certain other types of input, but the success rate varies with the type of text.

RESULTS

Our 297-sentence corpus had the following characteristics. The length of the sentences ranged between three words and 32 words. The median sentence length was 12 words, and the mean was 13.8 words.8 Table 2 shows the aggregated outcomes for the three reports.

Checker Outcome        Unbracketed   Unmodified   Modified
                                     Bracketed    Bracketed
NO PARSE                    25            49          25
NO ERROR                   123           134         137
ONE OR MORE ERRORS         149           114         135
Totals                     297           297         297

Table 2: Overview Of The Results

The table shows the coverage of the system and the impact of the spurious parses.
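The spe-style template check at the core of this methodology can be approximated as follows. This Python sketch is our own illustration, not the actual Lisp spe operation: it treats parses as nested tuples and accepts a parse when every labeled bracket in the template also appears in the parse, covering the same words. The mini-grammar trees for the attachment ambiguity are hypothetical.

```python
# Sketch of spe-style template matching (our illustration, not the BSEC's
# Lisp implementation). Trees are nested tuples: (label, child, ...),
# where children are sub-trees or word strings.
def leaves(tree):
    """Return the words spanned by a tree, left to right."""
    if isinstance(tree, str):
        return [tree]
    return [w for child in tree[1:] for w in leaves(child)]

def constituents(tree):
    """Yield (label, yield-string) for every labeled bracket in a tree."""
    if isinstance(tree, str):
        return
    yield (tree[0], " ".join(leaves(tree)))
    for child in tree[1:]:
        yield from constituents(child)

def satisfies(parse, template):
    """True if every bracket in the template appears in the parse."""
    have = set(constituents(parse))
    return all(c in have for c in constituents(template))

# Two hypothetical parses of "saw the girl with a telescope":
high = ("VP", ("V", "saw"), ("NP", "the", "girl"),
        ("PP", "with", ("NP", "a", "telescope")))        # VP attachment
low = ("VP", ("V", "saw"),
       ("NP", ("NP", "the", "girl"),
        ("PP", "with", ("NP", "a", "telescope"))))       # NP attachment
# A partial template that pins the PP to the NP, as an spe form would:
template = ("VP", ("V", "saw"),
            ("NP", "the", "girl", "with", "a", "telescope"))
print(satisfies(low, template), satisfies(high, template))  # True False
```

Because the template need not spell out every internal bracket, one template can select a single tree out of many, which is how the methodology pins down the intended parse for each of the 297 sentences.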
The coverage is reflected in the Unmodified Bracketed column, where 248 parses indicate a coverage of 84 percent for the underlying system in this domain. The table also reveals that there were 24 spurious parses in the unbracketed corpus, corresponding to no valid parse tree in our grammar. The Modified Bracketed column shows the effect on the report generator of forcing the system to have the same coverage as the unbracketed run. Table 3 shows by type the errors detected in instances where errors were reported. The Spurious Error column indicates the number of errors from the unbracketed sentences which we judged to be bad. The Missed Errors column indicates errors which were missed in the unbracketed report, but which showed up in the modified bracketed report. The modified bracketed report contained only 'actual' Simplified English errors.

8. Since most of the sentences in our corpus were intended to be in Simplified English, it is not surprising that they tended to be under the 20 word limit imposed by the standard.

Category           Unbracketed   Spurious   Missed   Actual
                   Errors        Errors     Errors   Errors
POS                    120           22         7       105
NON-SE                  71            6         5        70
MISSING ARTICLE         38           13         1        26
NOUN CLUSTER            30            7         5        28
PASSIVE                 17            7         8        18
TWO-COMMAND             14            3         3        14
ING                      5            2         0         3
COMMA ERROR              5            4         0         1
WARNING/CAUTION          2            0         0         2
Total                  302           64        29       267

Table 3: Types Of Errors Detected

For this data, the estimate of Precision (rate of correct error critiques for unbracketed data) is (302-64)/302, or 79 percent. We estimate that this precision rate is accurate to within 5 percent with 95 percent confidence. Our estimate of Recall (rate of correct critiques from the set of possible critiques) is (267-29)/267, or 89 percent. We estimate that this Recall rate is accurate to within 4 percent with 95 percent confidence. It is instructive to look at a report that contains an incorrectly identified error.
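The Precision and Recall figures follow directly from Table 3. The snippet below simply reproduces that arithmetic from the table's per-category counts; no new data is introduced.

```python
# Reproducing the Precision/Recall arithmetic from Table 3.
# category: (unbracketed errors, spurious errors, missed errors)
counts = {
    "POS": (120, 22, 7), "NON-SE": (71, 6, 5),
    "MISSING ARTICLE": (38, 13, 1), "NOUN CLUSTER": (30, 7, 5),
    "PASSIVE": (17, 7, 8), "TWO-COMMAND": (14, 3, 3),
    "ING": (5, 2, 0), "COMMA ERROR": (5, 4, 0),
    "WARNING/CAUTION": (2, 0, 0),
}
unbracketed = sum(u for u, s, m in counts.values())   # 302
spurious = sum(s for u, s, m in counts.values())      # 64
missed = sum(m for u, s, m in counts.values())        # 29
actual = unbracketed - spurious + missed              # 267 actual errors

precision = (unbracketed - spurious) / unbracketed    # share of issued critiques that were correct
recall = (unbracketed - spurious) / actual            # share of actual errors that were reported
print(f"precision={precision:.0%} recall={recall:.0%}")  # precision=79% recall=89%
```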
The following report resulted from our unbracketed test run:

If strut requires six fluid ounces or more to fill, find leakage source and repair.

Two commands - possible error: find leakage source and repair
Noun errors: fill
Allowed as: Verb
Verb errors: requires
Use: be necessary
Missing articles: strut leakage source

The bracketed run produced a no-parse for this sentence because of an inadequacy in our grammar that blocked fill from parsing as a verb. Since it parsed as a noun in the unbracketed run, the system complained that fill was allowed as a verb. In our statistics, we counted the fill Noun error as an incorrect POS error and the requires Verb error as a correct one. This critique contains two POS errors, one TWO-COMMAND error, and two MISSING ARTICLE errors. Four of the five error critiques are accurate.

DISCUSSION

We learned several things about our system through this exercise. First, we learned that the act of comparing unbracketed and unmodified bracketed sentences revealed worse performance in the underlying system than we anticipated. We had expected there to be a few more no-parses with unmodified bracketing, but not so many more. Second, the methodology helped us to detect some obscure bugs in the system. For example, the TWO-COMMAND and NOUN CLUSTER errors were not being flagged properly in the unmodified bracketed set because of bugs in the report generator. These bugs had not been noticed because the errors were being flagged properly in some sentences. When a system gets as large and complicated as ours, especially when it generates hundreds or thousands of parse trees for some sentences, it becomes very difficult to detect errors that only show up sporadically and infrequently in the data. Our new methodology provided us with a window on that aspect of system performance.
Perhaps a more interesting observation concerns the relationship between our system and one like Critique, which relies on no-parses to trigger a fitted parse 'damage repair' phase. We believe that the fitted-parse strategy is a good one, although we have not yet felt a strong need to implement it. The reason is that our system generates such rich parse forests that strings which ought to trigger no-parses quite frequently end up triggering 'weird' parses. That is, they trigger parses that are grammatical from a strictly syntactic perspective, but inappropriate for the words in their accustomed meanings. A fitted parse strategy would not work with these cases, because the system has no way of detecting weirdness. Oddly enough, the existence of weird parses often has the same effect in error reports as parse fitting in that they generate error critiques which are useful. The more ambiguity a syntactic system generates, the less likely it is to need a fitted parse strategy to handle unexpected input. The reason for this is that the number of grammatically correct, but 'senseless' parses is large enough to get a parse that would otherwise be ruled out on semantic grounds. Our plans for the use of this methodology are as follows. First, we intend to change our current system to improve deficiencies and lack of coverage revealed by this exercise. In effect, we plan to use the current test corpus as a training corpus in the next phase. Before deploying the changes, we will collect a new test corpus and repeat our method of evaluation. We are very interested in seeing how this new cycle of development will affect the figures of coverage, Precision, and Recall on the next evaluation.

REFERENCES

Adriaens, G. 1992. From COGRAM to ALCOGRAM: Toward a Controlled English Grammar Checker. Proceedings of the Fifteenth International Conference on Computational Linguistics. Ch. Boitet, ed. Nantes: COLING. Pp. 595-601.

AECMA. 1989.
A Guide for the Preparation of Aircraft Maintenance Documentation in the Aerospace Maintenance Language. AECMA Simplified English. AECMA Document: PSC-85-16598, Change 5. Paris.

Black, E., S. Abney, D. Flickinger, C. Gdaniec, R. Grishman, P. Harrison, D. Hindle, R. Ingria, F. Jelinek, J. Klavans, M. Liberman, M. Marcus, S. Roukos, B. Santorini, and T. Strzalkowski. 1991. A Procedure for Quantitatively Comparing the Syntactic Coverage of English Grammars. Proceedings of the Fourth DARPA Speech and Natural Language Workshop. Pp. 306-311.

Black, E., J. Lafferty, and S. Roukos. 1992. Development and Evaluation of a Broad-Coverage Probabilistic Grammar of English-Language Computer Manuals. Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics. Pp. 185-192.

Gazdar, G., E. Klein, G. Pullum, and I. Sag. 1985. Generalized Phrase Structure Grammar. Cambridge, Mass.: Harvard University Press.

Harrison, P. 1988. A New Algorithm for Parsing Generalized Phrase Structure Grammars. Unpublished Ph.D. dissertation. Seattle: University of Washington.

Harrison, P., S. Abney, E. Black, D. Flickinger, C. Gdaniec, R. Grishman, D. Hindle, R. Ingria, M. Marcus, B. Santorini, and T. Strzalkowski. 1991. Evaluating Syntax Performance of Parser/Grammars of English. Proceedings of Natural Language Processing Systems Evaluation Workshop. Berkeley, California.

Heidorn, G. 1993. Experience with an Easily Computed Metric for Ranking Alternative Parses. In Jensen, Heidorn, and Richardson 1993. Pp. 29-45.

Hoard, J. E., R. H. Wojcik, and K. Holzhauser. 1992. An Automated Grammar and Style Checker for Writers of Simplified English. In P. O. Holt and N. Williams, eds. 1992.

Holt, P. O. 1992. Computers and Writing: State of the Art. Boston: Kluwer.

Jensen, K. 1993. PEG: The PLNLP English Grammar. In Jensen, Heidorn, and Richardson 1993. Pp. 29-45.

Jensen, K., G. Heidorn, L. Miller, and Y. Ravin. 1993. Parse Fitting and Prose Fixing.
In Jensen, Heidorn, and Richardson 1993. Pp. 53-64.

Jensen, K., G. Heidorn, and S. Richardson, eds. 1993. Natural Language Processing: The PLNLP Approach. Boston: Kluwer.

Ravin, Y. 1993. Grammar Errors and Style Weaknesses in a Text-Critiquing System. In Jensen, Heidorn, and Richardson 1993. Pp. 65-76.

Richardson, S. and L. Braden-Harder. 1993. The Experience of Developing a Large-Scale Natural Language Processing System: Critique. In Jensen, Heidorn, and Richardson 1993. Pp. 78-89.

Wojcik, R. H., J. E. Hoard, and K. Holzhauser. 1990. The Boeing Simplified English Checker. Proceedings of the International Conference, Human Machine Interaction and Artificial Intelligence in Aeronautics and Space. Toulouse: Centre d'Etudes et de Recherches de Toulouse. Pp. 43-57.
A SPEECH-FIRST MODEL FOR REPAIR DETECTION AND CORRECTION

Christine Nakatani
Division of Applied Sciences, Harvard University
Cambridge, MA 02138
chn@das.harvard.edu

Julia Hirschberg
2D-450, AT&T Bell Laboratories
600 Mountain Avenue, Murray Hill, NJ 07974-0636
julia@research.att.com

Abstract

Interpreting fully natural speech is an important goal for spoken language understanding systems. However, while corpus studies have shown that about 10% of spontaneous utterances contain self-corrections, or REPAIRS, little is known about the extent to which cues in the speech signal may facilitate repair processing. We identify several cues based on acoustic and prosodic analysis of repairs in a corpus of spontaneous speech, and propose methods for exploiting these cues to detect and correct repairs. We test our acoustic-prosodic cues with other lexical cues to repair identification and find that precision rates of 89-93% and recall of 78-83% can be achieved, depending upon the cues employed, from a prosodically labeled corpus.

Introduction

Disfluencies in spontaneous speech pose serious problems for spoken language systems. First, a speaker may produce a partial word or FRAGMENT, a string of phonemes that does not form the complete intended word. Some fragments may coincidentally match words actually in the lexicon, such as fly in Example (1); others will be identified with the acoustically closest item(s) in the lexicon, as in Example (2).1

(1) What is the earliest fli- flight from Washington to Atlanta leaving on Wednesday September fourth?

(2) Actual string: What is the fare fro- on American Airlines fourteen forty three
    Recognized string: With fare four American Airlines fourteen forty three

Even if all words in a disfluent segment are correctly recognized, failure to detect a disfluency may lead to interpretation errors during subsequent processing, as in Example (3).

1. The presence of a word fragment in examples is indicated by the diacritic '-'.
Self-corrected portions of the utterance appear in boldface. All examples in this paper are drawn from the ATIS corpus described below. Recognition output shown in Example (2) is from the system described in (Lee et al., 1990).

(3) ... Delta leaving Boston seventeen twenty one arriving Fort Worth twenty two twenty one forty...

Here, 'twenty two twenty one forty' must be interpreted as a flight arrival time; the system must somehow choose among '21:40', '22:21', and '22:40'. Although studies of large speech corpora have found that approximately 10% of spontaneous utterances contain disfluencies involving self-correction, or REPAIRS (Hindle, 1983; Shriberg et al., 1992), little is known about how to integrate repair processing with real-time speech recognition. In particular, the speech signal itself has been relatively unexplored as a source of processing cues for the detection and correction of repairs. In this paper, we present results from a study of the acoustic and prosodic characteristics of 334 repair utterances, containing 368 repair instances, from the ARPA Air Travel Information System (ATIS) database. Our results are interpreted within our "speech-first" framework for investigating repairs, the REPAIR INTERVAL MODEL (RIM). RIM builds upon Labov (1966) and Hindle (1983) by conceptually extending the EDIT SIGNAL HYPOTHESIS that repairs are acoustically or phonetically marked at the point of interruption of fluent speech. After describing acoustic and prosodic characteristics of the repair instances in our corpus, we use these and other lexical cues to test the utility of our "speech-first" approach to repair identification on a prosodically labeled corpus.

Previous Computational Approaches

While self-correction has long been a topic of psycholinguistic study, computational work in this area has been sparse. Early work in computational linguistics treated repairs as one type of ill-formed input and proposed solutions based upon extensions to existing text parsing techniques such as augmented transition networks (ATNs), network-based semantic grammars, case frame grammars, pattern matching and deterministic parsers. Recently, Shriberg et al. (1992) and Bear et al. (1992) have proposed a two-stage method for processing repairs. In the first stage, lexical pattern
Early work in computational linguis- tics treated repairs as one type of ill-formed input and proposed solutions based upon extensions to existing text parsing techniques such as augmented transition networks (ATNs), network-based semantic grammars, case frame grammars, pattern matching and determin- istic parsers. Recently, Shriberg et al. (1992) and Bear et al. (1992) have proposed a two-stage method for pro- cessing repairs. In the first stage, lexical pattern 46 matching rules operating on orthographic transcrip- tions would be used to retrieve candidate repair utter- ances. In the second, syntactic, semantic, and acoustic information would filter true repairs from false posi- tives found by the pattern matcher. Results of testing the first stage of this model, the lexical pattern matcher, are reported in (Bear et al., 1992): 309 of 406 utterance containing 'nontrivial' repairs in their 10,718 utterance corpus were correctly identified, while 191 fluent utter- ances were incorrectly identified as containing repairs. This represents recall of 76% with precision of 62%. Of the repairs correctly identified, the appropriate cor- rection was found for 57%. Repaj'r candidates were filtered and corrected by deleting a portion of the ut- terance based on the pattern matched, and then check- ing the syntactic and semantic acceptability of the cor- rected version using the syntactic and semantic com- ponents of the Gemini NLP system. Bear et al. (1992) also speculate that acoustic information might be used to filter out false positives for candidates matching two of their lexical patterns -- repetitions of single words and cases of single inserted words -- but do not report such experimentation. This work promotes the important idea that auto- matic repair processing can be made more robust by integrating knowledge from multiple sources. Such integration is a desirable long-term goal. 
However, the working assumption that correct transcriptions will be available from speech recognizers is problematic, since current recognition systems rely primarily upon language models and lexicons derived from fluent speech to decide among competing acoustic hypotheses. These systems usually treat disfluencies in training and recognition as noise; moreover, they have no way of modeling word fragments, even though these occur in the majority of repairs. We term such approaches that rely on accurate transcription to identify repair candidates "text-first". Text-first approaches have explored the potential contributions of lexical and grammatical information to automatic repair processing, but have largely left open the question of whether there exist acoustic and prosodic cues for repairs in general, rather than potential acoustic-prosodic filters for particular pattern subclasses. Our investigation of repairs addresses the problem of identifying such general acoustic-prosodic cues to repairs, and so we term our approach "speech-first". Finding such cues to repairs would provide early detection of repairs in recognition, permitting early pruning of the hypothesis space. One proposal for repair processing that lends itself to both incremental processing and the integration of speech cues into repair detection is that of Hindle (1983), who defines a typology of repairs and associated correction strategies in terms of extensions to a deterministic parser. For Hindle, repairs can be (1) full sentence restarts, in which an entire utterance is reinitiated; (2) constituent repairs, in which one syntactic constituent (or part thereof) is replaced by another;2 or (3) surface level repairs, in which identical strings appear adjacent to each other. An hypothesized acoustic-phonetic edit signal, "a markedly abrupt cut-off of the speech signal" (Hindle, 1983, p.123), is assumed to mark the interruption of fluent speech (cf. Labov, 1966).
This signal is treated as a special lexical item in the parser input stream that triggers certain correction strategies depending on the parser configuration. Thus, in Hindle's system, repair detection is decoupled from repair correction, which requires only that the location of the interruption is stored in the parser state. Importantly, Hindle's system allows for non-surface-based corrections and sequential application of correction rules (Hindle, 1983, p. 123). In contrast, simple surface deletion correction strategies cannot readily handle either repairs in which one syntactic constituent is replaced by an entirely different one, as in Example (4), or sequences of overlapping repairs, as in Example (5).

(4) I 'd like to a flight from Washington to Denver...

(5) I 'd like to book a reser- are there f- is there a first class fare for the flight that departs at six forty p.m.

Hindle's methods achieved a success rate of 97% on a transcribed corpus of approximately 1,500 sentences in which the edit signal was orthographically represented and lexical and syntactic category assignments hand-corrected, indicating that, in theory, the edit signal can be computationally exploited for both repair detection and correction. Our "speech-first" investigation of repairs is aimed at determining the extent to which repair processing algorithms can rely on the edit signal hypothesis in practice.

2. This is consistent with Levelt (1983)'s observation that the material to be replaced and the correcting material in a repair often share structural properties akin to those shared by coordinated constituents.

The Repair Interval Model

To support our investigation of acoustic-prosodic cues to repair detection, we propose a "speech-first" model of repairs, the REPAIR INTERVAL MODEL (RIM). RIM divides the repair event into three consecutive temporal intervals and identifies time points within those intervals that are computationally critical. A full repair comprises three intervals, the REPARANDUM INTERVAL, the DISFLUENCY INTERVAL, and the REPAIR INTERVAL. Following Levelt (1983), we identify the REPARANDUM as the lexical material which is to be repaired. The end of the reparandum coincides with the termination of the fluent portion of the utterance, which we term the INTERRUPTION SITE (IS). The DISFLUENCY INTERVAL (DI) extends from the IS to the resumption of fluent speech, and may contain any combination of silence, pause fillers ('uh', 'um'), or CUE PHRASES (e.g., 'Oops' or 'I mean'), which indicate the speaker's recognition of his/her performance error. The REPAIR INTERVAL corresponds to the utterance of the correcting material, which is intended to 'replace' the reparandum. It extends from the offset of the DI to the resumption of non-repair speech. In Example (6), for example, the reparandum occurs from 1 to 2, the DI from 2 to 3, and the repair interval from 3 to 4; the IS occurs at 2.

(6) Give me airlines 1 [ flying to Sa- ] 2 [ SILENCE uh SILENCE ] 3 [ flying to Boston ] 4 from San Francisco next summer that have business class.

RIM provides a framework for testing the extent to which cues from the speech signal contribute to the identification and correction of repair utterances. RIM incorporates two main assumptions of Hindle (1983): (1) correction strategies are linguistically rule-governed, and (2) linguistic cues must be available to signal when a disfluency has occurred and to 'trigger' correction strategies. As Hindle noted, if the processing of disfluencies were not rule-governed, it would be difficult to reconcile the infrequent intrusion of disfluencies on human speech comprehension, especially for language learners, with their frequent rate of occurrence in spontaneous speech. We view Hindle's results as evidence supporting (1).
Our study tests (2) by exploring the acoustic and prosodic features of repairs that might serve as a form of edit signal for rule-governed correction strategies. While Labov and Hindle proposed that an acoustic-phonetic cue might exist at precisely the Is, based on our analyses and on recent psychotinguistic experiments (Lickley et al., 1991), this proposal ap- pears too limited. Crucially, in RIM, we extend the notion of edit signal to include any phenomenon which may contribute to the perception of an "abrupt cut-off" of the speech signal -- including cues such as coartic- ulation phenomena, word fragments, interruption glot- talization, pause, and prosodic cues which occur in the vicinity of the disfluency interval. RIM thus acknowl- edges the edit signal hypothesis, that some aspect of the speech signal may demarcate the computationally key juncture between the reparandum and repair inter- vals, while extending its possible acoustic and prosodic manifestations. Acoustic-Prosodic Characteristics of Repairs We studied the acoustic and prosodic correlates of repair events as defined in the RIM framework with the aim of identifying potential cues for automatic re- pair processing, extending a pilot study reported in (Nakatani and Hirschberg, 1993). Our corpus for the current study consisted of 6,414 utterances produced by 123 speakers from the ARPA Airline Travel and In- formation System (ATIS) database (MADCOW, 1992) collected at AT&T, BBN, CMU, SRI, and TL 334 (5.2%) of these utterances contain at least one repair~ where repair is defined as the self-correction of one or more phonemes (up to and including sequences of words) in an utterance) Orthographic transcriptions of the utterances were prepared by ARPA contractors accord- ing to standardized conventions. The utterances were labeled at Bell Laboratories for word boundaries and intonational prominences and phrasing following Pier- rehumbert's description of English intonation (Pierre- humbert, 1980). 
Also, each of the three RIM intervals and prosodic and acoustic events within those intervals were labeled. Identifying the Reparandum Interval Our acoustic and prosodic analysis of the reparan- dum interval focuses on acoustic-phonetic properties of word fragments, as well as additional phonetic cues marking the reparandum offset. From the point of view of repair detection and correction, acoustic-prosodic cues to the onset of the reparandum would clearly be useful in the choice of appropriate correction strat- egy. However, recent perceptual experiments indicate that humans do not detect an oncoming disfluency as early as the onset of the reparandum (Lickley et al., 1991; Lickley and Bard, 1992). Subjects were gen- erally able to detect disfluencies before lexical access of the first word in the repair. However, since only a small number of the test stimuli employed in these experiments contained reparanda ending in word frag- ments (Lickley et al., 1991), it is not clear how to generalize results to such repairs. In our corpus, 74% of all reparanda end in word fragments. 4 Since the majority of our repairs involve word frag- mentation, we analyzed several lexical and acoustic- phonetic properties of fragments for potential use in fragment identification. Table 1 shows the broad word class of the speaker's intended word for each fragment, where the intended word was recoverable. There is Lexical Class Content Function Untranscribed Tokens % 121 42% 12 4% 155 54% Table 1: Lexical Class of Word Fragments at Reparan- dum Offset (N=288) a clear tendency for fragmentation at the reparandum offset to occur in content words rather than function words. 3In our pilot study of the SRI and TI utterances only, we found that repairs occurred in 9.1% of utterances (Nakatani and Hirschberg, 1993). 
This rate is probably more accurate than the 5.2% we find in our current corpus, since repairs for the pilot study were identified from more detailed transcriptions than were available for the larger corpus.

4 Shriberg et al. (1992) found that 60.2% of repairs in their corpus contained fragments.

Table 2 shows the distribution of fragment repairs by length. 91% of fragments in our corpus are one syllable or less in length.

    Syllables    Tokens      %
    0               113    39%
    1               149    52%
    2                25     9%
    3                 1   0.3%

Table 2: Length of Reparandum Offset Word Fragments (N=288)

Table 3 shows the distribution of initial phonemes for all words in the corpus of 6,414 ATIS sentences, and for all fragments, single syllable fragments, and single consonant fragments in repair utterances.

    Class                 % of     % of    % of One     % of One
                          Words    Frags   Syll Frags   Cons Frags
    stop                    23%      23%        18%          11%
    vowel                   30%      25%        45%           0%
    fric                    13%      19%        17%          73%
    nasal/glide/liquid      33%      28%        20%          15%
    h                        1%       4%         2%           1%
    N                     64896      288        148          114

Table 3: Feature Class of Initial Phoneme in Fragments by Fragment Length

From Table 3 we see that single consonant fragments are more than six times as likely to be fricatives as stops. However, fricatives and stops occur almost equally as the initial consonant in single syllable fragments. Furthermore, we observe two divergences from the underlying distributions of initial phonemes for all words in the corpus: vowel-initial words show less tendency, and fricative-initial words a greater tendency, to occur as fragments, relative to the underlying distributions for those classes.

Two additional acoustic-phonetic cues, glottalization and coarticulation, may help in fragment identification. Bear et al. (1992) note that INTERRUPTION GLOTTALIZATION (irregular glottal pulses) sometimes occurs at the reparandum offset.
This form of glottalization is acoustically distinct from LARYNGEALIZATION (creaky voice), which often occurs at the end of prosodic phrases; GLOTTAL STOPS, which often precede vowel-initial words; and EPENTHETIC GLOTTALIZATION. In our corpus, 30.2% of reparanda offsets are marked by interruption glottalization. 5 Although interruption glottalization is usually associated with fragments, not all fragments are glottalized. In our database, 62% of fragments are not glottalized, and 9% of glottalized reparanda offsets are not fragments.

5 Shriberg et al. (1992) report glottalization on 24 of 25 vowel-final fragments.

Also, sonorant endings of fragments in our corpus sometimes exhibit coarticulatory effects of an unrealized subsequent phoneme. When these effects occur with a following pause (see below), they can be used to distinguish fragments from full phrase-final words -- such as 'fli-' from 'fly' in Example (1).

To summarize, our corpus shows that most reparanda offsets end in word fragments. These fragments are usually fragments of content words (based upon transcribers' identification of intended words in our corpus), are rarely more than one syllable long, exhibit different distributions of initial phoneme class depending on their length, and are sometimes glottalized and sometimes exhibit coarticulatory effects of missing subsequent phonemes. These findings suggest that it is unlikely that word-based recognition models can be applied directly to the problem of fragment identification. Rather, models for fragment identification might make use of initial phoneme distributions, in combination with information on fragment length and acoustic-phonetic events at the IS. Inquiry into the articulatory bases of several of these properties of self-interrupted speech, such as glottalization and initial phoneme distributions, may further improve the modeling of fragments.
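As an illustration of how a fragment-identification model might combine the cues just summarized, the sketch below scores a candidate word offset for "fragment-hood". The feature names, weights, and thresholds are our own hypothetical choices for exposition, not values proposed in this paper:

```python
# Toy fragment scorer combining the cues discussed above. The weights are
# hypothetical illustrations; a real model would estimate them from data.
def fragment_score(syllables, initial_class, glottalized, coarticulated):
    score = 0.0
    if syllables <= 1:              # 91% of fragments are one syllable or less
        score += 1.0
    if initial_class == "fric":     # fricatives over-represented in fragments
        score += 0.5
    elif initial_class == "vowel":  # vowel-initial words under-represented
        score -= 0.5
    if glottalized:                 # interruption glottalization at the offset
        score += 1.0
    if coarticulated:               # cut-off coarticulation before a pause
        score += 1.0
    return score
```

A trained model would instead derive such weights from the distributions in Tables 2 and 3 and from labeled acoustic events at the IS.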
Identifying the Disfluency Interval

In the RIM model, the DI includes all cue phrases and filled and unfilled pauses from the offset of the reparandum to the onset of the repair. The literature contains a number of hypotheses about this interval (cf. (Blackmer and Mitton, 1991)). For our corpus, pause fillers or cue words, which have been hypothesized as repair cues, occur within the DI for only 9.8% (36/368) of repairs, and so cannot be relied on for repair detection. Our findings do, however, support a new hypothesis associating fragment repairs and the duration of pause following the IS.

Table 4 shows the average duration of 'silent DIs' (those not containing pause fillers or cue words) compared to that of fluent utterance-internal silent pauses for the TI utterances.

    Pausal Juncture    Mean        Std Dev        N
    Fluent             513 msec    676 msec    1186
    DI                 333 msec    417 msec     332
    Frags              292 msec    379 msec     255
    Non-frags          471 msec    502 msec      77

Table 4: Duration of Silent DIs vs. Utterance-Internal Fluent Pauses

Overall, silent DIs are shorter than fluent pauses (p<.001, tstat=4.60, df=1516). If we analyze repair utterances based on occurrence of fragments, the DI duration for fragment repairs is significantly shorter than for nonfragments (p<.001, tstat=3.36, df=330). The fragment repair DI duration is also significantly shorter than fluent pause intervals (p<.001, tstat=5.05, df=1439), while there is no significant difference between nonfragment DIs and fluent utterances. So, DIs in general appear to be distinct from fluent pauses, and the duration of DIs in fragment repairs might also be exploited to identify these cases as repairs, as well as to distinguish them from nonfragment repairs. Thus, pausal duration may serve as a general acoustic cue for repair detection, particularly for the class of fragment repairs.
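The comparison of silent DIs with fluent pauses can be reproduced from the summary statistics in Table 4 alone. The sketch below computes a pooled-variance two-sample t statistic (the reported df of 1516 = 1186 + 332 - 2 implies a pooled test was used, which we assume here); it recovers the reported tstat=4.60:

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t statistic from summary statistics, pooled variance."""
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / df  # pooled variance
    se = math.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
    return (mean1 - mean2) / se, df

# Fluent pauses vs. silent DIs, values from Table 4
t, df = pooled_t(513, 676, 1186, 333, 417, 332)
```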
Identifying the Repair

Several influential studies of acoustic-prosodic repair cues have relied upon lexical, semantic, and pragmatic definitions of repair types (Levelt and Cutler, 1983; Levelt, 1983). Levelt & Cutler (1983) claim that repairs of erroneous information (ERROR REPAIRS) are marked by increased intonational prominence on the correcting information, while other kinds of repairs, such as additions to descriptions (APPROPRIATENESS REPAIRS), generally are not. We investigated whether the repair interval is marked by special intonational prominence relative to the reparandum for all repairs in our corpus and for these particular classes of repair.

To obtain objective measures of relative prominence, we compared absolute f0 and energy in the sonorant center of the last accented lexical item in the reparandum with that of the first accented item in the repair interval. 6 We found a small but reliable increase in f0 from the end of the reparandum to the beginning of the repair (mean=+4.1 Hz, p<.01, tstat=2.49, df=327). There was also a small but reliable increase in amplitude across the DI (mean=+1.5 dB, p<.001, tstat=6.07, df=327). We analyzed the same phenomena across utterance-internal fluent pauses for the ATIS TI set and found no reliable differences in either f0 or intensity, although this may have been due to the greater variability in the fluent population. And when we compared the f0 and amplitude changes from reparandum to repair with those observed for fluent pauses, we found no significant differences between the two populations. So, while differences in f0 and amplitude exist between the reparandum offset and the repair onset, we conclude that these differences are too small to help distinguish repairs from fluent speech. Although it is not entirely straightforward to compare our objective measures of intonational prominence with Levelt and Cutler's perceptual findings, our results provide only weak support for theirs.
And while we find small but significant changes in two correlates of intonational prominence, the distributions of change in f0 and energy for our data are unimodal; when we further test subclasses of Levelt and Cutler's error repairs and appropriateness repairs, statistical analysis does not support Levelt and Cutler's claim that the former -- and only the former -- group is intonationally 'marked'.

6 We performed the same analysis for the last and first syllables in the reparandum and repair, respectively, and for normalized f0 and energy; results did not substantially differ from those presented here.

Previous studies of disfluency have paid considerable attention to the vicinity of the DI but little to the repair offset. Although we did not find comparative intonational prominence across the DI to be a promising cue for repair detection, our RIM analysis uncovered one general intonational cue that may be of use for repair correction, namely the prosodic phrasing of the repair interval. We propose that phrase boundaries at the repair offset can serve to delimit the region over which subsequent correction strategies may operate.

We tested the idea that repair interval offsets are intonationally marked by either minor or major prosodic phrase boundaries in two ways. First, we used the phrase prediction procedure reported by Wang & Hirschberg (1992) to estimate whether the phrasing at the repair offset was predictable according to a model of fluent phrasing. 7 Second, we analyzed the syntactic and lexical properties of the first major or minor intonational phrase including all or part of the repair interval to determine whether such phrasal units corresponded to different types of repairs in terms of Hindle's typology. The first analysis tested the hypothesis that repair interval offsets are intonationally delimited by minor or major prosodic phrase boundaries.
We found that the repair offset co-occurs with minor phrase boundaries for 49% of repairs in the TI set. To see whether these boundaries were distinct from those in fluent speech, we compared the phrasing of repair utterances with the phrasing predicted for the corresponding corrected version of the utterance identified by ATIS transcribers. For 40% of all repairs, an observed boundary occurs at the repair offset where one is predicted; and for 33% of all repairs, no boundary is observed where none is predicted. For the remaining 27% of repairs for which predicted phrasing diverged from observed, in 10% of cases a boundary occurred where none was predicted and in 17%, no boundary occurred when one was predicted.

7 Wang & Hirschberg use statistical modeling techniques to predict phrasing from a large corpus of labeled ATIS speech; we used a prediction tree that achieves 88.4% accuracy on the ATIS TI corpus using only features whose values could be calculated via automatic text analysis. Results reported here are for prediction on only TI repair utterances.

In addition to differences at the repair offset, we also found more general differences from predicted phrasing over the entire repair interval, which we hypothesize may be partly understood as follows: Two strong predictors of prosodic phrasing in fluent speech are syntactic constituency (Cooper and Sorenson, 1977; Gee and Grosjean, 1983; Selkirk, 1984), especially the relative inviolability of noun phrases (Wang and Hirschberg, 1992), and the length of prosodic phrases (Gee and Grosjean, 1983; Bachenko and Fitzpatrick, 1990). On the one hand, we found occurrences of phrase boundaries at repair offsets which occurred within larger NPs, as in Example (7), where it is precisely the noun modifier -- not the entire noun phrase -- which is corrected. 8

(7) Show me all n- | round-trip flights | from Pittsburgh | to Atlanta.
We speculate that, by marking off the modifier intonationally, a speaker may signal that operations relating just this phrase to earlier portions of the utterance can achieve the proper correction of the disfluency. We also found cases of 'lengthened' intonational phrases in repair intervals, as illustrated in the single-phrase reparandum in (8), where the corresponding fluent version of the reparandum is predicted to contain four phrases.

(8) What airport is it | is located | what is the name of the airport located in San Francisco

Again, we hypothesize that the role played by this unusually long phrase is the same as that of early phrase boundaries in NPs discussed above. In both cases, the phrase boundary delimits a meaningful unit for subsequent correction strategies. For example, we might understand the multiple repairs in (8) as follows: First the speaker attempts a VP repair, with the repair phrase delimited by a single prosodic phrase 'is located'. Then the initially repaired utterance 'What airport is located' is itself repaired, with the reparandum again delimited by a single prosodic phrase, 'What is the name of the airport located in San Francisco'.

In the second analysis of lexical and syntactic properties, we found three major classes of phrasing behaviors, all involving the location of the first phrase boundary after the repair onset: First, for 44% (163/368) of repairs, the repair offset we had initially identified 9 coincides with a phrase boundary, which can thus be said to mark off the repair interval. Of the remaining 205 repairs, more than two-thirds (140/205) have the first phrase boundary after the repair onset at the right edge of a syntactic constituent. We propose that this class of repairs should be identified as constituent repairs, rather than the lexical repairs we had initially hypothesized.
For the majority of these constituent repairs (79%, 110/140), the repair interval contains a well-formed syntactic constituent (see Table 5). If the repair interval does not form a syntactic constituent, it is most often an NP-internal repair (77%, 23/30). The third class of repairs includes those in which the first boundary after the repair onset occurs neither at the repair offset nor at the right edge of a syntactic constituent. This class contains surface or lexical repairs (where the first phrase boundary in the repair interval delimits a sequence of one or more repeated words), phonetic errors, word insertions, and syntactic reformulations (as in Example (4)).

8 Prosodic boundaries in examples are indicated by '|'.

9 Note crucially here that, in labeling repairs which might be viewed as either constituent or lexical, we preferred the shorter lexical analysis by default.

    Repair Constituent      Tokens      %
    Sentence                    24    22%
    Verb phrase                  7     6%
    Participial phrase           6     5%
    Noun phrase                 38    35%
    Prepositional phrase        34    31%
    Relative clause              1   0.9%

Table 5: Distribution of Syntactic Categories for Constituent Repairs (N=110)

It might be noted here that, in general, repairs involving correction of either verb phrases or verbs are far less common than those involving noun phrases, prepositional phrases, or sentences.

We briefly note evidence against one alternative (although not mutually exclusive) hypothesis, that the region to be delimited for correction strategies is marked not by a phrase boundary near the repair offset, but by a phrase boundary at the onset of the reparandum. In other words, it may be the reparandum interval, not the repair interval, that is intonationally delimited. However, it is often the case that the last phrase boundary before the IS occurs at the left edge of a major syntactic constituent (42%, 87/205), even though major constituent repairs are about one third as frequent in this corpus (15%, 31/205).
In contrast, phrase boundaries occur at the left edge of minor constituents 27% (55/205) of the time, whereas minor constituent repairs make up 39% (79/205) of the subcorpus at hand. We take these figures as general evidence against the outlined alternative hypothesis, establishing that the demarcation of the repair offset is a more productive goal for repair processing algorithms.

Investigation of repair phrasing in other corpora covering a wider variety of genres is needed in order to assess the generality of these findings. For example, 35% (8/23) of NP-internal constituent repairs occurred within cardinal compounds, which are prevalent in the ATIS corpus due to its domain. The preponderance of temporal and locative prepositional phrases may also be attributed to the nature of the task and domain. Nonetheless, the fact that repair offsets in our corpus are marked by intonational phrase boundaries in such a large percentage of cases (82.3%, 303/368) suggests that this is a possibility worth pursuing.

Predicting Repairs from Acoustic and Prosodic Cues

Despite the small size of our sample and the possibly limited generality of our corpus, we were interested to see how well the characterization of repairs derived from RIM analysis of the ATIS corpus would transfer to a predictive model for repairs in that domain. We examined 374 ATIS repair utterances, including the 334 upon which the descriptive study presented above was based. We used the 172 TI and SRI repair utterances from our earlier pilot study (Nakatani and Hirschberg, 1993) as training data; these served a similar purpose in the descriptive analysis presented above. We then tested on the additional 202 repair utterances, which contained 223 repair instances. In our predictions we attempted to distinguish repair ISs from fluent phrase boundaries (collapsing major and minor boundaries), non-repair disfluencies, 10 and simple word boundaries.
We considered every word boundary to be a potential repair site. 11 Data points are represented below as ordered pairs <wi,wj>, where wi represents the lexical item to the left of the potential IS and wj represents that on the right. For each <wi,wj>, we examined the following features as potential IS predictors: (a) duration of pause between wi and wj; (b) occurrence of a word fragment(s) within <wi,wj>; (c) occurrence of a filled pause in <wi,wj>; (d) amplitude (energy) peak within wi, both absolute and normalized for the utterance; (e) amplitude of wi relative to wi-1 and to wj; (f) absolute and normalized f0 of wi; (g) f0 of wi relative to wi-1 and to wj; and (h) whether wi was accented, deaccented, or deaccented and cliticized. We also simulated some simple pattern matching strategies, to try to determine how acoustic-prosodic cues might interact with lexical cues in repair identification. To this end, we looked at (i) the distance in words of wi from the beginning and end of the utterance; (j) the total number of words in the utterance; and (k) whether wi or wi-1 recurred in the utterance within a window of three words after wi. We were unable to test all the acoustic-prosodic features we examined in our descriptive analysis, since features such as glottalization and coarticulatory effects had not been labeled in our data base for locations other than DIs. Also, we used fairly crude measures to approximate features such as change in f0 and amplitude, since these too had been precisely labeled in our corpus only for repair locations and not for fluent speech. 12

We trained prediction trees, using Classification and Regression Tree (CART) techniques (Breiman et al., 1984), on our 172-utterance training set. We first included all our potential identifiers as possible predictors.
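Cue (k), the lexical-match window, can be stated precisely; a minimal sketch (our own formulation of the textual description, not the system's code) is:

```python
def lexical_match(words, i, window=3):
    """Cue (k): does words[i] or words[i-1] recur within `window` words after i?"""
    targets = {words[i]}
    if i > 0:
        targets.add(words[i - 1])
    return any(w in targets for w in words[i + 1:i + 1 + window])

# In the repair 'what airport is it is located ...', the boundary after the
# first 'is' (i=2) shows a match, flagging a possible retraced repair site.
```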
The resulting (automatically generated) decision tree was then used to predict IS locations in our 202-utterance test set. This procedure identified 186 of the 223 repairs correctly, while predicting 12 false positives and omitting 37 true repairs, for a recall of 83.4% and precision of 93.9%. Fully 177 of the correctly identified ISs were identified via presence of word fragments as well as duration of pause in the DI. Repairs not containing fragments were identified from lexical matching plus pausal duration in the DI.

Since the automatic identification of word fragments from speech is an unsolved problem, we next omitted the fragment feature and tried the prediction again. The best prediction tree, tested on the same 202-utterance test set, succeeded in identifying 174 of the repairs correctly -- in the absence of fragment information -- with 21 false positives and 49 omissions (78.1% recall, 89.2% precision). The correctly identified repairs were all characterized by constraints on duration of pause in the DI. Some were further identified via presence of a lexical match to the right of wi within the window of three described above, and word position within the utterance. Those repairs in which no lexical match was identified were characterized by lower amplitude of wi relative to wj and cliticization or deaccenting of wi. Still other repairs were characterized by more complex series of lexical and acoustic-prosodic constraints. These results are, of course, very preliminary.

10 These had been marked independently of our study and include all events with some phonetic indicator of disfluency which was not involved in a self-repair, such as hesitations marked with audible breath or sharp cut-off.

11 We also included utterance-final boundaries as data points.

12 We used uniform measures for prediction, however, for both repair sites and fluent regions.
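The recall and precision figures follow directly from the reported counts; a quick check, using the counts for the first tree:

```python
def recall_precision(hits, false_pos, misses):
    """Recall and precision from raw decision counts."""
    recall = hits / (hits + misses)
    precision = hits / (hits + false_pos)
    return recall, precision

# First tree (with the fragment feature): 186 correct, 12 false positives,
# 37 omitted, out of 223 repairs.
r, p = recall_precision(186, 12, 37)
```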
Larger corpora must certainly be examined and more sophisticated versions of the crude measures we have used should be employed. However, as a first approximation to the characterization of repairs via both acoustic-prosodic and lexical cues, we find these results encouraging. In particular, our ability to identify repair sites successfully without relying upon the identification of fragments as such seems promising, although our analysis of fragments suggests that there may indeed be ways of identifying fragment repairs, via their relatively short DI, for example. Also, the combination of general acoustic-prosodic constraints with lexical pattern matching techniques as a strategy for repair identification appears to gain some support from our predictions. Further work on prediction modeling may suggest ways of combining these lexical and acoustic-prosodic cues for repair processing.

Discussion

In this paper, we have presented a "speech-first" model, the Repair Interval Model, for studying repairs in spontaneous speech. This model divides the repair event into a reparandum interval, a disfluency interval, and a repair interval. We have presented empirical results from acoustic-phonetic and prosodic analysis of a corpus of repairs in spontaneous speech, indicating that reparanda offsets end in word fragments, usually of (intended) content words, and that these fragments tend to be quite short and to exhibit particular acoustic-phonetic characteristics. We found that the disfluency interval can be distinguished from intonational phrase boundaries in fluent speech in terms of duration of pause, and that fragment and nonfragment repairs can also be distinguished from one another in terms of the duration of the disfluency interval. For our corpus, repair onsets can be distinguished from reparandum offsets by small but reliable differences in f0 and amplitude, and repair intervals differ from fluent speech in their characteristic prosodic phrasing.
We tested our results by developing predictive models for repairs in the ATIS domain, using CART analysis; the best performing prediction strategies, trained on a subset of our data, identified repairs in the remaining utterances with recall of 78-83% and precision of 89-93%, depending upon features examined.

Acknowledgments

We thank John Bear, Barbara Grosz, Don Hindle, Chin Hui Lee, Robin Lickley, Andrej Ljolje, Jan van Santen, Stuart Shieber, and Liz Shriberg for advice and useful comments. CART analysis employed software written by Daryl Pregibon and Michael Riley. Speech analysis was done with Entropic Research Laboratory's WAVES software.

REFERENCES

J. Bachenko and E. Fitzpatrick. 1990. A computational grammar of discourse-neutral prosodic phrasing in English. Computational Linguistics, 16(3):155-170.

John Bear, John Dowding, and Elizabeth Shriberg. 1992. Integrating multiple knowledge sources for detection and correction of repairs in human-computer dialog. In Proceedings of the 30th Annual Meeting, pages 56-63, Newark DE. Association for Computational Linguistics.

Elizabeth R. Blackmer and Janet L. Mitton. 1991. Theories of monitoring and the timing of repairs in spontaneous speech. Cognition, 39:173-194.

Leo Breiman, Jerome H. Friedman, Richard A. Olshen, and Charles J. Stone. 1984. Classification and Regression Trees. Wadsworth & Brooks, Monterey CA.

W. E. Cooper and J. M. Sorenson. 1977. Fundamental frequency contours at syntactic boundaries. Journal of the Acoustical Society of America, 62(3):683-692, September.

J. P. Gee and F. Grosjean. 1983. Performance structures: A psycholinguistic and linguistic appraisal. Cognitive Psychology, 15:411-458.

Donald Hindle. 1983. Deterministic parsing of syntactic non-fluencies. In Proceedings of the 21st Annual Meeting, pages 123-128, Cambridge MA. Association for Computational Linguistics.

William Labov. 1966. On the grammaticality of everyday speech.
Paper presented at the Linguistic Society of America Annual Meeting.

C.-H. Lee, L. R. Rabiner, R. Pieraccini, and J. Wilpon. 1990. Acoustic modeling for large vocabulary speech recognition. Computer Speech and Language, 4:127-165, April.

Willem Levelt and Anne Cutler. 1983. Prosodic marking in speech repair. Journal of Semantics, 2:205-217.

Willem Levelt. 1983. Monitoring and self-repair in speech. Cognition, 14:41-104.

R. J. Lickley and E. G. Bard. 1992. Processing disfluent speech: Recognising disfluency before lexical access. In Proceedings of the International Conference on Spoken Language Processing, pages 935-938, Banff, October. ICSLP.

R. J. Lickley, R. C. Shillcock, and E. G. Bard. 1991. Processing disfluent speech: How and when are disfluencies found? In Proceedings of the Second European Conference on Speech Communication and Technology, Vol. III, pages 1499-1502, Genova, September. Eurospeech-91.

MADCOW. 1992. Multi-site data collection for a spoken language corpus. In Proceedings of the Speech and Natural Language Workshop, pages 7-14, Harriman NY, February. DARPA, Morgan Kaufmann.

Christine Nakatani and Julia Hirschberg. 1993. A speech-first model for repair identification in spoken language systems. In Proceedings of the ARPA Workshop on Human Language Technology, Plainsboro, March. ARPA.

Janet B. Pierrehumbert. 1980. The Phonology and Phonetics of English Intonation. Ph.D. thesis, Massachusetts Institute of Technology, September. Distributed by the Indiana University Linguistics Club.

E. O. Selkirk. 1984. Phonology and syntax: The relation between sound and structure. In T. Fretheim, editor, Nordic Prosody II: Proceedings of the Second Symposium on Prosody in the Nordic Languages, pages 111-140, Trondheim. TAPIR.

Elizabeth Shriberg, John Bear, and John Dowding. 1992. Automatic detection and correction of repairs in human-computer dialog. In Proceedings of the Speech and Natural Language Workshop, pages 419-424, Harriman NY.
DARPA, Morgan Kaufmann.

Michelle Q. Wang and Julia Hirschberg. 1992. Automatic classification of intonational phrase boundaries. Computer Speech and Language, 6:175-196.
GEMINI: A NATURAL LANGUAGE SYSTEM FOR SPOKEN-LANGUAGE UNDERSTANDING*

John Dowding, Jean Mark Gawron, Doug Appelt, John Bear, Lynn Cherny, Robert Moore, and Douglas Moran
SRI International
333 Ravenswood Avenue
Menlo Park, CA 94025
Internet: [email protected]

1. INTRODUCTION

Gemini is a natural language (NL) understanding system developed for spoken language applications. This paper describes the details of the system, and includes relevant measurements of size, efficiency, and performance of each of its components.

In designing any NL understanding system, there is a tension between robustness and correctness. Forgiving an error risks throwing away crucial information; furthermore, devices added to a system to enhance robustness can sometimes enrich the ways of finding an analysis, multiplying the number of analyses for a given input, and making it more difficult to find the correct analysis. In processing spoken language this tension is heightened because the task of speech recognition introduces a new source of error. The robust system will attempt to find a sensible interpretation, even in the presence of performance errors by the speaker, or recognition errors by the speech recognizer. On the other hand, a system should be able to detect that a recognized string is not a sentence of English, to help filter recognition errors by the speech recognizer. Furthermore, if parsing and recognition are interleaved, then the parser should enforce constraints on partial utterances.

The approach taken in Gemini is to constrain language recognition with fairly conventional grammar, but to augment that grammar with two orthogonal rule-based recognition modules, one for glueing together the fragments found during the conventional grammar parsing phase, and another for recognizing and eliminating disfluencies known as "repairs."
* This research was supported by the Advanced Research Projects Agency under Contract ONR N00014-90-C-0085 with the Office of Naval Research. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Advanced Research Projects Agency of the U.S. Government.

At the same time, the multiple analyses arising before and after all this added robustness are managed in two ways: first, by highly constraining the additional rule-based modules by partitioning the rules into preference classes, and second, through the addition of a postprocessing parse preference component.

Processing starts in Gemini when syntactic, semantic, and lexical rules are applied by a bottom-up all-paths constituent parser to populate a chart with edges containing syntactic, semantic, and logical form information. Then, a second utterance parser is used to apply a second set of syntactic and semantic rules that are required to span the entire utterance. If no semantically acceptable utterance-spanning edges are found during this phase, a component to recognize and correct certain grammatical disfluencies is applied. When an acceptable interpretation is found, a set of parse preferences is used to choose a single best interpretation from the chart to be used for subsequent processing. Quantifier scoping rules are applied to this best interpretation to produce the final logical form, which is then used as input to a query-answering system. The following sections describe each of these components in detail, with the exception of the query-answering subsystem, which is not described in this paper.

In our component-by-component view of Gemini, we provide detailed statistics on each component's size, speed, coverage, and accuracy.
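The control flow just described can be summarized schematically. In the sketch below the stage functions are caller-supplied placeholders (Gemini's actual interfaces are not published in this form); only the ordering of stages and the repair-correction fallback mirror the text:

```python
# Schematic of Gemini's processing order; the stage functions are
# placeholders passed in by the caller, not the system's real API.
def understand(utterance, constituent_parse, utterance_parse,
               repair_correction, prefer, scope):
    chart = constituent_parse(utterance)   # bottom-up all-paths parsing
    edges = utterance_parse(chart)         # utterance-spanning edges
    if not edges:                          # fall back to repair correction
        edges = utterance_parse(repair_correction(chart))
    best = prefer(edges)                   # parse preferences pick one edge
    return scope(best)                     # quantifier scoping -> logical form
```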
These numbers detail our performance on the subdomain of air-travel planning that is currently being used by the ARPA spoken language understanding community (MADCOW, 1992). Gemini was trained on a 5875-utterance dataset from this domain, with another 688 utterances used as a blind test (not explicitly trained on, but run multiple times) to monitor our performance on a dataset on which we did not train. We also report here our results on another 756-utterance fair test set that we ran only once. Table 1 contains a summary of the coverage of the various components on both the training and fair test sets. More detailed explanations of these numbers are given in the relevant sections.

                                     Training    Test
    Lexicon                             99.1%   95.9%
    Syntax                              94.2%   90.9%
    Semantics                           87.4%   83.7%
    Syntax (repair correction)          96.0%   93.1%
    Semantics (repair correction)       89.1%   86.0%

Table 1: Domain Coverage by Component

2. SYSTEM DESCRIPTION

Gemini maintains a firm separation between the language- and domain-specific portions of the system, and the underlying infrastructure and execution strategies. The Gemini kernel consists of a set of compilers to interpret the high-level languages in which the lexicon and syntactic and semantic grammar rules are written, as well as the parser, semantic interpretation, quantifier scoping, repair correction mechanisms, and all other aspects of Gemini that are not specific to a language or domain. Although this paper describes the lexicon, grammar, and semantics of English, Gemini has also been used in a Japanese spoken language understanding system (Kameyama, 1992).

2.1. Grammar Formalism

Gemini includes a midsized constituent grammar of English (described in section 2.3), a small utterance grammar for assembling constituents into utterances (described in section 2.7), and a lexicon. All three are written in a variant of the unification formalism used in the Core Language Engine (Alshawi, 1992).
The basic building block of the grammar formalism is a category with feature constraints. Here is an example:

    np:[wh=ynq, case=(nom∨acc), pers_num=(3rd∧sg)]

This category can be instantiated by any noun phrase with the value ynq for its wh feature (which means it must be a wh-bearing noun phrase like which book, who, or whose mother), either acc (accusative) or nom (nominative) for its case feature, and the conjunctive value 3rd∧sg (third and singular) for its person-number feature. This formalism is related directly to the Core Language Engine, but more conceptually it is closely related to that of other unification-based grammar formalisms with a context-free skeleton, such as PATR-II (Shieber et al., 1983), Categorial Unification Grammar (Uszkoreit, 1986), Generalized Phrase-Structure Grammar (Gazdar et al., 1982), and Lexical Functional Grammar (Bresnan, 1982). Gemini differs from other unification formalisms in the following ways. Since many of the most interesting issues regarding the formalism concern typing, we defer discussing motivation until section 2.5.

• Gemini uses typed unification. Each category has a set of features declared for it. Each feature has a declared value space of possible values (value spaces may be shared by different features). Feature structures in Gemini can be recursive, but only by having categories in their value space; so typing is also recursive. Typed feature structures are also used in HPSG (Pollard and Sag, in press). One important difference with the use in Gemini is that Gemini has no type inheritance.

• Some approaches do not assume a syntactic skeleton of category-introducing rules (for example, Functional Unification Grammar (Kay, 1979)). Some make such rules implicit (for example, the various categorial unification approaches, such as Unification Categorial Grammar (Zeevat, Klein, and Calder, 1987)).
Even when a syntactic skeleton is assumed, some approaches do not distinguish the category of a constituent (for example, np, vp) from its other features (for example, pers_num, gapsin, gapsout). Thus, for example, in one version of GPSG, categories were simply feature bundles (attribute-value structures) and there was a feature MAJ taking values like N, V, A, and P which determined the major category of a constituent.

• Gemini does not allow rules schematizing over syntactic categories.

2.2. Lexicon

The Gemini lexicon uses the same category notation as the Gemini syntactic rules. Lexical categories are types as well, with sets of features defined for them. The lexical component of Gemini includes the lexicon of base forms, lexical templates, morphological rules, and the lexical type and feature default specifications.

The Gemini lexicon used for the air-travel planning domain contains 1,315 base entries. These expand by morphological rules to 2,019. In the 5875-utterance training set, 52 sentences contained unknown words (0.9%), compared to 31 sentences in the 756-utterance fair test set (4.1%).

2.3. Constituent Grammar

A simplified example of a syntactic rule is

    syn(whq_ynq_slash_np,
        [s:[sentence_type=whq, form=tnsd,
            gapsin=G, gapsout=G],
         np:[wh=ynq, pers_num=N],
         s:[sentence_type=ynq, form=tnsd,
            gapsin=np:[pers_num=N], gapsout=null]]).

This syntax rule (named whq_ynq_slash_np) says that a sentence (category s) can be built by finding a noun phrase (category np) followed by a sentence. It requires that the daughter np have the value ynq for its wh feature and that it have the value N (a variable) for its person-number feature. It requires that the daughter sentence have a category value for its gapsin feature, namely an np with a person-number value N, which is the same as the person-number value on the wh-bearing noun phrase.
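The variable sharing in a rule like this can be pictured with a toy matcher (a minimal sketch, not Gemini's implementation; the class and function names are our own):

```python
# Sketch of matching a rule's feature bundle against a chart edge's
# bundle, threading variable bindings across daughters. Names and the
# flat-bundle simplification are illustrative assumptions.

class Var:
    def __init__(self, name):
        self.name = name

def match(rule_cat, chart_cat, bindings):
    """Return updated bindings if the bundles are compatible, else None."""
    bindings = dict(bindings)
    for feat, val in rule_cat.items():
        actual = chart_cat.get(feat)
        if isinstance(val, Var):
            if val.name in bindings and bindings[val.name] != actual:
                return None              # variable already bound differently
            bindings[val.name] = actual  # bind N to the edge's value
        elif actual != val:
            return None                  # constant feature clash
    return bindings

N = Var("N")
# the np daughter binds N...
b = match({"wh": "ynq", "pers_num": N}, {"wh": "ynq", "pers_num": "3rdsg"}, {})
print(b)                                               # {'N': '3rdsg'}
# ...and the same N then constrains the s daughter's gapsin np:
print(match({"pers_num": N}, {"pers_num": "3rdpl"}, b))  # None: clash
```

The point of the sketch is only the binding discipline: once the wh-phrase fixes N, a gap with a different person-number value cannot unify.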
The interpretation of the entire rule is that a gapless sentence with sentence_type whq can be built by finding a wh-phrase followed by a sentence with a noun phrase gap in it that has the same person number as the wh-phrase.

Semantic rules are written in much the same rule format, except that in a semantic rule, each of the constituents mentioned in the phrase structure skeleton is associated with a logical form. Thus, the semantics for the rule above is

    sem(whq_ynq_slash_np,
        [([whq, S], s:[]),
         (Np, np:[]),
         (S, s:[gapsin=np:[gapsem=Np]])]).

Here the semantics of the mother s is just the semantics of the daughter s with the illocutionary force marker whq wrapped around it. In addition, the semantics of the s gap's np's gapsem has been unified with the semantics of the wh-phrase. Through a succession of unifications this will end up assigning the wh-phrase's semantics to the gap position in the argument structure of the s. Although each semantic rule must be keyed to a preexisting syntactic rule, there is no assumption of rule-to-rule uniqueness. Any number of semantic rules may be written for a single syntactic rule. We discuss some further details of the semantics in section 2.6.

The constituent grammar used in Gemini contains 243 syntactic rules, and 315 semantic rules. Syntactic coverage on the 5875-utterance training set was 94.2%, and on the 756-utterance test set it was 90.9%.

2.4. Parser

Since Gemini was designed with spoken language interpretation in mind, key aspects of the Gemini parser are motivated by the increased needs for robustness and efficiency that characterize spoken language. Gemini uses essentially a pure bottom-up chart parser, with some limited left-context constraints applied to control creation of categories containing syntactic gaps. Some key properties of the parser are

• The parser is all-paths bottom-up, so that all possible edges admissible by the grammar are found.
• The parser uses subsumption checking to reduce the size of the chart. Essentially, an edge is not added to the chart if it is less general than a preexisting edge, and preexisting edges are removed from the chart if the new edge is more general.

• The parser is on-line (Graham, Harrison, and Ruzzo, 1980), essentially meaning that all edges that end at position i are constructed before any that end at position i + 1. This feature is particularly desirable if the final architecture of the speech understanding system couples Gemini tightly with the speech recognizer, since it guarantees for any partial recognition input that all possible constituents will be built.

An important feature of the parser is the mechanism used to constrain the construction of categories containing syntactic gaps. In earlier work (Moore and Dowding, 1991), we showed that approximately 80% of the edges built in an all-paths bottom-up parser contained gaps, and that it is possible to use prediction in a bottom-up parser only to constrain the gap categories, without requiring prediction for nongapped categories. This limited form of left-context constraint greatly reduces the total number of edges built for a very low overhead. In the 5875-utterance training set, the chart for the average sentence contained 313 edges, but only 23 predictions.

2.5. Typing

The main advantage of typed unification is for grammar development. The type information on features allows the lexicon, grammar, and semantics compilers to provide detailed error analysis regarding the flow of values through the grammar, and to warn if features are assigned improper values, or variables of incompatible types are unified. Since the type analysis is performed statically at compile time, there is no run-time overhead associated with adding types to the grammar.

The major grammatical category plays a special role in the typing scheme of Gemini.
For each category, Gemini makes a set of declarations stipulating its allowable features and the relevant value spaces. Thus, the distinction between the syntactic category of a constituent and its other features can be cashed out as follows: the syntactic category can be thought of as the feature structure type. The only other types needed by Gemini are the value spaces used by features. Thus for example, the type v (verb) admits a feature vform, whose value space vform-types can be instantiated with values like present participle, finite, and past participle. Since all recursive features are category-valued, these two kinds of types suffice.

2.6. Interleaving Syntactic and Semantic Information

Sortal Constraints  Selectional restrictions are imposed in Gemini through the sorts mechanism. Selectional restrictions include both highly domain-specific information about predicate-argument relations and very general predicate restrictions. For example, in our application the object of the transitive verb depart (as in flights departing Boston) is restricted to be an airport or a city, obviously a domain-specific requirement. But the same machinery also restricts a determiner like all to take two propositions, and an adjective like further to take distances as its measure-specifier (as in thirty miles further). In fact, sortal constraints are assigned to every atomic predicate and operator appearing in the logical forms constructed by the semantic rules.

Sorts are located in a conceptual hierarchy and are implemented as Prolog terms such that more general sorts subsume more specific sorts (Mellish, 1988). This allows the subsumption checking and packing in the parser to share structure whenever possible. Semantic coverage with sortal constraints applied was 87.4% on the training set, and on the test set it was 83.7%.

Interleaving Semantics with Parsing  In Gemini, syntactic and semantic processing is fully interleaved.
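A toy sketch of this interleaving (illustrative only; Gemini implements sorts as Prolog terms, and the sort names and function names below are our own assumptions in the spirit of the ATIS domain):

```python
# An edge enters the chart only if syntax, then semantics, then sortal
# constraints all succeed. The parent map stands in for the conceptual
# hierarchy in which general sorts subsume specific ones.

PARENT = {"city": "location", "airport": "location", "location": "entity"}

def sort_subsumes(general, specific):
    """True if `general` is `specific` or one of its ancestors."""
    while specific is not None:
        if specific == general:
            return True
        specific = PARENT.get(specific)
    return False

def try_add_edge(chart, syn_ok, logical_form, required_sort, actual_sort):
    if not syn_ok:                        # 1. syntactic constraints
        return False
    if logical_form is None:              # 2. a semantic rule must apply
        return False
    if not sort_subsumes(required_sort, actual_sort):  # 3. sortal check
        return False
    chart.append(logical_form)            # only well-sorted edges survive
    return True

chart = []
# "departing Boston": depart's object must be an airport or a city
print(try_add_edge(chart, True, "depart(boston)", "location", "city"))  # True
print(try_add_edge(chart, True, "depart(lunch)", "location", "meal"))   # False
```

The filtering order mirrors the prose description that follows: an ill-sorted logical form fragment never reaches the chart, which is how interleaving limits overgeneration.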
Building an edge requires that syntactic constraints be applied, which results in a tree structure, to which semantic rules can be applied, which results in a logical form to which sortal constraints can be applied. Only if the syntactic edge leads to a well-sorted semantically-acceptable logical form fragment is it added to the chart.

Interleaving the syntax and semantics in this way depends on a crucial property of the semantics: a semantic interpretation is available for each syntactic node. This is guaranteed by the semantic rule formalism and by the fact that every lexical item has a semantics associated with it. Table 2 contains average edge counts and parse timing statistics¹ for the 5875-utterance training set.

                                 Edges   Time
    Syntax only                   197    3.4 sec.
    Syntax + semantics            234    4.47 sec.
    Syntax + semantics + sorts    313    13.5 sec.

    Table 2: Average Number of Edges Built by Interleaved Processing

¹Gemini is implemented primarily in Quintus Prolog version 3.1.1. All timing numbers given in this paper were run on a lightly loaded Sun SPARCstation 2 with at least 48 MB of memory. Under normal conditions, Gemini runs in under 12 MB of memory.

2.7. Utterance Parsing

The constituent parser uses the constituent grammar to build all possible categories bottom-up, independent of location within the string. Thus, the constituent parser does not force any constituent to occur either at the beginning of the utterance, or at the end. Those constraints are stated in what we call the utterance grammar. They are applied after constituent parsing is complete by the utterance parser. The utterance grammar specifies ways of combining the categories found by the constituent parser into an analysis of the complete utterance. It is at this point that the system recognizes whether the sentence was a simple complete sentence, an isolated sentence fragment, a run-on sentence, or a sequence of related fragments.
Many systems (Carbonell and Hayes, 1983; Hobbs et al., 1992; Seneff, 1992; Stallard and Bobrow, 1992) have added robustness with a similar postprocessing phase. The approach taken in Gemini differs in that the utterance grammar uses the same syntactic and semantic rule formalism used by the constituent grammar. Thus, the same kinds of logical forms built during constituent parsing are the output of utterance parsing, with the same sortal constraints enforced. For example, an utterance consisting of a sequence of modifier fragments (like on Tuesday at three o'clock on United) is interpreted as a conjoined property of a flight, because the only sort of thing in the ATIS domain that can be on Tuesday at three o'clock on United is a flight.

The utterance parser partitions the utterance grammar into equivalence classes and considers each class according to an ordering. Utterance parsing terminates when all constituents satisfying the rules of the current equivalence class are built, unless there are none, in which case the next class is considered. The highest ranked class consists of rules to identify simple complete sentences, the next highest class consists of rules to identify simple isolated sentence fragments, and so on. Thus, the utterance parser allows us to enforce a very coarse form of parse preferences (for example, preferring complete sentences to sentence fragments). These coarse preferences could also be enforced by the parse preference component described in section 2.9, but for the sake of efficiency we choose to enforce them here.

The utterance grammar is significantly smaller than the constituent grammar - only 37 syntactic rules and 43 semantic rules.

2.8. Repairs

Grammatical disfluencies occur frequently in spontaneous spoken language.
We have implemented a component to detect and correct a large subclass of these disfluencies (called repairs, or self-corrections) where the speaker intends that the meaning of the utterance be gotten by deleting one or more words. Often, the speaker gives clues of their intention by repeating words or adding cue words that signal the repair:

(1) a. How many American airline flights leave Denver on June June tenth.
    b. Can you give me information on all the flights from San Francisco no from Pittsburgh to San Francisco on Monday.

The mechanism used in Gemini to detect and correct repairs is currently applied as a fallback if no semantically acceptable interpretation is found for the complete utterance. The mechanism finds sequences of identical or related words, possibly separated by a cue word (for example, oh or no) that might indicate the presence of a repair, and deletes the first occurrence of the matching portion. Since there may be several such sequences of possible repairs in the utterance, the mechanism produces a ranked set of candidate corrected utterances. These candidates are ranked in order of the fewest deleted words. The first candidate that can be given an interpretation is accepted as the intended meaning of the utterance. This approach is presented in detail in (Bear, Dowding, and Shriberg, 1992).

The repair correction mechanism helps increase the syntactic and semantic coverage of Gemini (as reported in Table 1). In the 5875-utterance training set, 178 sentences contained nontrivial repairs², of which Gemini found 89 (50%). Of the sentences Gemini corrected, 81 were analyzed correctly (91%), and 8 contained repairs but were corrected wrongly. Similarly, the 756-utterance test set contained 26 repairs, of which Gemini found 11 (42%). Of those 11, 8 were analyzed correctly (77%), and 3 were analyzed incorrectly.
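A much-simplified sketch of the repair heuristic just described (not the actual Gemini algorithm, which also matches related rather than only identical words; the function name and cue-word list are our own):

```python
# Find a repeated word sequence, optionally separated by a cue word,
# and delete the first occurrence; candidates are ranked so that the
# corrections deleting the fewest words are tried first.

CUE_WORDS = {"no", "oh"}

def repair_candidates(words):
    """Return corrected word lists, fewest deletions first."""
    candidates = []
    n = len(words)
    for length in range(1, n // 2 + 1):
        for i in range(n - length):
            j = i + length
            if words[j:j + length] == words[i:j]:              # "X X"
                candidates.append((length, words[:i] + words[j:]))
            elif (j + 1 + length <= n and words[j] in CUE_WORDS
                  and words[j + 1:j + 1 + length] == words[i:j]):  # "X cue X"
                candidates.append((length + 1, words[:i] + words[j + 1:]))
    return [c for _, c in sorted(candidates, key=lambda p: p[0])]

utt = "flights leave Denver on June June tenth".split()
print(" ".join(repair_candidates(utt)[0]))
# flights leave Denver on June tenth
```

This handles example (1a)'s "June June" repetition directly; handling (1b), where only some of the repeated words are identical, would require the related-word matching that the fuller mechanism provides.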
Since Gemini's approach is to extend language analysis to recognize specific patterns characteristic of spoken language, it is important for components like repair correction (which provide the powerful capability of deleting words) not to be applied in circumstances where no repair is present. In the 5875-utterance training set, Gemini misidentified only 15 sentences (0.25%) as containing repairs when they did not. In the 756-utterance test set, only 2 sentences were misidentified as containing repairs (0.26%).

While the repair correction component currently used in Gemini does not make use of acoustic/prosodic information, it is clear that acoustics can contribute meaningful cues to repair. In future work, we hope to improve the performance of our repair correction component by incorporating acoustic/prosodic techniques for repair detection (Bear, Dowding, and Shriberg, 1992; Nakatani and Hirschberg, 1993; O'Shaughnessy, 1992).

A central question about the repairs module concerns its role in a tightly integrated system in which the NL component filters speech recognition hypotheses. The open question: should the repairs module be part of the recognizer filter or should it continue to be a postprocessing component? The argument for including it in the filter is that without a repairs module, the NL system rejects many sentences with repairs, and will thus disprefer essentially correct recognizer hypotheses. The argument against including it is efficiency and the concern that with recognizer errors present, the repair module's precision may suffer: it may attempt to repair sentences with no repair in them. Our current best guess is that recognizer errors are essentially orthogonal to repairs and that a filter including the repairs module will not suffer from precision problems.

²For these results, we ignored repairs consisting of only an isolated fragment word, or sentence-initial filler words like "yes" and "okay".
But we have not yet performed the experiments to decide this.

2.9. Parse Preference Mechanism

In Gemini, parse preferences are enforced when extracting syntactically and semantically well-formed parse trees from the chart. In this respect, our approach differs from many other approaches to the problem of parse preferences, which make their preference decisions as parsing progresses, pruning subsequent parsing paths (Frazier and Fodor, 1978; Hobbs and Bear, 1990; Marcus, 1980). Applying parse preferences requires comparing two subtrees spanning the same portion of the utterance.

The parse preference mechanism begins with a simple strategy to disprefer parse trees containing specific "marked" syntax rules. As an example of a dispreferred rule, consider: Book those three flights to Boston. This sentence has a parse on which those three is a noun phrase with a missing head (consider a continuation of the discourse Three of our clients have sufficient credit). After penalizing such dispreferred parses, the preference mechanism applies attachment heuristics based on the work by Pereira (1985) and Shieber (1983).

Pereira's paper shows how the heuristics of Minimal Attachment and Right Association (Kimball, 1973) can both be implemented using a bottom-up shift-reduce parser.

(2) a. John sang a song for Mary.
    b. John canceled the room Mary reserved yesterday.

Minimal Attachment selects for the tree with the fewest nodes, so in (2a), the parse that makes for Mary a complement of sings is preferred. Right Association selects for the tree that incorporates a constituent A into the rightmost possible constituent (where rightmost here means beginning the furthest to the right). Thus, in (2b) the parse in which yesterday modifies reserved is preferred.

The problem with these heuristics is that when they are formulated loosely, as in the previous paragraph, they appear to conflict.
In particular, in (2a), Right Association seems to call for the parse that makes for Mary a modifier of song. Pereira's goal is to show how a shift-reduce parser can enforce both heuristics without conflict and enforce the desired preferences for examples like (2a) and (2b). He argues that Minimal Attachment and Right Association can be enforced in the desired way by adopting the following heuristics for resolving conflicts:

1. Right Association: In a shift-reduce conflict, prefer shifts to reduces.
2. Minimal Attachment: In a reduce-reduce conflict, prefer longer reduces to shorter reduces.

Since these two principles never apply to the same choice, they never conflict. For purposes of invoking Pereira's heuristics, the derivation of a parse can be represented as the sequence of S's (Shift) and R's (Reduce) needed to construct the parse's unlabeled bracketing. Consider, for example, the choice between two unlabeled bracketings of (2a):

    (a) [John [sang [a song] [for Mary]]]
        S S S S R S S R R R
    (b) [John [sang [[a song] [for Mary]]]]
        S S S S R S S R R R R

There is a shift for each word and a reduce for each right bracket. Comparison of the two parses consists simply of pairing the moves in the shift-reduce derivation from left to right. Any parse making a shift move that corresponds to a reduce move loses by Right Association. Any parse making a reduce move that corresponds to a longer reduce loses by Minimal Attachment. In derivation (b) above, the third reduce move builds the constituent a song for Mary from two constituents, while the corresponding reduce in (a) builds sang a song for Mary from three constituents. Parse (b) thus loses by Minimal Attachment.

Questions about the exact nature of parse preferences (and thus about the empirical adequacy of Pereira's proposal) still remain open, but the mechanism sketched does provide plausible results for a number of examples.

2.10.
Scoping

The final logical form produced by Gemini is the result of applying a set of quantifier scoping rules to the best interpretation chosen by the parse preference mechanism. The semantic rules build quasi-logical forms, which contain complete semantic predicate-argument structure, but do not specify quantifier scoping. The scoping algorithm that we use combines syntactic and semantic information with a set of quantifier scoping preference rules to rank the possible scoped logical forms consistent with the quasi-logical form selected by parse preferences. This algorithm is described in detail in (Moran, 1988).

3. CONCLUSION

In our approach to resolving the tension between overgeneration and robustness in a spoken language understanding system, some aspects of Gemini are specifically oriented towards limiting overgeneration, such as the on-line property for the parser, and fully interleaved syntactic and semantic processing. Other components, such as the fragment and run-on processing provided by the utterance grammar, and the correction of recognizable grammatical repairs, increase the robustness of Gemini. We believe a robust system can still recognize and disprefer utterances containing recognition errors. Research in the construction of the Gemini system is ongoing to improve Gemini's speed and coverage, as well as to examine deeper integration strategies with speech recognition, and integration of prosodic information into spoken language disambiguation.

REFERENCES

Alshawi, H. (ed.) (1992). The Core Language Engine, MIT Press, Cambridge.

Bear, J., Dowding, J., and Shriberg, E. (1992). "Integrating Multiple Knowledge Sources for the Detection and Correction of Repairs in Human-Computer Dialog", in Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics, Newark, DE, pp. 56-63.

Bresnan, J. (ed.) (1982). The Mental Representation of Grammatical Relations, MIT Press, Cambridge.

Carbonell, J., and Hayes, P.
(1983). "Recovery Strategies for Parsing Extragrammatical Language", American Journal of Computational Linguistics, Vol. 9, Numbers 3-4, pp. 123-146.

Frazier, L., and Fodor, J. D. (1978). "The Sausage Machine: A New Two-Stage Parsing Model", Cognition, Vol. 6, pp. 291-325.

Gazdar, G., Klein, E., Pullum, G., and Sag, I. (1982). Generalized Phrase Structure Grammar, Harvard University Press, Cambridge.

Graham, S., Harrison, M., and Ruzzo, W. (1980). "An Improved Context-Free Recognizer", ACM Transactions on Programming Languages and Systems, Vol. 2, No. 3, pp. 415-462.

Hobbs, J., and Bear, J. (1990). "Two Principles of Parse Preference", in Proceedings of the 13th International Conference on Computational Linguistics, Helsinki, Vol. 3, pp. 162-167.

Hobbs, J., Appelt, D., Bear, J., Tyson, M., and Magerman, D. (1992). "Robust Processing of Real-World Natural-Language Texts", in Text Based Intelligent Systems, ed. P. Jacobs, Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 13-33.

Kameyama, M. (1992). "The Syntax and Semantics of the Japanese Language Engine", forthcoming. In Mazuka, R., and N. Nagai, Eds. Japanese Syntactic Processing, Hillsdale, NJ: Lawrence Erlbaum Associates.

Kay, M. (1979). "Functional Grammar", in Proceedings of the 5th Annual Meeting of the Berkeley Linguistics Society, pp. 142-158.

Kimball, J. (1973). "Seven Principles of Surface Structure Parsing in Natural Language", Cognition, Vol. 2, No. 1, pp. 15-47.

MADCOW (1992). "Multi-site Data Collection for a Spoken Language Corpus", in Proceedings of the DARPA Speech and Natural Language Workshop, February 23-26, 1992.

Marcus, M. (1980). A Theory of Syntactic Recognition for Natural Language, MIT Press, Cambridge.

Moran, D. (1988). "Quantifier Scoping in the SRI Core Language Engine", in Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics, State University of New York at Buffalo, Buffalo, NY, pp. 33-40.

Mellish, C. (1988).
"Implementing Systemic Classification by Unification", Computational Linguistics, Vol. 14, pp. 40-51.

Moore, R., and Dowding, J. (1991). "Efficient Bottom-up Parsing", in Proceedings of the DARPA Speech and Natural Language Workshop, February 19-22, 1991, pp. 200-203.

Nakatani, C., and Hirschberg, J. (1993). "A Speech-First Model for Repair Detection and Correction", in Proceedings of the ARPA Workshop on Human Language Technology, March 21-24, 1993, Plainsboro, NJ.

O'Shaughnessy, D. (1992). "Analysis of False Starts in Spontaneous Speech", in Proceedings of the 1992 International Conference on Spoken Language Processing, October 12-16, 1992, Banff, Alberta, Canada, pp. 931-934.

Pereira, F. (1985). "A New Characterization of Attachment Preferences", in Natural Language Parsing, ed. by Dowty, D., Karttunen, L., and Zwicky, A., Cambridge University Press, Cambridge, pp. 307-319.

Pollard, C., and Sag, I. (in press). Information-Based Syntax and Semantics, Vol. 2, CSLI Lecture Notes.

Seneff, S. (1992). "A Relaxation Method for Understanding Spontaneous Speech Utterances", in Proceedings of the Speech and Natural Language Workshop, Harriman, NY, pp. 299-304.

Shieber, S. (1983). "Sentence Disambiguation by a Shift-Reduce Parsing Technique", in Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, Boston, Massachusetts, pp. 113-118.

Shieber, S., Uszkoreit, H., Pereira, F., Robinson, J., and Tyson, M. (1983). "The Formalism and Implementation of PATR-II", in Grosz, B. and Stickel, M. (eds.) Research on Interactive Acquisition and Use of Knowledge, SRI International, pp. 39-79.

Stallard, D., and Bobrow, R. (1992). "Fragment Processing in the DELPHI System", in Proceedings of the Speech and Natural Language Workshop, Harriman, NY, pp. 305-310.

Uszkoreit, H. (1986).
"Categorial Unification Grammars", in Proceedings of the 11th International Conference on Computational Linguistics and the 24th Annual Meeting of the Association for Computational Linguistics, Institut für Kommunikationsforschung und Phonetik, Bonn University.

Zeevat, H., Klein, E., and Calder, J. (1987). "An Introduction to Unification Categorial Grammar", in Haddock, N., Klein, E., Merrill, G. (eds.) Edinburgh Working Papers in Cognitive Science, Volume 1: Categorial Grammar, Unification Grammar, and Parsing.
THE EFFECT OF ESTABLISHING COHERENCE IN ELLIPSIS AND ANAPHORA RESOLUTION

Andrew Kehler
Harvard University
Aiken Computation Laboratory
33 Oxford Street
Cambridge, MA 02138
[email protected]

Abstract

This paper presents a new model of anaphoric processing that utilizes the establishment of coherence relations between clauses in a discourse. We survey data that comprises a currently stalemated argument over whether VP-ellipsis is an inherently syntactic or inherently semantic phenomenon, and show that the data can be handled within a uniform discourse processing architecture. This architecture, which revises the dichotomy between ellipsis vs. Model Interpretive Anaphora given by Sag and Hankamer (1984), is also able to accommodate divergent theories and data for pronominal reference resolution. The resulting architecture serves as a baseline system for modeling the role of cohesive devices in natural language.

1 Introduction

There has been much debate concerning the appropriate level of language processing at which to treat VP-ellipsis resolution. Syntactic accounts (Fiengo and May, 1990; Haïk, 1987; Hellan, 1988; Hestvik, 1993; Lappin, 1993; Lappin and McCord, 1990) claim that syntactic material is copied from the antecedent clause and reconstructed in the elided clause, whereas semantic accounts (Dalrymple, 1991; Dalrymple et al., 1991; Gawron and Peters, 1990; Hardt, 1992; Kehler, 1993; Klein, 1987) claim this material is retrieved from semantic representations. This debate is currently deadlocked; indeed a survey of the data seems to indicate that ellipsis must be both a syntactic and semantic phenomenon. In Section 2, we examine five types of ellipsis contexts, and show a pattern that has gone unnoticed in the literature. In Section 3 we break the deadlock by presenting a discourse processing architecture from which the correct predictions naturally result.
We present further evidence for this architecture from pronominal NP reference resolution data in Section 4, and conclude in Section 5.

2 VP Ellipsis Data

A simple example of verb phrase (VP) ellipsis is given in sentence (1):

(1) Ross likes his mother, and Bill does too.

The stranded auxiliary in the second clause (the target clause) indicates the deletion of a verb phrase, a representation for which is to be located from another clause (the source clause), in this case, the first clause. Sentence (1) displays a strict/sloppy ambiguity: Bill may like Ross's mother (the strict reading) or his own mother (the sloppy reading).

In this section we examine five types of elliptical contexts, and show the following heretofore unnoticed pattern.¹ When the relationship between the source clause A and the target clause B is what we term parallel (as exemplified by the sentential structure "A and B too", as in sentence (1)), the data indicates that a syntactic representation is reconstructed at the target site. That is, a syntactically parallel source VP must be available, and reconstruction of this VP in the target clause is subject to syntactic constraints. However, in non-parallel constructions (for example, contrastive conjunctions ("A but B"), subordinates ("A because B"), and comparatives ("A better than B")) neither a syntactic source VP nor compliance with syntactic constraints is necessary; instead only a suitable semantic source representation is required. These data strongly suggest a discourse processing architecture that is sensitive to the establishment of coherence relations between clauses, which is described in Section 3.

2.1 Passive/Active Alternation

Active elided target VPs can receive interpretations from passive source VPs:

(2) In March, four fireworks manufacturers asked that the decision be reversed, and on Monday the ICC did.
[ reverse the decision ]
(from text of Rosenthal (1988), cited in Dalrymple (1991))²

¹Although space precludes us from discussing past work in detail, no approach known to us addresses (let alone accounts for) all of the phenomena discussed herein.

²This use of and in this example does not signify a parallel relationship between the source and target, as

(3) This problem was to have been looked into, but obviously nobody did. [ look into the problem ]
(Vincent Della Pietra, in conversation)

Less frequent, but still extant, are cases of passive targets with active sources:

(4) In addition to inducing lethality during the first instar, it³ retards embryonic development, but not to the extent that the growth cones were. [ retarded ]
(from text of (Jarecki, 1992))
(5) Actually I have implemented it⁴ with a manager, but it doesn't have to be. [ implemented with a manager ]
(Steven Ketchpel, in conversation)

These examples are problematic for syntactic analyses of ellipsis, because the source clause is not a surface VP constituent. In contrast, semantic analyses cannot account for the unacceptability of similar examples with parallel constructions:

(6) * This problem was looked into by John, and Bob did too.
(7) * This agent retards embryonic development, and the growth cones were too.

Syntactic parallelism is apparently required in the parallel construction, but not in the non-parallel ones.

2.2 Condition C Violations

Fiengo and May (1990) and Lappin (1993) note the unacceptability of examples such as (8) and (9):

(8) * I hit Billi, and then hei did too.
(9) * Mary introduced Johnj to everyone, and hej did too.

Under a syntactic reconstruction analysis, this is predicted by a Condition C violation within Binding Theory (Chomsky, 1981):

(10) * I hit Billi, and then hei hit Billi too.
(11) * Mary introduced Johnj to everyone, and hej introduced Johnj to everyone too.
Because syntactic constraints do not apply at the semantic level, semantic accounts predict these cases to be acceptable. However, Dalrymple (1991) gives examples where Condition C should apply but apparently does not:5

the use of too would be inappropriate under the desired reading. We might term this relationship to be result.

3Here, it refers to a previously mentioned chemical agent.

4Here, it refers to a previously mentioned computer system.

5These particular cases also violate the Vehicle Change proposal of Fiengo and May (1990), which adequately handles other acceptable cases that violate Condition C.

(12) I expected Billi to win even when hei didn't.

(13) The lawyer defended Billj against the accusations better than hej could have.

As these examples are non-parallel constructions, again it appears that syntactic constraints apply in parallel constructions but not in non-parallel ones.

2.3 Condition A Violations

As predicted by Condition A of Binding Theory, it is generally difficult to obtain a strict reading when the source clause contains a reflexive pronoun:

(14) ?? Johni defended himselfi, and Bobj did too. [ defended Johni ]

(15) ?? Fredi voted for himselfi, and Garyj did too. [ voted for Fredi ]

Given appropriate semantic context, judgements improve but still remain somewhat stilted:6

(16) ? The alleged murdereri defended himselfi, and his lawyerj did too. [ defended the alleged murdereri ]

(17) ? Bill Clintoni voted for himselfi, and his campaign managerj did too. [ voted for Bill Clintoni ]

The stiltedness of reflexives under a strict reading disappears, however, in non-parallel constructions (from Dalrymple (1991)):

(18) Billi defended himselfi against the accusations better than his lawyerj did. [ defended Billi ]

(19) Johni voted for himselfi even though no one elsej did. [ voted for Johni ]

In these cases, the strict reading is readily available and perhaps preferred.
Again, there appears to be a syntactic dependency in the parallel cases that is absent from the non-parallel ones.

2.4 Non-VP Antecedents

In the following examples, the source representation is not a syntactic VP but instead comes from a nominalization:7

(20) This letter deserves a response, but before you do, .... [ respond ] (Gregory Ward, p.c.)

(21) Meanwhile, they sense a drop in visitors to the city. Those who do, they say, are not taking cabs. [ visit the city ] (Chicago Tribune, courtesy Gregory Ward)

6There appears to be a dialect that readily allows strict readings with reflexives. However, even for those speakers, the non-parallel constructions given below are more acceptable under the strict interpretation than the parallel cases.

7Some speakers find these cases slightly awkward or stilted. Again, however, most find these better than the parallel construction cases given below.

Semantic analyses can account for these cases if nominalizations are assumed to evoke event representations into the discourse model. However, in parallel constructions, discourse-model events from nominalizations are apparently not available:

(22) * This letter provoked a response from Bush, and Clinton did too. [ responded ]

(23) * There is a rise in American visitors to the city, and Canadians do too. [ visit the city ]

A similar pattern is seen in cases where the antecedent of the ellipsis is evoked by an adjectival phrase:

(24) First person pronouns aren't very shiftable, although the plural ones can be. [ shifted ] (Barbara Partee, in conversation)

The acceptability of example (24) is to be compared with the relative unacceptability of an otherwise similar parallel construction case:

(25) * First person pronouns aren't very shiftable, and the plural ones also don't. [ shift ]

Again, non-syntactic source representations appear to only be available in the non-parallel construction cases.
2.5 Subjacency Violations

Haïk (1987) gives examples of apparent subjacency violations in antecedent contained deletion (ACD):

(26) John read everything which Bill believes he did.

(27) * John read everything which Bill believes the claim that he did.

(28) * John read everything which Bill wonders why he did.

This data is problematic for a purely semantic theory of ellipsis, as there should be no syntactic dependency at the ellipsis site. However, sentence (29), which has a subordinate conjunction, does not display the expected subjacency violation (from Rooth (1981)):8

(29) Which problem did you think John would solve because of the fact that Susan did?

Without ellipsis, the gap remains and a subjacency violation results:

(30) * Which problem did you think John would solve because of the fact that Susan solved?

In our account, the relative pronoun which does not specify an interclausal coherence link, and therefore sentences (26-28) are parallel constructions. Consequently, again the source representation for the parallel construction is apparently syntactic, whereas that for the non-parallel construction (e.g., example (29)) is semantic.

8I thank Stuart Shieber for bringing this example to my attention.

3 An Architecture that Exploits Coherence

The data given in Section 2 suggests that VP-ellipsis resolution copies a syntactic representation in parallel constructions and a semantic representation in non-parallel ones. In this section, we present a discourse processing architecture from which these predictions naturally result. We first describe Sag and Hankamer's (1984) (henceforth S&H) earlier work, whose representations our analysis will utilize.

3.1 Sag and Hankamer's Architecture

S&H give a performance-based view of anaphoric processing that utilizes two forms of representation, a propositional representation and a discourse model.
Propositional representations, which they conjecture are held by short-term registers in memory (henceforth propositional registers), maintain the surface syntactic constituent structure of an utterance as well as binding relations; however, discourse anaphors are resolved.9 These representations are built in tandem with a discourse model. S&H claim that the ellipsis resolution process obtains referents from propositional representations, whereas what they term Model Interpretive Anaphora (MIA) (e.g., 'do it' anaphora) obtains referents from the discourse model. They give the following example to illustrate (judgements are theirs):

(31) The children asked to be squirted with the hose, so
a. they were. [ ellipsis ]
b. * we did. [ ellipsis ]
c. we did it. [ MIA ]

In their theory, example (31a) is acceptable because the source representation is a surface VP and therefore is retrievable from its propositional representation. Example (31b) is unacceptable because the source squirt the children with the hose is not a surface VP in the propositional representation.10 Sentence (31c) is acceptable because 'do it' anaphora is an MIA process, and therefore obtains referents from the discourse model, in which a representation for squirt the children with the hose is assumed to exist. One problem with this account is that it does not explain the dichotomy of judgements for the data given in Section 2. For each of these phenomena, the S&H approach predicts that all cases are either acceptable or unacceptable without regard to the type of construction.

9In fact, they suggest that propositional representations (as opposed to classical surface structures) are what the parser constructs, a view consistent with our account.

10Ellipsis, in their formulation, is subject to an identity-of-logical-form constraint on propositional representations. See Dalrymple et al. (1991) and Hardt (1992) for arguments that this condition is flawed.
The data instead suggests that VP-ellipsis within the parallel construction behaves like S&H's definition of ellipsis, and in non-parallel constructions it behaves like their MIA. We believe that their dichotomy of phenomena is somewhat illusory, and a more compelling and elegant treatment is possible by appealing to discourse structure and coherence.

3.2 A Revised Architecture

We follow S&H and earlier arguments by Johnson-Laird (1983) in specifying a discourse processing architecture that includes both propositional representations and a discourse model. We propose that a main function of the propositional representations is to serve as a "bridge" in establishing clause-to-clause coherence. That is, as clauses are parsed, propositional representations are generated and held in the short-term registers. When it is established that a non-parallel coherence relationship is present between a clause (or set of clauses) and a subsequent clause, the propositional representation(s) for the first is integrated into the discourse model so that coherence can be computed, thereby freeing the short-term propositional register for subsequent representations. Non-parallel constructions specify a non-parallel coherence relationship overtly (e.g., 'contrast' (because), 'comparison' (better than), 'result' (the use of and in example 2)),11 thereby identifying the first clause as a unit for coherence establishment and invoking integration of its propositional representation into the discourse model. Parallel constructions, on the other hand, do not invoke this integration. Hobbs (1979), in discussing the parallel coherence relation he defines, suggests one basis for this distinction:

One frequent function of the Parallel relation is to suggest or support the generalization which the two segments are specific instances of. The relation often links segments which together function as an Exemplification or Elaboration of a more general statement.
In our terms, clauses conjoined by a parallel relation will form a group and cohere as a unit with prior and subsequent statements. Therefore, this relation in itself does not cause the representation for the first clause to be integrated into the discourse model. Instead, the integration of both representations into the discourse model as a unit is invoked by subsequent coherence establishment.

11This use of and raises the question of how the listener knows which meaning of and is present during processing. We assume that the listener can consider multiple possibilities in parallel, although it may also be that in these cases the interclausal relationship has been established by the time the ellipsis site is processed.

The preceding analysis makes the predictions necessary to account for the ellipsis data given in Section 2. Under our account, the representation of an utterance either exists in a propositional register or in the discourse model; these are not created in tandem as S&H suggest. An elided VP then receives an interpretation from whichever representation is available. The parallel construction cases in Section 2 are unacceptable because the source clause has not been integrated into the discourse model when the ellipsis site is processed. The source must therefore be retrieved from the propositional representation, where surface syntactic structure and binding relations are maintained. Reconstructing this representation requires syntactic parallelism (ruling out passive/active and nominalization cases) and can result in violation of syntactic constraints (such as Condition C violations, Condition A violations, or subjacency violations). The non-parallel construction cases in Section 2 are acceptable because the antecedent for the ellipsis has been integrated into the discourse model when the ellipsis site is processed.
Because the vestiges of syntactic information are no longer present in the representation, syntactic constraint violations do not occur; furthermore, source representations from nominalizations and clauses of differing voice now exist in the discourse model.12

3.3 Examples

In this section, we work through two examples to illustrate the proposals that have been made thus far. For resolution at the syntactic level, we assume an S&H-like propositional representation and a straightforward mechanism whereby a VP representation is copied from the source clause representation into the target clause representation. For resolution in the discourse model, we will use the event representation and resolution algorithm defined in (Kehler, 1993). The focus of this section, however, is to offer a general illustration of the architecture rather than to make specific claims concerning the nature of the representations involved.

12Differences remain between the distribution of S&H's ellipsis and MIA phenomena that need to be accounted for in a comprehensive treatment of event anaphora, as examples (31a-c) show (although judgements as to the unacceptability of sentence (31b) vary). Interestingly, contra S&H, MIA phenomena also appear to be sensitive to syntactic constraints in certain contexts, as the following example from Dalrymple (1991) shows: * I hit Suei, and then shei did it. One hypothesis is that VP-ellipsis is actually event anaphora with an empty pronoun; it may then be that distributional differences between "do ∅", "do it", and "do that" are due only to the anaphoric properties of the event pronoun involved, and not due to a fundamental difference between ellipsis and MIA phenomena. This hypothesis is the subject of ongoing work.
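The processing model just outlined can be sketched in code. The following is a minimal illustration under our own simplifying assumptions, not an implementation of the actual system: all class and function names are ours, the propositional register is a plain list, and the event representation is a flat dictionary of thematic roles standing in for the frames of (Kehler, 1993).

```python
# Sketch of the two-level architecture: clauses are parsed into
# propositional representations held in short-term registers; an overt
# non-parallel connective triggers integration into the discourse
# model; VP-ellipsis is then resolved at whichever level still holds
# the source representation.

NON_PARALLEL = {"but", "because", "better than", "even though"}

class Clause:
    def __init__(self, surface_vp, voice, event_frame):
        self.surface_vp = surface_vp    # syntactic VP (propositional level)
        self.voice = voice              # "active" or "passive"
        self.event_frame = event_frame  # thematic-role frame (discourse model)

class DiscourseProcessor:
    def __init__(self):
        self.registers = []        # short-term propositional registers
        self.discourse_model = []  # integrated event representations

    def add_clause(self, clause, connective=None):
        if connective in NON_PARALLEL:
            # Coherence established overtly: integrate prior clauses,
            # freeing the registers (Section 3.2).
            self.discourse_model.extend(c.event_frame for c in self.registers)
            self.registers.clear()
        self.registers.append(clause)

    def resolve_ellipsis(self, target_voice, target_roles):
        if len(self.registers) > 1:
            # Parallel construction: the source is still propositional,
            # so copy the surface VP, subject to syntactic checks.
            source = self.registers[-2]
            if source.voice != target_voice:
                raise ValueError("voice mismatch: syntactic copy fails")
            return source.surface_vp
        # Non-parallel construction: the source lives in the discourse
        # model; copy its role fillers into a parallel event frame.
        frame = dict(self.discourse_model[-1])
        frame.update(target_roles)
        return frame
```

Run on examples (32) and (33) below, this sketch behaves as the architecture predicts: with a parallel connective the passive source VP clashes with the active target and the syntactic copy fails, while with because the source has already been integrated and role-filling yields a frame with agent Bill and theme John.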
Examples (32) and (33) exhibit the contrast between parallel and non-parallel constructions with active target clauses that have passive source clauses, as discussed in Section 2.1:

(32) * John was shot by Bob, and Bill did too.

(33) John was shot by Bob because Bill wouldn't.

The propositional representation for the source clause in these sentences is shown in representation (34), where P denotes the passive voice:

(34) [ P [ was.shot'(by(Bob')) ] (John') ]

Because the two clauses in sentence (32) stand in a parallel relationship, the source representation has not been integrated into the discourse model at the time the ellipsis site is processed; therefore the ellipsis is resolved at the propositional level of representation. A representation is constructed with the information present in the target clause:

(35) [ A [ ... ] (Bill') ]

Here A denotes the active voice, a feature which is indicated in the target clause through the use of did. When the elided VP is retrieved, a mismatch occurs: the passive VP cannot be brought into a representation marked as having active voice. The copying can therefore not be completed, resulting in the unacceptability of the sentence. Sentence (33) also has representation (34) for its source clause. However, because the two clauses stand in a non-parallel relationship, representation (34) has already been integrated into the discourse model when the ellipsis site is processed, and thus resolution occurs in the discourse model. The representation for the source clause is:

(36) e1: [ predicate: shot
          time: past
          polarity: positive
          modality: necessity
          agent: Bob
          theme: John ]

Because this representation is based upon thematic (and not syntactic) roles, the representations for active and passive forms of a sentence are identical.
For the target clause, a parallel event representation is created with empty roles, and the role fillers present in the target clause are filled in:

(37) e2: [ predicate:
          time: past
          polarity: negative
          modality: volitional_possibility
          agent: Bill
          theme: ]

Representations for the remaining role fillers are retrieved from the source clause representation:

(38) e2: [ predicate: shot
          time: past
          polarity: negative
          modality: volitional_possibility
          agent: Bill
          theme: John ]

This resolution successfully yields the correct representation for the target clause in sentence (33). Examples (39) and (40) illustrate the contrast between parallel and non-parallel constructions with respect to potential Condition C violations, as described in Section 2.2:

(39) * The lawyer defended Billi, and hei did too.

(40) The lawyer defended Billi better than hei could have.

In each sentence, the propositional representation for the source clause takes the form given in (41):

(41) [ [ defend'(Bill') ] (lawyer1') ]

Because the two clauses in sentence (39) stand in a parallel relationship, the source representation has not been integrated into the discourse model at the time the ellipsis site is processed. The ellipsis is then resolved at the propositional level of representation. After filling in the information present in the target clause and copying the representation of the source VP, representation (42) results:13

(42) [ [ defend'(Billi) ] (hei') ]

A manifestation of Condition C applying at this level rules out this representation as ill-formed, because the pronoun he c-commands the coreferential NP form Bill. Sentence (40) also has representation (41) for its source clause. Because the two clauses stand in a non-parallel relation, representation (41) has already been integrated into the discourse model when the ellipsis site is processed. Resolution then occurs in the discourse model.
The representation for the source clause is:

(43) e3: [ predicate: defend
          time: past
          modality: necessity
          agent: lawyer1
          theme: Bill ]

After creating a parallel event representation, filling in role fillers present in the target clause, and retrieving remaining role fillers from the source representation, representation (44) results:

(44) e4: [ predicate: defend
          time: past
          modality: possibility
          agent: Bill
          theme: Bill ]

Because no syntactic constraints apply at this level of representation, representation (44) is well-formed and yields the correct interpretation for the target clause. In summary, our architecture accounts for data supporting both the syntactic and semantic accounts of ellipsis resolution in an elegant and intuitive way. Section 4 examines pronominal NP resolution in this model.

13Recall that pronouns have been resolved at this level of representation; we indicate this by coindexing.

4 Pronoun Resolution

There are also dichotomous views in the literature concerning the process by which pronoun resolution is performed. Theories incorporating a notion of local focus generally utilize syntactic information in their specifications. For example, the Centering framework of Grosz et al. (1986) utilizes grammatical role information in specifying the accessibility of potential referents on an ordered forward-looking center list. Kameyama's work (Kameyama, 1988) contains rules for property sharing utilizing grammatical roles. Passonneau (1991), in looking at the distribution of 'it' and 'that' for NP reference in naturally-occurring texts, concludes that both the syntactic form of the antecedent and its grammatical role are needed to adequately account for the data. Furthermore, she suggests that the function of the propositional register discussed by S&H is appropriate for accommodating her rules.
Alternatively, some researchers (Hobbs, 1979; Wilensky, 1978) have suggested that coreference is determined as a by-product of coherence determination between sentences. In Hobbs' account, for example, pronouns are modeled as free variables and are assigned to objects during the process of establishing coherence relations. However, Hobbs himself acknowledges the power of grammatical role-based heuristics,14 noting that upon hearing example (45),

(45) John can open Bill's safe. He ...

one is likely to assume that John is the referent of He. The existence of a garden-path effect in example (46), where He refers to Bill instead of John, suggests that pronominal reference resolution is not guided by coherence considerations alone:

(46) John can open Bill's safe. He's going to have to get the combination changed soon.

As focus-based theories would predict, the reader assigns John as the referent of He, and double-takes when semantic information later suggests otherwise.15 Our architecture provides an explanation for this phenomenon. Since a coherence relation has not been established at the time the pronoun is processed, the propositional representation for the first clause (which preserves information that focus-based theories utilize, such as surface-string ordering and depth-of-embedding of potential referents) is the representation available to the reference resolution algorithm.16 However, when a non-parallel coherence link is overt, our architecture would predict that a semantically-based resolution process would be used, because the propositional representation containing potential referents has already been integrated into the discourse model at the time the pronoun is processed.

14Hobbs (1976) found that a heuristic favoring subjects over objects was 90% accurate for written texts.

15This effect causes Hobbs to admit that "this strongly suggests that some psychological reality underlies the heuristic [ favoring subjects over objects ]."
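This division of labor for pronoun resolution can be sketched as follows. The fragment is hypothetical: the function name, the ordered referent list, and the numeric scoring function are our own simplifications introduced for illustration, not part of the proposal.

```python
# If the clause containing the candidate referents is still held in a
# propositional register, surface information is available and a
# grammatical-role heuristic (Hobbs' subject preference) applies; once
# the clause has been integrated into the discourse model, only
# semantic reasoning over the event representation can select a
# referent.

def resolve_pronoun(registers, discourse_model, semantic_fit):
    if registers:
        # Propositional level: referents are ordered by grammatical
        # role, subject first, so the heuristic takes the front.
        return registers[-1]["referents"][0]
    # Discourse-model level: choose the referent that best satisfies
    # the semantic constraints of the predicate (e.g. "blame").
    return max(discourse_model[-1]["referents"], key=semantic_fit)
```

Applied to the prefixes in (47) and (48) below: with parallel and the clause is still registered, so the subject Steve is returned; with because the clause has been integrated, and a scoring function encoding the implicit causality of blame can select Frank instead.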
This prediction is borne out empirically; consider the following two sentence prefixes (complete sentences taken from (Ehrlich, 1980)):

(47) Steve blamed Frank and he ... [ spilt the coffee ].

(48) Steve blamed Frank because he ... [ spilt the coffee ].

Focus-based theories predict the strong bias toward the referent of he in example (47) being the subject (i.e., Steve), even though he is consistent with both potential referents. Because this sentence is a parallel construction (i.e., the meaning of "and" is not result), our architecture also makes this prediction in accordance with those theories. The heuristic preferring subjects does not apply in example (48), where Frank is perhaps the preferred referent of he, seemingly as a result of reasoning using semantic features of the verb blame. Our architecture correctly predicts that the pronoun in sentence (48) does not cause processing problems while the one in example (46) does, because only in sentence (48) has the clause containing the referent of he been integrated into the discourse model at the time the pronoun is processed. Ehrlich (1980) gives experimental evidence supporting this view. Ehrlich's goal was to test the biasing effect of the so-called "implicit causality" feature (Garvey and Caramazza, 1974) of verbs such as blame in pronoun reference assignment in two-clause sentences with conjunctions other than because (which was the only conjunction used in previous work (Garvey et al., 1976)). In her experiments, subjects were tested for their ability to identify correct referents of pronouns in three versions of six two-clause sentences (such as those in sentences (47) and (48)), where each of the sentences contained one of the conjunctions and, but, and because.
It was found that subjects were significantly more accurate in determining correct referents of pronouns when the conjunction used was because or but, and therefore that the effect of implicit causality was not constant with respect to the conjunction used. While a detailed analysis of her work is beyond the scope of this paper, two generalizations that she draws as a result of her experiments are: (1) that subjects were more able to utilize 'general knowledge' in determining the referent when the conjunction used was because or but than when it was and; and (2) that hearers analyze language a clause at a time. The first of these results supports our view that semantic information required for reasoning is primarily available in the discourse model (since the representation for the first clause is integrated into the discourse model when the conjunction used is but or because); the second point supports our claim that the propositional registers hold clause-level representations.17 In summary, our architecture also accommodates evidence supporting competing theories of how pronominal NP resolution is performed.

16After garden-pathing, "off-line" reasoning apparently allows the reader of example (46) to identify the correct referent of the pronoun. This reasoning may allow propositional representations to be integrated into the discourse model so that Hobbs-like coherence determination can be performed.

5 Conclusions

This paper presents a model for anaphoric processing that incorporates the role of establishing coherence relationships between clauses in a discourse. By postulating the existence of propositional representations in addition to a discourse model, we account for ellipsis data that has gridlocked work on the topic. Furthermore, arguments for dichotomous approaches to pronoun resolution are resolvable within this framework.
It should be noted that coherence establishment is not likely to be the only discourse factor involved in integrating propositional representations into the discourse model. Therefore, the analysis described herein only indicates tendencies, as opposed to predicting cut-and-dry judgements on the basis of type of construction alone. For instance, example (49) has been judged by some speakers to be acceptable under a strict reading:18

(49) I voted for myself, and I hope you did too!

Our account predicts that this case would be at least somewhat stilted due to a Condition A violation. One factor distinguishing this example from others we have discussed is the use of first and second person pronouns, and a second is the fact that the pronominal referent necessary to yield the strict reading is also present in the target clause. Future work is needed to further analyze the effects of these differences. The theory presented here evokes many other questions for future study. One such question is how the postulated representations should be further formalized, and how reasoning with these formalizations is to be performed. A second question is how this conception of discourse processing may be integrated with theories of discourse structure (Grosz and Sidner, 1986; Scha and Polanyi, 1988; Webber, 1991).

17Ehrlich's results with the conjunction and are mixed with respect to our theory, as in some cases her participants preferred a non-subject position referent over a subject position one. In particular, she notes that this happens when the main verb of the second clause is the stative verb be, as in Sue criticized Penny and she was gloomy. These sentences contain the result meaning of and as opposed to the parallel one. Unfortunately, Ehrlich's original data was not available at the time of this writing so an analysis distinguishing between uses of and could not be performed.

18I thank an anonymous reviewer for this example.
While we have looked primarily at two-clause structures, the ramifications that the claims have on multi-clause discourse structure require further investigation. Such studies will form the basis for further characterization of the role of coherence establishment in anaphoric processing.

Acknowledgments

This work was supported in part by National Science Foundation Grant IRI-9009018, National Science Foundation Grant IRI-9157996, and a matching grant for the latter from the Xerox Corporation. I would like to thank Mary Dalrymple, Barbara Grosz, Shalom Lappin, Karen Lochbaum, Christine Nakatani, Stuart Shieber, and two anonymous reviewers for valuable discussions and comments on earlier drafts.

References

(Chomsky, 1981) Noam Chomsky. Lectures on Government and Binding. Foris, Dordrecht, 1981.

(Dalrymple et al., 1991) Mary Dalrymple, Stuart M. Shieber, and Fernando Pereira. Ellipsis and higher-order unification. Linguistics and Philosophy, 14:399-452, 1991.

(Dalrymple, 1991) Mary Dalrymple. Against reconstruction in ellipsis. Technical Report SSL-91-114, Xerox, 1991.

(Ehrlich, 1980) Kate Ehrlich. Comprehension of pronouns. Quarterly Journal of Experimental Psychology, 32:247-255, 1980.

(Fiengo and May, 1990) Robert Fiengo and Robert May. Anaphora and ellipsis. ms., City University of New York and University of California at Irvine, 1990.

(Garvey and Caramazza, 1974) C. Garvey and A. Caramazza. Implicit causality in verbs. Linguistic Inquiry, 5:549-564, 1974.

(Garvey et al., 1976) C. Garvey, A. Caramazza, and J. Yates. Factors underlying assignment of pronoun antecedents. Cognition, 3:227-243, 1976.

(Gawron and Peters, 1990) Mark Gawron and Stanley Peters. Anaphora and Quantification in Situation Semantics. CSLI/University of Chicago Press, Stanford University, 1990. CSLI Lecture Notes, Number 19.

(Grosz and Sidner, 1986) Barbara Grosz and Candace Sidner. Attention, intentions, and the structure of discourse.
Computational Linguistics, 12(3):175-204, 1986.

(Grosz et al., 1986) Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. Towards a computational theory of discourse interpretation. Unpublished manuscript, 1986.

(Haïk, 1987) Isabelle Haïk. Bound variables that need to be. Linguistics and Philosophy, 11:503-530, 1987.

(Hardt, 1992) Daniel Hardt. VP ellipsis and contextual interpretation. In Proceedings of the International Conference on Computational Linguistics (COLING-92), Nantes, July 1992.

(Hellan, 1988) Lars Hellan. Anaphora in Norwegian and the Theory of Grammar. Studies in Generative Grammar 32. Foris, Dordrecht, 1988.

(Hestvik, 1993) Arild Hestvik. Strict reflexives and the subordination effect. In S. Berman and A. Hestvik, editors, Proceedings of the Stuttgart Workshop on Ellipsis: Arbeitspapiere des Sonderforschungsbereich 340, Bericht Nr. 29-1992, SFB 340. University of Stuttgart, University of Tuebingen, and IBM Germany, 1993.

(Hobbs, 1976) Jerry Hobbs. Pronoun resolution. Technical Report 76-1, Department of Computer Science, City University of New York, 1976.

(Hobbs, 1979) Jerry Hobbs. Coherence and coreference. Cognitive Science, 3:67-90, 1979.

(Johnson-Laird, 1983) P. N. Johnson-Laird. Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. Cambridge University Press, 1983.

(Kameyama, 1988) Megumi Kameyama. Japanese zero pronominal binding: Where syntax and discourse meet. In William J. Poser, editor, Papers from the Second International Workshop on Japanese Syntax, pages 47-74. CSLI, 1988.

(Kehler, 1993) Andrew Kehler. A discourse copying algorithm for ellipsis and anaphora resolution. In Proceedings of the Sixth Conference of the European Chapter of the Association for Computational Linguistics (EACL-93), Utrecht, the Netherlands, April 1993.

(Klein, 1987) Ewan Klein. VP-Ellipsis in DR theory.
In Groenendijk and Stokhof, editors, Studies in Discourse Representation Theory and the Theory of Generalized Quantifiers. Foris, 1987.

(Lappin and McCord, 1990) Shalom Lappin and Michael McCord. Anaphora resolution in slot grammar. Computational Linguistics, 16:197-212, 1990.

(Lappin, 1993) Shalom Lappin. The syntactic basis of ellipsis resolution. In S. Berman and A. Hestvik, editors, Proceedings of the Stuttgart Workshop on Ellipsis: Arbeitspapiere des Sonderforschungsbereich 340, Bericht Nr. 29-1992, SFB 340. University of Stuttgart, University of Tuebingen, and IBM Germany, 1993.

(Passonneau, 1991) Rebecca Passonneau. Persistence of linguistic form in discourse processing. In Proceedings of the Fall Symposium on Discourse Structure in Natural Language Understanding and Generation, Monterey, CA, November 1991.

(Rooth, 1981) Mats Rooth. A comparison of three theories of verb phrase ellipsis. University of Massachusetts Occasional Papers in Linguistics, Volume 7, 1981.

(Sag and Hankamer, 1984) Ivan Sag and Jorge Hankamer. Toward a theory of anaphoric processing. Linguistics and Philosophy, 7:325-345, 1984.

(Scha and Polanyi, 1988) Remko Scha and Livia Polanyi. An augmented context free grammar for discourse. In Proceedings of the International Conference on Computational Linguistics (COLING-88), pages 573-577, Budapest, August 1988.

(Webber, 1991) Bonnie Lynn Webber. Structure and ostension in the interpretation of discourse deixis. Language and Cognitive Processes, 6(2):107-135, 1991.

(Wilensky, 1978) Robert Wilensky. Understanding Goal-Based Stories. PhD thesis, Yale, 1978. Research Report #140.

Sources of Examples

(Jarecki, 1992) Jill Jarecki. The role of synaptic activity during embryonic neuromuscular junction development in Drosophila melanogaster. Unpublished Yale University Prospectus, August 1992.

(Rosenthal, 1988) Harry F. Rosenthal. Agency orders UPS to resume fireworks deliveries - after July 4th.
Associated Press Newswire, June 1988. 69
Discourse Obligations in Dialogue Processing

David R. Traum and James F. Allen
Department of Computer Science
University of Rochester
Rochester, NY 14627-0226
traum@cs.rochester.edu and james@cs.rochester.edu

Abstract

We show that in modeling social interaction, particularly dialogue, the attitude of obligation can be a useful adjunct to the popularly considered attitudes of belief, goal, and intention and their mutual and shared counterparts. In particular, we show how discourse obligations can be used to account in a natural manner for the connection between a question and its answer in dialogue and how obligations can be used along with other parts of the discourse context to extend the coverage of a dialogue system.

1 Motivation

Most computational models of discourse are based primarily on an analysis of the intentions of the speakers (e.g., [Cohen and Perrault, 1979; Allen and Perrault, 1980; Grosz and Sidner, 1986]). An agent has certain goals, and communication results from a planning process to achieve these goals. The speaker will form intentions based on the goals and then act on these intentions, producing utterances. The hearer will then reconstruct a model of the speaker's intentions upon hearing the utterance. This approach has many strong points, but does not provide a very satisfactory account of the adherence to discourse conventions in dialogue. For instance, consider one simple phenomenon: a question is typically followed by an answer, or some explicit statement of an inability or refusal to answer. The intentional account of this goes as follows. From the production of a question by Agent B, Agent A recognizes Agent B's goal to find out the answer, and she adopts a goal to tell B the answer in order to be co-operative. A then plans to achieve the goal, thereby generating the answer. This provides an elegant account in the simple case, but requires a strong assumption of co-operativeness.
Agent A must adopt agent B's goals as her own. As a result, it does not explain why A says anything when she does not know the answer or when she is not predisposed to adopting B's goals.

Several approaches have been suggested to account for this behavior. [Litman and Allen, 1987] introduced an intentional analysis at the discourse level in addition to the domain level, and assumed a set of conventional multi-agent actions at the discourse level. Others have tried to account for this kind of behavior using social intentional constructs such as Joint Intentions [Cohen and Levesque, 1991] or Shared Plans [Grosz and Sidner, 1990]. While these accounts do help explain some discourse phenomena more satisfactorily, they still require a strong degree of cooperativity to account for dialogue coherence, and do not provide easy explanations of why an agent might act in cases which do not support high-level mutual goals.

Consider a stranger approaching an agent and asking, "Do you have the time?" It is unlikely that there is a joint intention or shared plan, as they have never met before. From a purely strategic point of view, the agent may have no interest in whether the stranger's goals are met. Yet, typically, agents will still respond in such situations. As another example, consider a case in which the agent's goals are such that it prefers that an interrogating agent not find out the requested information. This might block the formation of an intention to inform, but what is it that inspires the agent to respond at all?

As these examples illustrate, an account of question answering must go beyond recognition of speaker intentions. Questions do more than just provide evidence of a speaker's goals, and something more than adoption of the goals of an interlocutor is involved in formulating a response to a question.
Some researchers, e.g., [Mann, 1988; Kowtko et al., 1991], assume a library of discourse-level actions, sometimes called dialogue games, which encode common communicative interactions. To be co-operative, an agent must always be participating in one of these games. So if a question is asked, only a fixed number of activities, namely those introduced by a question, are cooperative responses. Games provide a better explanation of coherence, but still require the agents to recognize each other's intentions to perform the dialogue game. As a result, this work can be viewed as a special case of the intentional view. An interesting model is described by [Airenti et al., 1993], which separates the conversational games from the task-related games in a way similar to [Litman and Allen, 1987]. Because of this separation, they do not have to assume co-operation on the tasks each agent is performing, but still require recognition of intention and co-operation at the conversational level. It is left unexplained what goals motivate conversational co-operation.

The problem with systems which impose cooperativity in the form of automatic goal adoption is that this makes it impossible to reason about cases in which one might want to violate these rules, especially in cases where the conversational co-operation might conflict with the agent's personal goals. We are developing an alternate approach that takes a step back from the strong plan-based approach. By the strong plan-based account, we mean models where there is a set of personal goals which directly motivates all the behavior of the agent. While many of the intuitions underlying these approaches seem close to right, we claim it is a mistake to attempt to analyze this behavior as arising entirely from the agent's high-level goals. We believe that people have a much more complex set of motivations for action.
In particular, much of one's behavior arises from a sense of obligation to behave within limits set by the society that the agent is part of. A model based on obligations differs from an intention-based approach in that obligations are independent of shared plans and intention recognition. Rather, obligations are the result of rules that an agent lives by. Social interactions are enabled by there being sufficient compatibility between the rules affecting the interacting agents. One responds to a question because this is a social behavior that is strongly encouraged as one grows up, and becomes instilled in the agent.

2 Sketch of Solution

The model we propose is that an agent's behavior is determined by a number of factors, including that agent's current goals in the domain, and a set of obligations that are induced by a set of social conventions. When planning, an agent considers both its goals and obligations in order to determine an action that addresses both to the extent possible. When prior intentions and obligations conflict, an agent generally will delay pursuit of its intentions in order to satisfy the obligations, although the agent may behave otherwise at the cost of violating its obligations. At any given time, an agent may have many obligations and many different goals, and planning involves a complex tradeoff between these different factors.

Returning to the example about questions, when an agent is asked a question, this creates an obligation to respond. The agent does not have to adopt the goal of answering the question as one of her personal goals in order to explain the behavior. Rather it is a constraint on the actions that the agent may plan to do. In fact, the agent might have an explicit goal not to answer the question, yet still is obliged to offer a response (e.g., consider most politicians at press conferences).
The planning task then is to satisfy the obligation of responding to the question, without revealing the answer if at all possible. In cases where the agent does not know the answer, the obligation to respond may be discharged by some explicit statement of her inability to give the answer.

3 Obligations and Discourse Obligations

Obligations represent what an agent should do, according to some set of norms. The notion of obligation has been studied for many centuries, and its formal aspects are examined using Deontic Logic. Our needs are fairly simple, and do not require an extensive survey of the complexities that arise in that literature. Still, the intuitions underlying that work will help to clarify what an obligation is. Generally, obligation is defined in terms of a modal operator often called permissible. An action is obligatory if it is not permissible not to do it. An action is forbidden if it is not permissible. An informal semantics of the operator can be given by positing a set of rules of behavior R. An action is obligatory if its occurrence logically follows from R, and forbidden if its non-occurrence logically follows from R. An action that might occur or not occur according to R is neither obligatory nor forbidden.

Just because an action is obligatory with respect to a set of rules R does not mean that the agent will perform the action. So we do not adopt the model suggested by [Shoham and Tennenholtz, 1992] in which agents' behavior cannot violate the defined social laws. If an obligation is not satisfied, then this means that one of the rules must have been broken. We assume that agents generally plan their actions to violate as few rules as possible, and so obligated actions will usually occur. But when they directly conflict with the agent's personal goals, the agent may choose to violate them. Obligations are quite different from and cannot be reduced to intentions and goals.
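The permissibility-based definitions above can be made concrete with a toy sketch (ours, not part of the paper's formalism). Here the rule set R is modeled crudely as a predicate stating whether an action's occurrence or non-occurrence is consistent with R; the example actions are purely illustrative.

```python
def obligatory(action, permissible):
    """An action is obligatory if it is not permissible not to do it."""
    return not permissible(action, occurs=False)

def forbidden(action, permissible):
    """An action is forbidden if it is not permissible to do it."""
    return not permissible(action, occurs=True)

def permissible(action, occurs):
    """A toy rule set R: answering is required, interrupting is barred."""
    if action == "answer-question" and not occurs:
        return False   # R entails that the answer must occur
    if action == "interrupt" and occurs:
        return False   # R entails that interrupting must not occur
    return True        # everything else is unconstrained by R

assert obligatory("answer-question", permissible)
assert forbidden("interrupt", permissible)
# An unconstrained action is neither obligatory nor forbidden:
assert not obligatory("smile", permissible) and not forbidden("smile", permissible)
```

Note that nothing here forces an obligatory action to occur, matching the paper's point that agents may violate obligations at the cost of breaking a rule.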
In particular, an agent may be obliged to do an action that is contrary to his goals (for example, consider a child who has to apologize for hitting her younger brother). Obligations also cannot be reduced to simple expectations, although obligations may act as a source of expectations. Expectations can be used to guide the action interpretation and plan-recognition processes (as proposed by [Carberry, 1990]), but expectations do not in and of themselves provide a sufficient motivation for an agent to perform the expected action - in many cases there is nothing wrong with doing the unexpected or not performing an expected action. The interpretation of an utterance will often be clear even without coherence with prior expectations. We need to allow for the possibility that an agent has performed an action even when this violates expectations. If an agent actually violates obligations as well, then the agent can be held accountable.[1]

Specific obligations arise from a variety of sources. In a conversational setting, an accepted offer or a promise will incur an obligation. Also, a command or request by the other party will bring about an obligation to perform the requested action. If the obligation is to say something, then we call this a discourse obligation. Our model of obligation is very simple. We use a set of rules that encode discourse conventions. Whenever a new conversation act is determined to have been performed, any future action that can be inferred from the conventional rules becomes an obligation. We use a simple forward chaining technique to introduce obligations.

[1] [McRoy, 1993] uses expectations derived from Adjacency Pair structure [Schegloff and Sacks, 1973], as are many of the discourse obligations considered in this paper. These expectations correspond to social norms and do impose the same notion of accountability. However, the analysis there is oriented towards discovering misconceptions based on violated expectations, and the alternative possibility of violated obligations is not considered in the utterance recognition process, nor allowed in the utterance production process.

Some obligation rules based on the performance of conversation acts are summarized in Table 1. When an agent performs a promise to perform an action, or performs an acceptance of a suggestion or request by another agent to perform an action, the agent obliges itself to achieve the action in question. When another agent requests that some action be performed, the request itself brings an obligation to address the request: that is, either to accept it or to reject it (and make the decision known to the requester) - the requestee is not permitted to ignore the request. A question establishes an obligation to answer the question. If an utterance has not been understood, or is believed to be deficient in some way, this brings about an obligation to repair the utterance.

    source of obligation          obliged action
    -------------------------     ----------------------------------------
    S1 Accept or Promise A        S1 achieve A
    S1 Request A                  S2 address Request: accept A or reject A
    S1 YNQ whether P              S2 Answer-if P
    S1 WHQ P(x)                   S2 Inform-ref x
    utterance not understood      repair utterance
      or incorrect

    Table 1: Sample Obligation Rules

3.1 Obligations and Behavior

Obligations (or at least beliefs that the agent has obligations) will thus form an important part of the reasoning process of a deliberative agent, e.g., the architecture proposed by [Bratman et al., 1988]. In addition to considering beliefs about the world, which will govern the possibility of performing actions and likelihood of success, and desires or goals, which will govern the utility or desirability of actions, a social agent will also have to consider obligations, which govern the permissibility of actions. There are a large number of strategies that may be used to incorporate obligations into the deliberative process, based on how much weight they are given compared to the agent's goals.
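Before turning to deliberation strategies, the conventions of Table 1, together with the forward-chaining introduction of obligations described above, can be sketched as a small rule table (a sketch of ours; the act-type names follow Table 1, and the tuple representation is our own assumption, not the TRAINS implementation).

```python
# Each rule maps an observed conversation act type to the agent who becomes
# obliged (the speaker s1 or the addressee s2) and the obliged action.
OBLIGATION_RULES = {
    "accept":         lambda s1, s2, c: (s1, ("achieve", c)),
    "promise":        lambda s1, s2, c: (s1, ("achieve", c)),
    "request":        lambda s1, s2, c: (s2, ("address-request", c)),  # accept or reject
    "ynq":            lambda s1, s2, c: (s2, ("answer-if", c)),
    "whq":            lambda s1, s2, c: (s2, ("inform-ref", c)),
    "not-understood": lambda s1, s2, c: (s2, ("repair", c)),
}

def introduce_obligations(observed_acts):
    """Forward-chain: each newly observed conversation act asserts the
    obligations that the conventional rules license."""
    obligations = []
    for speaker, hearer, act_type, content in observed_acts:
        rule = OBLIGATION_RULES.get(act_type)
        if rule:
            obligations.append(rule(speaker, hearer, content))
    return obligations

obs = [("user", "system", "ynq", "oranges-at-corning?")]
print(introduce_obligations(obs))
# -> [('system', ('answer-if', 'oranges-at-corning?'))]
```

A yes-no question from the user thus deposits an answer-if obligation on the system without the system having to adopt the user's goal of knowing the answer.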
[Conte and Castelfranchi, 1993] present several strategies of moving from obligations to actions, including: automatically performing an obligated action, adopting all obligations as goals, or adopting an obligated action as a goal only when performing the action results in a state desired by the agent. In the latter cases, these goals still might conflict with other goals of the agent, and so are not guaranteed to be performed.

In general, we will want to allow action based on obligations to supersede performance of intended actions. For instance, consider an agent with an intention to do something as soon as possible. If an obligation is imposed, it will still be possible to perform the intended action, but a well-behaved agent might need to delay performance until the obligation is dealt with. For example, if the intention is to perform a series of inform acts, and then a listener requests repair of one, a well-behaved agent will repair that inform before proceeding to initiate the next intended one.

4 Using Discourse Obligations in a Dialogue System

We have built a system that explicitly uses discourse obligations and communicative intentions to partake in natural dialogue. This system plays the role of the dialogue manager in the TRAINS dialogue system, which acts as an intelligent planning assistant in a transportation domain. While this is a domain where the assumption of co-operation is generally valid, the obligation model still provides for a much simpler analysis of the discourse behavior than a strongly plan-based account. An example of a dialogue that the TRAINS system can engage in is shown in Figure 1. Below we describe parts of the discourse model in more detail and then show how it is used to account for aspects of this dialogue.

    Utt. #     Speaker: Utterance
    1          U: We better ship a boxcar of oranges to Bath by 8 AM.
    2          S: Okay.
    3-3=6      U: So we need to get a boxcar to Corning where there are oranges.
    3-7        U: There are oranges at Corning.
    3-8        U: Right?
    4          S: Right.
    5-1        U: So we need an engine to move the boxcar.
    5-2        U: Right?
    6          S: Right.
    7-1        U: So there's an engine at Avon.
    7-3        U: Right?
    8          S: Right.
    9=13       U: So we should move the engine at Avon, engine E1, to
               Dansville to pick up the boxcar there.
    14         S: Okay.
    15-2=4     U: And move it from Dansville to Corning.
    15-5=7     U: Load up some oranges into the boxcar.
    15-8=10    U: And then move it on to Bath.
    16         S: Okay.
    17         U: How does that sound?
    18-3       S: That's no problem.
    19         U: Good.

    Figure 1: Sample dialogue[2] processed by TRAINS-93.

[2] This is a slightly simplified version of a spoken dialogue between two people. The original is dialogue 91-6.1 from [Gross et al., 1993]. The utterance numbering system used here reflects the relation to the turn and utterance numbering used there. '3-7' represents utterance 7 within turn 3. '=' is used to indicate merged utterances. Thus '3-3=6' spans four utterances in turn 3 of the original, and 9=13 replaces turns 9 through 13 in the original.

The TRAINS System [Allen and Schubert, 1991] is a large integrated natural language conversation and plan reasoning system. We concentrate here, however, on just one part of that system, the discourse actor which drives the actions of the dialogue manager module. Figure 2 illustrates the system from the viewpoint of the dialogue manager.

    [Figure 2 (block diagram): User; NL Input; NL Output; NL Interpretation
    Modules; NL Generation Module; Observed Conversation Acts; Intended
    Conversation Acts; Dialogue Manager; Domain Directives; Domain
    Observations and Directive Responses; Domain Task Interaction Modules.]

    Figure 2: Dialogue Manager's High-Level View of the Architecture of the
    TRAINS Conversation System

The dialogue manager is responsible for maintaining the flow of conversation and making sure that the conversational goals are met.
For this system, the main goals are that an executable plan which meets the user's goals is constructed and agreed upon by both the system and the user, and then that the plan is executed. The dialogue manager must keep track of the current state of the dialogue, determine the effects of observed conversation acts, generate utterances back, and send commands to the domain plan reasoner and domain plan executor when appropriate. Conversational action is represented using the theory of Conversation Acts [Traum and Hinkelman, 1992], which augments traditional Core Speech Acts with levels of acts for turn-taking, grounding [Clark and Schaefer, 1989], and argumentation. Each utterance will generally contain acts (or partial acts) at each of these levels.

4.1 Representing Mental Attitudes

As well as representing general obligations within the temporal logic used to represent general knowledge, the system also maintains two stacks (one for each conversant) of pending discourse obligations. Each obligation on the stack is represented as an obligation type paired with a content. The stack structure is appropriate because, in general, one must respond to the most recently imposed obligation first. As explained in Section 4.2, the system will attend to obligations before considering other parts of the discourse context. Most obligations will result in the formation of intentions to communicate something back to the user. When the intentions are formed, the obligations are removed from the stack, although they have not yet actually been met. If, for some reason, the system dropped the intention without satisfying it and the obligation were still current, the system would place the obligation back on the stack.

The over-riding goal for the TRAINS domain is to construct and execute a plan that is shared between the two participants.
This leads to other goals such as accepting proposals that the other agent has suggested, performing domain plan synthesis, proposing plans to the other agent which the domain plan reasoner has constructed, or executing a completed plan.

4.2 The Discourse Actor Algorithm

In designing an agent to control the behavior of the dialogue manager, we choose a reactive approach in which the system will not deliberate and add new intentions until after it has performed the actions which are already intended. As shown above, though, new obligations will need to be addressed before performing intended actions. The agent's deliberative behavior could thus be characterized in an abstract sense as:

    loop
      perceive world and update beliefs
      if system has obligations
        then address obligations
      else if system has performable intentions
        then perform actions
      else deliberate on goals

When deciding what to do next, the agent first considers obligations and decides how to update the intentional structure (add new goals or intentions) based on these obligations. Obligations might also lead directly to immediate action. If there are no obligations, then the agent will consider its intentions and perform any actions which it can to satisfy these intentions. If there are no performable intentions, then the system will deliberate on its overall goals and perhaps adopt some new intentions (which can then be performed on the next iteration).

For the discourse actor, special consideration must be given to the extra constraints which participation in a conversation imposes. This includes some weak general obligations (such as acknowledging utterances by others and not interrupting) as well as some extra goals coming from the domain setting to maintain a shared view of the world and the domain plans which are to be executed. We prioritize the sources for the deliberations of the actor as follows:

    1. Discourse Obligations from Table 1
    2. Weak Obligation: Don't interrupt user's turn
    3. Intended Speech Acts
    4. Weak Obligation: Grounding (coordinate mutual beliefs)
    5. Discourse Goals: Domain Plan Negotiation
    6. High-level Discourse Goals

The implemented actor serializes consideration of these sources into the algorithm in Figure 3.

    (1)  while conversation is not finished
    (2)    if system has obligations
    (3)      then address obligations
    (4)    else if system has turn
    (5)      then if system has intended conversation acts
    (6)        then call generator to produce NL utterances
    (7)      else if some material is ungrounded
    (8)        then address grounding situation
    (9)      else if some proposal is not accepted
    (10)       then consider proposals
    (11)     else if high-level goals are unsatisfied
    (12)       then address goals
    (13)     else release turn or attempt to end conversation
    (14)   else if no one has turn
    (15)     then take turn
    (16)   else if long pause
    (17)     then take turn

    Figure 3: Discourse Actor Algorithm

The updating of the conversational state due to perceived conversation acts or actions of other modules of the system progresses asynchronously with the operation of the discourse actor. Whenever the discourse actor is active, it will first decide on which task to attempt, according to the priorities given in Figure 3, and then work on that task. After completing a particular task, it will then run through the loop again, searching for the next task, although by then the context may have changed due to, e.g., the observance of a new utterance from the user. The actor is always running and decides at each iteration whether to speak or not (according to turn-taking conventions); the system does not need to wait until a user utterance is observed to invoke the actor, and need not respond to user utterances in an utterance-by-utterance fashion.

Lines 2-3 of the algorithm in Figure 3 indicate that the actor's first priority is fulfilling obligations. If there are any, then the actor will do what it thinks best to meet those obligations.
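The prioritized control loop of Figure 3, combined with the per-conversant obligation stack of Section 4.1, can be sketched as follows. This is our compressed reconstruction, not the TRAINS source code; the task names and data structures are illustrative assumptions.

```python
class DiscourseActor:
    """Toy sketch of the Figure 3 loop: obligations outrank intended acts,
    which outrank grounding, proposal negotiation, and high-level goals."""

    def __init__(self):
        self.obligations = []            # stack: most recent obligation on top
        self.intended_acts = []
        self.ungrounded = []
        self.unaccepted_proposals = []
        self.goals = []
        self.has_turn = False

    def step(self):
        """One iteration of the loop; returns the task the actor chooses."""
        if self.obligations:                       # lines 2-3
            return ("address-obligation", self.obligations.pop())
        if self.has_turn:
            if self.intended_acts:                 # lines 5-6
                return ("generate", self.intended_acts.pop(0))
            if self.ungrounded:                    # lines 7-8
                return ("ground", self.ungrounded.pop(0))
            if self.unaccepted_proposals:          # lines 9-10
                return ("consider-proposal", self.unaccepted_proposals.pop(0))
            if self.goals:                         # lines 11-12
                return ("address-goal", self.goals[0])
            return ("release-turn", None)          # line 13
        return ("wait", None)                      # lines 14-17, simplified

actor = DiscourseActor()
actor.has_turn = True
actor.obligations.append(("answer-if", "oranges-at-corning?"))
actor.intended_acts.append(("ack", "inform-7"))
print(actor.step())   # the obligation is chosen before the intended act
print(actor.step())
```

Even with an intended acknowledgement pending, the first call returns the answer-if obligation, mirroring the priority ordering given in the text.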
If there is an obligation to address a request, the actor will evaluate whether the request is reasonable, and if so, accept it; otherwise reject it; or, if it does not have sufficient information to decide, attempt to clarify the parameters. In any case, part of meeting the obligation will be to form an intention to tell the user of the decision (e.g., the acceptance, rejection, or clarification). When this intention is acted upon and the utterance produced, the obligation will be discharged. Other obligation types are to repair an uninterpretable utterance or one in which the presuppositions are violated, or to answer a question. In question answering, the actor will query its beliefs and will answer depending on the result, which might be that the system does not know the answer.

In most cases, the actor will merely form the intention to produce the appropriate utterance, waiting for a chance, according to turn-taking conventions, to actually generate the utterance. In certain cases, though, such as a repair, the system will actually try to take control of the turn and produce an utterance immediately. For motivations other than obligations, the system adopts a fairly "relaxed" conversational style; it does not try to take the turn until given it by the user unless the user pauses long enough that the conversation starts to lag (lines 14-17). When the system does not have the turn, the conversational state will still be updated, but the actor will not try to deliberate or act.

When the system does have the turn, the actor first (after checking obligations) examines its intended conversation acts. If there are any, it calls the generator to produce an utterance[3] (lines 5-6 of the discourse actor algorithm). Whatever utterances are produced are then reinterpreted (as indicated in Figure 2) and the conversational state updated accordingly. This might, of course, end up in releasing the turn.
It might not be convenient to generate all the intended acts in one utterance, in which case there will remain some intended acts left for future utterances to take care of (unless the subsequent situation merits dropping those intentions). Only intended speech acts that are part of the same argumentation acts as those which are uttered will be kept as intentions - others will revert back to whatever caused the intention to be formed, although subsequent deliberation might cause the intentions to be re-adopted.

If there are no intended conversation acts, the next thing the actor considers is the grounding situation (lines 7-8). The actor will try to make it mutually believed (or grounded) whether particular speech acts have been performed. This will involve acknowledging or repairing user utterances, as well as repairing and requesting acknowledgement of the system's own utterances. Generally, grounding is considered less urgent than acting based on communicative intentions, although some grounding acts will be performed on the basis of obligations which arise while interpreting prior utterances.

If all accessible utterances are grounded, the actor then considers the negotiation of domain beliefs and intentions (lines 9-10). The actor will try to work towards a shared domain plan, adding intentions to perform the appropriate speech acts to work towards this goal. This includes accepting, rejecting, or requesting retraction of user proposals, requesting acceptance of or retracting system proposals, and initiating new system proposals or counterproposals. The actor will first look for user proposals which are not shared. If any of these are found, it will add an intention to accept the proposal, unless the proposal is deficient in some way (e.g., it will not help towards the goal or the system has already come up with a better alternative). In this latter case, the system will reject the user's proposal and present or argue for its own proposal.
Next, the actor will look to see if any of its own proposals have not been accepted, requesting the user to accept them if they have not yet been rejected, or retracting or reformulating them if they have already been rejected. Finally, the actor will check its private plans for any parts of the plan which have not yet been proposed. If it finds any here, it will adopt an intention to make a suggestion to the user.

[3] Actually, if the only utterance is an acknowledgement, the actor will postpone the production until it checks that there is nothing else that it can combine in the same utterance, such as an acceptance or answer.

If none of the more local conversational structure constraints described above require attention, then the actor will concern itself with its actual high-level goals. For the TRAINS system, this will include making calls to the domain plan reasoner and domain executor, which will often return material to update the system's private view of the plan and initiate its own new proposals. It is also at this point that the actor will take control of the conversation, pursuing its own objectives rather than responding to those of the user. Finally, if the system has no unmet goals that it can work towards achieving (line 13), it will hand the turn back to the user or try to end the conversation if it believes the user's goals have been met as well.

4.3 Examples

The functioning of the actor can be illustrated by its behavior in the dialogue in Figure 1. While the discussion here is informal and skips some details, the dialogue is actually processed in this manner by the implemented system. More detail both on the dialogue manager and its operation on this example can be found in [Traum, 1994].

Utterance 1 is interpreted both (literally) as the initiation[4] of an inform about an obligation to perform a domain action (shipping the oranges).
This utterance is also seen as an (indirect) suggestion that this action be the goal of a shared domain plan to achieve the performance of the action. In addition, this utterance releases the turn to the system. Figure 4 shows the relevant parts of the discourse state after interpretation of this utterance.

    Discourse Obligations:
    Turn Holder:           System
    Intended Speech Acts:
    Unack'd Speech Acts:   [INFORM-1], [SUGGEST-4]
    Unaccepted Proposals:
    Discourse Goals:       Get-goal, Build-Plan, Execute-Plan

    Figure 4: Discourse Context after Utterance 1

[4] According to the theory of Conversation Acts [Traum and Hinkelman, 1992], Core Speech Acts such as inform are multi-agent actions which have as their effect a mutual belief, and are not completed unless/until they are grounded.

After interpreting utterance 1, the system first decides to acknowledge this utterance (lines 7-8 in the actor algorithm) - moving the suggestion from unacknowledged to unaccepted - and then to accept the proposal (lines 9-10). Finally, the system acts on the intentions produced by these deliberations (lines 4-5) and produces the combined acknowledgement/acceptance of utterance 2. This acceptance makes the goal shared and also satisfies the first of the discourse goals, that of getting the domain goal to work on.

Utterances 3-3=6 and 3-7 are interpreted, but not responded to yet since the user keeps the turn (in this case by following up with subsequent utterances before the system has a chance to act). Utterance 3-8 invokes a discourse obligation on the system to respond to the User's assertion in 3-7 and also gives the turn to the system. The resulting discourse context (after the system decides to acknowledge) is shown in Figure 5.

    Discourse Obligations: (CHECK-IF (:AT ...))
    Turn Holder:           System
    Intended Speech Acts:  (Ack [INFORM-7] ...)
    Unack'd Speech Acts:
    Unaccepted Proposals:  [SUGGEST-10], [SUGGEST-15]
    Discourse Goals:       Build-Plan, Execute-Plan

    Figure 5: Discourse Context after Utterance 2

The system queries its domain knowledge base and decides that the user is correct here (there are, indeed, oranges at Corning), and so decides to meet this obligation (lines 2-3) by answering in the affirmative. This results in forming an intention to inform, which is then realized (along with the acknowledgement of the utterances) by the generation of utterance 4. Similar considerations hold for the system responses 6 and 8.

The reasoning leading up to utterance 14 is similar to that leading to utterance 2. Here the user is suggesting domain actions to help lead to the goal, and the system, when it gets the turn, acknowledges and accepts this suggestion. Utterances 15-2=4, 15-5=7, and 15-8=10 are interpreted as requests because of the imperative surface structure. The discourse obligation to address the request is incurred only when the system decides to acknowledge the utterances and ground them. After the decision to acknowledge, the obligations are incurred, and the system then addresses the requests, deciding to accept them all, and adding intentions to perform an accept speech act, which is then produced as 16.

Utterance 17 is interpreted as a request for evaluation of the plan. When the system decides to acknowledge, this creates a discourse obligation to address the request. The system considers this (invoking the domain plan reasoner to search the plan for problems or incomplete parts) and decides that the plan will work, and so decides to perform the requested action - an evaluation speech act. This is then generated as 18-3. The discourse state after the decision to acknowledge is shown in Figure 6.
    Discourse Obligations:  (ADDRESS [REQUEST-49])
    Turn Holder:            System
    Intended Speech Acts:   (Ack [REQUEST-49])
    Unack'd Speech Acts:
    Unaccepted Proposals:
    Discourse Goals:        Build-Plan, Execute-Plan

    Figure 6: Discourse Context after Utterance 17

After the user's assent, the system then checks its goals, and, having already come up with a suitable plan, executes this plan in the domain by sending the completed plan to the domain plan executor.

This example illustrates only a small fraction of the capabilities of the dialogue model. In this dialogue, the system needed only to follow the initiative of the user. However, this architecture can handle varying degrees of initiative, while remaining responsive. The default behavior is to allow the user to maintain the initiative through the plan construction phase of the dialogue. If the user stops and asks for help, or even just gives up the initiative rather than continuing with further suggestions, the system will switch from plan recognition to plan elaboration, and will incrementally devise a plan to satisfy the goal (although this plan would probably not be quite the same as the plan constructed in this dialogue).

We can illustrate the system behaving more on the basis of goals than obligations with a modification of the previous example. Here, the user releases the turn back to the system after utterance 2, and the deliberation proceeds as follows: the system has no obligations, no communicative intentions, nothing is ungrounded, and there are no unaccepted proposals, so the system starts on its high-level goals. Given its goal to form a shared plan, and the fact that the current plan (consisting of the single abstract move-commodity action) is not executable, the actor will call the domain plan reasoner to elaborate the plan.
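The deliberation order just described (obligations first, then pending speech acts, grounding, unaccepted proposals, and only then high-level goals) can be sketched as a simple reactive loop. The state fields and handler strings below are illustrative stand-ins, not the TRAINS implementation:

```python
from dataclasses import dataclass, field

@dataclass
class DiscourseState:
    obligations: list = field(default_factory=list)    # e.g. "ADDRESS REQUEST-49"
    intended_acts: list = field(default_factory=list)  # speech acts ready to generate
    unacknowledged: list = field(default_factory=list) # ungrounded incoming acts
    unaccepted: list = field(default_factory=list)     # proposals awaiting (dis)agreement
    goals: list = field(default_factory=list)          # e.g. "Build-Plan"

def deliberate(state: DiscourseState) -> str:
    """Pick the next action source, obligations taking priority over goals."""
    if state.obligations:
        return f"address obligation {state.obligations.pop(0)}"
    if state.intended_acts:
        return f"perform speech act {state.intended_acts.pop(0)}"
    if state.unacknowledged:
        return f"acknowledge {state.unacknowledged.pop(0)}"
    if state.unaccepted:
        return f"accept or reject proposal {state.unaccepted.pop(0)}"
    if state.goals:
        return f"elaborate plan for goal {state.goals[0]}"
    return "release turn"
```

With no obligations, pending acts, ungrounded material, or open proposals, the loop falls through to the high-level goals, which is exactly the goal-driven episode sketched above.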
This will return a list of augmentations to the plan which can be safely assumed (including a move-engine event which generates the move-commodity, given the conditions that the oranges are in a boxcar which is attached to the engine), as well as some choice point where one of several possibilities could be added (e.g., a choice of the particular engine or boxcar to use). Assuming that the user still has not taken the turn back, the system can now propose these new items to the user. The choice could be resolved in any of several ways: the domain executor could be queried for a preference based on prior experience, or the system could put the matter up to the user in the form of an alternative question, or it could make an arbitrary choice and just suggest one to the user.

The user will now be expected to acknowledge and react to these proposals. If the system does not get an acknowledgement, it will request acknowledgement the next time it considers the grounding situation. If the proposal is not accepted or rejected, the system can request an acceptance. If a proposal is rejected, the system can negotiate and offer a counterproposal or accept a counterproposal from the user. Since the domain plan reasoner [Ferguson, 1994] performs both plan recognition and plan elaboration in an incremental fashion, proposals from system and user can be integrated naturally in a mixed-initiative fashion. The termination condition will be a shared executable plan which achieves the goal, and each next action in the collaborative planning process will be based on local considerations.

5 Discussion

We have argued that obligations play an important role in accounting for the interactions in dialog. Obligations do not replace the plan-based model, but augment it. The resulting model more readily accounts for discourse behavior in adversarial situations and other situations where it is implausible that the agents adopt each other's goals.
The obligations encode learned social norms, and guide each agent's behavior without the need for intention recognition or the use of shared plans at the discourse level. While such complex intention recognition may be required in some complex interactions, it is not needed to handle the typical interactions of everyday discourse. Furthermore, there is no requirement for mutually agreed-upon rules that create obligations. Clearly, the more two agents agree on the rules, the smoother the interaction becomes, and some rules are clearly virtually universal. But each agent has its own set of individual rules, and we do not need to appeal to shared knowledge to account for local discourse behavior.

We have also argued that an architecture that uses obligations provides a much simpler implementation than the strong plan-based approaches. In particular, much of local discourse behavior can arise in a "reactive manner" without the need for complex planning. The other side of the coin, however, is a new set of problems that arise in planning actions that satisfy the multiple constraints that arise from the agent's personal goals and perceived obligations.

The model presented here allows naturally for a mixed-initiative conversation and varying levels of cooperativity. Following the initiative of the other can be seen as an obligation-driven process, while leading the conversation will be goal driven. Representing both obligations and goals explicitly allows the system to naturally shift from one mode to the other. In a strongly cooperative domain, such as TRAINS, the system can subordinate working on its own goals to locally working on concerns of the user, without necessarily having to have any shared discourse plan. In less cooperative situations, the same architecture will allow a system to still adhere to the conversational conventions, but respond in different ways, perhaps rejecting proposals and refusing to answer questions.
Acknowledgements

This material is based upon work supported by ONR/DARPA under grant number N00014-92-J-1512. We would like to thank the rest of the TRAINS group at the University of Rochester for providing a stimulating research environment and a context for implementing these ideas within an integrated system.

References

[Airenti et al., 1993] Gabriella Airenti, Bruno G. Bara, and Marco Colombetti, "Conversation and Behavior Games in the Pragmatics of Dialogue," Cognitive Science, 17:197-256, 1993.

[Allen and Perrault, 1980] James Allen and C. Perrault, "Analyzing Intention in Utterances," Artificial Intelligence, 15(3):143-178, 1980.

[Allen and Schubert, 1991] James F. Allen and Lenhart K. Schubert, "The TRAINS Project," TRAINS Technical Note 91-1, Computer Science Dept., University of Rochester, 1991.

[Bratman et al., 1988] Michael E. Bratman, David J. Israel, and Martha E. Pollack, "Plans and Resource-Bounded Practical Reasoning," Technical Report TR425R, SRI International, September 1988. Appears in Computational Intelligence, Vol. 4, No. 4, 1988.

[Carberry, 1990] S. Carberry, Plan Recognition in Natural Language Dialogue, The MIT Press, Cambridge, MA, 1990.

[Clark and Schaefer, 1989] Herbert H. Clark and Edward F. Schaefer, "Contributing to Discourse," Cognitive Science, 13:259-94, 1989.

[Cohen and Levesque, 1991] Phillip R. Cohen and Hector J. Levesque, "Confirmations and Joint Action," In Proceedings IJCAI-91, pages 951-957, 1991.

[Cohen and Perrault, 1979] Phillip R. Cohen and C. R. Perrault, "Elements of a Plan-Based Theory of Speech Acts," Cognitive Science, 3(3):177-212, 1979.

[Conte and Castelfranchi, 1993] Rosaria Conte and Cristiano Castelfranchi, "Norms as mental objects. From normative beliefs to normative goals," In Working Notes, AAAI Spring Symposium on Reasoning about Mental States: Formal Theories and Applications, pages 40-47, March 1993.
[Ferguson, 1994] George Ferguson, "Domain Plan Reasoning in TRAINS-93," TRAINS Technical Note, Computer Science Dept., University of Rochester, forthcoming, 1994.

[Gross et al., 1993] Derek Gross, James Allen, and David Traum, "The TRAINS 91 Dialogues," TRAINS Technical Note 92-1, Computer Science Dept., University of Rochester, July 1993.

[Grosz and Sidner, 1986] Barbara Grosz and Candace Sidner, "Attention, Intention, and the Structure of Discourse," CL, 12(3):175-204, 1986.

[Grosz and Sidner, 1990] Barbara J. Grosz and Candace L. Sidner, "Plans for Discourse," In P. R. Cohen, J. Morgan, and M. E. Pollack, editors, Intentions in Communication. MIT Press, 1990.

[Kowtko et al., 1991] J. Kowtko, S. Isard, and G. Doherty, "Conversational games within dialogue," In Proceedings of the ESPRIT Workshop on Discourse Coherence, 1991.

[Litman and Allen, 1987] D. J. Litman and J. F. Allen, "A Plan Recognition model for subdialogues in conversation," Cognitive Science, 11:163-200, 1987.

[Mann, 1988] William C. Mann, "Dialogue Games: Conventions of Human Interaction," Argumentation, 2:511-532, 1988.

[McRoy, 1993] Susan McRoy, Abductive Interpretation and Reinterpretation of Natural Language Utterances, PhD thesis, University of Toronto, 1993. Reproduced as TR CSRI-288, Department of Computer Science, University of Toronto.

[Schegloff and Sacks, 1973] E. A. Schegloff and H. Sacks, "Opening Up Closings," Semiotica, 7:289-327, 1973.

[Shoham and Tennenholtz, 1992] Yoav Shoham and Moshe Tennenholtz, "On the synthesis of useful social laws for artificial agent societies," In Proceedings AAAI-92, pages 276-281, 1992.

[Traum, 1994] David R. Traum, "The TRAINS-93 Dialogue Manager," TRAINS Technical Note, Computer Science Dept., University of Rochester, forthcoming, 1994.

[Traum and Hinkelman, 1992] David R. Traum and Elizabeth A. Hinkelman, "Conversation Acts in Task-oriented Spoken Dialogue," Computational Intelligence, 8(3):575-599, 1992. Special Issue on Non-literal language.
A STOCHASTIC FINITE-STATE WORD-SEGMENTATION ALGORITHM FOR CHINESE

Richard Sproat, Chilin Shih, William Gale
AT&T Bell Laboratories
600 Mountain Avenue, Room {2d-451, 2d-453, 2c-278}
Murray Hill, NJ, USA, 07974-0636
{rws, cls, gale}@research.att.com

Nancy Chang
Division of Applied Sciences, Harvard University
Cambridge, MA 02138
nchang@das.harvard.edu

Abstract

We present a stochastic finite-state model for segmenting Chinese text into dictionary entries and productively derived words, and providing pronunciations for these words; the method incorporates a class-based model in its treatment of personal names. We also evaluate the system's performance, taking into account the fact that people often do not agree on a single segmentation.

THE PROBLEM

The initial step of any text analysis task is the tokenization of the input into words. For many writing systems, using whitespace as a delimiter for words yields reasonable results. However, for Chinese and other systems where whitespace is not used to delimit words, such trivial schemes will not work. Chinese writing is morphosyllabic (DeFrancis, 1984), meaning that each hanzi ('Chinese character') (nearly always) represents a single syllable that is (usually) also a single morpheme. Since in Chinese, as in English, words may be polysyllabic, and since hanzi are written with no intervening spaces, it is not trivial to reconstruct which hanzi to group into words.

While for some applications it may be possible to bypass the word-segmentation problem and work straight from hanzi, there are several reasons why this approach will not work in a text-to-speech (TTS) system for Mandarin Chinese, the primary intended application of our segmenter. These reasons include:

1. Many hanzi are homographs whose pronunciation depends upon word affiliation.
So, 的 is pronounced de0 when it is a prenominal modification marker, but di4 in the word 目的 mu4-di4 'goal'; 乾 is normally gan1 'dry', but qian2 in a person's given name.

2. Some phonological rules depend upon correct word-segmentation, including Third Tone Sandhi (Shih, 1986), which changes a 3 tone into a 2 tone before another 3 tone: 小老鼠 xiao3 [lao3-shu3] 'little rat' becomes xiao3 [lao2-shu3], rather than xiao2 [lao2-shu3], because the rule first applies within the word lao3-shu3, blocking its phrasal application.1

1 We use pinyin transliteration with numbers representing tones.

While a minimal requirement for building a Chinese word-segmenter is a dictionary, a dictionary is insufficient since there are several classes of words that are not generally found in dictionaries. Among these:

1. Morphologically Derived Words: 小將們 xiao3-jiang4-men0 (little general-plural) 'little generals'.
2. Personal Names: 周恩來 zhou1 en1-lai2 'Zhou Enlai'.
3. Transliterated Foreign Names: bu4-lang3-shi4-wei2-ke4 'Brunswick'.

We present a stochastic finite-state model for segmenting Chinese text into dictionary entries and words derived via the above-mentioned productive processes; as part of the treatment of personal names, we discuss a class-based model which uses the Good-Turing method to estimate costs of previously unseen personal names. The segmenter handles the grouping of hanzi into words and outputs word pronunciations, with default pronunciations for hanzi it cannot group; we focus here primarily on the system's ability to segment text appropriately (rather than on its pronunciation abilities).
We evaluate various specific aspects of the segmentation, and provide an evaluation of the overall segmentation performance: this latter evaluation compares the performance of the system with that of several human judges, since even people do not agree on a single correct way to segment a text.

PREVIOUS WORK

There is a sizable literature on Chinese word segmentation: recent reviews include (Wang et al., 1990; Wu and Tseng, 1993). Roughly, previous work can be classified into purely statistical approaches (Sproat and Shih, 1990), statistical approaches which incorporate lexical knowledge (Fan and Tsai, 1988; Lin et al., 1993), and approaches that include lexical knowledge combined with heuristics (Chen and Liu, 1992).

Chen and Liu's (1992) algorithm matches words of an input sentence against a dictionary; in cases where various parses are possible, a set of heuristics is applied to disambiguate the analyses. Various morphological rules are then applied to allow for morphologically complex words that are not in the dictionary. Precision and recall rates of over 99% are reported, but note that this covers only words that are in the dictionary: "the... statistics do not count the mistakes [that occur] due to the existence of derived words or proper names" (Chen and Liu, 1992, page 105). Lin et al. (1993) describe a sophisticated model that includes a dictionary and a morphological analyzer. They also present a general statistical model for detecting 'unknown words' based on hanzi and part-of-speech sequences. However, their unknown word model has the disadvantage that it does not identify a sequence of hanzi as an unknown word of a particular category, but merely as an unknown word (of indeterminate category). For an application like TTS, however, it is necessary to know that a particular sequence of hanzi is of a particular category because, for example, that knowledge could affect the pronunciation.
We therefore prefer to build particular models for different classes of unknown words, rather than building a single general model.

DICTIONARY REPRESENTATION

The lexicon of basic words and stems is represented as a weighted finite-state transducer (WFST) (Pereira et al., 1994). Most transitions represent mappings between hanzi and pronunciations, and are costless. Transitions between orthographic words and their parts-of-speech are represented by ε-to-category transductions and a unigram cost (negative log probability) of that word estimated from a 20M hanzi training corpus; a portion of the WFST is given in Figure 1.2 Besides dictionary words, the lexicon contains all hanzi in the Big 5 Chinese code, with their pronunciation(s), plus entries for other characters (e.g., roman letters, numerals, special symbols).

Given this dictionary representation, recognizing a single Chinese word involves representing the input as a finite-state acceptor (FSA) where each arc is labeled with a single hanzi of the input. The left-restriction of the dictionary WFST with the input FSA contains all and only the (single) lexical entries corresponding to the input. This WFST includes the word costs on arcs transducing ε to category labels.

2 The costs are actually for strings rather than words: we currently lack estimates for the words themselves. We assign the string cost to lexical entries with the likeliest pronunciation, and a large cost to all other entries. Thus 將/adv, with the commonest pronunciation jiang1, has cost 5.98, whereas 將/nc, with the rarer pronunciation jiang4, is assigned a high cost. Note also that the current model is zeroth order in that it uses only unigram costs. Higher order models, e.g. bigram word models, could easily be incorporated into the present architecture if desired.

Now, input
sentences consist of one or more entries from the dictionary, and we can generalize the word recognition problem to the word segmentation problem, by left-restricting the transitive closure of the dictionary with the input. The result of this left-restriction is a WFST that gives all and only the possible analyses of the input FSA into dictionary entries. In general we do not want all possible analyses but rather the best analysis. This is obtained by computing the least-cost path in the output WFST. The final stage of segmentation involves traversing the best path, collecting into words all sequences of hanzi delimited by part-of-speech-labeled arcs. Figure 2 shows an example of segmentation: the sentence 日文章魚怎麼說 "How do you say octopus in Japanese?" consists of four words, namely 日文 ri4-wen2 'Japanese', 章魚 zhang1-yu2 'octopus', 怎麼 zen3-mo 'how', and 說 shuo1 'say'. In this case, 日 ri4 is also a word (e.g. a common abbreviation for Japan) as are 文章 wen2-zhang1 'essay' and 魚 yu2 'fish', so there is (at least) one alternate analysis to be considered.

MORPHOLOGICAL ANALYSIS

The method just described segments dictionary words, but as noted there are several classes of words that should be handled that are not in the dictionary. One class comprises words derived by productive morphological processes, such as plural noun formation using the suffix 們 men0. The morphological analysis itself can be handled using well-known techniques from finite-state morphology (Koskenniemi, 1983; Antworth, 1990; Tzoukermann and Liberman, 1990; Karttunen et al., 1992; Sproat, 1992); so, we represent the fact that 們 attaches to nouns by allowing ε-transitions from the final states of all noun entries to the initial state of the sub-WFST representing 們. However, for our purposes it is not sufficient to represent the morphological decomposition of, say, plural nouns: we also need an estimate of the cost of the resulting word.
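The segmentation-as-least-cost-path idea above can be illustrated, outside the WFST machinery, as dynamic programming over the input with a unigram-cost dictionary. The toy dictionary below is invented; only the ri4-wen2 and zhang1-yu2 costs echo figures in the text:

```python
import math

# Hypothetical toy dictionary: word -> unigram cost (negative log probability).
DICT = {
    "ri4": 10.28, "wen2zhang1": 9.0, "yu2": 8.0,
    "ri4wen2": 7.0, "zhang1yu2": 10.63, "zen3mo0": 13.18, "shuo1": 5.55,
}

def segment(syllables):
    """Least-cost segmentation of a syllable sequence into dictionary words."""
    n = len(syllables)
    best = [0.0] + [math.inf] * n   # best[i] = min cost of segmenting the length-i prefix
    back = [0] * (n + 1)            # back[i] = start index of the last word on the best path
    for i in range(1, n + 1):
        for j in range(i):
            word = "".join(syllables[j:i])
            if word in DICT and best[j] + DICT[word] < best[i]:
                best[i], back[i] = best[j] + DICT[word], j
    # Recover the word sequence by walking the backpointers.
    words, i = [], n
    while i > 0:
        words.append("".join(syllables[back[i]:i]))
        i = back[i]
    return words[::-1]
```

On the octopus sentence (in pinyin), the path through ri4-wen2 and zhang1-yu2 beats the alternate ri4 / wen2-zhang1 / yu2 analysis, mirroring the lattice comparison in Figure 2.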
For derived words that occur in our corpus we can estimate these costs as we would the costs for an underived dictionary entry. So, 將們 jiang4-men0 '(military) generals' occurs, and we estimate its cost at 15.02. But we also need an estimate of the probability for a non-occurring though possible plural form like 南瓜們 nan2-gua1-men0 'pumpkins'. Here we use the Good-Turing estimate (Baayen, 1989; Church and Gale, 1991), whereby the aggregate probability of previously unseen members of a construction is estimated as N1/N, where N is the total number of observed tokens and N1 is the number of types observed only once. For 們 this gives prob(unseen(們) | 們), and to get the aggregate probability of novel 們-constructions in a corpus we multiply this by prob_text(們) to get prob_text(unseen(們)). Finally, to estimate the probability of the particular unseen word 南瓜們, we use the simple bigram backoff model prob(南瓜們 | 們) = prob(南瓜 | 們) prob_text(unseen(們));
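The aggregate-unseen step of the Good-Turing estimate, and the additive cost combination it licenses in the negative-log domain, can be sketched as follows; the plural counts are invented for illustration:

```python
from collections import Counter

def good_turing_unseen_mass(counts: Counter) -> float:
    """Aggregate probability of previously unseen types, estimated as N1/N:
    N1 = number of types observed exactly once, N = total observed tokens."""
    n = sum(counts.values())
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / n

# Invented counts of observed men0-plurals in a corpus.
plurals = Counter({"jiang4-men0": 3, "ren2-men0": 5, "hai2zi0-men0": 1})
p_unseen = good_turing_unseen_mass(plurals)  # N1/N = 1/9 here

def backoff_cost(stem_cost: float, unseen_suffix_cost: float) -> float:
    """In the -log domain, cost(stem + men0) = cost(stem) + cost_text(unseen(men0))."""
    return stem_cost + unseen_suffix_cost
```

Because costs are negative log probabilities, the bigram backoff product becomes the sum computed by `backoff_cost`.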
There is a (costless) transition between the NC node and ~]. The transition from ~] to a final state transduces c to the grammatical tag \PL with cost costte~t(unseen(~])): cost(l~}~f~ ) = cost(~)~) + costt,~t(unseen([])), as desired. For the seen word ~1 'generals', there is an e:nc transduction from ~ to the node preceding t~]; this arc has cost cost(~]) - costt,~:t(unseen(~])), so that the cost of the whole path is the desired cost(~t~] ). This representation gives ~] an appropriate mor- phological decomposition, preserving information that would be lost by simply listing ~[~I as an unanalyzed form. Note that the backoffmodel assumes that there is a positive correlation between the frequency of a singu- lar noun and its plural. An analysis of nouns that occur both in the singular and the plural in our database re- veals that there is indeed a slight but significant positive correlation -- R 2 = 0.20, p < 0.005. This suggests that the backoff model is as reasonable a model as we can use in the absence of further information about the expected cost of a plural form. CHINESE PERSONAL NAMES Full Chinese personal names are in one respect sim- ple: they are always of the form FAMILY+GIVEN. The FAMILY name set is restricted: there are a few hundred single-hanzi FAMILY names, and about ten double-hanzi ones. Given names are most commonly two hanzi long, occasionally one-hanzi long: there are thus four possible name types. The difficulty is that GIVEN names can consist, in principle, of any hanzi or pair ofhanzi, so the possible GIVEN names are limited only by the total number of hanzi, though some hanzi are certainly far more likely than others. For a sequence of hanzi that is a possible name, we wish to assign a probability to that sequence qua name. We use an esti- mate derived from (Chang et al., 1992). 
For example, given a potential name of the form F1 G1 G2, where F1 is a legal FAMILY name and G1 and G2 are each hanzi, we estimate the probability of that name as the product of the probability of finding any name in text; the probability of F1 as a FAMILY name; the probability of the first hanzi of a double GIVEN name being G1; the probability of the second hanzi of a double GIVEN name being G2; and the probability of a name of the form SINGLE-FAMILY+DOUBLE-GIVEN. The first probability is estimated from a name count in a text database, whereas the last four probabilities are estimated from a large list of personal names.3 This model is easily incorporated into the segmenter by building a WFST restricting the names to the four licit types, with costs on the arcs for any particular name summing to an estimate of the cost of that name. This WFST is then summed with the WFST implementing the dictionary and morphological rules, and the transitive closure of the resulting transducer is computed.

3 We have two such lists, one containing about 17,000 full names, and another containing frequencies of hanzi in the various name positions, derived from a million names.

There are two weaknesses in Chang et al.'s (1992) model, which we improve upon. First, the model assumes independence between the first and second hanzi of a double GIVEN name. Yet, some hanzi are far more probable in women's names than they are in men's names, and there is a similar list of male-oriented hanzi: mixing hanzi from these two lists is generally less likely than would be predicted by the independence model. As a partial solution, for pairs of hanzi that cooccur sufficiently often in our namelists, we use the estimated bigram cost rather than the independence-based cost. The second weakness is that Chang et al.
(1992) assign a uniform small cost to unseen hanzi in GIVEN names; but we know that some unseen hanzi are merely accidentally missing, whereas others are missing for a reason, e.g., because they have a bad connotation. We can address this problem by first observing that for many hanzi, the general 'meaning' is indicated by its so-called 'semantic radical'. Hanzi that share the same 'radical' share an easily identifiable structural component: several plant names share the GRASS radical; malady names share the SICKNESS radical; and ratlike animal names share the RAT radical. Some classes are better for names than others: in our corpora, many names are picked from the GRASS class, very few from the SICKNESS class, and none from the RAT class. We can thus better predict the probability of an unseen hanzi occurring in a name by computing a within-class Good-Turing estimate for each radical class. Assuming unseen objects within each class are equiprobable, their probabilities are given by the Good-Turing theorem as:

    p_cls ∝ E(N1_cls) / (N · E(N0_cls))     (1)

where p_cls is the probability of one unseen hanzi in class cls, E(N1_cls) is the expected number of hanzi in cls seen once, N is the total number of hanzi, and E(N0_cls) is the expected number of unseen hanzi in class cls. The use of the Good-Turing equation presumes suitable estimates of the unknown expectations it requires. In the denominator, the N0_cls are well measured by counting, and we replace the expectation by the observation. In the numerator, however, the counts of N1_cls are quite irregular, including several zeros (e.g. RAT, none of whose members were seen). However, there is a strong relationship between N1_cls and the number of hanzi in the class. For E(N1_cls), then, we substitute a smooth against the number of class elements. This smooth guarantees that there are no zeroes estimated. The final estimating equation is then:

    p_cls ∝ S(N1_cls) / (N · N0_cls)     (2)

The total of all these class estimates was about 10% off from the Turing estimate N1/N for the probability of all unseen hanzi, and we renormalized the estimates so that they would sum to N1/N. This class-based model gives reasonable results: for six radical classes, Table 1 gives the estimated cost for an unseen hanzi in the class occurring as the second hanzi in a double GIVEN name. Note that the good classes JADE, GOLD and GRASS have lower costs than the bad classes SICKNESS, DEATH and RAT, as desired.

TRANSLITERATIONS OF FOREIGN WORDS

Foreign names are usually transliterated using hanzi whose sequential pronunciation mimics the source language pronunciation of the name. Since foreign names can be of any length, and since their original pronunciation is effectively unlimited, the identification of such names is tricky. Fortunately, there are only a few hundred hanzi that are particularly common in transliterations; indeed, the commonest ones, such as 巴 ba1, 爾 er3, and 阿 a1, are often clear indicators that a sequence of hanzi containing them is foreign: even a name like 夏米爾 xia4-mi3-er3 'Shamir', which is a legal Chinese personal name, retains a foreign flavor because of 爾. As a first step towards modeling transliterated names, we have collected all hanzi occurring more than once in the roughly 750 foreign names in our dictionary, and we estimate the probability of occurrence of each hanzi in a transliteration (p_TN(hanzi_i)) using the maximum likelihood estimate. As with personal names, we also derive an estimate from text of the probability of finding a transliterated name of any kind (P_TN). Finally, we model the probability of a new transliterated name as the product of P_TN and p_TN(hanzi_i) for each hanzi_i in the putative name.4
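The within-class estimate of equation (2) can be sketched as follows; the class counts are invented, and a trivial add-half function stands in for the paper's regression smooth S:

```python
import math

def class_unseen_cost(n1_cls, n0_cls, n_total, smooth=lambda n1: n1 + 0.5):
    """Within-class Good-Turing estimate (eq. 2), up to normalization:
    p_cls ∝ S(N1_cls) / (N * N0_cls), returned as a cost (-log probability).
    The default smooth is a crude stand-in that merely avoids zero counts."""
    p = smooth(n1_cls) / (n_total * n0_cls)
    return -math.log(p)

# Invented counts: (class hanzi seen exactly once, class hanzi never seen).
N = 10_000
grass_cost = class_unseen_cost(40, 100, N)  # productive name class -> lower cost
rat_cost = class_unseen_cost(0, 20, N)      # bad name class -> higher cost
```

With these made-up counts the GRASS class comes out cheaper than the RAT class, the qualitative ordering Table 1 reports for the real data.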
The foreign name model is implemented as a WFST, which is then summed with the WFST implementing the dictionary, morphological rules, and personal names; the transitive closure of the resulting machine is then computed.

4 The current model is too simplistic in several respects. For instance, the common 'suffixes' -nia (e.g., Virginia) and -sia are normally transliterated as 尼亞 ni2-ya3 and 西亞 xi1-ya3, respectively. The interdependence between 尼 or 西 and 亞 is not captured by our model, but this could easily be remedied.

Table 1: The cost as a novel GIVEN name (second position) for hanzi from various radical classes.

    JADE   GOLD   GRASS  SICKNESS  DEATH  RAT
    14.98  15.52  15.76  16.25     16.30  16.42

EVALUATION

In this section we present a partial evaluation of the current system in three parts. The first is an evaluation of the system's ability to mimic humans at the task of segmenting text into word-sized units; the second evaluates the proper name identification; the third measures the performance on morphological analysis. To date we have not done a separate evaluation of foreign name recognition.

Evaluation of the Segmentation as a Whole: Previous reports on Chinese segmentation have invariably
Since we could not bias the subjects towards a particular segmentation and did not presume linguistic sophistication on their part, the instructions were simple: subjects were to mark all places they might plausibly pause if they were reading the text aloud. An examination of the subjects' bracketings confirmed that these instructions were sat- isfactory in yielding plausible word-sized units. Various segmentation approaches were then com- pared with human performance: 1. A greedy algorithm, GR: proceed through the sen- tence, taking the longest match with a dictionary entry at each point. 2. An 'anti-greedy' algorithm, AG: instead of the longest match, take the shortest match at each point. 3. The method being described -- henceforth ST. Two measures that can be used to compare judgments are: 1. Precision. For each pair of judges consider one judge as the standard, computing the precision of the other's judgments relative to this standard. 2. Recall. For each pair of judges, consider one judge as the standard, computing the recall of the other's judgments relative to this standard. Obviously, for judges J1 and J2, taking ,/1 as stan- dard and computing the precision and recall for J2 yields the same results as taking J2 as the standard, and computing for Jr, respectively, the recall and pre- cision. We therefore used the arithmetic mean of each interjudge precision-recall pair as a single measure of interjudge similarity. Table 2 shows these similarity measures. The average agreement among the human judges is .76, and the average agreement between ST and the humans is .75, or about 99% of the inter-human agreement. (GR is .73 or 96%.) One can better visu- alize the precision-recall similarity matrix by produc- ing from that matrix a distance matrix, computing a multidimensional scaling on that distance matrix, and plotting the first two most significant dimensions. The result of this is shown in Figure 4. 
In addition to the automatic methods, AG, GR and ST, just discussed, we also added to the plot the values for the current algorithm using only dictionary entries (i.e., no productively derived words, or names). This is to allow for fair comparison between the statistical method and GR, which is also purely dictionary-based. As can be seen, GR and this 'pared-down' statistical method perform quite similarly, though the statistical method is still slightly better. AG clearly performs much less like humans than these methods, whereas the full statistical algorithm, including morphological derivatives and names, performs most closely to humans among the automatic methods. It can also be seen clearly in this plot that two of the Taiwan speakers cluster very closely together, and the third Taiwan speaker is also close in the most significant dimension (the x axis). Two of the Mainlanders also cluster close together but, interestingly, not particularly close to the Taiwan speakers; the third Mainlander is much more similar to the Taiwan speakers.

Personal Name Identification: To evaluate personal name identification, we randomly selected 186 sentences containing 12,000 hanzi from our test corpus, and segmented the text automatically, tagging personal names; note that for names there is always a single unambiguous answer, unlike the more general question of which segmentation is correct. The performance was 80.99% recall and 61.83% precision. Interestingly, Chang et al. reported 80.67% recall and 91.87% precision on an 11,000 word corpus: seemingly, our system finds as many names as their system, but with four times as many false hits. However, we have reason to doubt Chang et al.'s performance claims. Without using the same test corpus, direct comparison is obviously difficult; fortunately Chang et al. included a list of about 60 example sentence fragments that exemplified various categories of performance for their system.
The performance of our system on those sentences appeared rather better than theirs. Now, on a set of 11 sentence fragments where they reported 100% recall and precision for name identification, we had 80% precision and 73% recall. However, they listed two sets, one consisting of 28 fragments and the other of 22 fragments, in which they had 0% precision and recall. On the first of these our system had 86% precision and 64% recall; on the second it had 19% precision and 33% recall. Note that it is in precision that our overall performance would appear to be poorer than that of Chang et al., yet based on their published examples, our system appears to be doing better precisionwise. Thus we have some confidence that our own performance is at least as good as that of (Chang et al., 1992).5

Table 2: Similarity matrix for segmentation judgments

Judges  GR    ST    M1    M2    M3    T1    T2    T3
AG      0.70  0.70  0.43  0.42  0.60  0.60  0.62  0.59
GR            0.99  0.62  0.64  0.79  0.82  0.81  0.72
ST                  0.64  0.67  0.80  0.84  0.82  0.74
M1                        0.77  0.69  0.71  0.69  0.70
M2                              0.72  0.73  0.71  0.70
M3                                    0.89  0.87  0.80
T1                                          0.88  0.82
T2                                                0.78

Evaluation of Morphological Analysis: In Table 3 we present results from small test corpora for some productive affixes; as with names, the segmentation of morphologically derived words is generally either right or wrong. The first four affixes are so-called resultative affixes: they denote some property of the resultant state of a verb, as in 忘不了 wang4-bu4-liao3 (forget-not-attain) 'cannot forget'. The last affix is the nominal plural. Note that 了 in 忘不了 is normally pronounced as le0, but when part of a resultative it is liao3. In the table are the (typical) classes of words to which the affix attaches, the number found in the test corpus by the method, the number correct (with a precision measure), and the number missed (with a recall measure).
CONCLUSIONS

In this paper we have shown that good performance can be achieved on Chinese word segmentation by using probabilistic methods incorporated into a uniform stochastic finite-state model. We believe that the approach reported here compares favorably with other reported approaches, though obviously it is impossible to make meaningful comparisons in the absence of uniform test databases for Chinese segmentation. Perhaps the single most important difference between our work and previous work is the form of the evaluation. As we have observed, there is often no single right answer to word segmentation in Chinese. Therefore, claims to the effect that a particular algorithm gets 99% accuracy are meaningless without a clear definition of accuracy.

ACKNOWLEDGEMENTS

We thank United Informatics for providing us with our corpus of Chinese text, and BDC for the 'Behavior Chinese-English Electronic Dictionary'. We further thank Dr. J.-S. Chang of Tsinghua University for kindly providing us with the name corpora. Finally, we thank two anonymous ACL reviewers for comments.

5We were recently pointed to (Wang et al., 1992), which we had unfortunately missed in our previous literature search. We hope to compare our method with that of Wang et al. in a future version of this paper.

REFERENCES

Evan Antworth. 1990. PC-KIMMO: A Two-Level Processor for Morphological Analysis. Occasional Publications in Academic Computing, 16. Summer Institute of Linguistics, Dallas, TX.

Harald Baayen. 1989. A Corpus-Based Approach to Morphological Productivity: Statistical Analysis and Psycholinguistic Interpretation. Ph.D. thesis, Free University, Amsterdam.

Jyun-Shen Chang, Shun-De Chen, Ying Zheng, Xian-Zhong Liu, and Shu-Jin Ke. 1992. Large-corpus-based methods for Chinese personal name recognition. Journal of Chinese Information Processing, 6(3):7-15.

Keh-Jiann Chen and Shing-Huan Liu. 1992. Word identification for Mandarin Chinese sentences.
In Proceedings of COLING-92, pages 101-107. COLING.

Kenneth Ward Church and William Gale. 1991. A comparison of the enhanced Good-Turing and deleted estimation methods for estimating probabilities of English bigrams. Computer Speech and Language, 5(1):19-54.

John DeFrancis. 1984. The Chinese Language. University of Hawaii Press, Honolulu.

C.-K. Fan and W.-H. Tsai. 1988. Automatic word identification in Chinese sentences by the relaxation technique. Computer Processing of Chinese and Oriental Languages, 4:33-56.

Lauri Karttunen, Ronald Kaplan, and Annie Zaenen. 1992. Two-level morphology with composition. In COLING-92, pages 141-148. COLING.

Kimmo Koskenniemi. 1983. Two-Level Morphology: a General Computational Model for Word-Form Recognition and Production. Ph.D. thesis, University of Helsinki, Helsinki.

[Figure 4 plots the human judges and the automatic methods (anti-greedy, greedy, current method, dictionary-only; Taiwan and Mainland speakers) along the two most significant dimensions; the x axis is Dimension 1 (62%).]

Figure 4: Classical metric multidimensional scaling of distance matrix, showing the two most significant dimensions. The percentage scores on the axis labels represent the amount of data explained by the dimension in question.

Table 3: Performance on morphological analysis.

Affix   Pron          Base category  N found  N correct (prec.)  N missed (rec.)
不下    bu2-xia4      verb           20       20 (100%)          12 (63%)
不下去  bu2-xia4-qu4  verb           30       29 (97%)           1 (97%)
不了    bu4-liao3     verb           72       72 (100%)          15 (83%)
得了    de2-liao3     verb           36       36 (100%)          11 (77%)
们      men0          noun           141      139 (99%)          6 (96%)

Ming-Yu Lin, Tung-Hui Chiang, and Keh-Yi Su. 1993. A preliminary study on unknown word problem in Chinese word segmentation. In ROCLING 6, pages 119-141. ROCLING.

Fernando Pereira, Michael Riley, and Richard Sproat. 1994. Weighted rational transductions and their application to human language processing. In ARPA Workshop on Human Language Technology, pages 249-254. Advanced Research Projects Agency, March 8-11.

Chilin Shih. 1986.
The Prosodic Domain of Tone Sandhi in Chinese. Ph.D. thesis, UCSD, La Jolla, CA.

Richard Sproat and Chilin Shih. 1990. A statistical method for finding word boundaries in Chinese text. Computer Processing of Chinese and Oriental Languages, 4:336-351.

Richard Sproat. 1992. Morphology and Computation. MIT Press, Cambridge, MA.

Evelyne Tzoukermann and Mark Liberman. 1990. A finite-state morphological processor for Spanish. In COLING-90, Volume 3, pages 277-286. COLING.

Yongheng Wang, Haiju Su, and Yan Mo. 1990. Automatic processing of Chinese words. Journal of Chinese Information Processing, 4(4):1-11.

Liang-Jyh Wang, Wei-Chuan Li, and Chao-Huang Chang. 1992. Recognizing unregistered names for Mandarin word identification. In Proceedings of COLING-92, pages 1239-1243. COLING.

Zimin Wu and Gwyneth Tseng. 1993. Chinese text segmentation for text retrieval: Achievements and problems. Journal of the American Society for Information Science, 44(9):532-542.
PRECISE N-GRAM PROBABILITIES FROM STOCHASTIC CONTEXT-FREE GRAMMARS

Andreas Stolcke and Jonathan Segal
University of California, Berkeley and International Computer Science Institute
1947 Center Street, Berkeley, CA 94704
{stolcke,jsegal}@icsi.berkeley.edu

Abstract

We present an algorithm for computing n-gram probabilities from stochastic context-free grammars, a procedure that can alleviate some of the standard problems associated with n-grams (estimation from sparse data, lack of linguistic structure, among others). The method operates via the computation of substring expectations, which in turn is accomplished by solving systems of linear equations derived from the grammar. The procedure is fully implemented and has proved viable and useful in practice.

INTRODUCTION

Probabilistic language modeling with n-gram grammars (particularly bigram and trigram) has proven extremely useful for such tasks as automated speech recognition, part-of-speech tagging, and word-sense disambiguation, and leads to simple, efficient algorithms. Unfortunately, working with these grammars can be problematic for several reasons: they have large numbers of parameters, so reliable estimation requires a very large training corpus and/or sophisticated smoothing techniques (Church and Gale, 1991); it is very hard to directly model linguistic knowledge (and thus these grammars are practically incomprehensible to human inspection); and the models are not easily extensible, i.e., if a new word is added to the vocabulary, none of the information contained in an existing n-gram will tell anything about the n-grams containing the new item.
Stochastic context-free grammars (SCFGs), on the other hand, are not as susceptible to these problems: they have many fewer parameters (so can be reasonably trained with smaller corpora); they capture linguistic generalizations, and are easily understood and written by linguists; and they can be extended straightforwardly based on the underlying linguistic knowledge.

In this paper, we present a technique for computing an n-gram grammar from an existing SCFG -- an attempt to get the best of both worlds. Besides developing the mathematics involved in the computation, we also discuss efficiency and implementation issues, and briefly report on our experience confirming its practical feasibility and utility.

The technique of compiling higher-level grammatical models into lower-level ones has precedents: Zue et al. (1991) report building a word-pair grammar from more elaborate language models to achieve good coverage, by random generation of sentences. In our own group, the current approach was predated by an alternative one that essentially relied on approximating bigram probabilities through Monte-Carlo sampling from SCFGs.

PRELIMINARIES

An n-gram grammar is a set of probabilities P(wn | w1 w2 ... wn-1), giving the probability that wn follows a word string w1 w2 ... wn-1, for each possible combination of the w's in the vocabulary of the language. So for a 5000 word vocabulary, a bigram grammar would have approximately 5000 x 5000 = 25,000,000 free parameters, and a trigram grammar would have ~ 125,000,000,000. This is what we mean when we say n-gram grammars have many parameters.

A SCFG is a set of phrase-structure rules, annotated with probabilities of choosing a certain production given the left-hand side nonterminal.
For example, if we have a simple CFG, we can augment it with the probabilities specified:

  S   -> NP VP   [1.0]
  NP  -> N       [0.4]
  NP  -> Det N   [0.6]
  VP  -> V       [0.8]
  VP  -> V NP    [0.2]
  Det -> the     [0.4]
  Det -> a       [0.6]
  N   -> book    [1.0]
  V   -> close   [0.3]
  V   -> open    [0.7]

The language this grammar generates contains 5 words. Including markers for sentence beginning and end, a bigram grammar would contain 6 x 6 probabilities, or 6 x 5 = 30 free parameters (since probabilities have to sum to one). A trigram grammar would come with (5 x 6 + 1) x 5 = 155 parameters. Yet, the above SCFG has only 10 probabilities, only 4 of which are free parameters. The divergence between these two types of models generally grows as the vocabulary size increases, although this depends on the productions in the SCFG.

The reason for this discrepancy, of course, is that the structure of the SCFG itself is a discrete (hyper-)parameter with a lot of potential variation, but one that has been fixed beforehand. The point is that such a structure is comprehensible by humans, and can in many cases be constrained using prior knowledge, thereby reducing the estimation problem for the remaining probabilities. The problem of estimating SCFG parameters from data is solved with standard techniques, usually by way of likelihood maximization and a variant of the Baum-Welch (EM) algorithm (Baker, 1979). A tutorial introduction to SCFGs and standard algorithms can be found in Jelinek et al. (1992).

MOTIVATION

There are good arguments that SCFGs are in principle not adequate probabilistic models for natural languages, due to the conditional independence assumptions they embody (Magerman and Marcus, 1991; Jones and Eisner, 1992; Briscoe and Carroll, 1993). Such shortcomings can be partly remedied by using SCFGs with very specific, semantically oriented categories and rules (Jurafsky et al., 1994).
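The toy grammar from the Preliminaries section can be written down and sanity-checked in a few lines (our own sketch, not code from the paper; the dictionary encoding is illustrative):

```python
# Each nonterminal maps to its productions, stored as (right-hand side, probability).
SCFG = {
    "S":   [(("NP", "VP"), 1.0)],
    "NP":  [(("N",),       0.4), (("Det", "N"), 0.6)],
    "VP":  [(("V",),       0.8), (("V", "NP"),  0.2)],
    "Det": [(("the",),     0.4), (("a",),       0.6)],
    "N":   [(("book",),    1.0)],
    "V":   [(("close",),   0.3), (("open",),    0.7)],
}

def check_normalized(grammar):
    """Rule probabilities for each left-hand side must sum to one."""
    return all(abs(sum(p for _, p in rules) - 1.0) < 1e-9
               for rules in grammar.values())

def free_parameters(grammar):
    """One probability per rule, minus one sum-to-one constraint per nonterminal."""
    return sum(len(rules) - 1 for rules in grammar.values())
```

On this grammar `free_parameters` gives 4, matching the count in the text.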
If the goal is to use n-grams nevertheless, then their computation from a more constrained SCFG is still useful, since the results can be interpolated with raw n-gram estimates for smoothing. An experiment illustrating this approach is reported later in the paper.

On the other hand, even if vastly more sophisticated language models give better results, n-grams will most likely still be important in applications such as speech recognition. The standard speech decoding technique of frame-synchronous dynamic programming (Ney, 1984) is based on a first-order Markov assumption, which is satisfied by bigram models (as well as by Hidden Markov Models), but not by more complex models incorporating non-local or higher-order constraints (including SCFGs). A standard approach is therefore to use simple language models to generate a preliminary set of candidate hypotheses. These hypotheses, e.g., represented as word lattices or N-best lists (Schwartz and Chow, 1990), are re-evaluated later using additional criteria that can afford to be more costly due to the more constrained outcomes. In this type of setting, the techniques developed in this paper can be used to compile probabilistic knowledge present in the more elaborate language models into n-gram estimates that improve the quality of the hypotheses generated by the decoder.

Finally, comparing directly estimated, reliable n-grams with those compiled from other language models is a potentially useful method for evaluating the models in question.

For the purpose of this paper, then, we assume that computing n-grams from SCFGs is of either practical or theoretical interest and concentrate on the computational aspects of the problem.

It should be noted that there are alternative, unrelated methods for addressing the problem of the large parameter space in n-gram models. For example, Brown et al.
(1992) describe an approach based on grouping words into classes, thereby reducing the number of conditional probabilities in the model.

THE ALGORITHM

Normal form for SCFGs

A grammar is in Chomsky Normal Form (CNF) if every production is of the form A -> B C or A -> terminal. Any CFG or SCFG can be converted into one in CNF which generates exactly the same language, each of the sentences with exactly the same probability, and for which any parse in the original grammar would be reconstructible from a parse in the CNF grammar. In short, we can, without loss of generality, assume that the SCFGs we are dealing with are in CNF. In fact, our algorithm generalizes straightforwardly to the more general Canonical Two-Form (Graham et al., 1980) format, and in the case of bigrams (n = 2) it can even be modified to work directly for arbitrary SCFGs. Still, the CNF form is convenient, and to keep the exposition simple we assume all SCFGs to be in CNF.

Probabilities from expectations

The first key insight towards a solution is that the n-gram probabilities can be obtained from the associated expected frequencies for n-grams and (n-1)-grams:

  P(wn | w1 w2 ... wn-1) = c(w1...wn|L) / c(w1...wn-1|L)    (1)

where c(w|L) stands for the expected count of occurrences of the substring w in a sentence of L.1

Proof: Write the expectation for n-grams recursively in terms of those of order n-1 and the conditional n-gram probabilities:

  c(w1...wn|L) = c(w1...wn-1|L) P(wn | w1 w2 ... wn-1).

So if we can compute c(w|G) for all substrings w of lengths n and n-1 for a SCFG G, we immediately have an n-gram grammar for the language generated by G.

Computing expectations

Our goal now is to compute the substring expectations for a given grammar.
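Once the expectations are in hand, equation (1) is a one-line computation; a minimal sketch (the expectation table below is made up for illustration):

```python
def ngram_prob(expectations, ngram):
    """Eq. (1): P(wn | w1...wn-1) = c(w1...wn|L) / c(w1...wn-1|L)."""
    return expectations[ngram] / expectations[ngram[:-1]]

# Hypothetical substring expectations c(w|L); the numbers are invented.
expectations = {
    ("the",): 1.2,
    ("the", "book"): 0.6,
}
```

With these made-up counts, `ngram_prob(expectations, ("the", "book"))` yields 0.5, i.e., half the expected occurrences of "the" are followed by "book".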
Formalisms such as SCFGs that have a recursive rule structure suggest a divide-and-conquer algorithm that follows the recursive structure of the grammar.2

1The only counts appearing in this paper are expectations, so we will not be using special notation to make a distinction between observed and expected values.
2A similar, even simpler approach applies to probabilistic finite-state (i.e., Hidden Markov) models.

Figure 1: Three ways of generating a substring w from a nonterminal X.

We generalize the problem by considering c(w|X), the expected number of (possibly overlapping) occurrences of w = w1...wn in strings generated by an arbitrary nonterminal X. The special case c(w|S) = c(w|L) is the solution sought, where S is the start symbol for the grammar.

Now consider all possible ways that nonterminal X can generate string w = w1...wn as a substring, denoted by X => ... w1...wn ..., and the associated probabilities. For each production of X we have to distinguish two main cases, assuming the grammar is in CNF. If the string in question is of length 1, w = w1, and if X happens to have a production X -> w1, then that production adds exactly P(X -> w1) to the expectation c(w|X). If X has non-terminal productions, say, X -> Y Z, then w might also be generated by recursive expansion of the right-hand side. Here, for each production, there are three subcases. (a) First, Y can by itself generate the complete w (see Figure 1(a)). (b) Likewise, Z itself can generate w (Figure 1(b)). (c) Finally, Y could generate w1...wj as a suffix (Y =>R w1...wj) and Z, wj+1...wn as a prefix (Z =>L wj+1...wn), thereby resulting in a single occurrence of w (Figure 1(c)).3 Each of these cases will have an expectation for generating w1...wn as a substring, and the total expectation c(w|X) will be the sum of these partial expectations.

3We use the notation X =>R α to denote that non-terminal X generates the string α as a suffix, and X =>L α to denote that X generates α as a prefix. Thus P(X =>L α) and P(X =>R α) are the probabilities associated with those events.
The total expectations for the first two cases (that of the substring being completely generated by Y or by Z) are given recursively: c(w|Y) and c(w|Z), respectively. The expectation for the third case is

  sum_{j=1}^{n-1} P(Y =>R w1...wj) P(Z =>L wj+1...wn),    (2)

where one has to sum over all possible split points j of the string w.

To compute the total expectation c(w|X), then, we have to sum over all these choices: the production used (weighted by the rule probabilities), and for each nonterminal rule the three cases above. This gives

  c(w|X) = P(X -> w)
         + sum_{X -> Y Z} P(X -> Y Z) [ c(w|Y) + c(w|Z)
             + sum_{j=1}^{n-1} P(Y =>R w1...wj) P(Z =>L wj+1...wn) ]    (3)

In the important special case of bigrams, this summation simplifies quite a bit, since the terminal productions are ruled out and splitting into prefix and suffix allows but one possibility:

  c(w1w2|X) = sum_{X -> Y Z} P(X -> Y Z) [ c(w1w2|Y) + c(w1w2|Z)
                + P(Y =>R w1) P(Z =>L w2) ]    (4)

For unigrams equation (3) simplifies even more:

  c(w1|X) = P(X -> w1) + sum_{X -> Y Z} P(X -> Y Z) ( c(w1|Y) + c(w1|Z) )    (5)

We now have a recursive specification of the quantities c(w|X) we need to compute. Alas, the recursion does not necessarily bottom out, since the quantities c(w|Y) and c(w|Z) on the right side of equation (3) may themselves depend on c(w|X). Fortunately, the recurrence is linear, so for each string w, we can find the solution by solving the linear system formed by all equations of type (3). Notice there are exactly as many equations as variables, equal to the number of nonterminals in the grammar.

Computing prefix and suffix probabilities

The only substantial problem left at this point is the computation of the constants in equation (3).
These are derived from the rule probabilities P(X -> w) and P(X -> Y Z), as well as the prefix/suffix generation probabilities P(Y =>R w1...wj) and P(Z =>L wj+1...wn).

The computation of prefix probabilities for SCFGs is generally useful for applications, and has been solved with the LRI algorithm (Jelinek and Lafferty, 1991). Recently, Stolcke (1993) has shown how to perform this computation efficiently for sparsely parameterized SCFGs using a probabilistic version of Earley's parser (Earley, 1970). Computing suffix probabilities is obviously a symmetrical task; for example, one could create a 'mirrored' SCFG (reversing the order of right-hand side symbols in all productions) and then run any prefix probability computation on that mirror grammar.

Note that in the case of bigrams, only a particularly simple form of prefix/suffix probabilities is required, namely, the 'left-corner' and 'right-corner' probabilities, P(X =>L w1) and P(Y =>R w2), which can each be obtained from a single matrix inversion (Jelinek and Lafferty, 1991).

It should be mentioned that there are some technical conditions that have to be met for a SCFG to be well-defined and consistent (Booth and Thompson, 1973). These conditions are also sufficient to guarantee that the linear equations given by (3) have positive probabilities as solutions. The details of this are discussed in the Appendix.

Finally, it is interesting to compare the relative ease with which one can solve the substring expectation problem to the seemingly similar problem of finding substring probabilities: the probability that X generates (one or more instances of) w. The latter problem is studied by Corazza et al. (1991), and shown to lead to a non-linear system of equations. The crucial difference here is that expectations are additive with respect to the cases in Figure 1, whereas the corresponding probabilities are not, since the three cases can occur simultaneously.
EFFICIENCY AND COMPLEXITY ISSUES

Summarizing from the previous section, we can compute any n-gram probability by solving two linear systems of equations of the form (3), one with w being the n-gram itself and one for the (n-1)-gram prefix w1...wn-1. The latter computation can be shared among all n-grams with the same prefix, so that essentially one system needs to be solved for each n-gram we are interested in. The good news here is that the work required is linear in the number of n-grams, and correspondingly limited if one needs probabilities for only a subset of the possible n-grams. For example, one could compute these probabilities on demand and cache the results.

Let us examine these systems of equations one more time. Each can be written in matrix notation in the form

  (I - A)c = b    (6)

where I is the identity matrix, A = (a_XU) is a coefficient matrix, b = (b_X) is the right-hand side vector, and c represents the vector of unknowns, c(w|X). All of these are indexed by nonterminals X, U. We get

  a_XU = sum_{X -> Y Z} P(X -> Y Z) (δ(Y,U) + δ(Z,U))    (7)

  b_X = P(X -> w) + sum_{X -> Y Z} P(X -> Y Z)
          sum_{j=1}^{n-1} P(Y =>R w1...wj) P(Z =>L wj+1...wn)    (8)

where δ(X,Y) = 1 if X = Y, and 0 otherwise. The expression I - A arises from bringing the variables c(w|Y) and c(w|Z) to the other side in equation (3) in order to collect the coefficients.

We can see that all dependencies on the particular bigram w are in the right-hand side vector b, while the coefficient matrix I - A depends only on the grammar. This, together with the standard method of LU decomposition (see, e.g., Press et al. (1988)), enables us to solve for each bigram in time O(N^2), rather than the standard O(N^3) for a full system (N being the number of nonterminals/variables). The LU decomposition itself is cubic, but is incurred only once. The full computation is therefore dominated by the quadratic effort of solving the system for each n-gram.
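The division of labor in equations (6)-(8) -- a grammar-only matrix factored once, one right-hand side per n-gram -- can be sketched with numpy (all numbers below are made up for illustration, and an explicit inverse stands in for the LU factorization):

```python
import numpy as np

# Hypothetical coefficient matrix A (eq. 7) for a grammar with two nonterminals.
A = np.array([[0.5, 0.2],
              [0.1, 0.3]])
M = np.eye(2) - A                 # depends only on the grammar, not on the n-gram

# One right-hand side vector (eq. 8) per bigram; the values are invented.
rhs = {("a", "b"): np.array([0.10, 0.02]),
       ("b", "a"): np.array([0.05, 0.04])}

# Factor M once, then reuse it for every bigram's right-hand side.
M_inv = np.linalg.inv(M)          # stands in for an LU factorization
bigram_expectations = {bigram: M_inv @ b for bigram, b in rhs.items()}
```

Each solution vector then holds c(w1w2|X) for every nonterminal X; dividing the start symbol's entry by the corresponding unigram expectation gives the bigram probability, as in equation (1).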
Furthermore, the quadratic cost is a worst-case figure that would be incurred only if the grammar contained every possible rule; empirically we have found this computation to be linear in the number of nonterminals, for grammars that are sparse, i.e., where each nonterminal makes reference only to a bounded number of other nonterminals.

SUMMARY

Listed below are the steps of the complete computation. For concreteness we give the version specific to bigrams (n = 2).

1. Compute the prefix (left-corner) and suffix (right-corner) probabilities for each (nonterminal, word) pair.
2. Compute the coefficient matrix and right-hand sides for the systems of linear equations, as per equations (4) and (5).
3. LU decompose the coefficient matrix.
4. Compute the unigram expectations for each word in the grammar, by solving the LU system for the unigram right-hand sides computed in step 2.
5. Compute the bigram expectations for each word pair by solving the LU system for the bigram right-hand sides computed in step 2.
6. Compute each bigram probability P(w2|w1) by dividing the bigram expectation c(w1w2|S) by the unigram expectation c(w1|S).

EXPERIMENTS

The algorithm described here has been implemented, and is being used to generate bigrams for a speech recognizer that is part of the BeRP spoken-language system (Jurafsky et al., 1994). An early prototype of BeRP was used in an experiment to assess the benefit of using bigram probabilities obtained through SCFGs versus estimating them directly from the available training corpus.4 The system's domain is inquiries about restaurants in the city of Berkeley. The training corpus used had only 2500 sentences, with an average length of about 4.8 words/sentence.

Our experiments made use of a context-free grammar hand-written for the BeRP domain. With 1200 rules and a vocabulary of 1100 words, this grammar was able to parse 60% of the training corpus.
Computing the bigram probabilities from this SCFG takes about 24 hours on a SPARCstation 2-class machine.5

In experiment 1, the recognizer used bigrams that were estimated directly from the training corpus, without any smoothing, resulting in a word error rate of 35.1%. In experiment 2, a different set of bigram probabilities was used, computed from the context-free grammar, whose probabilities had previously been estimated from the same training corpus, using standard EM techniques. This resulted in a word error rate of 35.3%. This may seem surprisingly good given the low coverage of the underlying CFGs, but notice that the conversion into bigrams is bound to result in a less constraining language model, effectively increasing coverage. Finally, in experiment 3, the bigrams generated from the SCFG were augmented by those from the raw training data, in a proportion of 200,000 : 2500. We have not attempted to optimize this mixture proportion, e.g., by deleted interpolation (Jelinek and Mercer, 1980).6 With the bigram estimates thus obtained, the word error rate dropped to 33.5%. (All error rates were measured on a separate test corpus.)

The experiment therefore supports our earlier argument that more sophisticated language models, even if far from perfect, can improve n-gram estimates obtained directly from sample data.

4Corpus and grammar sizes, as well as the recognition performance figures reported here, are not up-to-date with respect to the latest version of BeRP. For ACL-94 we expect to have revised results available that reflect the current performance of the system.
5Unlike the rest of BeRP, this computation is implemented in Lisp/CLOS and could be speeded up considerably if necessary.
6This proportion comes about because in the original system, predating the method described in this paper, bigrams had to be estimated from the SCFG by random sampling.
Generating 200,000 sentence samples was found to give good converging estimates for the bigrams. The bigrams from the raw training sentences were then simply added to the randomly generated ones. We later verified that the bigrams estimated from the SCFG were indeed identical to the ones computed directly using the method described here.

CONCLUSIONS

We have described an algorithm to compute in closed form the distribution of n-grams for a probabilistic language given by a stochastic context-free grammar. Our method is based on computing substring expectations, which can be expressed as systems of linear equations derived from the grammar. The algorithm was used successfully and found to be practical in dealing with context-free grammars and bigram models for a medium-scale speech recognition task, where it helped to improve bigram estimates obtained from relatively small amounts of data.

Deriving n-gram probabilities from more sophisticated language models appears to be a generally useful technique which can both improve upon direct estimation of n-grams, and allow available higher-level linguistic knowledge to be effectively integrated into the speech decoding task.

ACKNOWLEDGMENTS

Dan Jurafsky wrote the BeRP grammar, carried out the recognition experiments, and was generally indispensable. Steve Omohundro planted the seed for our n-gram algorithm during lunch at the California Dream Café by suggesting substring expectations as an interesting computational linguistics problem. Thanks also to Jerry Feldman and Lokendra Shastri for improving the presentation with their comments. This research has been supported by ICSI and ARPA contract #N0000 1493 C0249.

APPENDIX: CONSISTENCY OF SCFGS

Blindly applying the n-gram algorithm (and many others) to a SCFG with arbitrary probabilities can lead to surprising results. Consider the following simple grammar

  S -> x    [p]
  S -> S S  [q = 1-p]    (9)

What is the expected frequency of unigram x?
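The question can be answered numerically as a one-variable instance of the linear system from the main text; a sketch of ours assuming p = 0.7, with numpy's generic solver in place of the LU machinery:

```python
import numpy as np

p = 0.7
q = 1.0 - p
# Eq. (5) for this grammar reads c = p + 2*q*c, i.e. (1 - 2q) c = p.
A = np.array([[2.0 * q]])              # S -> S S contributes c(x|S) twice
b = np.array([p])                      # P(S -> x)
c = np.linalg.solve(np.eye(1) - A, b)  # expected occurrences of x per sentence
```

For p = 0.7 this gives c = p / (2p - 1) = 1.75; for p <= 0.5 the factor 1 - 2q is zero or negative and the "solution" is infinite or negative, mirroring the inconsistency discussed below.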
Using the abbreviation c = c(x|S) and equation (5), we see that

  c = P(S -> x) + P(S -> S S)(c + c) = p + 2qc

This leads to

  c = p / (1 - 2q) = p / (2p - 1)    (10)

Now, for p = 0.5 this becomes infinity, and for probabilities p < 0.5, the solution is negative! This is a rather striking manifestation of the failure of this grammar, for p < 0.5, to be consistent. A grammar is said to be inconsistent if the underlying stochastic derivation process has non-zero probability of not terminating (Booth and Thompson, 1973). The expected length of the generated strings should therefore be infinite in this case.

Fortunately, Booth and Thompson derive a criterion for checking the consistency of a SCFG: Find the first-order expectancy matrix E = (e_XY), where e_XY is the expected number of occurrences of nonterminal Y in a one-step expansion of nonterminal X, and make sure its powers E^k converge to 0 as k -> infinity. If so, the grammar is consistent, otherwise it is not.7 For the grammar in (9), E is the 1 x 1 matrix (2q). Thus we can confirm our earlier observation by noting that (2q)^k converges to 0 iff q < 0.5, or p > 0.5.

Now, it so happens that E is identical to the matrix A that occurs in the linear equations (6) for the n-gram computation. The actual coefficient matrix is I - A, and its inverse, if it exists, can be written as the geometric sum

  (I - A)^-1 = I + A + A^2 + A^3 + ...

This series converges precisely if A^k converges to 0. We have thus shown that the existence of a solution for the n-gram problem is equivalent to the consistency of the grammar in question. Furthermore, the solution vector c = (I - A)^-1 b will always consist of non-negative numbers: it is the sum and product of the non-negative values given by equations (7) and (8).

REFERENCES

James K. Baker. 1979. Trainable grammars for speech recognition. In Jared J. Wolf and Dennis H. Klatt, editors, Speech Communication Papers for the 97th Meeting of the Acoustical Society of America, pages 547-550, MIT, Cambridge, Mass.

Taylor L. Booth and Richard A.
Thompson. 1973. Applying probability measures to abstract languages. IEEE Transactions on Computers, C-22(5):442-450. Ted Briscoe and John Carroll. 1993. Generalized probabilistic LR parsing of natural language (corpora) with unification-based grammars. Computational Linguistics, 19(1):25-59. Peter F. Brown, Vincent J. Della Pietra, Peter V. deSouza, Jenifer C. Lai, and Robert L. Mercer. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467-479. Kenneth W. Church and William A. Gale. 1991. A comparison of the enhanced Good-Turing and deleted estimation methods for estimating probabilities of English bigrams. Computer Speech and Language, 5:19-54. Anna Corazza, Renato De Mori, Roberto Gretter, and Giorgio Satta. 1991. Computation of probabilities for an island-driven parser. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(9):936-950. Jay Earley. 1970. An efficient context-free parsing algorithm. Communications of the ACM, 6(8):451-455. Susan L. Graham, Michael A. Harrison, and Walter L. Ruzzo. 1980. An improved context-free recognizer. ACM Transactions on Programming Languages and Systems, 2(3):415-462. [Footnote 7: A further version of this criterion is to check the magnitude of the largest of E's eigenvalues (its spectral radius). If that value is > 1, the grammar is inconsistent; if < 1, it is consistent.] Frederick Jelinek and John D. Lafferty. 1991. Computation of the probability of initial substring generation by stochastic context-free grammars. Computational Linguistics, 17(3):315-323. Frederick Jelinek and Robert L. Mercer. 1980. Interpolated estimation of Markov source parameters from sparse data. In Proceedings Workshop on Pattern Recognition in Practice, pages 381-397, Amsterdam. Frederick Jelinek, John D. Lafferty, and Robert L. Mercer. 1992. Basic methods of probabilistic context free grammars. In Pietro Laface and Renato De Mori, editors, Speech Recognition and Understanding.
Recent Advances, Trends, and Applications, volume F75 of NATO Advanced Sciences Institutes Series, pages 345-360. Springer Verlag, Berlin. Proceedings of the NATO Advanced Study Institute, Cetraro, Italy, July 1990. Mark A. Jones and Jason M. Eisner. 1992. A probabilistic parser applied to software testing documents. In Proceedings of the 8th National Conference on Artificial Intelligence, pages 332-328, San Jose, CA. AAAI Press. Daniel Jurafsky, Chuck Wooters, Gary Tajchman, Jonathan Segal, Andreas Stolcke, and Nelson Morgan. 1994. Integrating grammatical, phonological, and dialect/accent information with a speech recognizer in the Berkeley Restaurant Project. In Paul McKevitt, editor, AAAI Workshop on the Integration of Natural Language and Speech Processing, Seattle, WA. To appear. David M. Magerman and Mitchell P. Marcus. 1991. Pearl: A probabilistic chart parser. In Proceedings of the 6th Conference of the European Chapter of the Association for Computational Linguistics, Berlin, Germany. Hermann Ney. 1984. The use of a one-stage dynamic programming algorithm for connected word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing, 32(2):263-271. William H. Press, Brian P. Flannery, Saul A. Teukolsky, and William T. Vetterling. 1988. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, Cambridge. Richard Schwartz and Yen-Lu Chow. 1990. The N-best algorithm: An efficient and exact procedure for finding the n most likely sentence hypotheses. In Proceedings IEEE Conference on Acoustics, Speech and Signal Processing, volume 1, pages 81-84, Albuquerque, NM. Andreas Stolcke. 1993. An efficient probabilistic context-free parsing algorithm that computes prefix probabilities. Technical Report TR-93-065, International Computer Science Institute, Berkeley, CA. To appear in Computational Linguistics.
Victor Zue, James Glass, David Goodine, Hong Leung, Michael Phillips, Joseph Polifroni, and Stephanie Seneff. 1991. Integration of speech recognition and natural language processing in the MIT Voyager system. In Proceedings IEEE Conference on Acoustics, Speech and Signal Processing, volume 1, pages 713-716, Toronto.
1994
ALIGNING A PARALLEL ENGLISH-CHINESE CORPUS STATISTICALLY WITH LEXICAL CRITERIA Dekai Wu HKUST Department of Computer Science University of Science & Technology Clear Water Bay, Hong Kong Internet: dekai@cs.ust.hk Abstract We describe our experience with automatic alignment of sentences in parallel English-Chinese texts. Our report concerns three related topics: (1) progress on the HKUST English-Chinese Parallel Bilingual Corpus; (2) experiments addressing the applicability of Gale & Church's (1991) length-based statistical method to the task of alignment involving a non-Indo-European language; and (3) an improved statistical method that also incorporates domain-specific lexical cues. INTRODUCTION Recently, a number of automatic techniques for aligning sentences in parallel bilingual corpora have been proposed (Kay & Röscheisen 1988; Catizone et al. 1989; Gale & Church 1991; Brown et al. 1991; Chen 1993), and coarser approaches when sentences are difficult to identify have also been advanced (Church 1993; Dagan et al. 1993). Such corpora contain the same material that has been translated by human experts into two languages. The goal of alignment is to identify matching sentences between the languages. Alignment is the first stage in extracting structural information and statistical parameters from bilingual corpora. The problem is made more difficult because a sentence in one language may correspond to multiple sentences in the other; worse yet, sometimes several sentences' content is distributed across multiple translated sentences. Approaches to alignment fall into two main classes: lexical and statistical. Lexically-based techniques use extensive online bilingual lexicons to match sentences. In contrast, statistical techniques require almost no prior knowledge and are based solely on the lengths of sentences.
The empirical results to date suggest that statistical methods yield performance superior to that of currently available lexical techniques. However, as far as we know, the literature on automatic alignment has been restricted to alphabetic Indo-European languages. This methodological flaw weakens the arguments in favor of either approach, since it is unclear to what extent a technique's superiority depends on the similarity between related languages. The work reported herein moves towards addressing this problem.1 In this paper, we describe our experience with automatic alignment of sentences in parallel English-Chinese texts, which was performed as part of the SILC machine translation project. Our report concerns three related topics. In the first of the following sections, we describe the objectives of the HKUST English-Chinese Parallel Bilingual Corpus, and our progress. The subsequent sections report experiments addressing the applicability of a suitably modified version of Gale & Church's (1991) length-based statistical method to the task of aligning English with Chinese. In the final section, we describe an improved statistical method that also permits domain-specific lexical cues to be incorporated probabilistically. THE ENGLISH-CHINESE CORPUS The dearth of work on non-Indo-European languages can partly be attributed to a lack of the prerequisite bilingual corpora. As a step toward remedying this, we are in the process of constructing a suitable English-Chinese corpus. To be included, materials must contain primarily tight, literal sentence translations. This rules out most fiction and literary material. We have been concentrating on the Hong Kong Hansard, which are the parliamentary proceedings of the Legislative Council (LegCo).
Analogously to the bilingual texts of the Canadian Hansard (Gale & Church 1991), LegCo transcripts are kept in full translation in both English and Cantonese.2 [Footnote 1: Some newer methods are also intended to be applied to non-Indo-European languages in the future (Fung & Church 1994).] However, unlike the Canadian Hansard, the Hong Kong Hansard has not previously been available in machine-readable form. We have obtained and converted these materials by special arrangement. The materials contain high-quality literal translation. Statements in LegCo may be made using either English or Cantonese, and are transcribed in the original language. A translation to the other language is made later to yield complete parallel texts, with annotations specifying the source language used by each speaker. Most sentences are translated 1-for-1. A small proportion are 1-for-2 or 2-for-2, and on rare occasion 1-for-3, 3-for-3, or other configurations. Samples of the English and Chinese texts can be seen in figures 3 and 4.3 Because of the obscure format of the original data, it has been necessary to employ a substantial amount of automatic conversion and reformatting. Sentences are identified automatically using heuristics that depend on punctuation and spacing. Segmentation errors occur occasionally, due either to typographical errors in the original data, or to inadequacies of our automatic conversion heuristics. This simply results in incorrectly placed delimiters; it does not remove any text from the corpus. Although the emphasis is on clean text so that markup is minimal, paragraphs and sentences are marked following TEI-conformant SGML (Sperberg-McQueen & Burnard 1992). We use the term "sentence" in a generalized sense including lines in itemized lists, headings, and other non-sentential segments smaller than a paragraph. The corpus currently contains about 60Mb of raw data, of which we have been concentrating on approximately 3.2Mb.
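The punctuation-and-spacing heuristics for sentence identification are not spelled out in detail; a toy version of the idea might look like the following sketch (our own illustration, far simpler than the heuristics actually used):

```python
import re

# Toy sentence segmenter: split after sentence-final punctuation that is
# followed by whitespace.  The character class mixes ASCII and fullwidth
# terminators so that English and Chinese text can be handled uniformly.
SENT_END = re.compile(r"(?<=[.!?。!?])\s+")

def split_sentences(paragraph):
    return [s for s in SENT_END.split(paragraph.strip()) if s]

print(split_sentences("First sentence. Second one! 第三句。 Done?"))
```

A production segmenter would also need to handle abbreviations, list items, and headings, which this sketch deliberately ignores.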
Of this, 2.1Mb is text comprised of approximately 0.35 million English words, with the corresponding Chinese translation occupying the remaining 1.1Mb. STATISTICALLY-BASED ALIGNMENT The statistical approach to alignment can be summarized as follows: choose the alignment that maximizes the probability over all possible alignments, given a pair of parallel texts. Formally, choose

    (1)    argmax_A Pr(A | T1, T2)

where A is an alignment, and T1 and T2 are the English and Chinese texts, respectively. An alignment A is a set consisting of L1 ⇌ L2 pairs where each L1 or L2 is an English or Chinese passage. [Footnote 2: Cantonese is one of the four major Han Chinese languages. Formal written Cantonese employs the same characters as Mandarin, with some additions. Though there are grammatical and usage differences between the Chinese languages, as between German and Swiss German, the written forms can be read by all.] [Footnote 3: For further description see also Fung & Wu (1994).] This formulation is so extremely general that it is difficult to argue against its pure form. More controversial are the approximations that must be made to obtain a tractable version. The first commonly made approximation is that the probabilities of the individual aligned pairs within an alignment are independent, i.e.,

    Pr(A | T1, T2) ≈ ∏_{(L1 ⇌ L2) ∈ A} Pr(L1 ⇌ L2 | T1, T2)

The other common approximation is that each Pr(L1 ⇌ L2 | T1, T2) depends not on the entire texts, but only on the contents of the specific passages within the alignment:

    Pr(A | T1, T2) ≈ ∏_{(L1 ⇌ L2) ∈ A} Pr(L1 ⇌ L2 | L1, L2)

Maximization of this approximation to the alignment probabilities is easily converted into a minimum-sum problem:

    (2)    argmax_A Pr(A | T1, T2)
         ≈ argmax_A ∏_{(L1 ⇌ L2) ∈ A} Pr(L1 ⇌ L2 | L1, L2)
         = argmin_A Σ_{(L1 ⇌ L2) ∈ A} -log Pr(L1 ⇌ L2 | L1, L2)

The minimization can be implemented using a dynamic programming strategy. Further approximations vary according to the specific method being used.
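The dynamic program has the same shape as edit distance: each cell holds the best cost of aligning the first i English and first j Chinese passages, minimizing over a fixed set of match types. A schematic sketch, with the per-pair cost function left abstract (all names are ours, not from the paper):

```python
# Schematic DP for argmin over alignments of the sum of -log Pr(L1 <-> L2).
# `cost(e_chunk, c_chunk)` stands in for the -log probability of matching
# the two chunks; the paper plugs in its length-based model here.

MATCH_TYPES = [(1, 1), (1, 0), (0, 1), (2, 1), (1, 2), (2, 2)]

def align(en, zh, cost):
    n, m = len(en), len(zh)
    best = {(0, 0): (0.0, None)}   # (i, j) -> (total cost, match type used)
    for i in range(n + 1):
        for j in range(m + 1):
            if (i, j) == (0, 0):
                continue
            cands = [
                (best[(i - di, j - dj)][0] + cost(en[i - di:i], zh[j - dj:j]),
                 (di, dj))
                for di, dj in MATCH_TYPES
                if i - di >= 0 and j - dj >= 0 and (i - di, j - dj) in best
            ]
            if cands:
                best[(i, j)] = min(cands)
    pairs, i, j = [], n, m         # trace back the chosen pairing
    while (i, j) != (0, 0):
        di, dj = best[(i, j)][1]
        pairs.append((tuple(en[i - di:i]), tuple(zh[j - dj:j])))
        i, j = i - di, j - dj
    return list(reversed(pairs))
```

With a toy cost that penalizes length mismatch plus a small per-pair constant, `align(["aa", "bb", "cc"], ["AA", "BBCC"], ...)` recovers the intended 1-for-1 plus 2-for-1 pairing.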
Below, we first discuss a pure length-based approximation, then a method with lexical extensions. APPLICABILITY OF LENGTH-BASED METHODS TO CHINESE Length-based alignment methods are based on the following approximation to equation (2):

    (3)    Pr(L1 ⇌ L2 | L1, L2) ≈ Pr(L1 ⇌ L2 | l1, l2)

where l1 = length(L1) and l2 = length(L2), measured in number of characters. In other words, the only feature of L1 and L2 that affects their alignment probability is their length. Note that there are other length-based alignment methods that measure length in number of words instead of characters (Brown et al. 1991). However, since Chinese text consists of an unsegmented character stream without marked word boundaries, it would not be possible to count the number of words in a sentence without first parsing it. Although it has been suggested that length-based methods are language-independent (Gale & Church 1991; Brown et al. 1991), they may in fact rely to some extent on length correlations arising from the historical relationships of the languages being aligned. If translated sentences share cognates, then the character lengths of those cognates are of course correlated. Grammatical similarities between related languages may also produce correlations in sentence lengths. Moreover, the combinatorics of non-Indo-European languages can depart greatly from Indo-European languages. In Chinese, the majority of words are just one or two characters long (though collocations up to four characters are also common). At the same time, there are several thousand characters in daily use, as in conversation or newspaper text. Such lexical differences make it even less obvious whether pure sentence-length criteria are adequately discriminating for statistical alignment. Our first goal, therefore, is to test whether purely length-based alignment results can be replicated for English and Chinese, languages from unrelated families.
However, before length-based methods can be applied to Chinese, it is first necessary to generalize the notion of "number of characters" to Chinese strings, because most Chinese text (including our corpus) includes occasional English proper names and abbreviations, as well as punctuation marks. Our approach is to count each Chinese character as having length 2, and each English or punctuation character as having length 1. This corresponds to the byte count for text stored in the hybrid English-Chinese encoding system known as Big 5. Gale & Church's (1991) length-based alignment method is based on the model that each English character in L1 is responsible for generating some number of characters in L2. This model leads to a further approximation which encapsulates the dependence to a single parameter δ that is a function of l1 and l2:

    Pr(L1 ⇌ L2 | L1, L2) ≈ Pr(L1 ⇌ L2 | δ(l1, l2))

However, it is much easier to estimate the distributions for the inverted form obtained by applying Bayes' Rule:

    Pr(L1 ⇌ L2 | δ) = Pr(δ | L1 ⇌ L2) Pr(L1 ⇌ L2) / Pr(δ)

where Pr(δ) is a normalizing constant that can be ignored during minimization. The other two distributions are estimated as follows. First we choose a function for δ(l1, l2). To do this we look at the relation between l1 and l2 under the generative model. Figure 1 shows a plot of English versus Chinese sentence lengths for a hand-aligned sample of 142 sentences. If the sentence lengths were perfectly correlated, the points would lie on a diagonal through the origin. We estimate the slope of this idealized diagonal c = E(r) = E(l2/l1) by averaging over the training corpus of hand-aligned L1 ⇌ L2 pairs, weighting by the length of L1. In fact this plot displays substantially greater scatter than the English-French data of Gale & Church (1991).4 The mean number of Chinese characters generated by each English character is c = 0.506, with a standard deviation σ = 0.166.
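The hybrid length count and the normalization built from these estimates (c = 0.506, σ = 0.166) can be sketched as follows; `delta` computes the normalized difference defined in equation (4) of the paper, and all names are ours:

```python
import math

C, SIGMA = 0.506, 0.166  # slope and deviation estimated from the corpus

def hybrid_length(s):
    # Big-5-style byte count: Chinese characters count 2, English letters
    # and punctuation count 1.  (`ord(ch) > 127` is a crude stand-in for
    # "Chinese character" in this sketch.)
    return sum(2 if ord(ch) > 127 else 1 for ch in s)

def delta(l1, l2):
    # Equation (4): normalized length difference, with mean 0 and
    # variance 1 under the generative model.
    return (l2 - l1 * C) / math.sqrt(l1 * SIGMA ** 2)

def neg_log_gaussian(l1, l2):
    # -log of the standard-normal density at delta(l1, l2); combined with
    # the class prior, this is the per-pair cost minimized by the DP.
    d = delta(l1, l2)
    return 0.5 * d * d + 0.5 * math.log(2 * math.pi)
```

An English passage of length 100 paired with a Chinese passage of hybrid length near 50.6 gets a δ near zero, and hence the lowest cost.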
We now assume that l2 - l1·c is normally distributed, following Gale & Church (1991), and transform it into a new gaussian variable of standard form (i.e., with mean 0 and variance 1) by appropriate normalization:

    (4)    δ(l1, l2) = (l2 - l1·c) / sqrt(l1·σ²)

This is the quantity that we choose to define as δ(l1, l2). Consequently, for any two pairs in a proposed alignment, Pr(δ | L1 ⇌ L2) can be estimated according to the gaussian assumption. To check how accurate the gaussian assumption is, we can use equation (4) to transform the same training points from figure 1 and produce a histogram. The result is shown in figure 2. Again, the distribution deviates from a gaussian distribution substantially more than Gale & Church (1991) report for French/German/English. Moreover, the distribution does not resemble any smooth distribution at all, including the logarithmic normal used by Brown et al. (1991), raising doubts about the potential performance of pure length-based alignment. Continuing nevertheless, to estimate the other term Pr(L1 ⇌ L2), a prior over six classes is constructed, where the classes are defined by the number of passages included within L1 and L2. Table 1 shows the probabilities used. These probabilities are taken directly from Gale & Church (1991); slightly improved performance might be obtained by estimating these probabilities from our corpus. The aligned results using this model were evaluated by hand for the entire contents of a ran- [Footnote 4: The difference is also partly due to the fact that Gale & Church (1991) plot paragraph lengths instead of sentence lengths. We have chosen to plot sentence lengths because that is what the algorithm is based on.]

[Figure 3: A sample of length-based alignment output (English side; the parallel Chinese column is not recoverable from this extraction).
1. MR FRED LI (in Cantonese):
2. I would like to talk about public assistance.
3. I notice from your address that under the Public Assistance Scheme, the basic rate of $825 a month for a single adult will be increased by 15% to $950 a month.
4. However, do you know that the revised rate plus all other grants will give each recipient no more than $2000 a month? On average, each recipient will receive $1600 to $1700 a month.
5. In view of Hong Kong's prosperity and high living cost, this figure is very ironical.
6. May I have your views and that of the Government?
7. Do you think that a comprehensive review should be conducted on the method of calculating public assistance?
8. Since the basic rate is so low, it will still be far below the current level of living even if it is further increased by 20% to 30%. If no comprehensive review is carried out in this aspect, this "safety net" cannot provide any assistance at all for those who are really in need.
9. I hope Mr Governor will give this question a serious response.
10. THE GOVERNOR:
11. It is not in any way to belittle the importance of the point that the Honourable Member has made to say that, when at the outset of our discussions I said that I did not think that the Government would be regarded for long as having been extravagant yesterday, I did not realize that the criticisms would begin quite as rapidly as they have.
12. The proposals that we make on public assistance, both the increase in scale rates, and the relaxation of the absence rule, are substantial steps forward in Hong Kong which will, I think, be very widely welcomed.
13. But I know that there will always be those who, I am sure for very good reason, will say you should have gone further, you should have done more.
14. Societies customarily make advances in social welfare because there are members of the community who develop that sort of case very often with eloquence and verve.]
domly selected pair of English and Chinese files corresponding to a complete session, comprising 506 English sentences and 505 Chinese sentences. Figure 3 shows an excerpt from this output. Most of the true 1-for-1 pairs are aligned correctly. In (4), two English sentences are correctly aligned with a single Chinese sentence. However, the English sentences in (6, 7) are incorrectly aligned 1-for-1 instead of 2-for-1. Also, (11, 12) shows an example of a 3-for-1, 1-for-1 sequence that the model has no choice but to align as 2-for-2, 2-for-2. Judging relative to a manual alignment of the English and Chinese files, a total of 86.4% of the true L1 ⇌ L2 pairs were correctly identified by the length-based method.

[Figure 1: English versus Chinese sentence lengths (scatter plot; data points not recoverable from this extraction).]

[Figure 2: English versus Chinese sentence lengths (histogram of the transformed training points; plot not recoverable from this extraction).]

However, many of the errors occurred within the introductory session header, whose format is domain-specific (dis-
In the test session, 438 1-for-1 pairs were output, of which 377, or 86.1%, were true matches. Again, how- ever, by discarding the introduction, the accuracy rises to a surprising 96.3%. segments L1 L2 0 1 1 0 1 1 1 2 2 1 2 2 Pr(L1 ~ L2) 0.0099 0.0099 0.89 0.089 0.089 0.011 Table 1: Priors for Pr(L1 ~-- L2). The introductory session header exemplifies a weakness of the pure length-based strategy, namely, its susceptibility to long stretches of pas- sages with roughly similar lengths. In our data this arises from the list of council members present and absent at each session (figure 4), but similar stretches can arise in many other domains. In such a situation, two slight perturbations may cause the entire stretch of passages between the perturba- tions to be misaligned. These perturbations can easily arise from a number of causes, including slight omissions or mismatches in the original par- allel texts, a 1-for-2 translation pair preceding or following the stretch of passages, or errors in the heuristic segmentation preprocessing. Substantial penalties may occur at the beginning and ending boundaries of the misaligned region, where the perturbations lie, but the misalignment between those boundaries incurs little penalty, because the mismatched passages have apparently matching lengths. This problem is apparently exacerbated by the non-alphabetic nature of Chinese. Because Chinese text contains fewer characters, character length is a less discriminating feature, varying over a range of fewer possible discrete values than the corresponding English. The next section discusses a solution to this problem. In summary, we have found that the statisti- cal correlation of sentence lengths has a far greater variance for our English-Chinese materials than with the Indo-European materials used by Gale & Church (1991). Despite this, the pure length- based method performs surprisingly well, except for its weakness in handling long stretches of sen- tences with close lengths. 
STATISTICAL INCORPORATION OF LEXICAL CUES To obtain further improvement in alignment accu- racy requires matching the passages' lexical con- tent, rather than using pure length criteria. This is particularly relevant for the type of long mis- matched stretches described above. Previous work on alignment has employed ei- 84 Total Correct Incorrect % Correct 1-1 1-2 2-1 2-2 1-3 3-1 3-3 433 20 21 2 1 1 1 361 17 20 0 0 0 0 11 3 1 2 1 1 1 87.1 85.0 95.2 0.0 0.0 0.0 0.0 Table 2: Detailed breakdown of length-based alignment results. 1. ¶THE DEPUTY PRESIDENT THE HONOURABLE ¶~~J--J::-~, K.B.E., L.V.O., J.P. J JOHN JOSEPH SWAINE, C.B.E., Q.C., J.P. J 2. ¶THE CHIEF SECRETARY THE HONOURABLE SIR DAVID ROBERT FORD, K.B.E., L.V.O., J.P. J 3. ¶THE FINANCIAL SECRETARY THE HONOURABLE NATHANIEL WILLIAM HAMISH MACLEOD, C.B.E., J.P. J i 37 misaligned matchings omitted 41. ¶THE HONOURALBE MAN SAI - CHEONG J 42. ¶THE HONOURABLE STEVEN POON KWOK - LIM THE HONOURABLE HENRY TANG YING - YEN, J.P. ] 43. ¶THE HONOURABLE TIK CHI- YUEN J ¶~~:N~iN, C.B.E., J.P. J ¶~N,.~g~, C.M.G., J.P. J j Figure 4: A sample of misalignment using pure length criteria. ther solely lexical or solely statistical length cri- teria. In contrast, we wish to incorporate lexical criteria without giving up the statistical approach, which provides a high baseline performance. Our method replaces equation (3) with the fol- lowing approximation: Pr(La ~--- L21L1, L2) Pr(LI ~- L2111,12, vl, Wl .... , vn, Wn) where vi = #occurrences(English cuei,L1) and wi = #occurrences(Chinese cuei, L2). Again, the dependence is encapsulated within difference pa- rameters & as follows: Pr(L1 ~ L2[L1, L2) Pr( L1 = L2} ~0(~l,~2),(~l(V1,Wl),... ,~n(Vrt,Wn)) Bayes' Rule now yields Pr(L1 .-~ L2160, 61,62,. •. , 6n) o¢ Pr((f0,61,... ,5,~1L1 ~-- L2)Pr(L1 = L2) The prior Pr(L1 ~ L2) is evaluated as before. 
We assume all 6i values are approximately indepen- dent, giving (5) n Pr(60, .., nlL1 = 1-I Pr( ,lL1 = L2) i=0 The same dynamic programming optimization can then be used. However, the computation and memory costs grow linearly with the number of lexical cues. This may not seem expensive until one considers that the pure length-based method only uses resources equivalent to that of a single lexical cue. It is in fact important to choose as few lexical cues as possible to achieve the desired accuracy. Given the need to minimize the number of lex- ical cues chosen, two factors become important. First, a lexical cue should be highly reliable, so that violations, which waste the additional com- putation, happen only rarely. Second, the chosen lexical cues should occur frequently, since comput- ing the optimization over many zero counts is not useful. In general, these factors are quite domain- specific, so lexical cues must be chosen for the par- ticular corpus at hand. Note further that when these conditions are met, the exact probability dis- tribution for the lexical 6/ parameters does not have much influence on the preferred alignment. The bilingual correspondence lexicons we have employed are shown in figure 5. These lexical items are quite common in the LegCo domain. Items like "C.B.E." stand for honorific titles such as "Commander of the British Empire"; the other cues are self-explanatory. The cues nearly always appear 14o-1 and the differences 6/therefore have 85 governor f~ C.B.E. C.B.E. J.B.E. J.B.E. L.V.O. L.V.O. Q.C. March June September December Wednesday Saturday Q.C. C.M.G. C.M.G. I.S.O. I.S.O. J.P. J.P. K.B.E. K.B.E. O.B.E. M.B.E. January April July O.B.E. February May August November M.B.E. October Monday Thursday Sunday Tuesday Friday Figure 5: Lexicons employed for paragraph (top) and sentence (bottom) alignment. a mean of zero. 
Given the relative unimportance of the exact distributions, all were simply assumed to be normally distributed with a variance of 0.07 instead of sampling each parameter individually. This variance is fairly sharp, but nonetheless, con- servatively reflects a lower reliability than most of the cues actually possess. Using the lexical cue extensions, the Type I results on the same test file rise to 92.1% of true L1 ~ L2 pairs correctly identified, as compared to 86.4% for the pure length-based method. The im- provement is entirely in the introductory session header. Without the header, the rate is 95.0% as compared to 95.2% earlier (the discrepancy is in- significant and is due to somewhat arbitrary deci- sions made on anomolous regions). Again, caution should be exercised in interpreting these percent- ages. By the alternative Type II measure, 96.1% of the output 1-for-1 pairs were true matches, compared to 86.1% using the pure length-based method. Again, there is an insignificant drop when the header is discarded, in this case from 96.3% down to 95.8%. CONCLUSION Of our raw corpus data, we have currently aligned approximately 3.5Mb of combined English and Chinese texts. This has yielded 10,423 pairs clas- sifted as 1-for-l, which we are using to extract more refined information. This data represents over 0.217 million English words (about 1.269Mb) plus the corresponding Chinese text (0.659Mb). To our knowledge, this is the first large-scale empirical demonstration that a pure length-based method can yield high accuracy sentence align- ments between parallel texts in Indo-European and entirely dissimilar non-alphabetic, non-Indo- European languages. We are encouraged by the results and plan to expand our program in this direction. We have also obtained highly promising im- provements by hybridizing lexical and length- based alignment methods within a common sta- tistical framework. 
Though they are particularly useful for non-alphabetic languages where character length is not as discriminating a feature, we believe improvements will result even when applied to alphabetic languages. ACKNOWLEDGEMENTS I am indebted to Bill Gale for helpful clarifying discussions, Xuanyin Xia and Wing Hong Chan for assistance with conversion of corpus materials, as well as Graeme Hirst and Linda Peto. REFERENCES BROWN, PETER F., JENNIFER C. LAI, & ROBERT L. MERCER. 1991. Aligning sentences in parallel corpora. In Proceedings of the 29th Annual Conference of the Association for Computational Linguistics, 169-176, Berkeley. CATIZONE, ROBERTA, GRAHAM RUSSELL, & SUSAN WARWICK. 1989. Deriving translation data from bilingual texts. In Proceedings of the First International Acquisition Workshop, Detroit. CHEN, STANLEY F. 1993. Aligning sentences in bilingual corpora using lexical information. In Proceedings of the 31st Annual Conference of the Association for Computational Linguistics, 9-16, Columbus, OH. CHURCH, KENNETH W. 1993. Char-align: A program for aligning parallel texts at the character level. In Proceedings of the 31st Annual Conference of the Association for Computational Linguistics, 1-8, Columbus, OH. DAGAN, IDO, KENNETH W. CHURCH, & WILLIAM A. GALE. 1993. Robust bilingual word alignment for machine aided translation. In Proceedings of the Workshop on Very Large Corpora, 1-8, Columbus, OH. FUNG, PASCALE & KENNETH W. CHURCH. 1994. K-vec: A new approach for aligning parallel texts. In Proceedings of the Fifteenth International Conference on Computational Linguistics, Kyoto. To appear. FUNG, PASCALE & DEKAI WU. 1994. Statistical augmentation of a Chinese machine-readable dictionary. In Proceedings of the Second Annual Workshop on Very Large Corpora, Kyoto. To appear. GALE, WILLIAM A. & KENNETH W. CHURCH. 1991. A program for aligning sentences in bilingual corpora.
In Proceedings of the 29th Annual Conference of the Association for Computational Linguistics, 177-184, Berkeley. KAY, MARTIN & M. RÖSCHEISEN. 1988. Text-translation alignment. Technical Report P90-00143, Xerox Palo Alto Research Center. SPERBERG-MCQUEEN, C. M. & LOU BURNARD, 1992. Guidelines for electronic text encoding and interchange. Version 2 draft.
1994
DECISION LISTS FOR LEXICAL AMBIGUITY RESOLUTION: Application to Accent Restoration in Spanish and French David Yarowsky* Department of Computer and Information Science University of Pennsylvania Philadelphia, PA 19104 yarowsky@unagi.cis.upenn.edu Abstract This paper presents a statistical decision procedure for lexical ambiguity resolution. The algorithm exploits both local syntactic patterns and more distant collocational evidence, generating an efficient, effective, and highly perspicuous recipe for resolving a given ambiguity. By identifying and utilizing only the single best disambiguating evidence in a target context, the algorithm avoids the problematic complex modeling of statistical dependencies. Although directly applicable to a wide class of ambiguities, the algorithm is described and evaluated in a realistic case study, the problem of restoring missing accents in Spanish and French text. Current accuracy exceeds 99% on the full task, and typically is over 90% for even the most difficult ambiguities. INTRODUCTION This paper presents a general-purpose statistical decision procedure for lexical ambiguity resolution based on decision lists (Rivest, 1987). The algorithm considers multiple types of evidence in the context of an ambiguous word, exploiting differences in collocational distribution as measured by log-likelihoods. Unlike standard Bayesian approaches, however, it does not combine the log-likelihoods of all available pieces of contextual evidence, but bases its classifications solely on the single most reliable piece of evidence identified in the target context. Perhaps surprisingly, this strategy appears to yield the same or even slightly better precision than the combination of evidence approach when trained on the same features. It also brings with it several additional advantages, the greatest of which is the ability to include multiple, highly non-independent sources of evidence without complex modeling of dependencies.
Some other advantages are significant simplicity and ease of implementation, transparent understandability of the resulting decision list, and easy adaptability to new domains. The particular domain chosen here as a case study is the problem of restoring missing accents1 to Spanish and French text. Because it requires the resolution of both semantic and syntactic ambiguity, and offers an objective ground truth for automatic evaluation, it is particularly well suited for demonstrating and testing the capabilities of the given algorithm. It is also a practical problem with immediate application.

*This research was supported by an NDSEG Fellowship, ARPA grant N00014-90-J-1863 and ARO grant DAAL 03-89-C0031 PRI. The author is also affiliated with the Linguistics Research Department of AT&T Bell Laboratories, and greatly appreciates the use of its resources in support of this work. He would like to thank Jason Eisner, Libby Levison, Mark Liberman, Mitch Marcus, Joseph Rosenzweig and Mark Zeren for their valuable feedback.

PROBLEM DESCRIPTION

The general problem considered here is the resolution of lexical ambiguity, both syntactic and semantic, based on properties of the surrounding context. Accent restoration is merely an instance of a closely-related class of problems including word-sense disambiguation, word choice selection in machine translation, homograph and homophone disambiguation, and capitalization restoration. The given algorithm may be used to solve each of these problems, and has been applied without modification to the case of homograph disambiguation in speech synthesis (Sproat, Hirschberg and Yarowsky, 1992).

It may not be immediately apparent to the reader why this set of problems forms a natural class, similar in origin and solvable by a single type of algorithm. In each case it is necessary to disambiguate two or more semantically distinct word-forms which have been conflated into the same representation in some medium.
In the prototypical instance of this class, word-sense disambiguation, such distinct semantic concepts as river bank, financial bank and to bank an airplane are conflated in ordinary text. Word associations and syntactic patterns are sufficient to identify and label the correct form. In homophone disambiguation, distinct semantic concepts such as ceiling and sealing have also become represented by the same ambiguous form, but in the medium of speech and with similar disambiguating clues.

Capitalization restoration is a similar problem in that distinct semantic concepts such as AIDS/aids (disease or helpful tools) and Bush/bush (president or shrub) are ambiguous, but in the medium of all-capitalized (or case-free) text, which includes titles and the beginning of sentences. Note that what was once just a capitalization ambiguity between Prolog (computer language) and prolog (introduction) is becoming a "sense" ambiguity since the computer language is now often written in lower case, indicating the fundamental similarity of these problems.

Accent restoration involves lexical ambiguity, such as between the concepts côte (coast) and côté (side), in textual mediums where accents are missing. It is traditional in Spanish and French for diacritics to be omitted from capitalized letters. This is particularly a problem in all-capitalized text such as headlines. Accents in on-line text may also be systematically stripped by many computational processes which are not 8-bit clean (such as some e-mail transmissions), and may be routinely omitted by Spanish and French typists in informal computer correspondence.

Missing accents may create both semantic and syntactic ambiguities, including tense or mood distinctions which may only be resolved by distant temporal markers or non-syntactic cues.

1For brevity, the term accent will typically refer to the general class of accents and other diacritics.
The most common accent ambiguity in Spanish is between the endings -o and -ó, such as in the case of completo vs. completó. This is a present/preterite tense ambiguity for nearly all -ar verbs, and very often also a part-of-speech ambiguity, as the -o form is frequently a noun as well. The second most common general ambiguity is between the past-subjunctive and future tenses of nearly all -ar verbs (eg: terminara vs. terminará), both of which are 3rd person singular forms. This is a particularly challenging class and is not readily amenable to traditional part-of-speech tagging algorithms such as local trigram-based taggers. Some purely semantic ambiguities include the nouns secretaria (secretary) vs. secretaría (secretariat), sabana (grassland) vs. sábana (bed sheet), and politica (female politician) vs. política (politics).

The distribution of ambiguity types in French is similar. The most common case is between -e and -é, which is both a past-participle/present tense ambiguity, and often a part-of-speech ambiguity (with nouns and adjectives) as well. Purely semantic ambiguities are more common than in Spanish, and include traité/traite (treaty/draft), marche/marché (step/market), and the côte example mentioned above.

Accent restoration provides several advantages as a case study for the explication and evaluation of the proposed decision-list algorithm. First, as noted above, it offers a broad spectrum of ambiguity types, both syntactic and semantic, and shows the ability of the algorithm to handle these diverse problems. Second, the correct accent pattern is directly recoverable: unlimited quantities of test material may be constructed by stripping the accents from correctly-accented text and then using the original as a fully objective standard for automatic evaluation. By contrast, in traditional word-sense disambiguation, hand-labeling training and test data is a laborious and subjective task.
Third, the task of restoring missing accents and resolving ambiguous forms shows considerable commercial applicability, both as a stand-alone application and as part of the front-end to NLP systems. There is also a large potential commercial market in its use in grammar and spelling correctors, and in aids for inserting the proper diacritics automatically when one types2. Thus while accent restoration may not be the prototypical member of the class of lexical-ambiguity resolution problems, it is an especially useful one for describing and evaluating a proposed solution to this class of problems.

PREVIOUS WORK

The problem of accent restoration in text has received minimal coverage in the literature, especially in English, despite its many interesting aspects. Most work in this area appears to be done in the form of in-house or commercial software, so for the most part the problem and its potential solutions are without comprehensive published analysis. The best treatment I've discovered is from Fernand Marty (1986, 1992), who for more than a decade has been painstakingly crafting a system which includes accent restoration as part of a comprehensive system of syntactic, morphological and phonetic analysis, with an intended application in French text-to-speech synthesis. He incorporates information extracted from several French dictionaries and uses basic collocational and syntactic evidence in hand-built rules and heuristics. While the scope and complexity of this effort is remarkable, this paper will focus on a solution to the problem which requires considerably less effort to implement.

The scope of work in lexical ambiguity resolution is very large. Thus in the interest of space, discussion will focus on the direct historic precursors and sources of inspiration for the approach presented here. The central tradition from which it emerges is that of the Bayesian classifier (Mosteller and Wallace, 1964).
This was expanded upon by (Gale et al., 1992), and in a class-based variant by (Yarowsky, 1992). Decision trees (Brown, 1991) have been usefully applied to word-sense ambiguities, and HMM part-of-speech taggers (Jelinek 1985, Church 1988, Merialdo 1990) have addressed the syntactic ambiguities presented here. Hearst (1991) presented an effective approach to modeling local contextual evidence, while Resnik (1993) gave a classic treatment of the use of word classes in selectional constraints. An algorithm for combining syntactic and semantic evidence in lexical ambiguity resolution has been realized in (Chang et al., 1992). A particularly successful algorithm for integrating a wide diversity of evidence types using error-driven learning was presented in Brill (1993). While it has been applied primarily to syntactic problems, it shows tremendous promise for equally impressive results in the area of semantic ambiguity resolution.

The formal model of decision lists was presented in (Rivest, 1987). I have restricted feature conjuncts to a much narrower complexity than allowed in the original model - namely to word and class trigrams. The current approach was initially presented in (Sproat et al., 1992), applied to the problem of homograph resolution in text-to-speech synthesis. The algorithm achieved 97% mean accuracy on a disambiguation task involving a sample of 13 homographs3.

2Such a tool would be particularly useful for typing Spanish or French on Anglo-centric computer keyboards, where entering accents and other diacritic marks every few keystrokes can be laborious.

ALGORITHM

Step 1: Identify the Ambiguities in Accent Pattern

Most words in Spanish and French exhibit only one accent pattern. Basic corpus analysis will indicate which is the most common pattern for each word, and may be used in conjunction with or independent of dictionaries and other lexical resources.
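In outline, this corpus analysis can be sketched as below. This is a minimal illustration, not the paper's implementation; the function names and the toy token list are my own.

```python
import unicodedata
from collections import Counter, defaultdict

def strip_accents(word):
    # Decompose characters and drop combining marks: "côté" -> "cote".
    return "".join(c for c in unicodedata.normalize("NFD", word)
                   if unicodedata.category(c) != "Mn")

def accent_pattern_table(tokens):
    # Step 1: histogram of observed accent patterns, keyed by de-accented form.
    table = defaultdict(Counter)
    for tok in tokens:
        table[strip_accents(tok)][tok] += 1
    return table

tokens = ["côté", "côté", "côte", "cote", "coût"]
table = accent_pattern_table(tokens)
# "cote" is ambiguous (several observed patterns); "cout" has a single pattern
```

Forms whose table entry contains a single pattern need no further processing; the remaining steps apply only to the ambiguous forms.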
The initial step is to take a histogram of a corpus with accents and diacritics retained, and compute a table of accent pattern distributions as follows:

De-accented Form  Accent Pattern  %     Number
cesse             cesse           53%   669
                  cessé           47%   593
cout              coût            100%  330
couta             coûta           100%  41
coute             coûté           53%   107
                  coûte           47%   96
cote              côté            69%   2645
                  côte            28%   1040
                  cote            3%    99
                  coté            <1%   15
cotiere           côtière         100%  296

For words with multiple accent patterns, steps 2-5 are applied.

Step 2: Collect Training Contexts

For a particular case of accent ambiguity identified above, collect ±k words of context around all occurrences in the corpus, label the concordance line with the observed accent pattern, and then strip the accents from the data. This will yield a training set such as the following:

Pattern   Context
(1) côté  du laisser de cote faute de temps
(1) côté  appeler l' autre cote de l' atlantique
(1) côté  passe de notre cote de la frontiere
(2) côte  vivre sur notre cote ouest toujours verte
(2) côte  creer sur la cote du labrador des
(2) côte  travaillaient cote a cote , ils avaient

3Baseline accuracy for this data (using the most common pronunciation) is 67%.

The training corpora used in this experiment were the Spanish AP Newswire (1991-1993, 49 million words),
The following are the initial types of collocations con- sidered: • Word immediately to the right (+1 W) • Word immediately to the left (-1 W) • Word found in =t=k word window 5 (+k W) • Pair of words at offsets -2 and -1 • Pair of words at offsets -1 and +1 • Pair of words at offsets +1 and +2 For the two major accent patterns of the French word cote, below is a small sample of these distributions for several types of collocations: Position -1 w +lw +lw,+2w -2w,-lw +k w +k w +k w Collocation c6te cSt~ du cote 0 536 la cote 766 1 un cote 0 216 notre cote 10 70 cole ouest 288 1 cole est 174 3 cote du 55 156 cote du gouvernement 0 62 cote a cole 23 0 poisson (in +k words) 20 0 ports (in =t=k words) 22 0 opposition (in +k words ) 0 39 This core set of evidence presupposes no language- specific knowledge. However, if additional language re- sources are available, it may be desirable to include a larger feature set. For example, if lemmatization proce- dures are available, collocational measures for morpho- logical roots will tend to yield more succinct and gener- alizable evidence than measuring the distributions for each of the inflected forms. If part-of-speech informa- tion is available in a lexicon, it is useful to compute the 4The term collocation is used here in its broad sense, meaning words appearing adjacent to or near each other (literally, in the same location), and does not imply only idiomatic or non-compositional associations. SThe optimal value of k is sensitive to the type of ambi- guity. Semantic or topic-based ambiguities warrant a larger window (k ~ 20-50), while more local syntactic ambiguities warrant a smaller window (k ~ 3 or 4) 90 distributions for part-of-speech bigrams and trigrams as above. Note that it's not necessary to determine the actual parts-of-speech of words in context; using only the most likely part of speech or a set of all possibil- ities will produce adequate, if somewhat diluted, dis- tributional evidence. 
Similarly, it is useful to compute collocational statistics for arbitrary word classes, such as the class WEEKDAY = {domingo, lunes, martes, ...}. Such classes may cover many types of associations, and need not be mutually exclusive.

For the French experiments, no additional linguistic knowledge or lexical resources were used. The decision lists were trained solely on raw word associations without additional patterns based on part of speech, morphological analysis or word class. Hence the reported performance is representative of what may be achieved with a rapid, inexpensive implementation based strictly on the distributional properties of raw text.

For the Spanish experiments, a richer set of evidence was utilized. Use of a morphological analyzer (developed by Tzoukermann and Liberman (1990)) allowed distributional measures to be computed for associations of lemmas (morphological roots), improving generalization to different inflected forms not observed in the training data. Also, a basic lexicon with possible parts of speech (augmented by the morphological analyzer) allowed adjacent part-of-speech sequences to be used as disambiguating evidence. A relatively coarse level of analysis (e.g. NOUN, ADJECTIVE, SUBJECT-PRONOUN, ARTICLE, etc.), augmented with independently modeled features representing gender, person, and number, was found to be most effective. However, when a word was listed with multiple parts-of-speech, no relative frequency distribution was available. Such words were given a part-of-speech tag consisting of the union of the possibilities (eg ADJECTIVE-NOUN), as in Kupiec (1989). Thus sequences of pure part-of-speech tags were highly reliable, while the potential sources of noise were isolated and modeled separately. In addition, several word classes such as WEEKDAY and MONTH were defined, primarily focusing on time words because so many accent ambiguities involve tense distinctions.
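The core positional features of Step 3 can be sketched as follows. This is an illustrative reading of the feature inventory, with my own function name and toy context; it covers only the raw-word features, not the lemma, part-of-speech, or word-class extensions just described.

```python
def extract_features(words, i, k=4):
    # Collocational features for the ambiguous token words[i] (Step 3):
    # adjacent words, adjacent word pairs, and single words in a +/-k window.
    feats = []
    if i >= 1:
        feats.append(("-1w", words[i - 1]))
    if i + 1 < len(words):
        feats.append(("+1w", words[i + 1]))
    if i >= 2:
        feats.append(("-2w,-1w", (words[i - 2], words[i - 1])))
    if i >= 1 and i + 1 < len(words):
        feats.append(("-1w,+1w", (words[i - 1], words[i + 1])))
    if i + 2 < len(words):
        feats.append(("+1w,+2w", (words[i + 1], words[i + 2])))
    for j in range(max(0, i - k), min(len(words), i + k + 1)):
        if j != i:
            feats.append(("kw", words[j]))  # word anywhere in the +/-k window
    return feats

ctx = "vivre sur notre cote ouest toujours verte".split()
feats = extract_features(ctx, 3, k=2)
```

Counting each feature separately for pattern 1 and pattern 2 over the training contexts yields distributions like the cote table above.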
To build a full part-of-speech tagger for Spanish would be quite costly (and require special tagged corpora). The current approach uses just the information available in dictionaries, exploiting only that which is useful for the accent restoration task. Were dictionaries not available, a productive approximation could have been made using the associational distributions of suffixes (such as -aba, -aste, -amos) which are often satisfactory indicators of part of speech in morphologically rich languages such as Spanish.

The use of the word-class and part-of-speech data is illustrated below, with the example of distinguishing terminara/terminará (a subjunctive/future tense ambiguity):

Collocation                terminara  terminará
PREPOSITION que terminara  31         0
de que terminara           15         0
para que terminara         14         0
NOUN que terminara         0          13
carrera que terminara      0          3
reunion que terminara      0          2
acuerdo que terminara      0          2
que terminara              42         37
WEEKDAY (within ±k words)  0          23
domingo (within ±k words)  0          10
viernes (within ±k words)  0          4

Step 4: Sort by Log-Likelihood into Decision Lists

The next step is to compute the ratio called the log-likelihood:

  Abs(Log( Pr(Accent_Pattern_1 | Collocation_i) / Pr(Accent_Pattern_2 | Collocation_i) ))

The collocations most strongly indicative of a particular pattern will have the largest log-likelihood. Sorting by this value will list the strongest and most reliable evidence first6.
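The ranking computation can be sketched as below. This is a minimal illustration with my own function name and toy counts; it uses the simple additive-constant smoothing described in footnote 6 rather than the held-out smoothing also mentioned there.

```python
import math

def build_decision_list(counts, alpha=0.15):
    # Step 4: counts maps feature -> (count_pattern1, count_pattern2).
    # Score each feature by the absolute smoothed log-likelihood ratio
    # and sort so the most reliable evidence comes first.
    entries = []
    for feat, (n1, n2) in counts.items():
        score = abs(math.log((n1 + alpha) / (n2 + alpha)))
        cls = 1 if n1 >= n2 else 2
        entries.append((score, feat, cls))
    entries.sort(key=lambda e: e[0], reverse=True)
    return entries

counts = {
    ("-1w", "la"): (766, 1),   # "la cote" -> pattern 1 (côte)
    ("-1w", "du"): (0, 536),   # "du cote" -> pattern 2 (côté)
    ("+1w", "du"): (55, 156),  # "cote du" -> weaker evidence for pattern 2
}
dlist = build_decision_list(counts)
```

Note that the smoothing constant matters most for the 0-count cells, which are common and would otherwise produce infinite ratios.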
Evidence sorted in the above manner will yield a decision list like the following, highly abbreviated example7:

LogL   Evidence                       Classification
8.28   PREPOSITION que terminara  ⇒  terminara
†7.24  de que terminara           ⇒  terminara
†7.14  para que terminara         ⇒  terminara
6.87   y terminara                ⇒  terminará
6.64   WEEKDAY (within ±k words)  ⇒  terminará
5.82   NOUN que terminara         ⇒  terminará
†5.45  domingo (within ±k words)  ⇒  terminará

The resulting decision list is used to classify new examples by identifying the highest line in the list that matches the given context and returning the indicated classification. See Step 7 for a full description of this process.

6Problems arise when an observed count is 0. Clearly the probability of seeing côté in the context of poisson is not 0, even though no such collocation was observed in the training data. Finding a more accurate probability estimate depends on several factors, including the size of the training sample, nature of the collocation (adjacent bigrams or wider context), our prior expectation about the similarity of contexts, and the amount of noise in the training data. Several smoothing methods have been explored here, including those discussed in (Gale et al., 1992). In one technique, all observed distributions with the same 0-denominator raw frequency ratio (such as 2/0) are taken collectively, the average agreement rate of these distributions with additional held-out training data is measured, and from this a more realistic estimate of the likelihood ratio (e.g. 1.8/0.2) is computed. However, in the simplest implementation, satisfactory results may be achieved by adding a small constant α to the numerator and denominator, where α is selected empirically to optimize classification performance. For this data, relatively small α (between 0.1 and 0.25) tended to be effective, while noisier training data warrant larger α.

7Entries marked with † are pruned in Step 5, below.
Step 5: Optional Pruning and Interpolation

A potentially useful optional procedure is the interpolation of log-likelihood ratios between those computed from the full data set (the global probabilities) and those computed from the residual training data left at a given point in the decision list when all higher-ranked patterns failed to match (i.e. the residual probabilities). The residual probabilities are more relevant, but since the size of the residual training data shrinks at each level in the list, they are often much more poorly estimated (and in many cases there may be no relevant data left in the residual on which to compute the distribution of accent patterns for a given collocation). In contrast, the global probabilities are better estimated but less relevant. A reasonable compromise is to interpolate between the two, where the interpolated estimate is β × global + γ × residual. When the residual probabilities are based on a large training set and are well estimated, γ should dominate, while in cases where the relevant residual is small or non-existent, β should dominate. If always β = 0 and γ = 1 (exclusive use of the residual), the result is a degenerate (strictly right-branching) decision tree with severe sparse data problems. Alternately, if one assumes that likelihood ratios for a given collocation are functionally equivalent at each line of a decision list, then one could exclusively use the global (always β = 1 and γ = 0). This is clearly the easiest and fastest approach, as probability distributions do not need to be recomputed as the list is constructed.

Which approach is best? Using only the global probabilities does surprisingly well, and the results cited here are based on this readily replicatable procedure. The reason is grounded in the strong tendency of a word to exhibit only one sense or accent pattern per collocation (discussed in Step 6 and (Yarowsky, 1993)). Most classifications are based on a x vs.
0 distribution, and while the magnitude of the log-likelihood ratios may decrease in the residual, they rarely change sign. There are cases where this does happen and it appears that some interpolation helps, but for this problem the relatively small difference in performance does not seem to justify the greatly increased computational cost.

Two kinds of optional pruning can also increase the efficiency of the decision lists. The first handles the problem of "redundancy by subsumption," which is clearly visible in the example decision lists above (in WEEKDAY and domingo). When lemmas and word-classes precede their member words in the list, the latter will be ignored and can be pruned. If a bigram is unambiguous, probability distributions for dependent trigrams will not even be generated, since they will provide no additional information.

The second, pruning in a cross-validation phase, compensates for the minimal observed over-modeling of the data. Once a decision list is built it is applied to its own training set plus some held-out cross-validation data (not the test data). Lines in the list which contribute to more incorrect classifications than correct ones are removed. This also indirectly handles problems that may result from the omission of the interpolation step. If space is at a premium, lines which are never used in the cross-validation step may also be pruned. However, useful information is lost here, and words pruned in this way may have contributed to the classification of testing examples. A 3% drop in performance is observed, but an over 90% reduction in space is realized. The optimum pruning strategy is subject to cost-benefit analysis. In the results reported below, all pruning except this final space-saving step was utilized.
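The cross-validation pruning step can be sketched as follows. This is a minimal illustration under my own data layout: each held-out example is a (feature set, true class) pair, classified by its single highest-ranked matching entry.

```python
def prune_by_cross_validation(decision_list, held_out):
    # Step 5 (second pruning): apply the list to held-out data and drop
    # entries that contribute more incorrect classifications than correct
    # ones.  Each example is classified by the first (highest-ranked)
    # matching entry, which is the entry credited or penalized.
    net = {feat: 0 for _, feat, _ in decision_list}
    for features, truth in held_out:
        for _, feat, cls in decision_list:
            if feat in features:
                net[feat] += 1 if cls == truth else -1
                break
    return [entry for entry in decision_list if net[entry[1]] >= 0]

dlist = [(8.0, "A", 1), (6.0, "B", 2), (5.0, "C", 1)]
held_out = [({"A"}, 1), ({"B"}, 1), ({"B"}, 1), ({"C", "B"}, 2)]
pruned = prune_by_cross_validation(dlist, held_out)
# entry "B" makes more errors than correct calls on held-out data, so it is dropped
```

Entries that never fire on the held-out data keep a net score of zero and survive; dropping them as well is the optional space-saving variant discussed above.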
Step 6: Train Decision Lists for General Classes of Ambiguity

For many similar types of ambiguities, such as the Spanish subjunctive/future distinction between -ara and -ará, the decision lists for individual cases will be quite similar and use the same basic evidence for the classification (such as presence of nearby time adverbials). It is useful to build a general decision list for all -ara/-ará ambiguities. This also tends to improve performance on words for which there is inadequate training data to build a full individual decision list. The process for building this general class disambiguator is basically identical to that described in Steps 2-5 above, except that in Step 2, training contexts are pooled for all individual instances of the class (such as all -ara/-ará ambiguities). It is important to give each individual -ara word roughly equal representation in the training set, however, lest the list model the idiosyncrasies of the most frequent class members, rather than identify the shared common features representative of the full class.

In Spanish, decision lists are trained for the general ambiguity classes including -o/-ó, -e/-é, -ara/-ará, and -aran/-arán. For each ambiguous word belonging to one of these classes, the accuracy of the word-specific decision list is compared with the class-based list. If the class's list performs adequately it is used. Words with idiosyncrasies that are not modeled well by the class's list retain their own word-specific decision list.

Step 7: Using the Decision Lists

Once these decision lists have been created, they may be used in real time to determine the accent pattern for ambiguous words in new contexts.

At run time, each word encountered in a text is looked up in a table. If the accent pattern is unambiguous, as determined in Step 1, the correct pattern is printed.
Ambiguous words have a table of the possible accent patterns and a pointer to a decision list, either for that specific word or its ambiguity class (as determined in Step 6). This given list is searched for the highest ranking match in the word's context, and a classification number is returned, indicating the most likely of the word's accent patterns given the context8.

8If all entries in a decision list fail to match in a particular new context, a final entry called DEFAULT is used; it indicates the most likely accent pattern in cases where nothing matches.

From a statistical perspective, the evidence at the top of this list will most reliably disambiguate the target word. Given a word in a new context to be assigned an accent pattern, if we may only base the classification on a single line in the decision list, it should be the highest ranking pattern that is present in the target context. This is uncontroversial, and is solidly based in Bayesian decision theory.

The question, however, is what to do with the less-reliable evidence that may also be present in the target context. The common tradition is to combine the available evidence in a weighted sum or product. This is done by Bayesian classifiers, neural nets, IR-based classifiers and N-gram part-of-speech taggers. The system reported here is unusual in that it does no such combination. Only the single most reliable piece of evidence matched in the target context is used. For example, in a context of cote containing poisson, ports and atlantique, if the adjacent feminine article la cote (the coast) is present, only this best evidence is used and the supporting semantic information ignored. Note that if the masculine article le cote (the side) were present in a similar maritime context, the most reliable evidence (gender agreement) would override the semantic clues which would otherwise dominate if all evidence was combined. If no gender agreement constraint were present in that context, the first matching semantic evidence would be used.
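The run-time procedure of Step 7 can be sketched as below. This is an illustrative reading, with my own function name and toy tables; the pattern lists are assumed to be ordered most-common-first so that the fallback matches the DEFAULT entry described in footnote 8.

```python
def restore_accent(word, features, patterns, decision_lists):
    # Step 7: look the de-accented word up; unambiguous words get their
    # single pattern directly.  Ambiguous words are classified by the
    # single highest-ranked matching entry in their decision list; if
    # nothing matches, fall back to the most common pattern (DEFAULT).
    forms = patterns[word]
    if len(forms) == 1:
        return forms[0]
    for _, feat, cls in decision_lists[word]:
        if feat in features:
            return forms[cls - 1]
    return forms[0]  # DEFAULT: most common accent pattern

patterns = {"cout": ["coût"], "cote": ["côte", "côté"]}
decision_lists = {"cote": [(8.2, ("-1w", "du"), 2), (6.5, ("-1w", "la"), 1)]}
```

Only the first matching entry is consulted; any lower-ranked evidence also present in the context is deliberately ignored, which is the design choice motivated in the following paragraphs.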
There are several motivations for this approach. The first is that combining all available evidence rarely produces a different classification than just using the single most reliable evidence, and when these differ it is as likely to hurt as to help. In a study comparing results for 20 words in a binary homograph disambiguation task, based strictly on words in local (±4 word) context, the following differences were observed between an algorithm taking the single best evidence, and an otherwise identical algorithm combining all available matching evidence:9

Combining vs. Not Combining Probabilities
Agree    - Both classifications correct    92%
         - Both classifications incorrect  6%
Disagree - Single best evidence correct    1.3%
         - Combined evidence correct       0.7%
Total                                      100%

Of course this behavior does not hold for all classification tasks, but does seem to be characteristic of lexically-based word classifications. This may be explained by the empirical observation that in most cases, and with high probability, words exhibit only one sense in a given collocation (Yarowsky, 1993). Thus for this type of ambiguity resolution, there is no apparent detriment, and some apparent performance gain, from using only the single most reliable evidence in a classification. There are other advantages as well, including run-time efficiency and ease of parallelization. However, the greatest gain comes from the ability to incorporate multiple, non-independent information types in the decision procedure. As noted above, a given word in context (such as Castillos) may match several times in the decision list, once for its parts of speech, lemma, capitalized and capitalization-free forms, and possible word-classes as well. By only using one of these matches, the gross exaggeration of probability from combining all of these non-independent log-likelihoods is avoided. While these dependencies may be modeled and corrected for in Bayesian formalisms, it is difficult and costly to do so. Using only one log-likelihood ratio without combination frees the algorithm to include a wide spectrum of highly non-independent information without additional algorithmic complexity or performance loss.

9In cases of disagreement, using the single best evidence outperforms the combination of evidence 65% to 35%. This observed difference is 1.9 standard deviations greater than expected by chance and is statistically significant.

EVALUATION

Because we have only stripped accents artificially for testing purposes, and the "correct" patterns exist on-line in the original corpus, we can evaluate performance objectively and automatically. This contrasts with other classification tasks such as word-sense disambiguation and part-of-speech tagging, where at some point human judgements are required. Regrettably, however, there are errors in the original corpus, which can be quite substantial depending on the type of accent. For example, in the Spanish data, accents over the i (í) are frequently omitted; in a sample test 3.7% of the appropriate í accents were missing. Thus the following results must be interpreted as agreement rates with the corpus accent pattern; the true percent correct may be several percentage points higher.

The following table gives a breakdown of the different types of Spanish accent ambiguities, their relative frequency in the training corpus, and the algorithm's performance on each:10

Summary of Performance on Spanish:

Ambiguous Cases (18% of tokens):
Type                    Freq.  Agreement  Prior
-o/-ó                   81%    98%        86%
-ara/-ará, -aran/-arán  4%     92%        84%
Function Words          13%    98%        94%
Other                   2%     97%        95%
Total                          98%        93%

Unambiguous Cases (82% of tokens):
                               100%       100%

Overall Performance:
                               99.6%      98.7%

10The term prior is a measure of the baseline performance one would expect if the algorithm always chose the most common option.

As observed before, the prior probabilities in favor of the most common accent pattern are highly skewed, so one does reasonably well at this task by always using the most common pattern. But the error rate is still roughly 1 per every 75 words, which is unacceptably high. This algorithm reduces that error rate by over 65%. However, to get a better picture of the algorithm's performance, the following table gives a breakdown of results for a random set of the most problematic cases - words exhibiting the largest absolute number of the non-majority accent patterns. Collectively they constitute the most common potential sources of error.

Performance on Individual Spanish and French Ambiguities:

Pattern 1   Pattern 2   Agrmnt  Prior  N
Spanish:
anuncio     anunció     98.4%   57%    9459
registro    registró    98.4%   60%    2596
marco       marcó       98.2%   52%    2069
completo    completó    98.1%   54%    1701
retiro      retiró      97.5%   56%    3713
duro        duró        96.8%   52%    1466
paso        pasó        96.4%   50%    6383
regalo      regaló      90.7%   56%    280
terminara   terminará   82.9%   59%    218
llegara     llegará     78.4%   64%    860
deje        dejé        89.1%   68%    313
gane        gané        80.7%   60%    279
secretaria  secretaría  84.5%   52%    1065
seria       sería       97.7%   93%    1065
hacia       hacía       97.3%   91%    2483
esta        está        97.1%   61%    14140
mi          mí          93.7%   82%    1221
French:
cesse       cessé       97.7%   53%    1262
décidé      décide      96.5%   64%    3667
laisse      laissé      95.5%   50%    2624
commence    commencé    95.2%   54%    2105
côté        côte        98.1%   69%    3893
traité      traite      95.6%   71%    2865

Evaluation is based on the corpora described in the algorithm's Step 2. In all experiments, 4/5 of the data was used for training and the remaining 1/5 held out for testing.
More accurate measures of algorithm performance were obtained by repeating each experiment 5 times, using a different 1/5 of the data for each test, and averaging the results. Note that in every experiment, results were measured on independent test data not seen in the training phase.

It should be emphasized that the actual percent correct is higher than these agreement figures, due to errors in the original corpus. The relatively low agreement rate on words with accented i's (í) is a result of this. To study this discrepancy further, a human judge fluent in Spanish determined whether the corpus or the decision list algorithm was correct in two cases of disagreement. For the ambiguity case of mi/mí, the corpus was incorrect in 46% of the disputed tokens. For the ambiguity anuncio/anunció, the corpus was incorrect in 56% of the disputed tokens. I hope to obtain a more reliable source of test material. However, it does appear that in some cases the system's precision may rival that of the AP Newswire's Spanish writers and translators.

DISCUSSION

The algorithm presented here has several advantages which make it suitable for general lexical disambiguation tasks that require integrating both semantic and syntactic distinctions. The incorporation of word (and optionally part-of-speech) trigrams allows the modeling of many local syntactic constraints, while collocational evidence in a wider context allows for more semantic distinctions. A key advantage of this approach is that it allows the use of multiple, highly non-independent evidence types (such as root form, inflected form, part of speech, thesaurus category or application-specific clusters) and does so in a way that avoids the complex modeling of statistical dependencies. This allows the decision lists to find the level of representation that best matches the observed probability distributions.
It is a kitchen-sink approach of the best kind - throw in many types of potentially relevant features and watch what floats to the top. While there are certainly other ways to combine such evidence, this approach has many advantages. In particular, precision seems to be at least as good as that achieved with Bayesian methods applied to the same evidence. This is not surprising, given the observation in (Leacock et al., 1993) that widely divergent sense-disambiguation algorithms tend to perform roughly the same given the same evidence. The distinguishing criteria therefore become:

• How readily can new and multiple types of evidence be incorporated into the algorithm?
• How easy is the output to understand?
• Can the resulting decision procedure be easily edited by hand?
• Is it simple to implement and replicate, and can it be applied quickly to new domains?

The current algorithm rates very highly on all these standards of evaluation, especially relative to some of the impenetrable black boxes produced by many machine learning algorithms. Its output is highly perspicuous: the resulting decision list is organized like a recipe, with the most useful evidence first and in highly readable form. The generated decision procedure is also easy to augment by hand, changing or adding patterns to the list. The algorithm is also extremely flexible - it is quite straightforward to use any new feature for which a probability distribution can be calculated. This is a considerable strength relative to other algorithms which are more constrained in their ability to handle diverse types of evidence. In a comparative study (Yarowsky, 1994), the decision list algorithm outperformed both an N-gram tagger and a Bayesian classifier, primarily because it could effectively integrate a wider range of available evidence types than its competitors.
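The recipe-like decision procedure described above is easy to render in code. The following is an illustrative sketch of decision-list training and first-match classification, not the paper's actual implementation; the feature encoding, the smoothing constant, and the restriction to binary ambiguities are my own simplifying assumptions.

```python
import math
from collections import defaultdict

def train_decision_list(examples, alpha=0.1):
    """Toy decision-list training. `examples` is a list of (features, label)
    pairs, where `features` is a set of context patterns (collocations,
    trigrams, word classes, ...) and `label` is one of two accent patterns.
    Returns rules sorted by strength of evidence."""
    counts = defaultdict(lambda: defaultdict(float))
    labels = set()
    for features, label in examples:
        labels.add(label)
        for f in features:
            counts[f][label] += 1
    assert len(labels) == 2, "this sketch handles binary ambiguities only"
    l1, l2 = sorted(labels)
    rules = []
    for f, c in counts.items():
        # Smoothed log-likelihood ratio; alpha avoids log(0) for unseen pairs.
        llr = math.log((c[l1] + alpha) / (c[l2] + alpha))
        rules.append((abs(llr), f, l1 if llr > 0 else l2))
    rules.sort(reverse=True)  # strongest evidence first
    return rules

def classify(rules, features, default):
    """Use only the single highest-ranked matching rule (first match),
    sidestepping the non-independence of overlapping evidence types."""
    for _, f, label in rules:
        if f in features:
            return label
    return default
```

Because classification stops at the first matching rule, adding a new, highly correlated evidence type changes nothing but the candidate pool; no dependency modeling is needed.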
Although a part-of-speech tagger exploiting gender and number agreement might resolve many accent ambiguities, such constraints will fail to apply in many cases and are difficult to apply generally, given the problem of identifying agreement relationships. It would also be at considerable cost, as good taggers or parsers typically involve several person-years of development, plus often expensive proprietary lexicons and hand-tagged training corpora. In contrast, the current algorithm could be applied quite quickly and cheaply to this problem. It was originally developed for homograph disambiguation in text-to-speech synthesis (Sproat et al., 1992), and was applied to the problem of accent restoration with virtually no modifications in the code. It was applied to a new language, French, in a matter of days and with no special lexical resources or linguistic knowledge, basing its performance upon a strictly self-organizing analysis of the distributional properties of French text. The flexibility and generality of the algorithm and its potential feature set make it readily applicable to other problems of recovering lost information from text corpora; I am currently pursuing its application to such problems as capitalization restoration and the task of recovering vowels in Hebrew text.

CONCLUSION

This paper has presented a general-purpose algorithm for lexical ambiguity resolution that is perspicuous, easy to implement, flexible, and quickly applicable to new domains. It incorporates class-based models at several levels, and while it requires no special lexical resources or linguistic knowledge, it effectively and transparently incorporates those which are available. It successfully integrates part-of-speech patterns with local and longer-distance collocational information to resolve both semantic and syntactic ambiguities.
Finally, although the case study of accent restoration in Spanish and French was chosen for its diversity of ambiguity types and plentiful source of data for fully automatic and objective evaluation, the algorithm solves a worthwhile problem in its own right with promising commercial potential.

References

[1] Brill, Eric, "A Corpus-Based Approach to Language Learning," Ph.D. Thesis, University of Pennsylvania, 1993.
[2] Brown, Peter, Stephen Della Pietra, Vincent Della Pietra, and Robert Mercer, "Word Sense Disambiguation using Statistical Methods," Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, pp. 264-270, 1991.
[3] Chang, Jing-Shin, Yin-Fen Luo and Keh-Yih Su, "GPSM: A Generalized Probabilistic Semantic Model for Ambiguity Resolution," in Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics, pp. 177-184, 1992.
[4] Church, K.W., "A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text," in Proceedings of the Second Conference on Applied Natural Language Processing, ACL, pp. 136-143, 1988.
[5] Gale, W., K. Church, and D. Yarowsky, "A Method for Disambiguating Word Senses in a Large Corpus," Computers and the Humanities, 26, pp. 415-439, 1992.
[6] Hearst, Marti, "Noun Homograph Disambiguation Using Local Context in Large Text Corpora," in Using Corpora, University of Waterloo, Waterloo, Ontario, 1991.
[7] Jelinek, F., "Markov Source Modeling of Text Generation," in Impact of Processing Techniques on Communication, J. Skwirzinski, ed., Dordrecht, 1985.
[8] Kupiec, Julian, "Probabilistic Models of Short and Long Distance Word Dependencies in Running Text," in Proceedings, DARPA Speech and Natural Language Workshop, Philadelphia, February, pp. 290-295, 1989.
[9] Leacock, Claudia, Geoffrey Towell and Ellen Voorhees, "Corpus-Based Statistical Sense Resolution," in Proceedings, ARPA Human Language Technology Workshop, 1993.
[10] Marty, Fernand, "Trois systèmes informatiques de transcription phonétique et graphémique," in Le Français Moderne, pp. 179-197, 1992.
[11] Marty, F. and R.S. Hart, "Computer Program to Transcribe French Text into Speech: Problems and Suggested Solutions," Technical Report No. LLL-T-6-85, Language Learning Laboratory, University of Illinois, Urbana, Illinois, 1985.
[12] Merialdo, B., "Tagging Text with a Probabilistic Model," in Proceedings of the IBM Natural Language ITL, Paris, France, pp. 161-172, 1990.
[13] Mosteller, Frederick, and David Wallace, Inference and Disputed Authorship: The Federalist, Addison-Wesley, Reading, Massachusetts, 1964.
[14] Resnik, Philip, "Selection and Information: A Class-Based Approach to Lexical Relationships," Ph.D. Thesis, University of Pennsylvania, 1993.
[15] Rivest, R. L., "Learning Decision Lists," in Machine Learning, 2, pp. 229-246, 1987.
[16] Sproat, Richard, Julia Hirschberg and David Yarowsky, "A Corpus-based Synthesizer," in Proceedings, International Conference on Spoken Language Processing, Banff, Alberta, October 1992.
[17] Tzoukermann, Evelyne and Mark Liberman, "A Finite-state Morphological Processor for Spanish," in Proceedings, COLING-90, Helsinki, 1990.
[18] Yarowsky, David, "Word-Sense Disambiguation Using Statistical Models of Roget's Categories Trained on Large Corpora," in Proceedings, COLING-92, Nantes, France, 1992.
[19] Yarowsky, David, "One Sense Per Collocation," in Proceedings, ARPA Human Language Technology Workshop, Princeton, 1993.
[20] Yarowsky, David, "A Comparison of Corpus-based Techniques for Restoring Accents in Spanish and French Text," to appear in Proceedings, 2nd Annual Workshop on Very Large Text Corpora, Kyoto, Japan, 1994.
1994
AN EMPIRICAL MODEL OF ACKNOWLEDGMENT FOR SPOKEN-LANGUAGE SYSTEMS

David G. Novick and Stephen Sutton
Interactive Systems Group
Department of Computer Science and Engineering
Oregon Graduate Institute
20000 N.W. Walker Rd., P.O. Box 91000
Portland, OR 97291-1000
novick@cse.ogi.edu

Abstract

We refine and extend prior views of the description, purposes, and contexts-of-use of acknowledgment acts through empirical examination of the use of acknowledgments in task-based conversation. We distinguish three broad classes of acknowledgments (other→ackn, self→other→ackn, and self+ackn) and present a catalogue of 13 patterns within these classes that account for the specific uses of acknowledgment in the corpus.

1 MOTIVATION

This study is motivated by the need for better dialogue models in spoken-language systems (SLSs). Dialogue models contribute directly to the interaction by providing inter-utterance coherence. Fluent understanding and use of acknowledgments should improve spoken-language systems in at least the following ways:

• Preventing miscommunication. Acknowledgments are an important device for establishing mutual knowledge and signaling comprehension. Early detection and correction of cases of miscommunication and misunderstanding should prevent failure that would otherwise have been even more catastrophic.
• Improved naturalness. Acknowledgments are a prominent feature of human-human dialogue. Supporting the use of acknowledgments for both the system and the user will emphasize the "naturalness" of interfaces and improve their utility.
• Dialogue control. Humans cope with dialogue control (e.g., turn-taking) with seemingly little or no effort. Acknowledgments form an intricate relationship with dialogue control mechanisms. Understanding these control mechanisms is central to the development and success of spoken language systems in order to "track" dialogues and determine appropriate system actions.
• Improved recognition.
To the extent that a dialogue model can narrow the range of possible contexts for interpretation of a user's utterance, a spoken-language system's speech recognition performance will be improved (Young et al., 1989).

We seek to refine and extend prior views of the description, purposes, and contexts-of-use of acknowledgment acts through empirical examination of the use of acknowledgments in task-based conversation. In particular, we seek to describe systematically (1) the communicative value of an acknowledgment and (2) the circumstances of its use. The scope of our inquiry involves spoken interaction. We present a catalogue of types of acknowledgment. This catalogue is based on a process model of acknowledgment that explains instances of these acts in a corpus of task-based conversations.

2 RELATED WORK

Clark and Schaefer (1989) suggested that acknowledgments are an important component of a larger framework through which communicating parties provide evidence of understanding. Conversants have a range of means, which vary with respect to strength, for indicating acceptance of a presentation. These include continued attention, initiation of the next relevant contribution, acknowledgment, demonstration, and display.

Thus acknowledgments are common linguistic devices used to provide feedback. Broadly speaking, acknowledgments are responsive acts.¹ That is, they are usually uttered in (possibly partial) response to a production by another speaker; acknowledgment acts express beliefs and intentions of one conversant with respect to the mutuality of prior exchanges involving some other conversant. The intended perlocutionary effect of an acknowledgment act is generally the perception of mutuality of some belief.
¹A notable exception is the self-acknowledgment, which will be discussed shortly.

In previous research, the function of acknowledgments has been most readily characterized in terms of attention, understanding and acceptance on the recipient's behalf (Kendon, 1967; Schegloff, 1982). In addition, it has been suggested that they serve to facilitate active participation in dialogues and promote "smooth" conversations (Duncan and Fiske, 1987).

Schegloff (1982) described two main types of acknowledgment: continuers and assessments. Continuers, such as "uh huh," "quite," and "I see," act as bridges between units. Conversants use acknowledgments as continuers to signal continued attention and to display the recipient's understanding that the speaker is in an extended turn that is not yet complete. Moreover, continuers indicate the turning down of an opportunity to undertake a repair subdialogue regarding the previous utterance or to initiate a new turn. Assessments--such as "oh wow" and "gosh, really?"--are essentially an elaboration on continuers. That is, they occur in much the same environment and have similar properties to continuers, but in addition express a brief assessment of the previous utterance.

Empirical analysis of conversations has indicated that the occurrence of acknowledgments is not arbitrary. Acknowledgments mostly occur at or near major grammatical boundaries, which serve as transition-relevance places for turn-taking (Sacks et al., 1974; Hopper, 1992). In particular, work by Orestrom (1983) and Goodwin (1986) suggested a tendency for continuers to overlap with the primary speaker's contribution, in such a way that they serve as bridges between two turn-constructional units. Assessments, on the other hand, are generally engineered without overlap. Goodwin suggested that conversants make special efforts to prevent assessments from intruding into subsequent units.
That is, the speaker typically delays production of the subsequent unit until the recipient's assessment has been brought to completion.

Clearly, acknowledgments are an important device for providing evidence of understanding and for avoiding miscommunication between parties. Just as next-relevant-contributions include the entire range of potential task or domain actions, the task-based role of acknowledgments can be differentiated within their class as acceptances. Beyond continuers and assessments, we will demonstrate that acknowledgments incorporate a larger set of conversational actions, many of which relate to coherence of multi-utterance contributions.

3 DIALOGUE ANALYSIS

In this section, we describe the task characteristics and the corpus used for this study, present a theoretical model of acknowledgment acts in task-based dialogue, and present an analysis of acknowledgment acts based on corpus material.

3.1 THE VEHICLE NAVIGATION SYSTEM CORPUS

The corpus we analyzed was collected by U S WEST Advanced Technologies in the domain of a vehicle navigation system (VNS). A VNS is intended to provide travel directions to motorists by cellular telephone: the system interacts with the caller to determine the caller's identity, current location and destination, and then gives driving directions a step at a time under the caller's control. U S WEST collected the dialogues through a Wizard-of-Oz experiment (Brunner et al., 1992) in which the wizard was free to engage in linguistically unconstrained interaction in the VNS task. Each of the 21 subjects performed three tasks in the VNS domain. As a whole, the corpus contained 2499 turns and 1107 acknowledgments.

3.2 A TASK-BASED MODEL OF ACKNOWLEDGMENT ACTS

The generally accepted view of acknowledgments, as noted earlier, distinguishes between two classes--namely continuers and assessments (Schegloff, 1982). Indeed, there were many occurrences of continuers (and a few assessments) in the VNS corpus.
However, our analysis suggests that acknowledgments perform functions beyond these two classes. For instance, we observed several instances of acknowledgment being used at the beginning of a turn by the same speaker. This contrasts with the notions of continuers and assessments which, by definition, occur as unitary productions in the context of extended turns by another speaker. Clearly, an acknowledgment occurring at the beginning of a turn is not serving as a prompt for the other speaker to continue.

To account for the evidence provided by the VNS corpus, we propose to extend Schegloff's classification scheme into a task-based model of acknowledgment acts. This model formalizes the meaning and usage characteristics of acknowledgments, based on an analysis of what acknowledgments mean and when acknowledgments are used in the VNS dialogues.

A useful way of looking at the role of acknowledgments in the context of turns is to consider the basic structural context of exchanges. We begin by reviewing the concept of an adjacency pair (Schegloff and Sacks, 1973; Clark and Schaefer, 1989). An adjacency pair is formed by two consecutive utterances that have a canonical relationship, such as question-answer and greeting-greeting. An acknowledgment can be produced as the second phase of an adjacency pair or following a complete adjacency pair; in each case, the utterance may contain multiple acceptances. Of course, an acknowledgment can be produced also as a single turn that does not relate to an adjacency pair. Thus, based on exchange structure one can distinguish three broad structural classes of acknowledgments:²

• Other→ackn, where the acknowledgment forms the second phase of an adjacency pair;
• Self→other→ackn, where Self initiates an exchange, Other (eventually) completes the exchange, and Self then utters an acknowledgment; and
• Self+ackn, where Self includes an acknowledgment in an utterance outside of an adjacency pair.
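The three structural classes can be distinguished mechanically from the shape of the exchange leading up to the acknowledgment. The following toy sketch is my own illustration, not code from the paper; the list-of-turns representation and the "self"/"other" speaker labels are assumptions.

```python
def structural_class(turns):
    """Classify the final acknowledgment in an exchange.

    `turns` is a list of (speaker, act) pairs ending in Self's
    acknowledgment; speakers are "self" (the acknowledger) and "other".
    Returns "other->ackn", "self->other->ackn", or "self+ackn".
    """
    assert turns and turns[-1] == ("self", "ackn")
    prior = turns[:-1]
    if not prior or prior[-1][0] == "self":
        # The ackn occurs inside Self's own turn, outside an adjacency pair.
        return "self+ackn"
    if any(speaker == "self" for speaker, _ in prior):
        # Self initiated the exchange that Other (eventually) completed.
        return "self->other->ackn"
    # The ackn is the second phase of an adjacency pair opened by Other.
    return "other->ackn"
```

For example, the sequence request-by-self, inform-by-other, ackn-by-self falls in the self→other→ackn class, while a lone inform-by-other followed by ackn-by-self is other→ackn.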
In the other→ackn class, the exchange is a basic adjacency pair; Other's act will be composed of a single turn. In the self→other→ackn class, the exchange initiated and eventually acknowledged by Self may be composed of multiple turns, with multiple utterances by both Self and Other. In the self+ackn class, the acknowledgment occurs in a single, extended turn by Self that may contain multiple utterances.

3.3 A CATALOGUE OF ACKNOWLEDGMENT ACTS IN TASK-BASED DIALOGUE

In this section, we elaborate the structural classes of acknowledgment through a catalogue of patterns of speech acts that occur in each class. This catalogue provides broad coverage of patterns typically encountered in task-oriented discourse. These patterns describe the context of acknowledgments in terms of exchanges and are derived from utterances in the VNS corpus. Each act in an exchange is represented in terms of speech-act verbs based on the set described by Wierzbicka (1987). Table 1 summarizes the speech-act patterns in the catalogue. In the following sections, we will consider each of the structural classes in turn and provide examples of selected patterns from the VNS corpus. We also consider embedded exchanges, in which basic patterns are used to build more complex patterns.

²The notation for structural class names indicates turns delimited by arrows (→). Acts combined within a turn are joined with a plus (+) symbol. Other and self refer to non-acknowledgment acts by the respective conversants. "Self" refers to the party producing the acknowledgment; "Other" is the other party.

3.3.1 Other→Ackn

Acknowledgments in the other→ackn class relate principally to the immediately antecedent utterance as opposed to the prior exchange, which is covered by the self→other→ackn class. In Clark and Schaefer's (1989) terms, Self's acknowledgment in the other→ackn class serves as the acceptance phase for Other's presentation.
As listed in Table 1, the canonical other→ackn patterns are inform→ackn, inform→ackn+mrequest, request→ackn+inform, mdirect→ackn and preclose→ackn.³ In each of these cases, the first turn is by Other and the second turn is Self's acknowledgment. In some cases, Self's turn also extends to include other significant utterances. We illustrate the other→ackn class through examination of the inform→ackn and inform→ackn+mrequest patterns.

  Other→Ackn             Self→Other→Ackn         Self+Ackn
  inform→ackn            inform→ackn→ackn        inform+ackn+inform
  inform→ackn+mrequest   request→inform→ackn     mrequest+ackn
  request→ackn+inform    mrequest→inform→ackn    mdirect+ackn
  mdirect→ackn           mdirect→ackn→ackn
  preclose→ackn

Table 1: A Summary of Speech-Act Patterns for Structural Classes of Acknowledgment

Inform→Ackn

The inform→ackn pattern covers cases where Other performs an inform act and Self responds with an acknowledgment of that act. In the following example⁴ of an inform→ackn, the wizard gives directions to the user, who acknowledges these directions. This is an example of an acknowledgment that Schegloff (1982) would call a continuer.

Example 1 (U6.3.1)⁵
(1.1) Wizard: On Evans, you need to turn left and head West for approximately three quarters of a mile to Clermont.
(1.2) User: Okay.
(1.3) Wizard: And, um, on Clermont you turn left, heading South for about two blocks to Iliff.

Here, the "okay" at turn 1.2 indicates the user's acceptance of the wizard's utterance. That is, the acknowledgment implies that the user understood information given by the wizard--more emphatically than a simple next-relevant-contribution response. Use of the acknowledgment would be strong evidence of understanding in Clark and Schaefer's (1989) terms. An important point to stress here is that the wizard cannot rely on the user necessarily having received the information that was actually conveyed or formed the intended interpretation. Rather, the wizard is left with the user's response indicating that the user was apparently satisfied with the wizard's original presentation.

³The mrequest and mdirect acts are forms of meta-act in which the speaker initiates a clarification subdialogue or otherwise explicitly addresses the control of the conversation; mrequest and mdirect are extensions of Wierzbicka's (1987) speech-act categories following Novick's (1988) meta-act analysis.
⁴In the examples, the acknowledgment of principal interest is highlighted.
⁵All examples are extracted from a corpus of telephone dialogues from a task-oriented "Wizard-of-Oz" protocol collection study described in Section 3.1. The examples in this paper are notated with the corpus dialogue reference number and each turn is numbered for purposes of reference.

Inform→Ackn+MRequest

The inform→ackn+mrequest class represents a significant functional variation on the inform→ackn class just considered. It covers cases where Other performs an inform act, Self responds with an acknowledgment of that act and then goes on to seek clarification of the content of the inform act. Requests for clarification are kinds of meta-act because they are concerned with aspects of dialogue control rather than the task itself. That is, requests for clarification are concerned with the specifics of old information rather than seeking to elicit largely new information--unlike request-inform acts.

Example 2 (U4.3.1)
(2.1) Wizard: Okay. Then you want to go north on Speer Boulevard for one and one half miles to Alcott Street.
(2.2) User: Okay. I want to go right on Speer?
(2.3) Wizard: It will be a left.

In this example, the repair is a potential request for specification (Lloyd, 1992).
That is, the user's clarification at 2.2 ("I want to go right on Speer?") focuses on information which was missing from the surface structure of the original inform act but which is potentially available--namely "right" instead of "north."

3.3.2 Self→Other→Ackn

Acknowledgments in the self→other→ackn class relate to the previous exchange, rather than just the previous utterance. Broadly speaking, they express the current state of the dialogue in addition to embodying the functionality of other→ackn acknowledgments. That is, they explicitly mark the completion of the antecedent exchange and indicate that the dialogue will either enter a new exchange or resume an existing exchange. Furthermore, self→other→ackn acknowledgments signal understanding and acceptance of both the previous exchange and the previous utterance. The canonical patterns of the self→other→ackn class, as listed in Table 1, include inform→ackn→ackn, request→inform→ackn, mrequest→inform→ackn and mdirect→ackn→ackn. We illustrate the self→other→ackn class through examination of the request→inform→ackn pattern.

Request→Inform→Ackn

The request→inform→ackn class covers cases where Self requests an inform act of Other. Other then performs that inform act and Self acknowledges. Note that the acknowledgment in this case follows a completed request-inform adjacency pair. Earlier, we mentioned that question-answer adjacency pairs can be regarded as special cases of request-inform adjacency pairs (Searle, 1969). In the following example, the wizard requests the user's start location. The user satisfies this request by communicating the desired information and the wizard then acknowledges.
Here the acknowledgment at 3.3 serves to indicate acceptance (that is, receipt, understanding and agreement) of the user's inform act and is a signal that the request initiated by the wizard at 3.1 has been satisfied and thus the exchange is complete.

Example 3 (U2.1.1)
(3.1) Wizard: Okay and uh, what's your starting location?
(3.2) User: I'm at 36th and Sheridan at the Park-n-Ride.
(3.3) Wizard: Okay, one moment please.

3.3.3 Self+Ackn

Self-acknowledgments occur when Self issues an acknowledgment following some action (either verbal or physical) performed by Self. These are not responsive acts, unlike other acknowledgment usages considered; however, they are still closely tied with the idea of establishing mutual beliefs. The canonical patterns, as listed in Table 1, include inform+ackn+inform, mrequest+ackn, and mdirect+ackn. We illustrate the self+ackn class through examination of the inform+ackn+inform pattern.

Inform+Ackn+Inform

In this pattern, Self uses an acknowledgment in the middle of an extended turn. Consider the following example:

Example 4 (U5.3.1)
(4.1) Wizard: All right, um, the first thing you need to do is go South on Logan Street for one and a half miles to Evans Avenue. Then turn left on Evans Avenue and go one and a quarter miles to South Josephine Street. Okay, then you'll turn left on South Josephine Street. Nineteen Forty South Josephine is within the first block.

This particular self-acknowledgment is very similar to a continuer--indeed it may be regarded as a self-continuer. The wizard's acknowledgment in this example represents a sort of temporizing, a chance for the wizard to "catch his mental breath." For the user, this sort of "Okay" thus signals that the wizard intends to continue his turn. This is functionally distinct from a meta-request of the form "Okay?" because there is no rising intonation and the wizard does not wait for a response.
In fact, use of a self-acknowledgment at the end of a turn would be peculiar.

3.3.4 Embedded Exchanges

We noted earlier that basic patterns can be used to build more complex patterns. This can lead potentially to variations in patterns of acknowledgments. In particular, it is possible to observe cascades of acknowledgments as nested exchanges are "popped" one by one. Simple acts may be replaced by more complex exchanges, so that an inform act may be replaced by an exchange that accomplishes an inform via a sequence of informs, clarifications and acknowledgments. In this section we will consider one of the variations encountered in the VNS corpus, where an inform→ackn→ackn replaces the inform act in an inform→ackn sequence.

In the following example, there are three successive acknowledgment acts. The first acknowledgment at 5.2 is accompanied by a verbatim response by the user. It is the second phase of the inform→ackn adjacency pair, indicating understanding and acceptance of the wizard's inform act in which a direction was clarified. The second acknowledgment, issued by the wizard at 5.3, marks the completion of the inform→ackn exchange. That is, the wizard recognizes that it is his or her turn yet has nothing more to add, so indicates passing up the turn with an acknowledgment. The third acknowledgment, issued by the user at 5.4, is associated with the user recognizing that the wizard has finished clarifying directions; the user thus acknowledges this embedded inform act. The user then indicates satisfaction and approval of the wizard's directions with the assessment "Sounds good."

Example 5 (U6.2.1)
(5.1) Wizard: Okay, it was, um, on Evans it's three and three quarter miles to Jasmine.
(5.2) User: Three, okay.
(5.3) Wizard: Okay.
(5.4) User: All right, sounds good.

4 CONCLUSION

Why is a conversation-analytic study of acknowledgment useful in the development of spoken-language systems?
SLS developers face the dual challenges of creating both domain-based dialogues and repair-oriented dialogues. Lacking systematic mechanisms for natural maintenance of mutuality, SLSs tend to rely on domain structures--producing rather stolid interaction. The most advanced systems incorporate repair acts, but are unable to relate the repairs to the main dialogue structures in a natural way. The acknowledgment model described in this paper provides a systematic method of maintaining mutuality of knowledge for both domain and control information.

More concretely, using this model SLSs can account for acknowledgments by both user and system. The corpus evidence suggests that users' utterances in unconstrained dialogues contain many instances of acknowledgment. In interpreting these utterances, identification of the appropriate acknowledgment function affects the state of the dialogue model and thus plays an important role in determining an appropriate response by the system. In producing such responses, the acknowledgment model can provide structurally appropriate utterances. The fundamental idea is to produce contextually appropriate acknowledgments that advance the dialogue seamlessly with respect to both domain and control functions. That is, the system needs to give the right signals at the right time.

The evidence of the U S WEST VNS corpus suggests that understanding and production of domain and control utterances are closely linked; they thus cannot be implemented as independent mechanisms in an SLS. For example, giving directions involves presenting large amounts of information for which an installment approach often proved effective. Typically the user was given the opportunity to choose the style of presentation of directions, either step-by-step or all at once.
The choice of presentation method by the conversants was a dynamic one: in cases where it became apparent that the user was experiencing difficulties with either hearing or understanding directions, the wizard often resorted to the step-by-step approach. This form of repair changed the process of interaction so that the comprehension of each installment was verified before proceeding with the next. The conversants in the VNS corpus displayed relatively higher rates of use of acknowledgment in repair situations or when unplanned events arose (e.g., the user had gotten lost). Intuitively, people make more effort to establish mutual knowledge when it is apparent that miscommunication has occurred than at other times; their certainty criterion for mutuality (Clark and Marshall, 1981) is raised as a result of the need for repair. This suggests that a facility for acknowledgment is an important element in the development of robust SLSs, because use of acknowledgment is most critical precisely when the conversation has gone awry. We are currently developing a computational model of acknowledgment based on the empirical work presented in this paper. This model is intended for integration into an SLS, where it will serve both to predict when acknowledgments are appropriate from the system and when to expect acknowledgments from the user. Briefly, determining the applicability of an acknowledgment involves interpreting exchanges in terms of speech acts and then mapping these speech-act patterns onto the acknowledgment classes described. This, we believe, will facilitate improved SLS robustness through achievement of a greater degree of mutual understanding and provide a more natural and intuitive interaction. The utility and implementation of the empirical model will be the focus of a later paper.

5 ACKNOWLEDGMENTS

This work was supported by U S WEST Advanced Technologies and the Oregon Advanced Computing Institute.

References

H. Brunner, G. Whittemore, K.
Ferrara, and J. Hsu. 1992. An assessment of written/interactive dialogue for information retrieval applications. Human Computer Interaction, 7:197-249.

H.H. Clark and C.R. Marshall. 1981. Definite reference and mutual knowledge. In A.K. Joshi, B.L. Webber, and I.A. Sag, editors, Elements of discourse understanding, pages 10-63. Cambridge University Press, Cambridge.

H.H. Clark and E.F. Schaefer. 1989. Contributing to discourse. Cognitive Science, 13:259-294.

S. Duncan and D.W. Fiske. 1987. Face-to-face interaction: Research methods and theory. Wiley, New York.

C. Goodwin. 1986. Between and within: Alternative sequential treatments of continuers and assessments. Human Studies, 9:205-217.

R. Hopper. 1992. Telephone conversations. University of Indiana, Bloomington, IN.

A. Kendon. 1967. Some functions of gaze in social interaction. Acta Psychologica, 26:22-63.

P. Lloyd. 1992. The role of clarification requests in children's communication of route directions by telephone. Discourse Processes, 15:357-374.

D.G. Novick. 1988. Control of mixed-initiative discourse through meta-locutionary acts: A computational model. Doctoral dissertation, Department of Computer Science and Information Science, University of Oregon, December.

B. Orestrom. 1983. Turn-taking in English conversation. Gleerup, Lund, Sweden.

H. Sacks, E. Schegloff, and G. Jefferson. 1974. A simplest systematics for the organization of turn-taking in conversation. Language, 50:696-735.

E.A. Schegloff and H. Sacks. 1973. Opening up closings. Semiotica, 8:289-327.

E.A. Schegloff. 1982. Discourse as an interactional achievement: Some uses of 'uh huh' and other things that come between sentences. In D. Tannen, editor, Analyzing Discourse: Text and Talk, pages 71-93. Georgetown University Press, Washington, D.C.

J.R. Searle. 1969. Speech acts: An essay in the philosophy of language. Cambridge University Press, Cambridge.

A. Wierzbicka. 1987. English speech act verbs: A semantic dictionary.
Academic Press, Sydney, Australia.

S. Young, A. Hauptmann, W. Ward, E. Smith, and P. Werner. 1989. High level knowledge sources in usable speech recognition systems. Communications of the ACM, 32(2):183-194.
1994
SPEECH DIALOGUE WITH FACIAL DISPLAYS: MULTIMODAL HUMAN-COMPUTER CONVERSATION

Katashi Nagao and Akikazu Takeuchi
Sony Computer Science Laboratory Inc.
3-14-13 Higashi-gotanda, Shinagawa-ku, Tokyo 141, Japan
E-mail: {nagao,takeuchi}@csl.sony.co.jp

Abstract

Human face-to-face conversation is an ideal model for human-computer dialogue. One of the major features of face-to-face communication is its multiplicity of communication channels that act on multiple modalities. To realize a natural multimodal dialogue, it is necessary to study how humans perceive information and to determine the information to which humans are sensitive. A face is an independent communication channel that conveys emotional and conversational signals, encoded as facial expressions. We have developed an experimental system that integrates speech dialogue and facial animation, to investigate the effect of introducing communicative facial expressions as a new modality in human-computer conversation. Our experiments have shown that facial expressions are helpful, especially upon first contact with the system. We have also discovered that featuring facial expressions at an early stage improves subsequent interaction.

Introduction

Human face-to-face conversation is an ideal model for human-computer dialogue. One of the major features of face-to-face communication is its multiplicity of communication channels that act on multiple modalities. A channel is a communication medium associated with a particular encoding method. Examples are the auditory channel (carrying speech) and the visual channel (carrying facial expressions). A modality is the sense used to perceive signals from the outside world. Many researchers have been developing multimodal dialogue systems. In some cases, researchers have shown that information in one channel complements or modifies information in another. As a simple example, the phrase "delete it" involves the coordination of voice with gesture.
Neither makes sense without the other. Researchers have also noticed that nonverbal (gesture or gaze) information plays a role in setting the situational context, which is useful in restricting the hypothesis space constructed during language processing. Anthropomorphic interfaces present another approach to multimodal dialogues. An anthropomorphic interface, such as Guides [Don et al., 1991], provides a means to realize a new style of interaction. Such research attempts to computationally capture the communicative power of the human face and apply it to human-computer dialogue. Our research is closely related to the last approach. The aim of this research is to improve human-computer dialogue by introducing human-like behavior into a speech dialogue system. Such behavior will include factors such as facial expressions and head and eye movement. It will help to reduce any stress experienced by users of computing systems, lowering the complexity associated with understanding system status. Like most dialogue systems developed by natural language researchers, our current system can handle domain-dependent, information-seeking dialogues. Of course, the system encounters problems with ambiguity and missing information (i.e., anaphora and ellipsis). The system tries to resolve them using techniques from natural language understanding (e.g., constraint-based, case-based, and plan-based methods). We are also studying the use of synergic multimodality to resolve linguistic problems, as in conventional multimodal systems. This work will be reported in a separate publication. In this paper, we concentrate on the role of nonverbal modality in increasing the flexibility of human-computer dialogue and reducing the mental barriers that many users associate with computer systems.
Research Overview of Multimodal Dialogues

Multimodal dialogues that combine verbal and nonverbal communication have been pursued mainly from the following three viewpoints.

1. Combining direct manipulation with natural language (deictic) expressions

"Direct manipulation (DM)" was suggested by Shneiderman [1983]. The user can interact directly with graphical objects displayed on the computer screen with rapid, incremental, reversible operations whose effects on the objects of interest are immediately visible. The semantics of natural language (NL) expressions is anchored to real-world objects and events by means of pointing and demonstrating actions and deictic expressions such as "this," "that," "here," "there," "then," and "now." Some research on dialogue systems has combined deictic gestures and natural language, such as Put-That-There [Bolt, 1980], CUBRICON [Neal et al., 1988], and ALFresco [Stock, 1991]. One of the advantages of combined NL/DM interaction is that it can easily resolve the missing information in NL expressions. For example, when the system receives a user request in speech like "delete that object," it can fill in the missing information by looking for a pointing gesture from the user or objects on the screen at the time the request is made.

2. Using nonverbal inputs to specify the context and filter out unrelated information

The focus of attention or the focal point plays a very important role in processing applications with a broad hypothesis space, such as speech recognition. One example of a focusing modality is following the user's looking behavior. Fixation or gaze is useful for the dialogue system to determine the context of the user's interest. For example, when a user is looking at a car, what the user says at that time may be related to the car. Prosodic information (e.g., voice tones) in the user's utterance also helps to determine focus.
In this case, the system uses prosodic information to infer the user's beliefs or intentions. Combining gestural information with spoken language comprehension shows another example of how context may be determined by the user's nonverbal behavior [Oviatt et al., 1993]. This research uses multimodal forms that prompt a user to speak or write into labeled fields. The forms are capable of guiding and segmenting inputs, of conveying the kind of information the system is expecting, and of reducing ambiguities in utterances by restricting syntactic and semantic complexities.

3. Incorporating human-like behavior into dialogue systems to reduce the operation complexity and stress often associated with computer systems

Designing human-computer dialogue requires that the computer make appropriate backchannel feedback, like nodding or expressions such as "aha" and "I see." One of the major advantages of using such nonverbal behavior in human-computer conversation is that reactions are quicker than those from voice-based responses. For example, the facial backchannel plays an important role in human face-to-face conversation. We consider such quick reactions to be situated actions [Suchman, 1987], which are necessary for resource-bounded dialogue participants. Timely responses are crucial to successful conversation, since some delay in reactions can imply specific meanings or make messages unnecessarily ambiguous. Generally, visual channels contribute to quick user recognition of system status. For example, the system's gaze behavior (head and eye movement) gives a strong impression of whether it is paying attention or not. If the system's eyes wander around aimlessly, the user easily recognizes that the system's attention is elsewhere, perhaps even unaware that he or she is speaking to it. Thus, gaze is an important indicator of system (in this case, speech recognition) status.
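The combined NL/DM resolution described above under viewpoint 1 ("delete that object") can be sketched as follows; this is an illustrative sketch only, not code from the cited systems, and the object names, timestamps, and tolerance are assumptions:

```python
# Illustrative sketch (not from the cited systems): resolving a deictic
# expression by pairing the utterance with the pointing gesture closest
# in time, then falling back on the objects visible on the screen.

def resolve_deictic(utterance_time, gestures, visible_objects,
                    tolerance=1.0):
    """Return the referent of a deictic expression like "that object".

    gestures: list of (timestamp, object_id) pointing events.
    visible_objects: object_ids currently on screen (fallback context).
    """
    # Prefer a pointing gesture made close to the time of the request.
    candidates = [(abs(t - utterance_time), obj)
                  for t, obj in gestures
                  if abs(t - utterance_time) <= tolerance]
    if candidates:
        return min(candidates)[1]
    # Otherwise, fall back on the unique visible object, if any.
    if len(visible_objects) == 1:
        return visible_objects[0]
    return None  # unresolved: ask a clarification question

# "Delete that object" at t=10.2s, with a point at t=10.0s.
referent = resolve_deictic(10.2, [(3.0, "window-1"), (10.0, "icon-7")],
                           ["window-1", "icon-7"])
```

The sketch shows the general idea behind such systems: the gesture channel fills in exactly the information the NL channel leaves unspecified.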
By using human-like nonverbal behavior, the system can respond to the user more flexibly than is possible by using the verbal modality alone. We focused on the third viewpoint and developed a system that acts like a human. We employed communicative facial expressions as a new modality in human-computer conversation. We have already discussed this, however, in another paper [Takeuchi and Nagao, 1993]. Here, we consider our implemented system as a testbed for incorporating human-like (nonverbal) behavior into dialogue systems. The following sections give a system overview, an example dialogue along with a brief explanation of the process, and our experimental results.

Incorporating Facial Displays into a Speech Dialogue System

Facial Displays as a New Modality

The study of facial expressions has attracted the interest of a number of different disciplines, including psychology, ethology, and interpersonal communications. Currently, there are two basic schools of thought. One regards facial expressions as being expressions of emotion [Ekman and Friesen, 1984]. The other views facial expressions in a social context, regarding them as being communicative signals [Chovil, 1991]. The term "facial displays" is essentially the same as "facial expressions," but is less reminiscent of emotion. In this paper, therefore, we use "facial displays." A face is an independent communication channel that conveys emotional and conversational signals, encoded as facial displays. Facial displays can also be regarded as being a modality because the human brain has a special circuit dedicated to their processing. Table 1 lists all the communicative facial displays used in the experiments described in a later section. The categorization framework, terminology, and individual displays are based on the work of Chovil [1991], with the exception of the emphasizer, underliner, and facial shrug. These were coined by Ekman [1969].
Table 1: Communicative Facial Displays Used in the Experiments. (Categorization based mostly on Chovil [1991])

Syntactic Displays
1. Exclamation mark: Eyebrow raising or lowering
2. Question mark: Eyebrow raising or lowering
3. Emphasizer: Longer eyebrow raising
4. Underliner: Eyebrow movement
5. Punctuation: Eyebrow raising
6. End of an utterance: Eyebrow raising
7. Beginning of a story: Avoid eye contact
8. Story continuation: Eye contact
9. End of a story: Eyebrow raising or lowering

Speaker Displays
10. Thinking/Remembering: Closing the eyes, pulling back one mouth side
11. Facial shrug: "I don't know": Eyebrow flashes, mouth corners pulled down, mouth corners pulled back
12. Interactive: "You know?": Eyebrow raising
13. Metacommunicative: Indication of sarcasm or joke: Eyebrow raising and looking up and off
14. "Yes": Eyebrow actions
15. "No": Eyebrow actions
16. "Not": Eyebrow actions
17. "But": Eyebrow actions

Listener Comment Displays
18. Backchannel: Indication of attendance: Eyebrow raising, mouth corners turned down
19. Indication of loudness: Eyebrows drawn to center
Understanding levels:
20. Confident: Eyebrow raising, head nod
21. Moderately confident: Eyebrow raising
22. Not confident: Eyebrow lowering
23. "Yes": Eyebrow raising
Evaluation of utterances:
24. Agreement: Eyebrow raising
25. Request for more information: Eyebrow raising
26. Incredulity: Longer eyebrow raising

Three major categories are defined as follows.

Syntactic displays. These are facial displays that (1) place stress on particular words or clauses, (2) are connected with the syntactic aspects of an utterance, or (3) are connected with the organization of the talk.

Speaker displays. Speaker displays are facial displays that (1) illustrate the idea being verbally conveyed, or (2) add additional information to the ongoing verbal content.

Listener comment displays. These are facial displays made by the person who is not speaking, in response to the utterances of the speaker.
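The inventory in Table 1 can be represented as a lookup from display name to its category and facial marking; this is an illustrative sketch only (a small fragment of the table, with hypothetical structure), not the system's actual data:

```python
# Illustrative sketch: a fragment of Table 1 as a lookup from facial
# display to (category, marking). Only a few entries are shown.

DISPLAYS = {
    "Question mark": ("syntactic", "Eyebrow raising or lowering"),
    "Facial shrug": ("speaker",
                     "Eyebrow flashes, mouth corners pulled down, "
                     "mouth corners pulled back"),
    "Agreement": ("listener comment", "Eyebrow raising"),
    "Incredulity": ("listener comment", "Longer eyebrow raising"),
}

def displays_in_category(category):
    """Names of all displays belonging to one of the three categories."""
    return sorted(name for name, (cat, _) in DISPLAYS.items()
                  if cat == category)

print(displays_in_category("listener comment"))
```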
An Integrated System of Speech Dialogue and Facial Animation

We have developed an experimental system that integrates speech dialogue and facial animation to investigate the effects of human-like behavior in human-computer dialogue. The system consists of two subsystems: a facial animation subsystem that generates a three-dimensional face capable of a range of facial displays, and a speech dialogue subsystem that recognizes and interprets speech, and generates voice outputs. Currently, the animation subsystem runs on an SGI 320VGX and the speech dialogue subsystem on a Sony NEWS workstation. These two subsystems communicate with each other via an Ethernet network. Figure 1 shows the configuration of the integrated system. Figure 2 illustrates the interaction of a user with the system.

[Figure 1: System Configuration. The speech dialogue subsystem (speech recognition → word sequence → syntactic and semantic analysis → speaker's intention → system's response → voice synthesis) sends muscle parameters to the facial animation subsystem; the user receives the facial display and the voice.]

Facial Animation Subsystem

The face is modeled three-dimensionally. Our current version is composed of approximately 500 polygons. The face can be rendered with a skin-like surface material, by applying a texture map taken from a photograph or a video frame. In 3D computer graphics, a facial display is realized by local deformation of the polygons representing the face. Waters showed that deformation that simulates the action of muscles underlying the face looks more natural [Waters, 1987].
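In this style of animation, each facial parameter relaxes toward a target value, fast at first and then slowing; the dynamics used in this section are f' = a - f. A minimal sketch of those dynamics, assuming an Euler integration with a hypothetical step size (not the subsystem's actual code):

```python
# Illustrative sketch (not the actual subsystem code) of the deformation
# dynamics f' = a - f: a parameter decays exponentially toward its
# target a, moving fast in the early phase and then slowing down.

def step(f, a, dt=0.05):
    """One Euler step of f' = a - f."""
    return f + (a - f) * dt

def animate(f0, a, steps=100, dt=0.05):
    """Integrate a single muscle parameter from f0 toward target a."""
    f = f0
    trajectory = [f]
    for _ in range(steps):
        f = step(f, a, dt)
        trajectory.append(f)
    return trajectory

traj = animate(f0=0.0, a=1.0)
early = traj[1] - traj[0]    # change during the first step
late = traj[-1] - traj[-2]   # change during the last step
# The early change is larger than the late change: deformation is fast
# at first and soon slows, as the text describes.
```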
We therefore use numerical equations to simulate muscle actions, as defined by Waters. Currently, the system incorporates 16 muscles and 10 parameters, controlling mouth opening, jaw rotation, eye movement, eyelid opening, and head orientation. These 16 muscles were determined by Waters, considering the correspondence with action units in the Facial Action Coding System (FACS) [Ekman and Friesen, 1978]. For details of the facial modeling and animation system, see [Takeuchi and Franks, 1992].

[Figure 2: Dialogue Snapshot]

We use 26 synthesized facial displays, corresponding to those listed in Table 1, and two additional displays. All facial displays are generated by the above method, and rendered with a texture map of a young boy's face. The added displays are "Smile" and "Neutral." The "Neutral" display features no muscle contraction whatsoever, and is used when no conversational signal is needed. At run-time, the animation subsystem awaits a request from the speech subsystem. When the animation subsystem receives a request that specifies values for the 26 parameters, it starts to deform the face on the basis of the received values. The deformation process is controlled by the differential equation f' = a - f, where f is a parameter value at time t, f' is its time derivative at time t, and a is the target value specified in the request. A feature of this equation is that deformation is fast in the early phase but soon slows, corresponding closely to the real dynamics of facial displays. Currently, the base performance of the animation subsystem is around 20-25 frames per second when running on an SGI Power Series. This is sufficient to enable real-time animation.

Speech Dialogue Subsystem

Our speech dialogue subsystem works as follows.
First, a voice input is acoustically analyzed by a built-in sound processing board. Then, a speech recognition module is invoked to output word sequences that have been assigned higher scores by a probabilistic phoneme model. These word sequences are syntactically and semantically analyzed and disambiguated by applying a relatively loose grammar and restricted domain knowledge. Using a semantic representation of the input utterance, a plan recognition module extracts the speaker's intention. For example, from the utterance "I am interested in Sony's workstation," the module interprets the speaker's intention as "he wants to get precise information about Sony's workstation." Once the system determines the speaker's intention, a response generation module is invoked. This generates a response to satisfy the speaker's request. Finally, the system's response is output as voice by a voice synthesis module. This module also sends lip-synchronization information, describing the phonemes (including silence) in the response and their time durations, to the facial animation subsystem. With the exception of the voice synthesis module, each module can send messages to the facial animation subsystem to request the generation of a facial display. The relation between the speech dialogues and facial displays is discussed later. In this case, the specific task of the system is to provide information about Sony's computer-related products. For example, the system can answer questions about the price, size, weight, and specifications of Sony's workstations and PCs. Below, we describe the modules of the speech dialogue subsystem.

Speech recognition. This module was jointly developed with the Electrotechnical Laboratory and Tokyo Institute of Technology. Speaker-independent continuous speech inputs are accepted without special hardware.
To obtain a high level of accuracy, context-dependent phonetic hidden Markov models are used to construct phoneme-level hypotheses [Itou et al., 1992]. This module can generate N-best word-level hypotheses.

Syntactic and semantic analysis. This module consists of a parsing mechanism, a semantic analyzer, a relatively loose grammar consisting of 24 rules, a lexicon that includes 34 nouns, 8 verbs, 4 adjectives, and 22 particles, and a frame-based knowledge base consisting of 61 conceptual frames. Our semantic analyzer can handle ambiguities in syntactic structures and generates a semantic representation of the speaker's utterance. We applied a preferential constraint satisfaction technique [Nagao, 1992] for performing disambiguation and semantic analysis. By allowing the preferences to control the application of the constraints, ambiguities can be efficiently resolved, thus avoiding combinatorial explosions.

Plan recognition. This module determines the speaker's intention by constructing a model of his/her beliefs, dynamically adjusting and expanding the model as the dialogue progresses [Nagao, 1993]. The model deals with the dynamic nature of dialogues by applying the following two mechanisms. First, preferences among the contexts are dynamically computed based on the facts and assumptions within each context. The preference provides a measure of the plausibility of a context. The currently most preferable context contains the currently recognized plan. Second, changing the most plausible context among mutually exclusive contexts within a dialogue is formally treated as belief revision by a plan-recognizing agent. However, in some dialogues, many alternatives may have very similar preference values. In this situation, one may wish to obtain additional information, allowing one to be more certain about committing to the preferable context.
A criterion for detecting such a critical situation, based on the preference measures for mutually exclusive contexts, is being explored. The module also maintains the topic of the current dialogue and can handle anaphora (reference of pronouns) and ellipsis (omission of subjects).

Response generation. This module generates a response by using domain knowledge (a database) and text templates (typical patterns of utterances). It selects appropriate templates and combines them to construct a response that satisfies the speaker's request. In our prototype system, the method used to comprehend speech is a specific combination of specific types of knowledge sources with a rather fixed information flow, preventing flexible interaction between them. A new method that enables flexible control of omni-directional information flow in a very context-sensitive fashion has been announced [Nagao et al., 1993]. Its architecture is based on dynamical constraint [Hasida et al., 1993], which defines a fine classification based on the dimensions of the satisfaction and violation of constraints. A constraint is represented in terms of a clausal logic program. A fine-grained declarative semantics is defined for this constraint by measuring the degree of violation in terms of real-valued potential energy. A field of force arises along the gradient of this energy, inferences being controlled on the basis of the dynamics. This allows us to design combinatorial behaviors under declarative semantics within tractable computational complexity. Our forthcoming system can, therefore, concentrate its computational resources according to a dynamic focal point that is important to speech processing with a broad hypothesis space, and apply every kind of constraint, from phonetic to pragmatic, at the same time.
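The prototype's fixed information flow through the comprehension modules (recognition, analysis, plan recognition, response generation) can be sketched as a simple pipeline; this is an illustrative sketch with toy stand-in modules and assumed names, not the system's actual code:

```python
# Illustrative sketch (not the system's actual code) of the speech
# dialogue subsystem's fixed flow: recognition -> analysis -> plan
# recognition -> response generation, with each stage free to request
# a facial display from the animation subsystem.

def run_pipeline(voice_input, modules, request_display):
    """modules: list of (name, function); each function maps the
    previous stage's output to (its own output, optional display)."""
    data = voice_input
    for _name, function in modules:
        data, display = function(data)
        if display is not None:
            request_display(display)  # message to animation subsystem
    return data

# Toy stand-ins for the real modules.
requested = []
modules = [
    ("recognition", lambda v: (v.split(), None)),
    ("analysis", lambda words: ({"topic": words[-1]}, None)),
    ("plan recognition", lambda sem: (("ask-info", sem["topic"]), None)),
    ("generation", lambda plan: ("About %s: ..." % plan[1], "BOStory")),
]
response = run_pipeline("tell me about workstations", modules,
                        requested.append)
```

The rigidity the text criticizes is visible in the sketch: information flows in one direction only, which is exactly what the dynamical-constraint architecture is meant to relax.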
Correspondence between Conversational Situations and Facial Displays

The speech dialogue subsystem recognizes a number of typical conversational situations that are important to dialogues. We associate these situations with an appropriate facial display(s). For example, in situations where speech input is not recognized or where it is syntactically invalid, the listener comment display "Not confident" is shown. If the speaker's request exceeds the range of the system's domain knowledge, then the system displays a facial shrug and replies "I cannot answer such a question." The relationships between conversational situations and facial displays are listed in Table 2.

Example Dialogue

Here is an example of a possible dialogue with the system.¹ Un: indicates user speech inputs; Sn: indicates voice responses delivered by the system to the user. Terms enclosed within square brackets illustrate the types of facial displays introduced in the previous section. "Continuing" means that the system's facial expression remains the same.

U1: Hello.
S1: [Attend and BOStory] Hi. This is Sony Computer Science Laboratory.
S2: [Continuing] I can answer any question about computer-related products.
U2: uh ...
S3: [NotConfident] I beg your pardon.
U3: I want to know about a personal computer.
S4: [Question] Do you want to know about a Sony personal computer?
U4: No, I don't.
S5: [Shrug] I cannot answer such a question.
U5: Please tell me about a Sony personal computer.
S6: [BOStory] Sony personal computer "QuarterL" is a standard IBM-compatible notebook-style personal computer.
U6: What can I do with it?
S7: [BOStory] You can use all IBM PC software.
S8: [Continuing] For example, you can use a word processor, and a spreadsheet.
U7: Can I use UNIX with it?
S9: [BOStory] If you want to use UNIX,

¹ Actually, the system accepts and speaks only Japanese. The example is a translation.
Table 2: Relation between Conversational Situations and Facial Displays

Recognition failure: NotConfident (listener comment display "Not confident")
Syntactically invalid utterance: NotConfident
Many recognition candidates with close scores: ModConfident (listener comment display "Moderately confident")
Beginning of a dialogue: Attend (listener comment display "Indication of attendance")
Introduction to a topic: BOStory (syntactic display "Beginning of a story")
Shift to another topic: EOStory (syntactic display "End of a story") and BOStory
Clarification dialogue: Question (syntactic display "Question mark")
Underline a remark: Underliner (syntactic display "Underliner")
Answer "Yes": SpeakerYes (speaker display "Yes")
Answer "No": SpeakerNo (speaker display "No")
Out of the domain: Shrug (speaker display "Facial shrug")
Answer "Yes" with emphasis: SpeakerYes and Emphasizer (syntactic display "Emphasizer")
Violation of pragmatic constraints: Incredulity (listener comment display "Incredulity")
Reply to "Thanks": ListenerYes (listener comment display "Yes") and Smile (complementary display "Smile")

S10: [Underliner] I recommend you get a workstation.
U8: Tell me about a workstation.
S11: [EOStory and BOStory] Sony workstation "NEWS" is a high-performance laptop workstation.
S12: [Continuing] Its CPU is an R3081 RISC processor, and its processing speed of 37 MIPS is the fastest in this class.
U9: Is it large?
S13: [SpeakerNo and Emphasizer] No, it isn't.
S14: [BOStory] "NEWS" is 32.4 cm in width, 36.4 cm in depth, and 6.9 cm in height.
U10: Is it light?
S15: [SpeakerYes and Emphasizer] Yes, it is.
S16: [BOStory] The weight of "NEWS" is 4.5 kg.
U11: How much?
S17: [BOStory] "NEWS" costs 700,000 yen.
U12: What does the personal computer cost?
S18: [EOStory and BOStory] "QuarterL" costs 398,000 yen.
U13: Thank you.
S19: [ListenerYes and Smile] You are welcome.
S20: [Continuing] It's my pleasure.

U2 is an example of noisy inputs.
The system could not recognize the expression, so it displayed the facial backchannel NotConfident and replied "I beg your pardon." In U3, information about the maker of the personal computer is missing, so the system enters a clarification dialogue at S4, showing the Question display. In this case, the system tried to draw the user into the domain with which the system is familiar. However, the user refused to go along with the system in utterance U4, and the system then revealed its discouragement by showing a facial shrug. In U5, the user changes the topic by asking for workstation information. The system recognizes this by comparison with the prior topic (i.e., personal computers). Therefore, in its response S11, the system displays EOStory and subsequently BOStory to indicate the shift to a different topic. The system also manages the topic structure so that it can handle anaphora and ellipsis in utterances such as U9, U10, and U11.

Experimental Results

To examine the effect of facial displays on the interaction between humans and computers, experiments were performed using the prototype system. The system was tested on 32 volunteer subjects. Two experiments were prepared. In one experiment, called F, the subjects held a conversation with the system, which used facial displays to reinforce its responses. In the other experiment, called N, the subjects held a conversation with the system, which answered using short phrases instead of facial displays. The short phrases were two- or three-word sentences that described the corresponding facial displays. For example, instead of the "Not confident" display, it simply displayed the words "I am not confident." The subjects were divided into two groups, FN and NF. As the names indicate, the subjects in the FN group were first subjected to experiment F and then N. The subjects in the NF group were first subjected to N and then F.
In both experiments, the subjects were assigned the goal of enquiring about the functions and prices of Sony's computer products. In each experiment, the subjects were requested to complete the conversation within 10 minutes. During the experiments, the number of occurrences of each facial display was counted. The conversation content was also evaluated based on how many topics a subject covered intentionally. The degree of task achievement reflects the preference for visiting a greater number of topics in the least amount of time possible. According to the frequencies of the facial displays that appeared and the conversational scores, the conversations that occurred during the experiments can be classified into two types. The first is "smooth conversation," in which the score is relatively high and the displays "Moderately confident," "Beginning of a story," and "Indication of attendance" appear most often. The second is "dull conversation," characterized by a lower score and in which the displays "Neutral" and "Not confident" appear more frequently. The results are summarized as follows. The details of the experiments were presented in another paper [Takeuchi and Nagao, 1993].

1. The first experiments of the two groups are compared. Conversation using facial displays is clearly more successful (classified as smooth conversation) than that using short phrases. We can therefore conclude that facial displays help conversation in the case of initial contact.

2. The overall results for both groups are compared. Considering that the only difference between the two groups is the order in which the experiments were conducted, we can conclude that early interaction with facial displays contributes to success in the later interaction.

3. The experiments using facial displays (F) and those using short phrases (N) are compared.
Contrary to our expectations, the result indicates that facial displays have little influence on successful conversation. This means that the learning effect, occurring over the duration of the experiments, is equal in effect to the facial displays. However, we believe that the effect of the facial displays will overtake the learning effect once the qualities of speech recognition and facial animation have been improved. The premature settings of the prototype system, and the strict restrictions imposed on the conversation, inevitably detract from the potential advantages available from systems using communicative facial displays. We believe that further elaboration of the system will greatly improve the results. The subjects were relatively well-experienced in using computers. Experiments with computer novices should also be done. Concluding Remarks and Further Work Our experiments showed that facial displays are helpful, especially upon first contact with the system. It was also shown that early interaction with facial displays improves subsequent interaction, even though the subsequent interaction does not use facial displays. These results prove quantitatively that interfaces with facial displays help to break down the mental barrier that many users have toward computing systems. As a future research direction, we plan to integrate more communication channels and modalities. Among these, prosodic information processing in speech recognition and speech synthesis is of special interest, as well as the recognition of users' gestures and facial displays. Also, further work needs to be done on the design and implementation of the coordination of multiple communication modalities.
We believe that such coordination is an emergent phenomenon arising from the tight interaction between the system and its ever-changing environments (including humans and other interactive systems) by means of situated actions and (more deliberate) cooperative actions. Precise control of multiple coordinated activities is not, therefore, directly implementable. Only constraints or relationships among perception, conversational situations, and action will be implementable. To date, conversation with computing systems has been over-regulated conversation. This has been made necessary by communication being done through limited channels, making it necessary to avoid information collision in the narrow channels. Multiple channels reduce the necessity for conversational regulation, allowing new styles of conversation to appear. A new style of conversation has smaller granularity, is highly interruptible, and invokes more spontaneous utterances. Such conversation is closer to our daily conversation with families and friends, and this will further increase familiarity with computers. Co-constructive conversation, which is less constrained by domains or tasks, is one of our future goals. We are extending our conversational model to deal with a new style of human-computer interaction called social interaction [Nagao and Takeuchi, 1994], which includes co-constructive conversation. This style of conversation features a group of individuals where, say, those individuals talk about the food they ate together in a restaurant a month ago. There are no special roles (like the chairperson) for the participants to play. They all have the same role. The conversation terminates only once all the participants are satisfied with the conclusion. We are also interested in developing interactive characters and stories as an application for interactive entertainment.
We are now building a conversational, anthropomorphic computer character that we hope will entertain us with some pleasant stories. ACKNOWLEDGMENTS The authors would like to thank Mario Tokoro and colleagues at Sony CSL for their encouragement and helpful advice. We also extend our thanks to Nicole Chovil for her useful comments on a draft of this paper, and Satoru Hayamizu, Katunobu Itou, and Steve Franks for their contributions to the implementation of the prototype system. Special thanks go to Keith Waters for granting permission to access his original animation system. REFERENCES
[Bolt, 1980] Richard A. Bolt. 1980. Put-That-There: Voice and gesture at the graphics interface. Computer Graphics, 14(3):262-270.
[Chovil, 1991] Nicole Chovil. 1991. Discourse-oriented facial displays in conversation. Research on Language and Social Interaction, 25:163-194.
[Don et al., 1991] Abbe Don, Tim Oren, and Brenda Laurel. 1991. Guides 3.0. In Proceedings of ACM CHI'91: Conference on Human Factors in Computing Systems, pages 447-448. ACM Press.
[Ekman and Friesen, 1969] Paul Ekman and Wallace V. Friesen. 1969. The repertoire of nonverbal behavior: Categories, origins, usages, and coding. Semiotica, 1:49-98.
[Ekman and Friesen, 1978] Paul Ekman and Wallace V. Friesen. 1978. Facial Action Coding System. Consulting Psychologists Press, Palo Alto, California.
[Ekman and Friesen, 1984] Paul Ekman and Wallace V. Friesen. 1984. Unmasking the Face. Consulting Psychologists Press, Palo Alto, California.
[Hasida et al., 1993] Kôiti Hasida, Katashi Nagao, and Takashi Miyata. 1993. Joint utterance: Intrasentential speaker/hearer switch as an emergent phenomenon. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (IJCAI-93), pages 1193-1199. Morgan Kaufmann Publishers, Inc.
[Itou et al., 1992] Katunobu Itou, Satoru Hayamizu, and Hozumi Tanaka. 1992.
Continuous speech recognition by context-dependent phonetic HMM and an efficient algorithm for finding N-best sentence hypotheses. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP-92), pages I.21-I.24. IEEE.
[Nagao and Takeuchi, 1994] Katashi Nagao and Akikazu Takeuchi. 1994. Social interaction: Multimodal conversation with social agents. In Proceedings of the Twelfth National Conference on Artificial Intelligence (AAAI-94). The MIT Press.
[Nagao et al., 1993] Katashi Nagao, Kôiti Hasida, and Takashi Miyata. 1993. Understanding spoken natural language with omni-directional information flow. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (IJCAI-93), pages 1268-1274. Morgan Kaufmann Publishers, Inc.
[Nagao, 1992] Katashi Nagao. 1992. A preferential constraint satisfaction technique for natural language analysis. In Proceedings of the Tenth European Conference on Artificial Intelligence (ECAI-92), pages 523-527. John Wiley & Sons.
[Nagao, 1993] Katashi Nagao. 1993. Abduction and dynamic preference in plan-based dialogue understanding. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (IJCAI-93), pages 1186-1192. Morgan Kaufmann Publishers, Inc.
[Neal et al., 1988] Jeannette G. Neal, Zuzana Dobes, Keith E. Bettinger, and Jong S. Byoun. 1988. Multi-modal references in human-computer dialogue. In Proceedings of the Seventh National Conference on Artificial Intelligence (AAAI-88), pages 819-823. Morgan Kaufmann Publishers, Inc.
[Oviatt et al., 1993] Sharon L. Oviatt, Philip R. Cohen, and Michelle Wang. 1993. Reducing linguistic variability in speech and handwriting through selection of presentation format. In Proceedings of the International Symposium on Spoken Dialogue (ISSD-93), pages 227-230. Waseda University, Tokyo, Japan.
[Shneiderman, 1983] Ben Shneiderman. 1983.
Direct manipulation: A step beyond programming languages. IEEE Computer, 16:57-69.
[Stock, 1991] Oliviero Stock. 1991. Natural language and exploration of an information space: the ALFRESCO interactive system. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence (IJCAI-91), pages 972-978. Morgan Kaufmann Publishers, Inc.
[Suchman, 1987] Lucy Suchman. 1987. Plans and Situated Actions. Cambridge University Press.
[Takeuchi and Franks, 1992] Akikazu Takeuchi and Steve Franks. 1992. A rapid face construction lab. Technical Report SCSL-TR-92-010, Sony Computer Science Laboratory Inc., Tokyo, Japan.
[Takeuchi and Nagao, 1993] Akikazu Takeuchi and Katashi Nagao. 1993. Communicative facial displays as a new conversational modality. In Proceedings of ACM/IFIP INTERCHI'93: Conference on Human Factors in Computing Systems, pages 187-193. ACM Press.
[Waters, 1987] Keith Waters. 1987. A muscle model for animating three-dimensional facial expression. Computer Graphics, 21(4):17-24.
INTERLEAVING SYNTAX AND SEMANTICS IN AN EFFICIENT BOTTOM-UP PARSER* John Dowding, Robert Moore, François Andry†, and Douglas Moran SRI International 333 Ravenswood Avenue Menlo Park, CA 94025 {dowding,bmoore,andry,moran}@ai.sri.com Abstract We describe an efficient bottom-up parser that interleaves syntactic and semantic structure building. Two techniques are presented for reducing search by reducing local ambiguity: Limited left-context constraints are used to reduce local syntactic ambiguity, and deferred sortal-constraint application is used to reduce local semantic ambiguity. We experimentally evaluate these techniques, and show dramatic reductions in both number of chart edges and total parsing time. The robust processing capabilities of the parser are demonstrated in its use in improving the accuracy of a speech recognizer. INTRODUCTION The parsing problem is typically framed as a recognition problem: Given a grammar and a word string, determine if the word string is a member of the language described by the grammar. For some applications, notably robust natural-language processing and spoken-language understanding, this is insufficient, since many utterances will not be accepted by the grammar, because of nonstandard language, inadequate grammatical coverage, or errors made in speech recognition. In these cases, it is still desirable to determine what well-formed phrases occurred in the word string, even when the entire string is not recognized. The goal of the parser described here is to construct a chart, as efficiently as possible, that contains all the syntactically well-formed semantically meaningful phrases *This research was supported by the Advanced Research Projects Agency under Contract ONR N00014-90-C-0085 with the Office of Naval Research.
The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Advanced Research Projects Agency of the U.S. Government. †Current address: CAP GEMINI Innovation, 86-90 Rue Thiers, 92513 Boulogne-Billancourt, France, andry@capsogeti.fr. that occur in the word string. The most efficient practical context-free parsers (Earley, 1970; Graham, Harrison, and Ruzzo, 1980) are left-corner parsers, which gain efficiency by their ability to constrain the search to find only phrases that might contribute to a sentence that starts at the left edge of the string being parsed. These strong left-context syntactic constraints can prevent the parser from finding some phrases that are well-formed, however. This is a problem for us that is avoided by bottom-up parsers (Kasami, 1965; Younger, 1967), but at the expense of creating many more edges, which can lead to dramatic increases in parse time. Since our goal is to find only the phrases that are semantically meaningful as well as syntactically well-formed, we also need to compute semantic constraints for every syntactic phrase we construct. This requires making finer distinctions than syntax-only parsing, which can introduce additional ambiguity, multiplying the number of distinct phrases found and increasing parse time. We describe two special techniques for speeding up bottom-up parsing by reducing local ambiguity without sacrificing completeness. One technique, "limited left-context checking," reduces local syntactic ambiguity; the other, "deferred sortal-constraint application," reduces local semantic ambiguity. Both techniques are applied to unification-based grammars. We analyze the performance of these techniques on a 194-utterance subset of the ARPA ATIS corpus (MADCOW, 1992), using a broad-coverage grammar of English.
Finally, we present results using the output of the parser to improve the accuracy of a speech recognizer in a way that takes advantage of our ability to find all syntactically well-formed semantically meaningful phrases. SYNTACTIC PARSING The parsing algorithm described here is implemented in the Gemini spoken-language understanding system (Dowding et al., 1993), which features a broad-coverage unification-based grammar of English, with independent syntactic, semantic and lexical components, in the style of the SRI Core Language Engine (Alshawi, 1992). Although we describe the syntactic parsing algorithm as though it were parsing purely context-free grammars, the ideas extend in a natural way to unification-based grammar parsing. While the chart for a context-free grammar contains edges labeled by atomic nonterminal symbols, the chart for a unification-based grammar contains edges labeled by complex feature-structure nonterminals. For efficiency, we maintain edges in the chart in only their most general form: new edges are added to the chart only if they are more general than existing edges, and we delete existing edges that are less general than the new edge. Like the Core Language Engine, we use a technique called packing to prevent local ambiguity from multiplying out into distinct edges at higher levels in the tree. Packing is implemented by collapsing phrasal analyses that share the same parent nonterminal and using only the parent for further processing. Limited Left-Context Checking The motivation behind limited left-context checking is the observation that most of the phrases found by a pure bottom-up parser using our unification grammar contain syntactic gaps not licensed by any possible gap filler. In a pure bottom-up parser, syntactic gaps must be hypothesized between every pair of words and lead to many spurious phrases being built.
Earlier work (Moore and Dowding, 1991) showed that over 80% of the edges built by a bottom-up parser using our grammar were in this class. Since these phrases are semantically incomplete, they are of no interest if they cannot be tied to a gap filler, even in the robust processing applications we are concerned with. Our approach is to use left-context checking in a limited way to restrict the construction of only this class of phrases. We partition the set of grammatical categories in our grammar into two groups, context-independent and context-dependent. Context-independent phrases will always be constructed bottom-up whenever possible. Context-dependent phrases will only be constructed if they are predicted by previously constructed phrases to the left. For our purposes, the set of context-dependent phrases are those that contain a syntactic gap with no gap filler, and the context-independent set is everything else. Note, however, that there is no constraint on the algorithm that forces this. If every grammatical category is context-dependent, then this algorithm reduces to a left-corner parser, and if every category is context-independent, then this algorithm reduces to a pure bottom-up parser. One caveat is that for the algorithm to work correctly, the set of context-dependent categories must be closed under the possible-left-corner-of relation. The question remains of how to produce predictions for only those phrases in the context-dependent set. As in Earley's algorithm, predictions are implemented as dotted grammar rules. Unlike Earley's algorithm, however, predictions are used only to license the construction of context-dependent categories. Predictions are not created for context-independent categories, and they are not used in a completion phase to find new reductions. Predictions deriving from rules that create context-dependent categories must themselves be predicted.
Thus, predictions are also divided into context-independent and context-dependent. A context-independent prediction will always be added to the chart after the first child on the right-hand side has been found. A context-dependent prediction will only be added to the chart when the first child on the right-hand side has been found, and the head of the rule has been previously predicted or is a possible left corner of a category that has been previously predicted. Tables containing the possible context-dependent and context-independent predictions are constructed at compile time. An outline of the parser algorithm is given in Figure 1. The algorithm is basically an all-paths, left-to-right, bottom-up parser, with the modifications that (1) the edge resulting from a reduction is added to the chart only if it is either a context-independent phrase or is predicted, and (2) predictions are added at each point in the input for the context-dependent phrases that are licensed at that point. Some details of the parser have been omitted, particularly those related to parsing unification-based grammars that do not arise when parsing context-free grammars. In addition, the parser maintains a skeletal copy of the chart in which edges are labeled only by the nonterminal symbols contained in their context-free backbone, which gives us more efficient indexing of the full grammar rules. Other optimizations include using one-word look-ahead before adding new predictions, and using restrictors (Shieber, 1985) to increase the generality of the predictions.
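As a concrete illustration of this scheme, the following is a minimal Python sketch of a bottom-up chart parser in which a context-dependent "gapped" category is built only where a prediction from the left licenses it. The toy grammar, the VP_GAP category, and all helper names are invented for this sketch; the real system parses unification grammars, uses packing, and precompiles its prediction tables.

```python
from collections import defaultdict

# Toy CFG. VP_GAP (a VP whose object is gapped) is context-dependent:
# it is built only where a relative pronoun to the left has predicted it.
RULES = [
    ("S",      ["NP", "VP"]),
    ("VP",     ["V", "NP"]),
    ("NP",     ["Det", "N"]),
    ("NP",     ["Det", "N", "Rel"]),
    ("Rel",    ["RelPro", "VP_GAP"]),
    ("VP_GAP", ["V"]),              # transitive verb with its object gapped
]
CD = {"VP_GAP"}                     # context-dependent categories
LEXICON = {"the": "Det", "cat": "N", "dog": "N",
           "saw": "V", "that": "RelPro"}

def parse(words):
    chart = defaultdict(set)   # (i, j) -> categories spanning words[i:j]
    preds = defaultdict(set)   # i -> context-dependent categories licensed at i
    agenda = []

    def add(cat, i, j):
        # A context-dependent edge is added only if predicted at its start.
        if cat in CD and cat not in preds[i]:
            return
        if cat not in chart[(i, j)]:
            chart[(i, j)].add(cat)
            agenda.append((cat, i, j))

    def starts_matching(rhs, end):
        # Start positions i such that rhs matches the chart over (i, end).
        starts = {end}
        for sym in reversed(rhs):
            starts = {a for (a, b), cats in list(chart.items())
                      if b in starts and sym in cats}
        return starts

    for i, w in enumerate(words):
        add(LEXICON[w], i, i + 1)
        while agenda:
            cat, a, b = agenda.pop()
            # Predictions: license a context-dependent category that follows
            # this edge in some rule (simplified: earlier daughters of the
            # rule are not checked, unlike the compiled tables in the paper).
            for parent, rhs in RULES:
                for k in range(len(rhs) - 1):
                    if rhs[k] == cat and rhs[k + 1] in CD:
                        preds[b].add(rhs[k + 1])
            # Reductions: complete any rule whose last daughter just ended at b.
            for parent, rhs in RULES:
                if rhs[-1] == cat:
                    for start in starts_matching(rhs, b):
                        add(parent, start, b)
    return chart
```

Running the sketch on "the dog that saw saw the cat" builds the licensed gapped phrase over "saw" inside the relative clause, while the same category is suppressed at the main verb, where no filler predicts it.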
Comparison with Other Parsers Table 1 compares the average number of edges, average number of predictions, and average parse times¹ (in seconds) per utterance for the limited left-context parser with those for a variant equivalent to a bottom-up parser (when all categories are context independent) and for a variant equivalent to a left-corner parser (when all categories are context dependent).

Table 1: Comparison of Syntax-Only Parsers

Variant                 Edges   Preds   Secs
Bottom-Up                1191       0   14.6
Limited Left-Context      203      25    1.0
Left-Corner               112      78    4.0

For grammar with start symbol S, phrase structure rules P, lexicon L, context-independent categories CI, and context-dependent categories CD; and for word string w = w1...wn:

    if S ∈ CD, predict(S, 0);
    add_empty_categories(0);
    for i from 1 to n do
        foreach C such that C → wi ∈ L do
            add_edge_to_chart(C, i-1, i);
            make_new_predictions(C, i-1, i);
            find_new_reductions(C, i-1, i);
        end
        add_empty_categories(i);
    end

    sub find_new_reductions(B, j, k) {
        foreach A and α such that A → αB ∈ P do
            foreach i such that i = match(α, j) do
                if A ∈ CD and predicted(A, i) or A ∈ CI
                    add_edge_to_chart(A, i, k);
                    make_new_predictions(A, i, k);
                    find_new_reductions(A, i, k);
            end
        end
    }

    sub add_empty_categories(i) {
        foreach A such that A → ε ∈ P do
            if A ∈ CD and predicted(A, i) or A ∈ CI
                add_edge_to_chart(A, i, i);
                make_new_predictions(A, i, i);
                find_new_reductions(A, i, i);
        end
    }

    sub make_new_predictions(A, i, j) {
        foreach Aβ ∈ Predictions[i] do
            predict(β, j)
        end
        foreach H → AαBβ ∈ P such that H ∈ CI and B ∈ CD and β ∈ CI* do
            predict(αB, j)
        end
        foreach H → AαBβ ∈ P such that H ∈ CD and B ∈ CD and β ∈ CI*
                and (predicted(H, i) or H left-corner-of C and predicted(C, i)) do
            predict(αB, j)
        end
    }

Figure 1: Limited Left-Context Algorithm

¹All parse times given in this paper were produced on a Sun SPARCstation 10/51, running Quintus Prolog.
The tests were performed on a set of 194 utterances chosen at random from the ARPA ATIS corpus (MADCOW, 1992), using a broad-coverage syntactic grammar of English having 84% coverage of the test set. The limited left-context parser can be thought of as a midway point between the pure bottom-up parser and the left-corner parser, constructing a subset of the phrases found by the bottom-up parser, and a superset of the phrases found by the left-corner parser. Using limited left-context to constrain categories containing syntactic gaps reduces the number of phrases by more than a factor of 5 and is almost 15 times faster than the pure bottom-up parser. The limited left-context parser builds 81% more edges than the left-corner parser, but many fewer predictions. Somewhat surprisingly, this results in the limited left-context parser being 4 times faster than the left-corner parser. We conjecture that this is due to the fact that context-independent phrases are licensed by a static table that is quicker to check against than dynamic predictions. This results in a lower average time per edge for the limited left-context parser (0.005 seconds) than the left-corner parser (0.036 seconds). Some additional penalty may also have been incurred by not using dotted grammar rules to generate reductions, as in standard left-corner parsing algorithms. There are important differences between the technique for limited prediction in this parser, and other techniques for limited prediction such as Shieber's notion of restriction (Shieber, 1985) (which we also use). In methods such as Shieber's, predictions are weakened in ways that can result in an overall gain in efficiency, but predictions nevertheless must be dynamically generated for every phrase that is built bottom-up. In our
method, no predictions need to be generated for the context-independent categories; from another point of view, context-independent categories are predicted statically, at compile time, for all points in the input, rather than dynamically at run time. Time is saved both because the predictions do not have to be generated at run time, and because the process of checking these static predictions is simpler. In previous work (Moore and Dowding, 1991), we compared limited left-context checking to some other methods for dealing with empty categories in a bottom-up parser. Standard grammar transformation techniques (Hopcroft and Ullman, 1980) can be used to eliminate empty nonterminals. This approach is useful to eliminate some edges, but still allows edges that dominate empty categories to be created. We found that using this technique was faster than pure bottom-up parsing, but still significantly slower than limited left-context checking. A further refinement is to transform the grammar to eliminate both empty and nonbranching rules. In the case of our grammar, however, this resulted in such a large increase in grammar size as to be impractical. An alternative method for making left-corner parsers more robust is to explicitly add predictions for start categories at every point in the input. If every context-independent category is a possible left corner of a start category, this approach will result in the same set of edges in the chart that the limited left-context approach builds, but at the added expense of creating many more predictions. ²Other than this, we do not believe that the bottom-up and left-corner algorithms we tested suffered from any unnecessary overheads from being implemented as special cases of our general algorithm, as we removed calls to subroutines that were unnecessary for those special cases.
Since increasing the total number of predictions increases parse time, we expect that this technique would be significantly slower than limited left-context checking, although we have not carried out any experiments on this approach. The technique of precompiling the left-daughter-of table is not unique to this parser, and has appeared in both the GHR parser (Graham, Harrison, and Ruzzo, 1980) and the Core Language Engine parser (Alshawi, 1992). INTERLEAVED SEMANTIC PROCESSING The Gemini system allows either syntax-only parsing or parsing with syntactic and semantic processing fully interleaved. In interleaved processing, whenever a syntax rule successfully creates a new syntactic phrase, corresponding semantic rules are applied to construct possible logical forms for the phrase,³ the logical forms are checked to verify that they satisfy semantic sortal constraints, and edges for interpretations that pass all constraints are added to the chart. In general, this leads to fewer syntactically distinct analyses being present in the chart (since phrases that have no interpretation satisfying sortal constraints do not produce edges), but semantic ambiguity can lead to a greater total number of semantically distinct edges. As is the case in syntax-only parsing, interleaved processing uses packing to collapse analyses for later processing. Analyses are collapsed if they have the same parent nonterminal, incorporating both syntactic and semantic features, and the same semantic sortal properties. Deferred Sortal-Constraint Application In Gemini, there are two sources of semantic ambiguity to be considered when interleaving syntax and semantics in parsing: semantic rule ambiguity and sortal ambiguity. ³As a possible optimization, we tried combining the syntactic and semantic rules at compile time. This turned out to be slower than checking all syntactic
constraints first, at least for our grammar at the time. We speculate that this is due to unifying multiple variants of the same syntactic pattern against the chart in cases where one syntactic rule has several corresponding semantic rules, and that applying syntactic rules first provides an effective filter for faster matching. For every syntactic rule of the form: Rulename: A_syn → B_syn, C_syn there are one or more semantic rules indexed on the same rule name: Rulename: (LF_A, A_sem) → (LF_B, B_sem), (LF_C, C_sem) Here, LF_A, LF_B and LF_C are logical form expressions indicating how the logical form LF_A is to be constructed from the logical forms of its children LF_B and LF_C, and A, B, and C are category expressions that are unified. The second source of semantic ambiguity is sortal ambiguity. Every atom in a logical form expression is assigned one or more semantic sorts. For example, in the logical form fragment exists((A;[flight]), [and, [flight,(A;[flight])];[prop], [to,(A;[flight]),('BOSTON';[city])];[prop]];[prop]);[prop] the atoms exists, and, flight, to and 'BOSTON' have sort assignments (sorts are printed as the right-hand side of the ';' operator). Some atoms like 'BOSTON' are assigned atomic sorts like [city], while other atoms like to are assigned more complex sorts, for instance, a function from flights and cities to propositions, represented as ([[flight],[city]],[prop]). Sorts for nonatomic logical form expressions are then constructed recursively from the subexpressions they contain. For instance, the expression [to,(A;[flight]),('BOSTON';[city])] is assigned the sort [prop] because there is a possible sort assignment for to consistent with the relation to holding between something of sort [flight] and something of sort [city].
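To make the recursive sort computation concrete, here is a minimal sketch. The sort inventory and the list-based representation are invented for illustration; Gemini's actual sort system is richer. Atomic sorts are strings, and a functor sort is a pair of argument sorts and a result sort.

```python
# Invented sort inventory for the example in the text: 'BOSTON' is a
# city, flight1 is a flight, and "to" relates a flight and a city to
# a proposition.
SORTS = {
    "'BOSTON'": ["city"],
    "flight1":  ["flight"],
    "to":       [(["flight", "city"], "prop")],
}

def possible_sorts(expr):
    """Set of sorts expr can take; expr is an atom or [functor, arg, ...]."""
    if isinstance(expr, str):
        # Atomic sorts only; functor sorts are tuples and apply to lists.
        return {s for s in SORTS.get(expr, []) if isinstance(s, str)}
    functor, *args = expr
    arg_sorts = [possible_sorts(a) for a in args]
    results = set()
    for s in SORTS.get(functor, []):
        if isinstance(s, tuple):
            wanted, result = s
            # The functor reading applies only if every argument can
            # take the sort the functor demands in that position.
            if len(wanted) == len(args) and all(
                    w in got for w, got in zip(wanted, arg_sorts)):
                results.add(result)
    return results
```

With this inventory, [to, flight1, 'BOSTON'] gets the sort prop, while [to, 'BOSTON', 'BOSTON'] gets no sort at all, mirroring how sortal constraints filter out meaningless phrases.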
If an atom within a logical form expression has more than one possible sort assignment, then the expression may be ambiguous if the other sorts in the expression do not further constrain it; if a logical form expression associated with a syntactic edge is ambiguous, then new edges are added to the chart for each of the possible semantic readings. This is very common with sort assignments for logical form functors. If all the arguments of the functor have already been found at the point when the functor is first encountered in a logical form expression, then usually only one possible sort assignment for the functor will apply, and the resulting semantic edge will be sortally unambiguous. If the functor is encountered in a phrase where one or more of its arguments have not yet been encountered, such as a verb phrase before it has been combined with its subject, edges for all possible sorts for the missing arguments will be hypothesized, creating local sort ambiguities. As can be seen in Table 2, there is a modest increase in the number of edges created per utterance due to semantic rule ambiguity, but a much more dramatic increase due to sortal ambiguity. The approach we have taken to deal with this problem is to prevent sortal ambiguity from multiplying out into distinct edges in the chart, by deferring the application of sortal constraints in cases where sortal ambiguities would be created. To implement this approach, we associate with every semantic edge a set (possibly empty) of deferred sort assignments. In order to construct this set for an edge, we create deferred sort assignments for any logical form atoms introduced by the semantic rule or lexical entry that created the edge that have more than one possible sort, given all the information we have at that edge (such as the sorts of the arguments of a functor). For a phrasal edge, we add to this any deferred sort assignments inherited from the daughters of the edge. Once the set of deferred sorts has been constructed, but before the new edge is added to the chart, the set is analyzed to determine whether it is consistent, and to remove any deferred sort assignments that have become unambiguous because of unifications performed in creating the edge. Since the deferred sort assignments can share logical variables, it is possible that even though each deferred assignment is ambiguous, there is no assignment of sorts that can satisfy all constraints at the same time, in which case the edge is rejected. The incorporation of additional information from sibling nodes can result in a sortal ambiguity becoming resolved when an edge is constructed, in which case the resulting sort assignment is applied and removed from the set of deferred sort assignments. Finally, we check whether the deferred sort assignments, although individually ambiguous, jointly have a unique solution. In this case, that assignment of values is applied, and the set of deferred sort assignments becomes the empty set.

Table 2: Results of Deferring Sortal Constraints

Type of Processing      Edges/Utt   Secs/Edge   Secs/Utt
Syntax Only                   203       0.005       0.98
Plus Semantic Rules           209       0.006       1.20
Plus Sorts                    357       0.011       4.04
With Deferred Sorts           194       0.007       1.33

The effectiveness of this technique is demonstrated by Table 2, which compares the average number of edges per utterance, average parse time per edge, and average parse time per utterance for four different modes of processing: syntax-only parsing, interleaving syntax and semantics without applying sortal constraints, interleaving syntax and semantics while immediately applying sortal constraints, and interleaving syntax and semantics while deferring ambiguous sortal constraints. We can see that the total number of semantic edges is reduced significantly, resulting in a decrease in the total syntax+semantics+sorts time by a factor of 3. Note that despite the addition of semantic rule ambiguity, the total number of edges built during interleaved syntactic and semantic processing is less than the number of edges built using syntax alone, demonstrating that we in fact succeed in using semantic information to prune the syntactic search space. IMPROVING ACCURACY IN SPEECH RECOGNITION One of our prime motivations in designing a parser to find all syntactically well-formed semantically meaningful phrases in a word string was to be able to use it for the robust application of natural-language constraints in speech recognition. Most attempts to apply natural-language constraints in speech recognition have relied on finding a complete parse for a recognition hypothesis. Many have worked by simply picking as the preferred hypothesis the string with the highest recognition score that can be completely parsed and interpreted. It seems virtually impossible, however, to create a natural-language grammar that models spontaneous spoken language accurately enough to avoid introducing more errors than it corrects, if applied in this way. A state-of-the-art natural-language grammar for a problem such as the ARPA ATIS task might fail to find a complete analysis for 10% or more of test utterances. In this case, a substantial recognition error rate would be introduced, because of the correct utterances that would be completely excluded, and it is extremely unlikely that the grammar would result in enough reduction of the recognition errors of a state-of-the-art speech recognizer on other utterances to overcome the errors it introduces. We have taken a different approach based on the observation that, even when our grammar fails to provide a complete analysis of an utterance, it is usually possible to find a small number of semantically meaningful phrases that span the utterance.
We therefore use our parser to find the minimal number of semantically meaningful phrases needed to span a recognition hypothesis and to compute a natural-language score for the hypothesis based on this number. Having a parser that finds all syntactically well-formed semantically meaningful phrases is an obvious prerequisite to taking such an approach. We have applied this idea in a system combining Gemini with SRI's DECIPHER™ speech recognizer (Murveit et al., 1993), which was tested in the December 1993 ARPA ATIS benchmark evaluation (Pallet et al., 1994). The following example from the evaluation test set illustrates the basic approach:

    hypothesis: [list flights][of fare code][a][q]
    reference:  [list flights][of fare code of q]

These two word strings represent the recognizer's first hypothesis for the utterance and the reference transcription of the utterance, each bracketed according to the best analysis that Gemini was able to find as a sequence of semantically meaningful phrases. Because of a missing sortal possibility, Gemini did not allow the preposition of to relate a noun phrase headed by flights to a noun phrase headed by fare code, so it was not possible to find a single complete analysis for either word string. Gemini was, however, able to find a single phrase spanning of fare code of q, but required three phrases to span of fare code a q, so it still strongly preferred the reference transcription of the utterance over the recognizer's first hypothesis. The integration of Gemini and DECIPHER was implemented by combining a Gemini score with the recognition score for each of the recognizer's N-top hypotheses and selecting the hypothesis with the best overall score.
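In outline, this hypothesis selection can be sketched as follows. The particular functional form, the parameter values, the field names, and the assumption that higher recognition scores are better are all illustrative; they are not SRI's actual formula.

```python
def pick_hypothesis(nbest, scale=1.0, sentence_bonus=1.0, rule_penalty=0.5):
    """Select the best hypothesis from an N-best list by combining the
    recognizer's score with a natural-language score derived from the
    number of phrases needed to span the hypothesis. All parameter
    values here are made up for illustration."""
    def combined_score(hyp):
        nl_score = (-hyp["num_phrases"]                     # fewer fragments preferred
                    + (sentence_bonus if hyp["is_sentence"] else 0.0)
                    - rule_penalty * hyp["dispreferred_rules"])
        return hyp["rec_score"] + scale * nl_score          # scaled and added
    return max(nbest, key=combined_score)
```

With such a scheme, a hypothesis that parses as a single sentence can overtake a slightly better-scoring recognition hypothesis that needs several fragments to cover, which is exactly the effect described in the text.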
The Gemini score was computed as a somewhat ad hoc combination of the number of phrases needed to cover the hypothesis, a bonus if the hypothesis could be analyzed as a single sentence (as opposed to any other single grammatical phrase), and penalties for using certain "dispreferred" grammar rules. This score was then scaled by an empirically optimized parameter and added to the recognition score. We carried out a detailed analysis of the preliminary results of the December 1993 ARPA ATIS benchmark evaluation to determine the effect of incorporating natural-language information into recognition in this way. Overall, the word error rate improved from 6.0% to 5.7% (a 5.0% improvement), and the utterance error rate improved from 29.6% to 27.8% (a 6.1% improvement). These improvements, while modest, were measured to be statistically significant at the 95% confidence level according to the matched-pair sentence segment (word error) test and the McNemar (sentence error) test. In more detail, the first hypothesis of the recognizer was correct for 704 of the 995 utterances for which the natural-language grammar was used. Of these, the natural-language grammar failed to find a complete analysis for 62. The combined system nevertheless chose the correct hypothesis in 57 of these cases; thus, only 5 correct hypotheses were lost due to lack of grammar coverage. On the other hand, use of the natural-language grammar resulted in correcting 22 incorrect recognizer first hypotheses. Moreover, 4 of these were not completely analyzable by the natural-language grammar, but were chosen because they received a better analysis as a sequence of phrases than the first hypothesis of the recognizer. We also analyzed which of the natural-language factors incorporated in the Gemini score were responsible for the corrections and errors relative to the performance of the recognizer alone.
For the 22 utterances that were corrected, in 18 cases the correction was due to the preference for fewer fragments, in 3 cases the correction was due to the preference for complete sentences, and in only one case did the correction result from a grammar rule preference. Of the 5 utterance errors introduced by Gemini, 3 turned out to be cases in which the reference transcription was incorrect and the hypothesis selected by Gemini was actually correct, one was due to inadequate grammatical coverage resulting in a larger number of fragments for the correct hypothesis, and one was due to a grammatical rule preference. We concluded from this that the preference for fewer fragments is clearly useful and the preference for complete sentences seems to be somewhat useful, but there is no evidence that the current system of rule preferences is of any benefit in speech recognition. A more systematic approach to rule preferences, such as one based on a statistical grammar, may be of more benefit, however.

CONCLUSIONS

We have described an efficient parser that operates bottom-up to produce syntactic and semantic structures fully interleaved. Two techniques combine to reduce the total ambiguity represented in the chart. Limited left-context constraints reduce local syntactic ambiguity, and deferred sortal-constraint application reduces local semantic ambiguity. We have experimentally evaluated these techniques, and shown order-of-magnitude reductions in both the number of chart edges and total parsing time. The robust processing capabilities of the parser have also been shown to provide a small but significant increase in the accuracy of a speech recognizer.

⁴ The value of N was variable, but sufficiently large (typically hundreds) that a limit on N was never a factor in which hypothesis was chosen.
ACKNOWLEDGMENTS

We would like to thank Mark Gawron for helpful comments on earlier drafts, and the SRI speech group, particularly Harry Bratt, for help performing the speech recognition experiments.

REFERENCES

Alshawi, H. (ed.). 1992. The Core Language Engine. MIT Press, Cambridge, Massachusetts.

Dowding, J., Gawron, J., Appelt, D., Bear, J., Cherny, L., Moore, R., and Moran, D. 1993. GEMINI: A Natural Language Understanding System for Spoken-Language Understanding, in 31st Annual Meeting of the Association for Computational Linguistics, Columbus, Ohio (June), pp. 54-61.

Earley, J. 1970. An Efficient Context-Free Parsing Algorithm, Communications of the ACM, 13, 2 (Feb.), pp. 94-102.

Graham, S., Harrison, M., and Ruzzo, W. 1980. An Improved Context-Free Recognizer, ACM Transactions on Programming Languages and Systems, 2, 3 (July), pp. 415-462.

Hopcroft, J. and Ullman, J. 1980. Introduction to Automata Theory, Languages, and Computation, Addison-Wesley Publishing, Reading, Massachusetts.

Kasami, T. 1965. An Efficient Recognition and Syntax Algorithm for Context-Free Languages, Scientific Report AFCRL-65-758, Air Force Cambridge Research Laboratory, Bedford, Massachusetts.

MADCOW 1992. Multi-site Data Collection for a Spoken Language Corpus, in Proceedings of the DARPA Speech and Natural Language Workshop, February 23-26, pp. 7-14.

Moore, R., and Dowding, J. 1991. Efficient Bottom-Up Parsing, in Proceedings of the DARPA Speech and Natural Language Workshop, February 19-22, pp. 200-203.

Murveit, H., Butzberger, J., Digalakis, V., and Weintraub, M. 1993. Large-Vocabulary Dictation Using SRI's DECIPHER™ Speech Recognition System: Progressive-Search Techniques, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Minneapolis, Minnesota (April), pp. II-319-II-322.

Pallet, D., et al. 1994.
1993 Benchmark Tests for the ARPA Spoken Language Program, in Proceedings of the ARPA Workshop on Human Language Technology, March 8-11.

Shieber, S. 1985. Using Restriction to Extend Parsing Algorithms for Complex-Feature-Based Formalisms, in 23rd Annual Meeting of the Association for Computational Linguistics, Chicago, Illinois (July), pp. 145-152.

Younger, D. 1967. Recognition and Parsing of Context-Free Languages in Time n³, Information and Control, 10, 2, pp. 189-208.
AN OPTIMAL TABULAR PARSING ALGORITHM

Mark-Jan Nederhof *
University of Nijmegen, Department of Computer Science
Toernooiveld, 6525 ED Nijmegen, The Netherlands
markjan@cs.kun.nl

Abstract

In this paper we relate a number of parsing algorithms which have been developed in very different areas of parsing theory, and which include deterministic algorithms, tabular algorithms, and a parallel algorithm. We show that these algorithms are based on the same underlying ideas.

By relating existing ideas, we hope to provide an opportunity to improve some algorithms based on features of others. A second purpose of this paper is to answer a question which has come up in the area of tabular parsing, namely how to obtain a parsing algorithm with the property that the table will contain as few entries as possible, but without the possibility that two entries represent the same subderivation.

Introduction

Left-corner (LC) parsing is a parsing strategy which has been used in different guises in various areas of computer science. Deterministic LC parsing with k symbols of lookahead can handle the class of LC(k) grammars. Since LC parsing is a very simple parsing technique and at the same time is able to deal with left recursion, it is often used as an alternative to top-down (TD) parsing, which cannot handle left recursion and is generally less efficient.

Nondeterministic LC parsing is the foundation of a very efficient parsing algorithm [7], related to Tomita's algorithm and Earley's algorithm. It has one disadvantage however, which becomes noticeable when the grammar contains many rules whose right-hand sides begin with the same few grammar symbols, e.g. A → αβ₁ | αβ₂ | ..., where α is not the empty string. After an LC parser has recognized the first symbol X of such an α, it will as next step predict all aforementioned rules. This amounts to much nondeterminism, which is detrimental both to the time-complexity and the space-complexity.
*Supported by the Dutch Organisation for Scientific Research (NWO), under grant 00-62-518.

A first attempt to solve this problem is to use predictive LR (PLR) parsing. PLR parsing allows simultaneous processing of a common prefix α, provided that the left-hand sides of the rules are the same. However, in case we have e.g. the rules A → αβ₁ and B → αβ₂, where again α is not the empty string but now A ≠ B, then PLR parsing will not improve the efficiency. We therefore go one step further and discuss extended LR (ELR) and common-prefix (CP) parsing, which are algorithms capable of simultaneous processing of all common prefixes. ELR and CP parsing are the foundation of tabular parsing algorithms and a parallel parsing algorithm from the existing literature, but they have not been described in their own right.

To the best of the author's knowledge, the various parsing algorithms mentioned above have not been discussed together in the existing literature. The main purpose of this paper is to make explicit the connections between these algorithms.

A second purpose of this paper is to show that CP and ELR parsing are obvious solutions to a problem of tabular parsing which can be described as follows. For each parsing algorithm working on a stack there is a realisation using a parse table, where the parse table allows sharing of computation between different search paths. For example, Tomita's algorithm [18] can be seen as a tabular realisation of nondeterministic LR parsing. At this point we use the term state to indicate the symbols occurring on the stack of the original algorithm, which also occur as entries in the parse table of its tabular realisation.

In general, powerful algorithms working on a stack lead to efficient tabular parsing algorithms, provided the grammar can be handled almost deterministically.
In case the stack algorithm is very nondeterministic for a certain grammar, however, sophistication which increases the number of states may lead to an increasing number of entries in the parse table of the tabular realization. This can be informally explained by the fact that each state represents the computation of a number of subderivations. If the number of states is increased, then it is inevitable that at some point some states represent an overlapping collection of subderivations, which may lead to work being repeated during parsing. Furthermore, the parse forest (a compact representation of all parse trees) which is output by a tabular algorithm may in this case not be optimally dense.

We conclude that we have a tradeoff between the case that the grammar allows almost deterministic parsing and the case that the stack algorithm is very nondeterministic for a certain grammar. In the former case, sophistication leads to fewer entries in the table, and in the latter case, sophistication leads to more entries, provided this sophistication is realised by an increase in the number of states. This is corroborated by empirical data from [1, 4], which deal with tabular LR parsing.

As we will explain, CP and ELR parsing are more deterministic than most other parsing algorithms for many grammars, but their tabular realizations can never compute the same subderivation twice. This represents an optimum in a range of possible parsing algorithms.

This paper is organized as follows. First we discuss nondeterministic left-corner parsing, and demonstrate how common prefixes in a grammar may be a source of bad performance for this technique. Then, a multitude of parsing techniques which exhibit better treatment of common prefixes is discussed. These techniques, including nondeterministic PLR, ELR, and CP parsing, have their origins in the theory of deterministic, parallel, and tabular parsing.
Subsequently, the application to parallel and tabular parsing is investigated more closely. Further, we briefly describe how rules with empty right-hand sides complicate the parsing process. The ideas described in this paper can be generalized to head-driven parsing, as argued in [9].

We will take some liberty in describing algorithms from the existing literature, since using the original descriptions would blur the similarities of the algorithms to one another. In particular, we will not treat the use of lookahead, and we will consider all algorithms working on a stack to be nondeterministic. We will only describe recognition algorithms. Each of the algorithms can however be easily extended to yield parse trees as a side-effect of recognition.

The notation used in the sequel is for the most part standard and is summarised below.

A context-free grammar G = (T, N, P, S) consists of two finite disjoint sets N and T of nonterminals and terminals, respectively, a start symbol S ∈ N, and a finite set of rules P. Every rule has the form A → α, where the left-hand side (lhs) A is an element from N and the right-hand side (rhs) α is an element from V*, where V denotes (N ∪ T). P can also be seen as a relation on N × V*.

We use symbols A, B, C, ... to range over N, symbols a, b, c, ... to range over T, symbols X, Y, Z to range over V, symbols α, β, γ, ... to range over V*, and v, w, x, ... to range over T*. We let ε denote the empty string. The notation of rules A → α₁, A → α₂, ... with the same lhs is often simplified to A → α₁ | α₂ | ... A rule of the form A → ε is called an epsilon rule. We assume grammars do not have epsilon rules unless stated otherwise.

The relation P is extended to a relation ⇒ on V* × V* as usual. The reflexive and transitive closure of ⇒ is denoted by ⇒*. We define: B ∠ A if and only if A → Bα for some α. The reflexive and transitive closure of ∠ is denoted by ∠*, and is called the left-corner relation.
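The left-corner relation ∠ (B ∠ A if and only if there is a rule A → Bα) and its reflexive and transitive closure ∠* can be computed by a straightforward fixpoint. The following sketch assumes rules are given as (lhs, rhs) pairs with the rhs a tuple of symbols; it is an illustration, not code from the paper.

```python
def left_corner_star(rules, symbols):
    """Compute the reflexive and transitive closure of the left-corner
    relation: B is a left corner of A iff some rule reads A -> B alpha."""
    rel = {(rhs[0], lhs) for lhs, rhs in rules if rhs}   # the relation itself
    rel |= {(x, x) for x in symbols}                     # reflexive closure
    changed = True
    while changed:                                       # transitive closure
        changed = False
        for (a, b) in list(rel):
            for (c, d) in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d))
                    changed = True
    return rel
```

On the grammar of Example 1 below, this yields, among other pairs, F ∠* E via F ∠ T ∠ E.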
We say two rules A → α₁ and B → α₂ have a common prefix β if α₁ = βγ₁ and α₂ = βγ₂, for some γ₁ and γ₂, where β ≠ ε.

A recognition algorithm can be specified by means of a push-down automaton A = (T, Alph, Init, ⊢, Fin), which manipulates configurations of the form (Γ, v), where Γ ∈ Alph* is the stack, constructed from left to right, and v ∈ T* is the remaining input. The initial configuration is (Init, w), where Init ∈ Alph is a distinguished stack symbol, and w is the input. The steps of an automaton are specified by means of the relation ⊢. Thus, (Γ, v) ⊢ (Γ′, v′) denotes that (Γ′, v′) is obtainable from (Γ, v) by one step of the automaton. The reflexive and transitive closure of ⊢ is denoted by ⊢*. The input w is accepted if (Init, w) ⊢* (Fin, ε), where Fin ∈ Alph is a distinguished stack symbol.

LC parsing

For the definition of left-corner (LC) recognition [7] we need stack symbols (items) of the form [A → α • β], where A → αβ is a rule, and α ≠ ε. (Remember that we do not allow epsilon rules.) The informal meaning of an item is "The part before the dot has just been recognized, the first symbol after the dot is to be recognized next". For technical reasons we also need the items [S′ → • S] and [S′ → S •], where S′ is a fresh symbol. Formally:

I^LC = {[A → α • β] | A → αβ ∈ P† ∧ (α ≠ ε ∨ A = S′)}

where P† represents the augmented set of rules, consisting of the rules in P plus the extra rule S′ → S.

Algorithm 1 (Left-corner) A^LC = (T, I^LC, Init, ⊢, Fin), Init = [S′ → • S], Fin = [S′ → S •]. Transitions are allowed according to the following clauses.

1. (Γ[B → β • Cγ], av) ⊢ (Γ[B → β • Cγ][A → a • α], v)
   where there is A → aα ∈ P† such that A ∠* C

2. (Γ[A → α • aβ], av) ⊢ (Γ[A → αa • β], v)

3. (Γ[B → β • Cγ][A → α •], v) ⊢ (Γ[B → β • Cγ][D → A • δ], v)
   where there is D → Aδ ∈ P† such that D ∠* C

4.
(Γ[B → β • Aγ][A → α •], v) ⊢ (Γ[B → βA • γ], v)

The conditions using the left-corner relation ∠* in the first and third clauses together form a feature which is called top-down (TD) filtering. TD filtering makes sure that subderivations that are being computed bottom-up may eventually grow into subderivations with the required root. TD filtering is not necessary for a correct algorithm, but it reduces nondeterminism, and guarantees the correct-prefix property, which means that in case of incorrect input the parser does not read past the first incorrect character.

Example 1 Consider the grammar with the following rules:

    E → E + T | T ↑ E | T
    T → T * F | T ** F | F
    F → a

It is easy to see that E ∠ E, T ∠ E, T ∠ T, F ∠ T. The relation ∠* contains ∠, but from the reflexive closure it also contains F ∠* F and from the transitive closure it also contains F ∠* E. The recognition of a * a is realised by:

       [E′ → • E]                          a * a
    1  [E′ → • E][F → a •]                 * a
    2  [E′ → • E][T → F •]                 * a
    3  [E′ → • E][T → T • * F]             * a
    4  [E′ → • E][T → T * • F]             a
    5  [E′ → • E][T → T * • F][F → a •]
    6  [E′ → • E][T → T * F •]
    7  [E′ → • E][E → T •]
    8  [E′ → E •]

Note that since the automaton does not use any lookahead, Step 3 may also have replaced [T → F •] by any other item besides [T → T • * F] whose rhs starts with T and whose lhs satisfies the condition of top-down filtering with regard to E, i.e. by [T → T • ** F], [E → T • ↑ E], or [E → T •]. □

LC parsing with k symbols of lookahead can handle deterministically the so-called LC(k) grammars. This class of grammars is formalized in [13].¹ How LC parsing can be improved to handle common suffixes efficiently is discussed in [6]; in this paper we restrict our attention to common prefixes.

PLR, ELR, and CP parsing

In this section we investigate a number of algorithms which exhibit a better treatment of common prefixes.
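Before turning to these algorithms, Algorithm 1 itself can be made concrete as an exhaustive search over configurations. The sketch below hard-codes the grammar of Example 1, writing '^' for the up-arrow and E' for the fresh start symbol; it explores every nondeterministic choice, so it is illustrative rather than efficient.

```python
# Grammar of Example 1; '^' stands for the up-arrow operator.
RULES = [("E", ("E", "+", "T")), ("E", ("T", "^", "E")), ("E", ("T",)),
         ("T", ("T", "*", "F")), ("T", ("T", "*", "*", "F")),
         ("T", ("F",)), ("F", ("a",))]
AUG = RULES + [("E'", ("E",))]          # augmented rule E' -> E

def lc_star(rules):
    """Reflexive-transitive closure of the left-corner relation."""
    syms = {l for l, _ in rules} | {s for _, r in rules for s in r}
    rel = {(r[0], l) for l, r in rules} | {(s, s) for s in syms}
    changed = True
    while changed:
        changed = False
        for a, b in list(rel):
            for c, d in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d)); changed = True
    return rel

STAR = lc_star(AUG)

def lc_recognize(tokens):
    """Nondeterministic LC recognition (Algorithm 1) by exhaustive search.
    A configuration is (stack of items, remaining input); an item
    [A -> alpha . beta] is represented as the tuple (A, alpha, beta)."""
    init = ((("E'", (), ("E",)),), tuple(tokens))
    agenda, seen = [init], {init}
    while agenda:
        stack, rest = agenda.pop()
        if stack == (("E'", ("E",), ()),) and not rest:
            return True
        succ = []
        A, alpha, beta = stack[-1]
        if beta and rest:
            # Clause 2: shift the next input symbol over the dot.
            if beta[0] == rest[0]:
                succ.append((stack[:-1] + ((A, alpha + (beta[0],), beta[1:]),),
                             rest[1:]))
            # Clause 1: predict a rule whose rhs starts with the next
            # symbol, subject to top-down filtering A' lc* C.
            for Ap, rhs in AUG:
                if rhs[0] == rest[0] and (Ap, beta[0]) in STAR:
                    succ.append((stack + ((Ap, (rhs[0],), rhs[1:]),), rest[1:]))
        if not beta and len(stack) >= 2:
            B, balpha, bbeta = stack[-2]
            if bbeta:
                C = bbeta[0]
                # Clause 3: the completed A becomes the left corner of a
                # new rule D -> A delta, subject to top-down filtering.
                for D, rhs in AUG:
                    if rhs[0] == A and (D, C) in STAR:
                        succ.append((stack[:-1] + ((D, (A,), rhs[1:]),), rest))
                # Clause 4: the completed A advances the dot below.
                if C == A:
                    succ.append((stack[:-2] + ((B, balpha + (A,), bbeta[1:]),),
                                 rest))
        for s in succ:
            if s not in seen:
                seen.add(s); agenda.append(s)
    return False
```

Because clause 1 only pushes while consuming input, the stack depth is bounded by the input length, so the search space is finite and the function terminates.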
Predictive LR parsing

Predictive LR (PLR) parsing with k symbols of lookahead was introduced in [17] as an algorithm which yields efficient parsers for a subset of the LR(k) grammars [16] and a superset of the LC(k) grammars. How deterministic PLR parsing succeeds in handling a larger class of grammars (the PLR(k) grammars) than the LC(k) grammars can be explained by identifying PLR parsing for some grammar G with LC parsing for some grammar G′ which results after applying a transformation called left-factoring.

¹ In [17] a different definition of the LC(k) grammars may be found, which is not completely equivalent.

Left-factoring consists of replacing two or more rules A → αβ₁ | αβ₂ | ... with a common prefix α by the rules A → αA′ and A′ → β₁ | β₂ | ..., where A′ is a fresh nonterminal. The effect on LC parsing is that a choice between rules is postponed until after all symbols of α are completely recognized. Investigation of the next k symbols of the remaining input may then allow a choice between the rules to be made deterministically.

The PLR algorithm is formalised in [17] by transforming a PLR(k) grammar into an LL(k) grammar and then assuming the standard realisation of LL(k) parsing. When we consider nondeterministic top-down parsing instead of LL(k) parsing, then we obtain the new formulation of nondeterministic PLR(0) parsing below.

We first need to define another kind of item, viz. of the form [A → α] such that there is at least one rule of the form A → αβ for some β. Formally:

I^PLR = {[A → α] | A → αβ ∈ P† ∧ (α ≠ ε ∨ A = S′)}

Informally, an item [A → α] ∈ I^PLR represents one or more items [A → α • β] ∈ I^LC.

Algorithm 2 (Predictive LR) A^PLR = (T, I^PLR, Init, ⊢, Fin), Init = [S′ → ], Fin = [S′ → S], and ⊢ defined by:

1. (Γ[B → β], av) ⊢ (Γ[B → β][A → a], v)
   where there are A → aα, B → βCγ ∈ P† such that A ∠* C

2. (Γ[A → α], av) ⊢ (Γ[A → αa], v)
   where there is A → αaβ ∈ P†

3.
(Γ[B → β][A → α], v) ⊢ (Γ[B → β][D → A], v)
   where A → α ∈ P† and where there are D → Aδ, B → βCγ ∈ P† such that D ∠* C

4. (Γ[B → β][A → α], v) ⊢ (Γ[B → βA], v)
   where A → α ∈ P† and where there is B → βAγ ∈ P†

Example 2 Consider the grammar from Example 1. Using Predictive LR, recognition of a * a is realised by:

    [E′ → ]             a * a
    [E′ → ][F → a]      * a
    [E′ → ][T → F]      * a
    [E′ → ][T → T]      * a
    [E′ → ][T → T *]    a
       ⋮
    [E′ → E]

Comparing these configurations with those reached by the LC recognizer, we see that here after Step 3 the stack element [T → T] represents both [T → T • * F] and [T → T • ** F], so that nondeterminism is reduced. Still some nondeterminism remains, since Step 3 could also have replaced [T → F] by [E → T], which represents both [E → T • ↑ E] and [E → T •]. □

Extended LR parsing

An extended context-free grammar has right-hand sides consisting of arbitrary regular expressions over V. This requires an LR parser for an extended grammar (an ELR parser) to behave differently from normal LR parsers.

The behaviour of a normal LR parser upon a reduction with some rule A → α is very simple: it pops |α| states from the stack, revealing, say, state Q; it then pushes state goto(Q, A). (We identify a state with its corresponding set of items.)

For extended grammars the behaviour upon a reduction cannot be realised in this way, since the regular expression of which the rhs is composed may describe strings of various lengths, so that it is unknown how many states need to be popped. In [11] this problem is solved by forcing the parser to decide at each call goto(Q, X) whether a) X is one more symbol of an item in Q of which some symbols have already been recognized, or whether b) X is the first symbol of an item which has been introduced in Q by means of the closure function. In the second case, a state which is a variant of goto(Q, X) is pushed on top of state Q as usual.
In the first case, however, state Q on top of the stack is replaced by a variant of goto(Q, X). This is safe since we will never need to return to Q if after some more steps we succeed in recognizing some rule corresponding with one of the items in Q. A consequence of the action in the first case is that upon reduction we need to pop only one state off the stack.

Further work in this area is reported in [5], which treats nondeterministic ELR parsing and therefore does not regard it as an obstacle if a choice between cases a) and b) cannot be uniquely made.

We are not concerned with extended context-free grammars in this paper. However, a very interesting algorithm results from ELR parsing if we restrict its application to ordinary context-free grammars. (We will maintain the name "extended LR" to stress the origin of the algorithm.) This results in the new nondeterministic ELR(0) algorithm that we describe below, derived from the formulation of ELR parsing in [5].

First, we define a set of items as

I = {[A → α • β] | A → αβ ∈ P†}

Note that I^LC ⊂ I. If we define for each Q ⊆ I:

closure(Q) = Q ∪ {[A → • α] | [B → β • Cγ] ∈ Q ∧ A ∠* C}

then the goto function for LR(0) parsing is defined by

goto(Q, X) = closure({[A → αX • β] | [A → α • Xβ] ∈ Q})

For ELR parsing however, we need two goto functions, goto₁ and goto₂, one for kernel items (i.e. those in I^LC) and one for nonkernel items (the others). These are defined by

goto₁(Q, X) = closure({[A → αX • β] | [A → α • Xβ] ∈ Q ∧ (α ≠ ε ∨ A = S′)})
goto₂(Q, X) = closure({[A → X • β] | [A → • Xβ] ∈ Q ∧ A ≠ S′})

At each shift (where X is some terminal) and each reduce with some rule A → α (where X is A) we may nondeterministically apply goto₁, which corresponds with case a), or goto₂, which corresponds with case b). Of course, one or both may not be defined on Q and X, because gotoᵢ(Q, X) may be ∅, for i ∈ {1, 2}.
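These definitions translate directly into code. The sketch below represents an item [A → α • β] as a tuple (A, α, β), reuses a precomputed ∠* relation, and omits the special treatment of S′; it is an illustration of the definitions, not an implementation taken from the paper.

```python
def lc_star(rules):
    """Reflexive-transitive closure of the left-corner relation."""
    syms = {l for l, _ in rules} | {s for _, r in rules for s in r}
    rel = {(r[0], l) for l, r in rules} | {(s, s) for s in syms}
    changed = True
    while changed:
        changed = False
        for a, b in list(rel):
            for c, d in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d)); changed = True
    return rel

def closure(Q, rules, star):
    """closure(Q) = Q plus every [A -> . alpha] such that some
    [B -> beta . C gamma] is in Q and A lc* C."""
    result = set(Q)
    changed = True
    while changed:
        changed = False
        for (_lhs, _done, rest) in list(result):
            if rest:
                for A, alpha in rules:
                    if (A, rest[0]) in star:
                        item = (A, (), tuple(alpha))
                        if item not in result:
                            result.add(item); changed = True
    return result

def goto1(Q, X, rules, star):
    """Case a): X continues a *kernel* item (dot not at the far left)."""
    kernel = {(A, done + (X,), rest[1:]) for (A, done, rest) in Q
              if rest and rest[0] == X and done}
    return closure(kernel, rules, star)

def goto2(Q, X, rules, star):
    """Case b): X starts a *nonkernel* item introduced by closure."""
    kernel = {(A, (X,), rest[1:]) for (A, done, rest) in Q
              if rest and rest[0] == X and not done}
    return closure(kernel, rules, star)
```

Nondeterministic ELR(0) parsing then amounts to choosing between goto1 and goto2 at each shift or reduce whenever both yield a non-empty state.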
Now remark that when using goto₁ and goto₂, each reachable set of items contains only items of the form A → α • β, for some fixed string α, plus some nonkernel items. We will ignore the nonkernel items since they can be derived from the kernel items by means of the closure function. This suggests representing each set of items by a new kind of item of the form [{A₁, A₂, ..., Aₙ} → α], which represents all items A → α • β for some β and A ∈ {A₁, A₂, ..., Aₙ}. Formally:

I^ELR = {[Δ → α] | ∅ ⊂ Δ ⊆ {A | A → αβ ∈ P†} ∧ (α ≠ ε ∨ Δ = {S′})}

where we use the symbol Δ to range over sets of nonterminals.

Algorithm 3 (Extended LR) A^ELR = (T, I^ELR, Init, ⊢, Fin), Init = [{S′} → ], Fin = [{S′} → S], and ⊢ defined by:

1. (Γ[Δ → β], av) ⊢ (Γ[Δ → β][Δ′ → a], v)
   where Δ′ = {A | ∃A → aα, B → βCγ ∈ P† [B ∈ Δ ∧ A ∠* C]} is non-empty

2. (Γ[Δ → α], av) ⊢ (Γ[Δ′ → αa], v)
   where Δ′ = {A ∈ Δ | A → αaβ ∈ P†} is non-empty

3. (Γ[Δ → β][Δ′ → α], v) ⊢ (Γ[Δ → β][Δ″ → A], v)
   where there is A → α ∈ P† with A ∈ Δ′, and Δ″ = {D | ∃D → Aδ, B → βCγ ∈ P† [B ∈ Δ ∧ D ∠* C]} is non-empty

4. (Γ[Δ → β][Δ′ → α], v) ⊢ (Γ[Δ″ → βA], v)
   where there is A → α ∈ P† with A ∈ Δ′, and Δ″ = {B ∈ Δ | B → βAγ ∈ P†} is non-empty

Note that Clauses 1 and 3 correspond with goto₂ and that Clauses 2 and 4 correspond with goto₁.

Example 3 Consider again the grammar from Example 1. Using the ELR algorithm, recognition of a * a is realised by:

    [{E′} → ]                   a * a
    [{E′} → ][{F} → a]          * a
    [{E′} → ][{T} → F]          * a
    [{E′} → ][{T, E} → T]       * a
    [{E′} → ][{T} → T *]        a
       ⋮
    [{E′} → E]
In Clause 1 we then have a simplified definition of A ~, viz. A ~ = {A [ 3A --* as, B ---* tiC'7 E Pt[a l* C]}, and in the same way we have in Clause 3 the new definition A" = {D [ 3D ~ AS, B --~ ~C~( E Pt[D [* C]}. Pseudo ELR parsing can be more easily realised than full ELR parsing, but the correct-prefix property can no longer be guaranteed. Pseudo ELR parsing is the foundation of a tabular algorithm in [20]. Common-prefix parsing One of the more complicated aspects of the ELR algo- rithm is the treatment of the sets of nonterminals in the left-hand sides of items. A drastically simplified algorithm is the basis of a tabular algorithm in [21]. Since in [21] the algorithm itself is not described but only its tabular realisation, 2 we take the liberty of giv- ing this algorithm our own name: common-prefix (CP) parsing, since it treats all rules with a common prefix simultaneously, a The simplification consists of omitting the sets of nonterminals in the left-hand sides of items: I Cp = {[--* s] [ A ~ s/3 e pt} Algorithm 4 (Common-prefix) A t;r = (T, I cP, Init, ~-, Fin), Init = [--*], Fin = [---+ S], and I- defined by: i. (F[---* /3], av) ~ (F[---* /3][4_. a], v) where there are A --~ as, B --~/3C'7 E pt such that AL*C 2. (r[-~ a], av) ~ (r[-~ sa], v) where there is A --~ sa~3 E pt 3. (F[--~/3][4_. s], v) F- (F[--~ fl][--. A], v) where there are A --* a, D -* A6, B --* /3C'7 E pt such that D/* C 4. (V[-~/3][4_, s], v) F- (V[--*/3A], v) where there are A --* s, B --~/3A'7 E pt The simplification which leads to the CP algorithm inevitably causes the correct-prefix property to be lost. Example 4 Consider again the grammar from Exam- ple 1. It is clear that a÷a T ais not acorrect string according to this grammar. The CP algorithm may go through the following sequence of configurations: 2An attempt has been made in [19] but this paper does not describe the algorithm in its full generality. 
³ The original algorithm in [21] applies an optimization concerning unit rules, irrelevant to our discussion.

        [→]                        a + a ↑ a
    1   [→][→ a]                   + a ↑ a
    2   [→][→ F]                   + a ↑ a
    3   [→][→ T]                   + a ↑ a
    4   [→][→ E]                   + a ↑ a
    5   [→][→ E +]                 a ↑ a
    6   [→][→ E +][→ a]            ↑ a
    7   [→][→ E +][→ F]            ↑ a
    8   [→][→ E +][→ T]            ↑ a
    9   [→][→ E +][→ T ↑]          a
    10  [→][→ E +][→ T ↑][→ a]

We see that in Step 9 the first incorrect symbol ↑ is read, but recognition then continues. Eventually, the recognition process is blocked in some unsuccessful configuration, which is guaranteed to happen for any incorrect input.⁴ In general however, after reading the first incorrect symbol, the algorithm may perform an unbounded number of steps before it halts. (Imagine what happens for input of the form a + a ↑ a + a + a + ... + a.) □

Tabular parsing

Nondeterministic push-down automata can be realised efficiently using parse tables [1]. A parse table consists of sets Ti,j of items, for 0 ≤ i ≤ j ≤ n, where a1 ... an represents the input. The idea is that an item is only stored in a set Ti,j if the item represents recognition of the part of the input ai+1 ... aj.

We will first discuss a tabular form of CP parsing, since this is the most simple parsing technique discussed above. We will then move on to the more difficult ELR technique. Tabular PLR parsing is fairly straightforward and will not be discussed in this paper.

Tabular CP parsing

CP parsing has the following tabular realization:

Algorithm 5 (Tabular common-prefix) Sets Ti,j of the table are to be subsets of I^CP. Start with an empty table. Add [→] to T0,0. Perform one of the following steps until no more items can be added.

1. Add [→ a] to Ti-1,i for a = ai and [→ β] ∈ Tj,i-1
   where there are A → aα, B → βCγ ∈ P† such that A ∠* C

2. Add [→ αa] to Tj,i for a = ai and [→ α] ∈ Tj,i-1
   where there is A → αaβ ∈ P†

3. Add [→ A] to Tj,i for [→ α] ∈ Tj,i and [→ β] ∈ Th,j
   where there are A → α, D → Aδ, B → βCγ ∈ P† such that D ∠* C

4.
Add [--~/3A] to Th,i for [--* s] E Tj,i and [---~/3] E Th,j where there are A --* s, B --*/3A 7 E pt Report recognition of the input if [--~ S] E T0,n. For an example, see Figure 1. Tabular CP parsing is related to a variant of CYK parsing with TD filtering in [5]. A form of tabular 4unless the grammar is cyclic, in which case the parser may not terminate, both on correct and on incorrect input 121 0 1 2 [--.] (0) [-*[-*[--*[--* T]E]F]a] (4)(3)(2)(1) [--. E +](5) 1 0 3 [--* E + T] E] [4 a] (6) [--* F] (7) [--* T] (s) Figure 1: Tabular 0 [--* T T] (9) CP parsing without top-down filtering (i.e. without the checks concerning the left-corner relation /*) is the main algorithm in [21]. Without the use of top-down filtering, the references to [---~/9] in Clauses 1 and 3 are clearly not of much use any more. When we also remove the use of these items, then these clauses become: [--* T T E] Consider again the grammar from Example 1 and the (incorrect) in- put a + a T a. After execution of the tabular common-prefix al- gorithm, the table is as given here. The sets Tj,i are given at the j-th row and i-th column. The items which correspond with those from Example 4 are labelled with (0), (1),... These labels also indicate the order in which these items are added to the table. 1. Add [--+ a] to Tc-I,C for a = ai where there is A --* ac~ • pt 3. Add [--* A] to Tj,i for [--+ o~] • Tj,i where there are A ---* a, D --* A6 • pt [--* a] (Io) T] [--, E] CP parsing However, for certain i there may be many [A ~ /9] • Tj,c-1, for some j, and each may give rise to a different A' which is non-empty. In this way, Clause 1 may add several items [A' --~ a] to Tc-I,C, some possibly with overlapping sets A'. Since items represent computation of subderivations, the algorithm may therefore compute the same subderivation several times. In the resulting algorithm, no set Tc,j depends on any set Tg,h with g < i. 
In [15] this fact is used to construct a parallel parser with n processors P0, ..., Pn-1, with each Pi processing the sets Ti,j for all j > i. The flow of data is strictly from right to left, i.e. items computed by Pi are only passed on to P0, ..., Pi-1.

Tabular ELR parsing

The tabular form of ELR parsing allows an optimization which constitutes an interesting example of how a tabular algorithm can have a property not shared by its nondeterministic origin.⁵ First note that we can compute the columns of a parse table strictly from left to right, that is, for fixed i we can compute all sets Tj,i before we compute the sets Tj,i+1. If we formulate a tabular ELR algorithm in a naive way analogously to Algorithm 5, as is done in [5], then for example the first clause is given by:

1. Add [Λ' → a] to Ti-1,i for a = ai and [Λ → β] ∈ Tj,i-1 where Λ' = {A | ∃A → aα, B → βCγ ∈ P† [B ∈ Λ ∧ A ∠* C]} is non-empty

⁵This is reminiscent of the admissibility tests [3], which are applicable to tabular realisations of logical push-down automata, but not to these automata themselves.

We propose an optimization which makes use of the fact that all possible items [Λ → β] ∈ Tj,i-1 are already present when we compute items in Ti-1,i: we compute one single item [Λ' → a], where Λ' is a large set computed using all [Λ → β] ∈ Tj,i-1, for any j. A similar optimization can be made for the third clause.

Algorithm 6 (Tabular extended LR)
Sets Ti,j of the table are to be subsets of I^ELR. Start with an empty table. Add [{S'} →] to T0,0. For i = 1, ..., n, in this order, perform one of the following steps until no more items can be added.

1. Add [Λ' → a] to Ti-1,i for a = ai where Λ' = {A | ∃j ∃[Λ → β] ∈ Tj,i-1 ∃A → aα, B → βCγ ∈ P† [B ∈ Λ ∧ A ∠* C]} is non-empty
2. Add [Λ' → αa] to Tj,i for a = ai and [Λ → α] ∈ Tj,i-1 where Λ' = {A ∈ Λ | A → αaβ ∈ P†} is non-empty
3. Add [Λ'' → A] to Tj,i for [Λ' → α] ∈ Tj,i where there is A → α ∈ P† with A ∈ Λ', and Λ'' = {D | ∃h ∃[Λ → β] ∈ Th,j ∃D → Aδ, B → βCγ ∈ P† [B ∈ Λ ∧ D ∠* C]} is non-empty
4. Add [Λ'' → βA] to Th,i for [Λ' → α] ∈ Tj,i and [Λ → β] ∈ Th,j where there is A → α ∈ P† with A ∈ Λ', and Λ'' = {B ∈ Λ | B → βAγ ∈ P†} is non-empty

Report recognition of the input if [{S'} → S] ∈ T0,n.

Informally, the top-down filtering in the first and third clauses is realised by investigating all left corners D of nonterminals C (i.e. D ∠* C) which are expected from a certain input position. For input position i these nonterminals D are given by

Si = {D | ∃j ∃[Λ → β] ∈ Tj,i ∃B → βCγ ∈ P† [B ∈ Λ ∧ D ∠* C]}

Provided each set Si is computed just after completion of the i-th column of the table, the first and third clauses can be simplified to:

1. Add [Λ' → a] to Ti-1,i for a = ai where Λ' = {A | A → aα ∈ P†} ∩ Si-1 is non-empty
3. Add [Λ'' → A] to Tj,i for [Λ' → α] ∈ Tj,i where there is A → α ∈ P† with A ∈ Λ', and Λ'' = {D | D → Aδ ∈ P†} ∩ Sj is non-empty

which may lead to more practical implementations.

Note that the tabular ELR algorithm may manipulate items of the form [Λ → α] which would not occur in any search path of the nondeterministic ELR algorithm, because in general such a Λ is the union of many sets Λ' of items [Λ' → α] which would be manipulated at the same input position by the nondeterministic algorithm in different search paths.

With minor differences, the above tabular ELR algorithm is described in [21]. A tabular version of pseudo ELR parsing is presented in [20]. Some useful data structures for practical implementation of tabular and non-tabular PLR, ELR and CP parsing are described in [8].

Finding an optimal tabular algorithm

In [14] Schabes derives the LC algorithm from LR parsing similar to the way that ELR parsing can be derived from LR parsing.
The LC algorithm is obtained by not only splitting up the goto function into goto 1 and goto 2 but also splitting up goto~ even further, so that it non- deterministically yields the closure of one single kernel item. (This idea was described earlier in [5], and more recently in [10].) Schabes then argues that the LC algorithm can be determinized (i.e. made more deterministic) by manip- ulating the goto functions. One application of this idea is to take a fixed grammar and choose different goto functions for different parts of the grammar, in order to tune the parser to the grammar. In this section we discuss a different application of this idea: we consider various goto functions which are global, i.e. which are the same for all parts of a grammar. One example is ELR parsing, as its goto~ function can be seen as a determinized version of the goto 2 function of LC parsing. In a similar way we obtain PLR parsing. Traditional LR parsing is obtained by taking the full determinization, i.e. by taking the normal goto function which is not split up. 6 6Schabes more or less also argues that LC itself can be obtained by determinizing TD parsing. (In lieu of TD pars- ing he mentions Earley's algorithm, which is its tabular realisation.) We conclude that we have a family consisting of LC, PLR, ELR, and LR parsing, which are increasingly de- terministic. In general, the more deterministic an algo- rithm is, the more parser states it requires. For exam- ple, the LC algorithm requires a number of states (the items in I Lc) which is linear in the size of the gram- mar. By contrast, the LR algorithm requires a number of states (the sets of items) which is exponential in the size of the grammar [2]. The differences in the number of states complicates the choice of a tabular algorithm as the one giving op- timal behaviour for all grammars. 
If a grammar is very simple, then a sophisticated algorithm such as LR may allow completely deterministic parsing, which requires a linear number of entries to be added to the parse table, measured in the size of the grammar. If, on the other hand, the grammar is very ambigu- ous such that even LR parsing is very nondeterministic, then the tabular realisation may at worst add each state to each set Tij, so that the more states there are, the more work the parser needs to do. This favours sim- ple algorithms such as LC over more sophisticated ones such as LR. Furthermore, if more than one state repre- sents the same subderivation, then computation of that subderivation may be done more than once, which leads to parse forests (compact representations of collections of parse trees) which are not optimally dense [1, 12, 7]. Schabes proposes to tune a parser to a grammar, or in other words, to use a combination of parsing tech- niques in order to find an optimal parser for a certain grammar. 7 This idea has until now not been realised. However, when we try to find a single parsing algorithm which performs well for all grammars, then the tabu- lar ELR algorithm we have presented may be a serious candidate, for the following reasons: • For M1 i, j, and a at most one item of the form [A --, ct] is added to Tij. Therefore, identical sub- derivations are not computed more than once. (This is a consequence of our optimization in Algorithm 6.) Note that this also holds for the tabular CP algo- rithm. • ELR parsing guarantees the correct-prefix property, contrary to the CP algorithm. This prevents com- putation of all subderivations which are useless with regard to the already processed input. • ELR parsing is more deterministic than LC and PLR parsing, because it allows shared processing of all common prefixes. It is hard to imagine a practical parsing technique more deterministic than ELR pars- ing which also satisfies the previous two properties. 
In particular, we argue in [8] that refinement of the LR technique in such a way that the first property above holds would require an impractically large number of LR states.⁷

⁷This is reminiscent of the idea of "optimal cover" [5].

Epsilon rules

Epsilon rules cause two problems for bottom-up parsing. The first is non-termination for simple realisations of nondeterminism (such as backtrack parsing) caused by hidden left recursion [7]. The second problem occurs when we optimize TD filtering e.g. using the sets Si: it is no longer possible to completely construct a set Si before it is used, because the computation of a derivation deriving the empty string requires Si for TD filtering but at the same time its result causes new elements to be added to Si. Both problems can be overcome [8].

Conclusions

We have discussed a range of different parsing algorithms, which have their roots in compiler construction, expression parsing, and natural language processing. We have shown that these algorithms can be described in a common framework.

We further discussed tabular realisations of these algorithms, and concluded that we have found an optimal algorithm, which in most cases leads to parse tables containing fewer entries than for other algorithms, but which avoids computing identical subderivations more than once.

Acknowledgements

The author acknowledges valuable correspondence with Klaas Sikkel, René Leermakers, François Barthélemy, Giorgio Satta, Yves Schabes, and Frédéric Voisin.

References

[1] S. Billot and B. Lang. The structure of shared forests in ambiguous parsing. In 27th Annual Meeting of the ACL, 143-151, 1989.
[2] M. Johnson. The computational complexity of GLR parsing. In M. Tomita, editor, Generalized LR Parsing, chapter 3, 35-42. Kluwer Academic Publishers, 1991.
[3] B. Lang. Complete evaluation of Horn clauses: An automata theoretic approach.
Rapport de Recherche 913, Institut National de Recherche en Informatique et en Automatique, Rocquencourt, France, November 1988.
[4] M. Lankhorst. An empirical comparison of generalized LR tables. In R. Heemels, A. Nijholt, and K. Sikkel, editors, Tomita's Algorithm: Extensions and Applications, Proc. of the first Twente Workshop on Language Technology, 87-93. University of Twente, September 1991. Memoranda Informatica 91-68.
[5] R. Leermakers. How to cover a grammar. In 27th Annual Meeting of the ACL, 135-142, 1989.
[6] R. Leermakers. A recursive ascent Earley parser. Information Processing Letters, 41(2):87-91, February 1992.
[7] M.J. Nederhof. Generalized left-corner parsing. In Sixth Conference of the European Chapter of the ACL, 305-314, 1993.
[8] M.J. Nederhof. A multidisciplinary approach to a parsing algorithm. In K. Sikkel and A. Nijholt, editors, Natural Language Parsing: Methods and Formalisms, Proc. of the sixth Twente Workshop on Language Technology, 85-98. University of Twente, 1993.
[9] M.J. Nederhof and G. Satta. An extended theory of head-driven parsing. In this proceedings.
[10] P. Oude Luttighuis and K. Sikkel. Generalized LR parsing and attribute evaluation. In Third International Workshop on Parsing Technologies, 219-233, Tilburg (The Netherlands) and Durbuy (Belgium), August 1993.
[11] P.W. Purdom, Jr. and C.A. Brown. Parsing extended LR(k) grammars. Acta Informatica, 15:115-127, 1981.
[12] J. Rekers. Parser Generation for Interactive Environments. PhD thesis, University of Amsterdam, 1992.
[13] D.J. Rosenkrantz and P.M. Lewis II. Deterministic left corner parsing. In IEEE Conference Record of the 11th Annual Symposium on Switching and Automata Theory, 139-152, 1970.
[14] Y. Schabes. Polynomial time and space shift-reduce parsing of arbitrary context-free grammars. In 29th Annual Meeting of the ACL, 106-113, 1991.
[15] K. Sikkel and M. Lankhorst. A parallel bottom-up Tomita parser. In 1. Konferenz "Verarbeitung Natürlicher Sprache", 238-247, Nürnberg, October 1992. Springer-Verlag.
[16] S. Sippu and E. Soisalon-Soininen. Parsing Theory, Vol. II: LR(k) and LL(k) Parsing, EATCS Monographs on Theoretical Computer Science, volume 20. Springer-Verlag, 1990.
[17] E. Soisalon-Soininen and E. Ukkonen. A method for transforming grammars into LL(k) form. Acta Informatica, 12:339-369, 1979.
[18] M. Tomita. Efficient Parsing for Natural Language. Kluwer Academic Publishers, 1986.
[19] F. Voisin. CIGALE: A tool for interactive grammar construction and expression parsing. Science of Computer Programming, 7:61-86, 1986.
[20] F. Voisin. A bottom-up adaptation of Earley's parsing algorithm. In Programming Languages Implementation and Logic Programming, International Workshop, LNCS 348, 146-160, Orléans, France, May 1988. Springer-Verlag.
[21] F. Voisin and J.-C. Raoult. A new, bottom-up, general parsing algorithm. BIGRE, 70:221-235, September 1990.
1994
A Psycholinguistically Motivated Parser for CCG Michael Niv* Technion - Israel Institute of Technology Haifa, Israel Internet: [email protected] Abstract Considering the speed in which humans resolve syn- tactic ambiguity, and the overwhelming evidence that syntactic ambiguity is resolved through selec- tion of the analysis whose interpretation is the most 'sensible', one comes to the conclusion that inter- pretation, hence parsing take place incrementally, just about every word. Considerations of parsimony in the theory of the syntactic processor lead one to explore the simplest of parsers: one which repre- sents only analyses as defined by the grammar and no other information. Toward this aim of a simple, incremental parser I explore the proposal that the competence gram- mar is a Combinatory Categorial Grammar (CCG). I address the problem of the proliferating analyses that stem from CCG's associativity of derivation. My solution involves maintaining only the max- imally incremental analysis and, when necessary, computing the maximally right-branching analysis. I use results from the study of rewrite systems to show that this computation is efficient. 1 Introduction The aim of this paper is to work towards a compu- tational model of how humans syntactically process the language that they hear and read. The endpoint of this enterprise is a precise characterization of the process that humans follow, getting details such as timing and garden pathing exactly right. *The research reported here was conducted as part of my Ph.D. thesis work at the University of Pennsyl- vania and supported by the following grants: DARPA N00014-90-J-1863, ARO DAAL03-89-C-0031, NSF IRI 90-16592, Ben Franklin 91S.3078C-1. Preparation of this paper was supported by a postdoctoral fellowship at the Technion in Israel. I am grateful to Mark Hepple, Mitch Marcus, Mark Steedman, VM Tannen, and Henry Thompson for helpful suggestions, and to Jeff Siskind for help with typesetting CCG derivations. 
Any errors are my own. 1.1 Ambiguity Resolution Recently, a great deal of evidence has accumu- lated that humans resolve syntactic ambiguity by considering the meaning of the available analyses and selecting the 'best' one. Various criteria for goodness of meaning have been advanced in the psycholinguistic literature: e.g. thematic compat- ibility and lexical selection (Trueswell and Tanen- haus 1994), discourse felicity of definite expressions (Altmann et al. 1994), temporal coherence in dis- course (Trueswell and Tanenhaus 1991), grammati- cal function vis avis given/new status (Niv 1993b), and general world-knowledge (Kawamoto and Far- rar 1993). Many of the works cited above consider the tim- ing of the ambiguity resolution decision. The evi- dence is overwhelming that ambiguity is resolved within a word or two of the arrival of disambiguat- ing information-- that is, when there is a meaning- based criterion which militates toward one or an- other syntactically available analysis, that analysis is selected. Should the other analysis turn out to be the ultimately correct analysis, a garden path will result. Given that the various analyses available are compared on various criteria of sensibleness, it fol- lows that these analyses are constructed and main- tained in parallel until disambiguating information arrives. Indeed, there is psycholinguistic evidence that the processor maintains the various analyses in parallel (Nicol and Pickering 1993; MacDonland et al. 1992). Our parser, therefore, must be able to build and maintain analyses in parallel. It must also extract from the developing parse in a prompt fashion all of the semantically relevant syntactic commitments (e.g. predicate-argument relations) in order to allow the interpretation module that it feeds to make ac- curate evaluations of the meaning. Recovery from garden paths is not addressed in this paper. 
125 1.2 Parser and Grammar Let us adopt the widely held position that humans posses a representation of grammatical competence which is independent of any process (e.g. produc- tion, perception, acquisition) that uses it. Steed- man (1994) argues that if two theories of the gram- mar and processor package have identical empirical coverage, but one has a more complex parser, then the other is preferred. This preference is not just on philosophical grounds of cleanliness of one's the- ories, but stems from consideration of the evolution of the human linguistic capacity: A theory whose grammar requires a complex parser in order to be of any use would entail a more complex or less likely evolutionary path which the parser and grammar took together than would a theory whose gram- mar requires little specialized apparatus by way of a parser, and could thus have evolved gradually. So what is the simplest parser one can con- struct? In other words, what is the minimal ad- dition of computational apparatus to the compe- tence grammar necessary to make it parse? From the argument in section 1.1, this addition must in- clude a mechanism for maintaining analyses in par- allel. Minimally, nothing else is necessary -- the data structure which resides in each parallel slot in the parser is a direct representation of an analysis as defined by the competence machinery. Suppose the grammatical competence is one that always divides an English clause into a subject and a predicate (VP henceforth). Suppose also that the primary operations of the grammar are putting constituents together. Could the minimal parser for such a grammar account for the minimal pair in (1)? (1) a. The doctor sent for the patient arrived. b. The flowers sent for the patient arrived. (1)a is a garden path. In (1)b the garden path is avoided because flowers are not good senders. 
The difference between (1)a and b indicates that well before the word 'arrived' is encountered, the proces- sor has already resolved the ambiguity introduced by the word 'sent'. That is, in the main-verb anal- ysis of 'sent', the interpreter is aware of the relation between the subject the verb before the end of the VP. But the minimal parser cannot put the subject together with 'sent' or 'sent for the' because the latter are not a complete VP! There are two possible solutions to this prob- lem, each relaxes one of the two suppositions above: Steedman (1994) argues for a grammatical theory (CCG) which does not always make the subject- predicate juncture the primary division point of a clause. Shieber and Johnson (1993) on the other hand, argue that there is no need to assume that a constituent has to be complete before it is combined with its sister(s). At this time, neither approach is sufficiently developed to be evaluable (e.g. they both lack broad coverage grammar) so either one is viable. In this paper, I develop the first. 2 Preliminaries CCG is a lexicalized grammar formalism -- a lexi- con assigns each word to one or more grammatical categories. Adjacent constituents can combine by one of a small number of combinatory rules. The universe of grammatical categories contains a col- lection of basic categories (e.g. atomic symbols such as n, np, s, etc. or Prolog terms such np(3,sg)) and is closed under the category-forming connectives / and \. Intuitively a constituent of category X/Y (resp. X\Y) is something of category X which is missing something of category Y to its right (resp. left). The combinatory rules are listed 1 in table 1. They formalize this intuition. A combinatory rule may be qualified with a predicate over the variables X, Y, and Z1...Zn. 
A derivation is a binary tree whose leaves are each a single-word constituent, and whose internal nodes are each a constituent which is derived from its children by an application of one of the combinatory rules. A string w is grammatical just in case there exists a derivation whose frontier is w. I equivocate between a derivation and the constituent at its root. An analysis of a string w is a sequence of derivations such that the concatenation of their frontiers is w.

3 The Simplest Parser

Let us consider the simplest conceivable parser. Its specification is "find all analyses of the string so far." It has a collection of slots for maintaining one analysis each, in parallel. Each slot maintains an analysis of the string seen so far -- a sequence of one or more derivations. The parser has two operations, as shown in figure 1. This parser succeeds in constructing the incremental analysis (2) necessary for solving the problem in (1).

¹Two common combinatory rules, type-raising and substitution, are not listed here. The substitution rule (Steedman 1987) is orthogonal to the present discussion and can be added without modification. The rule for type-raising (see e.g. Dowty 1988) can cause difficulties for the parsing scheme advocated here (Hepple 1987) and is therefore assumed to apply in the lexicon. So a proper name, for example, would have two categories: np and s/(s\np).

Forward combination:
  X/Y  Y             ⇒  X               (>0)
  X/Y  Y|Z           ⇒  X|Z             (>1)
  X/Y  Y|Z1|Z2       ⇒  X|Z1|Z2         (>2)
  X/Y  Y|Z1...|Zn    ⇒  X|Z1...|Zn      (>n)

Backward combination:
  Y           X\Y    ⇒  X               (<0)
  Y|Z         X\Y    ⇒  X|Z             (<1)
  Y|Z1|Z2     X\Y    ⇒  X|Z1|Z2         (<2)
  Y|Z1...|Zn  X\Y    ⇒  X|Z1...|Zn      (<n)

|Z stands for either /Z or \Z. Underlined regions in a rule must match.

Table 1: The combinatory rules

• scan
    get the next word from the input stream
    for each analysis a in the parser's memory
        empty the slot containing a
        for each lexical entry e of the word
            make a copy a' of a
            add the leaf derivation e to the right of a'
            add a' as a new analysis
• combine
    for each analysis a in the parser's memory
        if a contains more than one constituent and some rule
        can combine the rightmost two constituents in a then
            make a copy a' of a
            replace the two constituents of a' by their combination
            add a' as a new analysis

Figure 1: Parser operations

(2)  the          flowers   sent
     s/(s\np)/n   n         s\np/pp
     s/(s\np)                          (>0, combining 'the' and 'flowers')
     s/pp                              (>1, composing with 'sent')

But this parser is just an unconstrained shift-reduce parser that simulates non-determinism via parallelism. It suffers from a standard problem of simple bottom-up parsers: it can only know when a certain substring has a derivation, but in case a substring does not have a derivation, the parser cannot yet know whether or not a larger string containing the substring will have a derivation. This means that when faced with a string such as (3)
These solutions exact a cost in complicating the design of the parser: new data structures such as dotted rules or an LR table must be added to the parser. The parser is no longer a generic search algorithm for the competence grammar. Given the flexibil- ity of CCG derivations, one may consider impos- ing a very simple constraint on the parser: every prefix of a grammatical string must have a deriva- tion. But such a move it too heavy-handed. Indeed CCG often gives left-branching derivations, but it is not purely left-branching. For example, the deriva- tion of a WH-dependency requires leaving the WH- filler constituent uncombined until the entire gap- containing constituent is completed, as in (4). (4) whose cat did Fred find n s/s s/(s\np) >i s\np/np q/(s/np)/n >0 q/(s/np) s/(s\np) s/np >I >0 4 The Viable Analysis Criterion Given the desideratum to minimize the complexity of the biologically specified parser, I propose that the human parser is indeed as simple as the scan- combine algorithm presented above, and that the ability to rule out analyses such as determiner+verb is not innate, but is an acquired skill. This 'skill' is implemented as a criterion which an analysis must meet in order to survive. An infant starts out with this criterion completely permissive. Consequently it cannot process any utterances longer than a few words without requiring excessively many parser 127 slots. But as the infant observes the various analy- ses in the parser memory and tracks their respective outcomes, it notices that certain sequences of cate- gories never lead to a grammatical overall analysis. After observing an analysis failing a certain number of times and never succeeding, the child concludes that it is not a viable analysis and learns to discard it. The more spurious analyses are discarded, the better able the child is to cope with longer strings. 
The collection of analyses that are maintained by the parser is therefore filtered by two independent processes: the Viable Analysis Criterion is a purely syntactic filter which rules out analyses independently of ambiguity; the interpreter considers the semantic information of the remaining analyses in parallel and occasionally deems certain analyses more sensible than their competitors, and discards the latter.

Given that English sentences rarely require more than two or three CCG constituents at any point in their parse, and given the limited range of categories that arise in English, the problem of learning the viable analysis criterion from data promises to be comparable to other n-gram learning tasks. The empirical validation of this proposal awaits the availability of a broad coverage CCG for English, and other languages.²

²In addition to the category-ambiguity problem in (3), the viable analysis criterion solves other problems, analogous to shift-reduce ambiguities, which are omitted here for reasons of space. The interested reader is referred to Niv (1993a) for a comprehensive discussion and an implementation of the parser proposed here.

5 CCG and flexible derivation

5.1 The Problem

CCG's distinguishing characteristic is its derivational flexibility -- the fact that one string is potentially assigned many truth-conditionally equivalent analyses. This feature is crucial to the present approach of incremental parsing (as well as for a range of grammatical phenomena, see e.g. Steedman 1987, 1994; Dowty 1988). But the additional ambiguity, sometimes referred to as 'spurious', is also a source of difficulty for parsing. For example, the truth-conditionally unambiguous string 'John was thinking that Bill had left' has CCG derivations corresponding to each of the 132 different binary trees possible for seven leaves. The fact that this sentence makes no unusual demands on humans makes it clear that its exponentially proliferating ambiguous analyses are pruned somehow. The interpreter, which can resolve many kinds of ambiguity, cannot be used for this task: it has no visible basis for determining, for example, that the single-constituent analysis 'John was thinking'
The interpreter, which can resolve many kinds of ambiguity, cannot be used to for this task: it has no visible basis for determining, for example, that the single-constituent analysis 'John was thinking' 2In addition to the category-ambiguity problem in (3), the viable analysis criterion solves other problems, analogous to shift-reduce ambiguities, which are omit- ted here for reasons of space. The interested reader is referred to Niv (1993a) for a comprehensive discussion and an implementation of the parser proposed here. somehow makes more sense (in CCG) than the two- constituent analysis 'John'+'was thinking'. Note that the maximMly left-branching deriva- tion is the one which most promptly identifies syn- tactic relations, and is thus the preferred derivation. It is possible to extend the viable analysis criterion to encompass this consideration of efficiency as well. The infant learns that it is usually most efficient to combine whenever possible, and to discard an analysis in which a combination is possible, but not taken. 3. While this left-branching criterion eliminates the inefficiency due to flexibility of derivation, it gives rise to difficulties with (5). John loves Mary madly (5) s/vp vp/np np vp\vp In (5), it is precisely the non-left-branching derivation of 'John loves Mary' which is necessary in order to make the VP constituent available for combination with the adverb. (See Pareschi and Steedman 1987.) 5.2 Previous Approaches Following up on the work of Lambek (1958) who proposed that the process of deriving the grammat- icality of a string of categories be viewed as a proof, there have been quite a few proposals put forth for computing only normal forms of derivations or proofs (KSnig 1989; Hepple and Morrill 1989; Hep- ple 1991; inter alia). 
The basic idea with all of these works is to define 'normal forms' -- distinguished members of each equivalence class of derivations, and to require the parser to search this smaller space of possible derivations. But none of the pro- posed methods result in parsing systems which pro- ceed incrementally through the string. 4 Karttunen (1989) and others have proposed chart-based parsers which directly address the derivational ambiguity problem. For the present purpose, the principal feature of chart parsing -- the factoring out of constituents from analyses -- turns out to create an encumberance: The inter- preter cannot compare constituents, or arcs, for the purposes of ambiguity resolution. It must compare analyses of the entire prefix so far, which are awk- ward to compute from the developing chart. 3 Discussion of the consequences of this move on the processing of picture noun extractions and ambiguity- related filled-gap effects is omitted for lack of space. See Niv (1993a). 4In the case of Hepple's (1991) proposal, a left- branching normal form is indeed computed. But its computation must be delayed for some words, so it does not provide the interpreter with timely informa- tion about the incoming string. 128 Pareschi and Steedman (1987) propose the fol- lowing strategy: (which can be taken out of the chart-parsing context of their paper) construct only maximally left-branching derivations, but al- low a limited form of backtracking when a locally non-left-branching derivation turns out to have been necessary. For example, when parsing (5), Pareschi and Steedman's algorithm constructs the left branching analysis for 'John loves Mary'. 
When it encounters 'madly', it applies >0 in reverse to solve for the hidden VP constituent 'loves Mary' by subtracting the s/vp category 'John' from the s category 'John loves Mary': John loves Mary (6) s/vp vp/nP>l np s/np vp vp madly vp\vp >0 reveal >0 <0 >0 The idea with this 'revealing' operation is to ex- ploit the fact that the rules >n and <n, when viewed as three-place relations, are functional in all three arguments. That is, knowledge any two of {left con- stituent, right constituent, result), uniquely deter- mines the third. There are many problems with the completeness and soundness Pareschi and Steed- man's proposal (Hepple 1987; Niv 1993a). For ex- ample, in (7), the category b\c cannot be revealed after it had participated in two combinations of mixed direction: <0 and >0. (7) a/b c d\C<ob\d b\c\(b\c) d <0 b >0 a stuck 6 A Proposal Pareschi and Steedman's idea of lazy parsing is very attractive in the present setting. I propose to replace their unification-based revealing opera- tion with a normal-form based manipulation of the derivation history. The idea is to construct and maintain the maximally incremental, left-branching derivations. (see section 4.) When a constituent such as the VP 'loves Mary' in (5) may be nec- essary, e.g. whenever the right-most constituent in an analysis is of the form X\Y, the next-to-right- most derivation is rewritten to its equivalent right- branching derivation by repeated application the local transformations , defined in (8) and (9). The right frontier of the rewritten derivation now provides all the grammatically possible attachment sites. (8) W/X x [YI'''IYm-~/Ym W [Yz'.' lYre-1/Y,~ Ym [Zl"" [Z,~ >m >n WIYz'-'IYm-IlZI''-IZ~ W/X X[Y1...[Ym_I/Ym Ym IZl''']Zn X lYe.-. [Ym-~ IZ~... IZ,~ ~n ~m+n-1 (9) W IY1.-. Wm-~lZ~"'' IZ, Y.~ [Z1-..lZn X [Ya'''IY,~-I\Ym W\X <n x IYz""" IY,,,-a Iza..- IZ, WIYI-.-IYm_IIZI'..IZn .._.._4 <m+n-I Ym IZ~'--IZ. XIY1.-.IY.~_I\Ym W\X <m W [Y1-.. 
Results from the study of rewrite systems (see Klop (1992) for an overview) help determine the computational complexity of this operation.

6.1  A Rewrite System for Derivations

If x is a node in a binary tree, let λ(x) (resp. ρ(x)) refer to its left (right) child. Any subtree of a derivation which matches the left-hand side of either (8) or (9) is called a redex. The result of replacing a redex by the corresponding right-hand side of a rule is called the contractum. A derivation is in normal form (NF) if it contains no redexes.

In the following I use the symbol --> to also stand for the relation over pairs of derivations such that the second is derived from the first by one application of -->. Let <-- be the converse of -->. Let <--> be the union of --> and <--. Let -->> be the reflexive transitive closure of -->, and similarly <<-- the reflexive transitive closure of <--, and <<-->> the reflexive transitive closure of <-->. Note that <<-->> is an equivalence relation. A rewrite system is strongly normalizing (SN) iff every sequence of applications of --> is finite.

Theorem 1. --> is SN.5

Proof. Every derivation with n internal nodes is assigned a positive integer score. An application of --> is guaranteed to yield a derivation with a lower score. This is done by defining functions # and σ for each node of the derivation as follows:

  #(x) = 0                             if x is a leaf node
         1 + #(λ(x)) + #(ρ(x))         otherwise

  σ(x) = 0                             if x is a leaf node
         σ(λ(x)) + σ(ρ(x)) + #(λ(x))   otherwise

Each application of --> decreases σ, the score of the derivation.

5 Hepple and Morrill (1989) proved SN for a slight variant of -->. The present proof provides a tighter score function; see Lemma 1 below.

[Figure 2: Schema for one redex]
This follows from the monotonic dependency of the score of the root of the derivation upon the scores of each sub-derivation, and from the fact that locally, the score of a redex decreases when --> is applied. In figure 2, a derivation is depicted schematically with a redex whose sub-constituents are named a, b, and c. Applying --> reduces σ(e), hence the score of the whole derivation.

In the redex:
  #(d) = #(a) + #(b) + 1
  σ(d) = σ(a) + σ(b) + #(a)
  σ(e) = σ(d) + σ(c) + #(d)
       = σ(a) + σ(b) + σ(c) + #(b) + 2#(a) + 1

In the contractum:
  σ(f)  = σ(b) + σ(c) + #(b)
  σ(e') = σ(a) + σ(f) + #(a)
        = σ(a) + σ(b) + σ(c) + #(b) + #(a)
        < σ(a) + σ(b) + σ(c) + #(b) + 2#(a) + 1                    []

Observe that #(x) is the number of internal nodes in x.

Lemma 1. Given a derivation x, let n = #x. Every sequence of applications of --> is of length at most n(n-1)/2.6

Proof. By induction on n.
Base case: n = 1; 0 applications are necessary.
Induction: Suppose true for all derivations of fewer than n internal nodes. Let m = #λ(x), so 0 <= m <= n-1 and #ρ(x) = n-m-1.

  σ(x) - n(n-1)/2
    = σ(λ(x)) + σ(ρ(x)) + #(λ(x)) - n(n-1)/2
    <= m(m-1)/2 + (n-m-1)(n-m-2)/2 + m - n(n-1)/2
    = (m+1)(m-(n-1))
    <= 0,   recalling that 0 <= m <= n-1.                          []

6 Niv (1994) shows by example that this bound is tight.

So far I have shown that every sequence of applications of --> is not very long: at most quadratic in the size of the derivation. I now show that when there is a choice of redex, it makes no difference which redex one picks. That is, all redex selection strategies result in the same normal form. A rewrite system is Church-Rosser (CR) just in case

  for all x, y:  x <<-->> y  implies  there is a z such that  x -->> z  and  y -->> z.

A rewrite system is weakly Church-Rosser (WCR) just in case

  for all w, x, y:  (w --> x and w --> y)  implies  there is a z such that  x -->> z  and  y -->> z.

Lemma 2. --> is WCR.

Proof. Let w be a derivation with two distinct redexes x and y, yielding the two distinct derivations w' and w'' respectively. There are a few possibilities.
Case 1: x and y share no internal nodes.
There are three subcases: x dominates y (includes y as a subconstituent), x is dominated by y, or x and y are incomparable with respect to dominance. Either way, it is clear that the order of application of --> makes no difference.
Case 2: x and y share some internal node. Without loss of generality, y does not dominate x. There exists a derivation z such that w' -->> z and w'' -->> z. This is depicted in figure 3. (Note that all three internal nodes in figure 3 are of the same rule direction, either > or <.)  []

Lemma 3 (Newman). WCR and SN imply CR.

Theorem 2. --> is CR.

Proof. From Theorem 1 and Lemmas 2 and 3.  []

Therefore any maximal sequence of applications of --> will lead to the normal form.7 We are free to select the most efficient redex selection scheme. From Lemma 1 the worst case is quadratic. Niv (1994) shows that the optimal strategy, of applying --> as close as possible to the root, yields --> application sequences of at most n steps.

7 Assuming, as is the case with extant CCG accounts, that constraints on the applicability of the combinatory rules do not present significant roadblocks to the derivation rewrite process.

[Figure 3: Why --> is weakly Church-Rosser. Arrows are annotated by the substructure to which they are applied.]

Note that all that was said in this section generalizes beyond CCG derivations to any associative algebra.

6.2  Discussion

Given the rightmost subconstituent recovered using the normal form technique above, how should parsing proceed? Obviously, if the leftward-looking category which precipitated the normal form computation is a modifier, i.e. of the form X\X, then it ought to be combined with the recovered constituent in a form analogous to Chomsky adjunction. But what if this category is not of the form X\X? For example, should the parser compute the reanalysis in (10)?
(10)  a/b   b/c   c/d   s\(a/b)\(b/d)

      before reanalysis:
        a/b  >1  b/c  =  a/c;    a/c  >1  c/d  =  a/d

      after reanalysis:
        b/c  >1  c/d  =  b/d
        b/d   s\(a/b)\(b/d)   <0  =  s\(a/b)
        a/b   s\(a/b)         <0  =  s

Ascribing the same non-garden-path status to the reanalysis in (10) that we do to (6) would constitute a very odd move: before reanalysis, the derivation encoded the commitment that the /b of the first category is satisfied by the b of the b/c in the second category. This commitment is undone in the reanalysis. This is an undesirable property to have in a computational model of parsing commitment, as it renders certain revisions of commitments easier than others, without any empirical justification. Furthermore, given the possibility that the parser change its mind about what serves as argument to what, the interpreter must be able to cope with such non-monotonic updates to its view of the analysis so far -- this would surely complicate the design of the interpreter.8 Therefore, constituents on the right frontier of a right-normal-form should only combine with 'endocentric' categories to their right. The precise definition of 'endocentric' depends on the semantic formalism used -- it certainly includes post-head modifiers, and might also include coordination.

Stipulating that certain reanalyses are impossible immediately makes the parser 'incomplete' in the sense that it cannot find the analysis in (10). From the current perspective of identifying garden paths, this incompleteness is a desirable, even a necessary property. In (10), committing to the composition of a/b and b/c is tantamount to being led down the garden path. In a different sense, the current parser is complete: it finds all analyses if the Viable Analysis Criterion and the interpreter never discard any analyses.
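Abstracting away from the categories, the rewrite system of section 6.1 is just re-association of a binary tree. The sketch below is illustrative only (the function names are ours, not the paper's): it applies the local transformation ((a, b), c) -> (a, (b, c)) at the leftmost redex until normal form is reached, checking that the score function σ drops at every step and that the number of steps stays within Lemma 1's n(n-1)/2 bound.

```python
# A derivation is a leaf (a string) or a pair (left, right).
def size(t):
    """The function '#': number of internal nodes of t."""
    return 0 if isinstance(t, str) else 1 + size(t[0]) + size(t[1])

def score(t):
    """The score function sigma: 0 at leaves, else
    sigma(left) + sigma(right) + #(left)."""
    return 0 if isinstance(t, str) else score(t[0]) + score(t[1]) + size(t[0])

def rewrite_once(t):
    """Apply one local transformation ((a, b), c) -> (a, (b, c)) at the
    leftmost redex; return None if t is already in right-branching NF."""
    if isinstance(t, str):
        return None
    l, r = t
    if not isinstance(l, str):          # redex at this node
        return (l[0], (l[1], r))
    s = rewrite_once(r)
    return None if s is None else (l, s)

t = ((('a', 'b'), 'c'), 'd')            # fully left-branching, n = 3
steps = 0
while True:
    nxt = rewrite_once(t)
    if nxt is None:
        break
    assert score(nxt) < score(t)        # every application lowers sigma
    t, steps = nxt, steps + 1

n = 3
assert steps <= n * (n - 1) // 2        # Lemma 1's quadratic bound
print(t)                                # ('a', ('b', ('c', 'd')))
```

The normal form is the fully right-branching comb, as the proposal requires; which redex is rewritten first does not matter, by Theorem 2.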
7  Conclusion

The current proposal shifts some of the burden traditionally associated with the parser to other components of the human cognitive faculty: the interpreter resolves ambiguity, and an acquired skill removes 'garbage' analyses from the parser's memory -- solving the so-called spurious ambiguity problem, as well as effectively applying grammar-global constraints traditionally computed by top-down techniques or grammar compilation. The resultant parser adheres to the desideratum that it be a generic search algorithm for the grammar formalism, provided the definition of CCG explicitly includes the notion of 'derivation' and explicates the truth-conditional equivalence relation. Such inclusions have indeed been proposed (Steedman 1990).

8 I am indebted to Henry Thompson for a discussion of monotonicity.

Bibliography

Aho, Alfred and S. C. Johnson. 1974. LR Parsing. ACM Computing Surveys 6(2):99-124.
Altmann, Gerry T., Alan Garnham, and Judith A. Henstra. 1994. Effects of syntax in human sentence parsing: Evidence against a structure-based proposal mechanism. Journal of Experimental Psychology: Learning, Memory and Cognition 20(1):1-8.
Dowty, David. 1988. Type Raising, Functional Composition, and Non-Constituent Conjunction. In Richard T. Oehrle, Emmon Bach, and Deirdre Wheeler (Eds.), Categorial Grammars and Natural Language Structures. Reidel.
Earley, Jay. 1970. An Efficient Context-Free Parsing Algorithm. Communications of the Association for Computing Machinery 13:94-102.
Hepple, Mark R. 1987. Methods for Parsing Combinatory Grammars and the Spurious Ambiguity Problem. Master's thesis, University of Edinburgh.
Hepple, Mark R. 1991. Efficient Incremental Processing with Categorial Grammar. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, 79-86.
Hepple, Mark R. and Glyn V. Morrill. 1989. Parsing and Derivational Equivalence.
In Proceedings of the Annual Meeting of the European Chapter of the Association for Computational Linguistics.
Karttunen, Lauri. 1989. Radical Lexicalism. In Mark Baltin and Anthony S. Kroch (Eds.), Alternative Conceptions of Phrase Structure, 43-65. Chicago: University of Chicago Press.
Kawamoto, Alan and William Farrar. 1993. The Return of Visiting Relatives: Pragmatic Effects in Sentence Processing. Quarterly Journal of Experimental Psychology 46A(3):463-487.
Klop, Jan W. 1992. Term Rewrite Systems. In Samson Abramsky, Dov M. Gabbay, and T. S. E. Maibaum (Eds.), Handbook of Logic in Computer Science, Vol. 2, 1-116. Oxford: Clarendon Press.
König, Esther. 1989. Parsing as Natural Deduction. In Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, 272-279, June.
Lambek, Joachim. 1958. The Mathematics of Sentence Structure. American Mathematical Monthly 65:154-169.
MacDonald, Maryellen, Adam Just, and Patricia Carpenter. 1992. Working Memory Constraints on the Processing of Syntactic Ambiguity. Cognitive Psychology 24:56-98.
Nicol, Janet L. and Martin J. Pickering. 1993. Processing Syntactically Ambiguous Sentences: Evidence from Semantic Priming. Journal of Psycholinguistic Research.
Niv, Michael. 1993a. A Computational Model of Syntactic Processing: Ambiguity Resolution from Interpretation. PhD thesis, University of Pennsylvania. (ftp://ftp.cis.upenn.edu/pub/ircs/tr/93-27.ps).
Niv, Michael. 1993b. Resolution of Syntactic Ambiguity: the Case of New Subjects. In Proceedings of the 15th Annual Conference of the Cognitive Science Society, Hillsdale, NJ. Lawrence Erlbaum Associates.
Niv, Michael. 1994. The complexity of normal form rewrite sequences for Associativity. Technical Report LCL 94-6, Computer Science Department, Technion. (ftp://ftp.cis.upenn.edu/pub/niv/rewrite.ps).
Pareschi, Remo and Mark J. Steedman. 1987. A Lazy Way to Chart Parse with Combinatory Grammars.
In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics.
Shieber, Stuart M. and Mark Johnson. 1993. Variations on Incremental Interpretation. Journal of Psycholinguistic Research. (to appear).
Steedman, Mark J. 1987. Combinatory Grammars and Parasitic Gaps. Natural Language and Linguistic Theory 5:403-439.
Steedman, Mark J. 1990. Gapping as Constituent Coordination. Linguistics and Philosophy 13:207-264.
Steedman, Mark J. 1994. Grammars and Processors. In Hans Kamp and Christian Rohrer (Eds.), Aspects of Computational Linguistics. Springer Verlag. (to appear).
Trueswell, John C. and Michael K. Tanenhaus. 1991. Tense, Temporal Context and Syntactic Ambiguity Resolution. Language and Cognitive Processes 6:303-338.
Trueswell, John C. and Michael K. Tanenhaus. 1994. Toward a Lexicalist Framework of Constraint-Based Syntactic Ambiguity Resolution. In Charles Clifton, Lyn Frazier, and Keith Rayner (Eds.), Perspectives on Sentence Processing. Hillsdale, NJ: Lawrence Erlbaum Associates.
1994
VERB SEMANTICS AND LEXICAL SELECTION

Zhibiao Wu
Department of Information System & Computer Science
National University of Singapore
Republic of Singapore, 0511
[email protected]

Martha Palmer
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104-6389
[email protected]

Abstract

This paper will focus on the semantic representation of verbs in computer systems and its impact on lexical selection problems in machine translation (MT). Two groups of English and Chinese verbs are examined to show that lexical selection must be based on interpretation of the sentence as well as selection restrictions placed on the verb arguments. A novel representation scheme is suggested, and is compared to representations with selection restrictions used in transfer-based MT. We see our approach as closely aligned with knowledge-based MT approaches (KBMT), and as a separate component that could be incorporated into existing systems. Examples and experimental results will show that, using this scheme, inexact matches can achieve correct lexical selection.

Introduction

The task of lexical selection in machine translation (MT) is choosing the target lexical item which most closely carries the same meaning as the corresponding item in the source text. Information sources that support this decision making process are the source text, dictionaries, and knowledge bases in MT systems. In the early direct replacement approaches, very little data was used for verb selection. The source verb was directly replaced by a target verb with the help of a bilingual dictionary. In transfer-based approaches, more information is involved in the verb selection process. In particular, the verb argument structure is used for selecting the target verb. This requires that each translation verb pair and the selection restrictions on the verb arguments be exhaustively listed in the bilingual dictionary.
In this way, a verb sense is defined with a target verb and a set of selection restrictions on its arguments. Our questions are: Is the exhaustive listing of translation verb pairs feasible? Is this verb representation scheme sufficient for solving the verb selection problem? Our study of a particular MT system shows that when English verbs are translated into Chinese, it is difficult to achieve large coverage by listing translation pairs. We will show that a set of rigid selection restrictions on verb arguments can at best define a default situation for the verb usage. The translations from English verbs to Chinese verb compounds that we present here provide evidence of the reference to the context and to a fine-grained level of semantic representation. Therefore, we propose a novel verb semantic representation that defines each verb by a set of concepts in different conceptual domains. Based on this conceptual representation, a similarity measure can be defined that allows correct lexical choice to be achieved, even when there is no exact lexical match from the source language to the target language.

We see this approach as compatible with other interlingua verb representation methods, such as verb representations in KBMT (Nirenburg, 1992) and UNITRAN (Dorr, 1990). Since these methods do not currently employ a multi-domain approach, they cannot address the fine-tuned meaning differences among verbs and the correspondence between semantics and syntax. Our approach could be adapted to either of these systems and incorporated into them.

The limitations of direct transfer

In a transfer-based MT system, pairs of verbs are exhaustively listed in a bilingual dictionary. The translation of a source verb is limited by the number of entries in the dictionary. For some source verbs with just a few translations, this method is direct and efficient. However, some source verbs are very active and have a lot of different translations in the target language. As illustrated by the following test of a commercial English-to-Chinese MT system, TranStar, using sentences from the Brown corpus, current transfer-based approaches have no alternative to listing every translation pair.

In the Brown corpus, 246 sentences take break as the main verb. After removing most idiomatic
As illustrated by the following test of a commercial English to Chinese MT system, TranStar, using sentences from the Brown corpus, current transfer-based approaches have no alternative to listing every translation pair. In the Brown corpus, 246 sentences take break as the main verb. After removing most idiomatic 133 usages and verb particle constructions, there are 157 sentences left. We used these sentences to test TranStar. The translation results are shown be- low: d=ui pohui ji&nxie to hreuk into pieces to na&ke d~m&ge to to h~ve • break 5 2 JIl t 0 juelie weifzn bsofL to bresk (8 rel~tlon) to ~g~inst to bresk out 0 0 o f~henguzh~ng chu&nlu d~du~n to break down to bresh into to break & continuity tupo deshixi&nd&n weibel to break through to bre&k even with to bre&k (~ promise) o w~nchenjued~bufen to bre&k with In the TranStar system, English break only has 13 Chinese verb entries. The numbers above are the frequencies with which the 157 sentences translated into a particular Chinese expression. Most of the zero frequencies represent Chinese verbs that correspond to English break idiomatic usages or verb particle constructions which were removed. The accuracy rate of the translation is not high. Only 30 (19.1%) words were correctly translated. The Chinese verb ~7]i~ (dasui) acts like a default translation when no other choice matches. The same 157 sentences were translated by one of the authors into 68 Chinese verb expres- sions. These expressions can be listed according to the frequency with which they occurred, in de- creasing order. The verb which has the highest rank is the verb which has the highest frequency. In this way, the frequency distribution of the two different translations can be shown below: Figure 1. Frequency distribution of translations It seems that the nature of the lexical selec- tion task in translation obeys Zipf's law. 
It means that, for all possible verb usages, a large portion is translated into a few target verbs, while a small portion might be translated into many different target verbs. Any approach that has a fixed num- ber of target candidate verbs and provides no way to measure the meaning similarity among verbs, is not able to handle the new verb usages, i.e., the small portion outside the dictionary cover- age. However, a native speaker has an unrestricted number of verbs for lexical selection. By measur- ing the similarities among target verbs, the most similar one can be chosen for the new verb usage. The challenge of verb representation is to capture the fluid nature of verb meanings that allows hu- man speakers to contrive new usages in every sen- tence. Translating English into Chinese serial verb compounds Translating the English verb break into Chinese (Mandarin) poses unusual difficulties for two rea- sons. One is that in English break can be thought of as a very general verb indicating an entire set of breaking events that can be distinguished by the resulting state of the object being broken. Shatter, snap, split, etc., can all be seen as more special- ized versions of the general breaking event. Chi- nese has no equivalent verb for indicating the class of breaking events, and each usage of break has to be mapped on to a more specialized lexical item. This is the equivalent of having to first interpret the English expression into its more semantically precise situation. For instance this would probably result in mapping, John broke the crystal vase, and John broke the stick onto John shattered the crys- tal vase and John snapped the stick. Also, English specializations of break do not cover all the ways in which Chinese can express a breaking event. But that is only part of the difficulty in trans- lation. In addition to requiring more semantically precise lexemes, Mandarin also requires a serial verb construction. 
The action by which force is exerted to violate the integrity of the object being broken must be specified, as well as the description of the resulting state of the broken object itself.

Serial verb compounds in Chinese - Chinese serial verb compounds are composed of two Chinese characters, with the first character being a verb, and the second character being a verb or adjective. The grammatical analysis can be found in (Wu, 1991). The following is an example:

  Yuehan da-sui le huapin.
  John hit-broken Asp. vase.
  John broke the vase. (VA)

Here, da is the action of John, sui is the resulting state of the vase after the action. These two Chinese characters are composed to form a verb compound. Chinese verb compounds are productive. Different verbs and adjectives can be composed to form new verb compounds, as in ji-sui, hit-being-in-pieces, or ji-duan, hit-being-in-line-shape. Many of these verb compounds have not been listed in the human dictionary. However, they must still be listed individually in a machine dictionary. Not any single-character verb or single-character adjective can be composed to form a VA type verb compound. The productive applications must be semantically sound, and therefore have to be treated individually.

Inadequacy of selection restrictions for choosing actions - By looking at specific examples, it soon becomes clear that shallow selection restrictions give very little information about the choice of the action. An understanding of the context is necessary. For the sentence John broke the vase, a correct translation is:

  Yuehan da-sui le huapin.
  John hit-in-pieces Asp. vase.

Here break is translated into a VA type verb compound. The action is specified clearly in the translation sentence. The following sentences, which do not specify the action clearly, are anomalous:

  * Yuehan sui le huapin.
    John in-pieces Asp. vase.

A translation with a causation verb is also anomalous:
  * Yuehan shi huapin sui le.
    John let vase in-pieces Asp.

The following example shows that the translation must depend on an understanding of the surrounding context.

  The earthquake shook the room violently, and the more fragile pieces did not hold up well. The dishes shattered, and the glass table was smashed into many pieces.

  Translation of the last clause:
  na boli zhuozi bei zhenchen le suipian
  That glass table Pass. shake-become Asp. pieces

Selection restrictions reliably choose result states - Selection restrictions are more reliable when they are used for specifying the result state. For example, break in the vase broke is translated into dasui (hit and broken into pieces), since the vase is brittle and easily broken into pieces. Break in the stick broke is translated into zheduan (bend and separated into line-segment shape), which is a default situation for breaking a line-segment-shaped object. However, even here, sometimes the context can override the selection restrictions on a particular noun. In John broke the stick into pieces, the obvious translation would be da sui instead. These examples illustrate that achieving correct lexical choice requires more than a simple matching of selection restrictions. A fine-grained semantic representation of the interpretation of the entire sentence is required. This can indicate the contextually implied action as well as the resulting state of the object involved. An explicit representation of the context is beyond the state of the art for current machine translation. When the context is not available, we need an algorithm for selecting the action verb. Following is a decision tree for translating English change-of-state verbs into Chinese (the tree itself is not legible in the source scan):

Figure 2.
Decision tree for translation

A multi-domain approach

We suggest that to achieve accurate lexical selection, it is necessary to have fine-grained selection restrictions that can be matched in a flexible fashion, and which can be augmented when necessary by context-dependent knowledge-based understanding. The underlying framework for both the selection restrictions on the verb arguments and the knowledge base should be a verb taxonomy that relates verbs with similar meanings by associating them with the same conceptual domains.

We view a verb meaning as a lexicalized concept which is undecomposable. However, this semantic form can be projected onto a set of concepts in different conceptual domains. Langacker (Langacker, 1988) presents a set of basic domains used for defining a knife. It is possible to define an entity by using the size, shape, color, weight, functionality, etc. We think it is also possible to identify a compatible set of conceptual domains for characterizing events and therefore defining verbs as well. Initially we are relying on the semantic domains suggested by Levin as relevant to syntactic alternations, such as motion, force, contact, change-of-state and action, etc. (Levin, 1992). We will augment these domains as needed to distinguish between different senses for the achievement of accurate lexical selection.

If words can be defined with concepts in a hierarchical structure, it is possible to measure the meaning similarity between words with an information measure based on WordNet (Resnik, 1993), or structure-level information based on a thesaurus (Kurohashi and Nagao, 1992). However, verb meanings are difficult to organize in a hierarchical structure. One reason is that many verb meanings are involved in several different conceptual domains.
For example, break identifies a change-of-state event with an optional causation conception, while hit identifies a complex event involving motion, force and contact domains. Those Chinese verb compounds with V + A constructions always identify complex events which involve action and change-of-state domains. Levin has demonstrated that in English a verb's syntactic behavior has a close relation to semantic components of the verb. Our lexical selection study shows that these semantic domains are also important for accurate lexical selection. For example, in the above decision tree for action selection, a Chinese verb compound dasui can be defined with a concept %hit-action in an action domain and a concept %separate-into-pieces in a change-of-state domain. The action domain can be further divided into motion, force, contact domains, etc. A related discussion about defining complex concepts with simple concepts can be found in (Ravin, 1990). The semantic relations of verbs that are relevant to syntactic behavior, and that capture part of the similarity between verbs, can be more closely realized with a conceptual multi-domain approach than with a paraphrase approach. Therefore we propose the following representation method for verbs, which makes use of several different concept domains for verb representation.

Defining verb projections - Following is a representation of a break sense:

  LEXEME      BREAK-1
  EXAMPLE     I dropped my cup and it broke.
  CONSTRAINT  (is-a physical-object E1) (is-a animate-object E0)
              (is-a instrument E2)
  OBL   ch-of-state    (%change-of-integrity E1)
  OPT   causation      (%cause E0 *)
        instrument     (%with-instrument E0 E2)
  IMP   time           (%around-time @t0 *)
        space          (%at-location @l0 E0) (%at-location @l1 E1)
                       (%at-location @l2 E2)
        action         @
        functionality  @

The CONSTRAINT slot encodes the selection information on verb arguments, but the meaning itself is not a paraphrase. The meaning representation is divided into three parts.
It identifies a %change-of-integrity concept in the change-of-state domain which is OBLIGATORY to the verb meaning. The causation and instrument domains are OPTIONAL and may be realized by syntactic alternations. The other time, space, action and functionality domains are IMPLICIT, and are necessary for all events of this type.

In each conceptual domain, lexicalized concepts can be organized in a hierarchical structure. The conceptual domains for English and Chinese are merged to form interlingua conceptual domains used for similarity measures. Following is part of the change-of-state domain containing English and Chinese lexicalized concepts:

  [Figure 3. Change-of-state domain for English and Chinese -- hierarchy not legible in the source scan.]

Within one conceptual domain, the similarity of two concepts is defined by how closely they are related in the hierarchy, i.e., their structural relations.

  [Figure 4. The concept similarity measure.]

The conceptual similarity between C1 and C2 is:

  ConSim(C1, C2) = 2*N3 / (N1 + N2 + 2*N3)

C3 is the least common superconcept of C1 and C2. N1 is the number of nodes on the path from C1 to C3. N2 is the number of nodes on the path from C2 to C3. N3 is the number of nodes on the path from C3 to the root.

After defining the similarity measure in one domain, the similarity between two verb meanings, e.g. a target verb and a source verb, can be defined as a summation of weighted similarities between pairs of simpler concepts in each of the domains the two verbs are projected onto:

  WordSim(V1, V2) = sum_i  w_i * ConSim(C_i,1, C_i,2)
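The two measures can be sketched directly. The hierarchy and concept names below are toy assumptions, and one counting convention is fixed (N1 and N2 count the nodes strictly below C3 on each path; N3 counts C3 up to and including the root), since the prose leaves the boundary cases open:

```python
# Toy concept hierarchy as child -> parent (hypothetical concept names).
parent = {
    'change-of-state': None,                 # root
    'separate': 'change-of-state',
    'separate-in-pieces': 'separate',
    'separate-in-duan': 'separate',
}

def path_to_root(c):
    """Return the list of concepts from c up to the root, c first."""
    nodes = []
    while c is not None:
        nodes.append(c)
        c = parent[c]
    return nodes

def consim(c1, c2):
    """ConSim(C1, C2) = 2*N3 / (N1 + N2 + 2*N3)."""
    p1, p2 = path_to_root(c1), path_to_root(c2)
    c3 = next(n for n in p1 if n in p2)      # least common superconcept C3
    n1 = p1.index(c3)                        # nodes below C3 on C1's path
    n2 = p2.index(c3)                        # nodes below C3 on C2's path
    n3 = len(path_to_root(c3))               # C3 up to and including root
    return 2 * n3 / (n1 + n2 + 2 * n3)

def wordsim(proj1, proj2, weights):
    """WordSim: weighted sum of per-domain concept similarities, where
    proj1/proj2 map each domain name to the verb's concept in it."""
    return sum(w * consim(proj1[d], proj2[d]) for d, w in weights.items())

print(consim('separate-in-pieces', 'separate-in-duan'))  # 2*2/(1+1+4) = 2/3
```

Identical concepts score 1.0 under this convention, and the score falls as the least common superconcept sits higher above the two concepts being compared.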
We then organize these concepts into hierarchical structures to form an interlingua conceptual base. The names of our concept domain constitute the artificial language on which an interlingua must be based, thus place us firmly in the knowledge based understanding MT camp. (Goodman and Nirenburg, 1991). The input to the system is the source verb ar- gument structure. After sense disambiguation, the internal sentence representation can be formed. The system then tries to find the target verb real- ization for the internal representation. If the con- cepts in the representation do not have any target verb realization, the system takes nearby concepts as candidates to see whether they have target verb realizations. If a target verb is found, an inexact match is performed with the target verb mean- ing and the internal representation, with the se- lection restrictions associated with the target verb being imposed on the input arguments. Therefore, the system has two measurements in this inexact match. One is the conceptual similarity of the in- ternal representation and the target verb meaning, and the other is the degree of satisfaction of the selection restrictions on the verb arguments. We take the conceptual similarity, i.e., the meaning, as having first priority over the selection restrictions. A running example - For the English sentence The branch broke, after disambiguation, the inter- nal meaning representation of the sentence can be: [ INTER-REP sentence-I ] ch-of-state (change-of-integrity branch-I) Since there is no Chinese lexicalized concept having an exact match for the concept change-of- integrity, the system looks at the similar concepts in the lattice around it. 
They are:

  %SEPARATE-IN-PIECES-STATE
  %SEPARATE-IN-NEEDLE-LIKE-STATE
  %SEPARATE-IN-DUAN-STATE
  %SEPARATE-IN-PO-STATE
  %SEPARATE-IN-SHANG-STATE
  %SEPARATE-IN-FENSUI-STATE

For one concept, %SEPARATE-IN-DUAN-STATE, there is a set of Chinese realizations:

  - duan la (to separate in line-segment shape).
  - da duan (to hit and separate the object in line-segment shape).
  - duan cheng (to separate in line-segment shape into).
  - zhe duan (to bend and separate in line-segment shape with human hands).
  - gua duan (to separate in line-segment shape by wind blowing).

After filling the argument of each verb representation and doing an inexact match with the internal representation, the result is as follows:

                duan la   da duan   duan cheng   zhe duan   gua duan
  conceptions     6/7        0          0           0          0
  constraints     3/14       0         3/7          0          0

The system then chooses the verb duan la as the target realization.

Handling metaphorical usages - One test of our approach was its ability to match metaphorical usages, relying on a handcrafted ontology for the objects involved. We include it here to illustrate the flexibility and power of the similarity measure for handling new usages. In these examples the system effectively performs coercion of the verb arguments (Hobbs, 1986). The system was able to translate the following metaphorical usage from the Brown corpus correctly:
So the system can choose the break sense loss of functionality over all the other break senses as the most probable one. Based on this interpretation, the system can correctly select the Chinese verb da-po as the target realization. The correct selection becomes possible because the system has a measurement for the degree of satisfaction of the selection restrictions.

In another example:

ca43:10: Other tax-exempt bonds of State and local governments hit a price peak on February 21, according to Standard & Poor's average.

hit is defined with the concepts %move-toward-in-space, %contact-in-space, and %receive-force. Since tax-exempt bonds and a price peak are not physical objects, the argument structure is excluded from the HIT usage type. If the system has the knowledge that a price can be changed in value and fixed at some value, and these concepts of change-in-value and fix-at-value are near the concepts %move-toward-in-space and %contact-in-space, the system can interpret the meaning as change-in-value and fix-at-value. In this case, the correct lexical selection can be made as da-dao. This result is predicated on the definition of hit as having concepts in three domains that are all structurally related, i.e., nearby in the hierarchy, to the concepts related to prices.

Methodology and experimental results

Our UNICON system translates a subset (the more concrete usages) of the English break verbs from the Brown corpus into Chinese with greater freedom to choose the target verbs and more accuracy than the TranStar system. Our coverage has been extended to include verbs from the semantically similar hit, touch, break and cut classes as defined by Beth Levin. Twenty-one English verbs from these classes have been encoded in the system. Four hundred Brown corpus sentences which contain these 21 English verbs have been selected. Among them, 100 sentences with concrete objects are used as training samples.
The verbs were translated into Chinese verbs. The other 300 sentences are divided into two test sets. Test set one contains 154 sentences that are carefully chosen to make sure the verb takes a concrete object as its patient. For test set one, the lexical selection of the system achieved a correct rate of 57.8% before encoding the meaning of the unknown verb arguments, and a correct rate of 99.45% after giving the unknown English words conceptual meanings in the system's conceptual hierarchy. The second test set contains 116 sentences, including sentences with non-concrete objects, metaphors, etc. The lexical selection of the system achieved a correct rate of 31% before encoding the unknown verb arguments, a 75% correct rate after adding meanings, and an 88.8% correct rate after the extended selection process was applied. The extended selection process relaxes the constraints and attempts to find the best possible target verb with the similarity measure.

From these tests, we can see the benefit of defining the verbs on several cognitive domains. The conceptual hierarchical structure provides a way of measuring the similarities among different verb senses; with relaxation, metaphorical processing becomes possible. The correct rate is improved by 13.8% by using this extended selection process.

Discussion

With examples from the translation of English to Chinese we have shown that verb semantic representation has great impact on the quality of lexical selection. Selection restrictions on verb arguments can only define default situations for verb events, and are often overridden by context information. Therefore, we propose a novel method for defining verbs based on a set of shared semantic domains. This representation scheme not only takes care of the semantic-syntactic correspondence, but also provides similarity measures for the system for the performance of inexact matches based on verb meanings.
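The paper does not spell out the similarity measure itself. One simple stand-in, shown here purely for illustration, is shortest-path distance in a toy concept hierarchy, with a smaller distance standing for higher conceptual similarity; the `HIER` graph and its edges are invented for the hit/price example, not taken from the system:

```python
from collections import deque

def path_distance(a, b, graph):
    """Shortest-path distance between two concepts; returns None if the
    concepts are unconnected."""
    if a == b:
        return 0
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        for nbr in graph.get(node, []):
            if nbr == b:
                return d + 1
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, d + 1))
    return None  # unrelated concepts

# Toy hierarchy: change-in-value and fix-at-value sit near the motion/contact
# concepts used to define `hit`, enabling the relaxed interpretation.
HIER = {
    "%move-toward-in-space": ["%contact-in-space"],
    "%contact-in-space": ["%move-toward-in-space", "change-in-value"],
    "change-in-value": ["%contact-in-space", "fix-at-value"],
    "fix-at-value": ["change-in-value"],
}
print(path_distance("%move-toward-in-space", "fix-at-value", HIER))  # → 3
```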
The conceptual similarity has priority over selection constraints on the verb arguments. We leave scaling up the system to future work.

REFERENCES

Dorr, B. J. (1990). Lexical Conceptual Structure and Machine Translation. PhD thesis, MIT.

Goodman, K. & Nirenburg, S., editors (1991). The KBMT Project: A Case Study in Knowledge-Based Machine Translation. Morgan Kaufmann Publishers.

Hobbs, J. (1986). Overview of the TACITUS Project. Computational Linguistics, 12(3).

Jackendoff, R. (1990). Semantic Structures. MIT Press.

Kurohashi, S. & Nagao, M. (1992). Dynamic Programming Method for Analyzing Conjunctive Structures in Japanese. In Proceedings of the 14th International Conference on Computational Linguistics (COLING-92), Nantes, France.

Langacker, R. W. (1988). An Overview of Cognitive Grammar. In Rudzka-Ostyn, B., editor, Topics in Cognitive Grammar. John Benjamins Publishing Company, Amsterdam/Philadelphia.

Levin, B. (1992). English Verb Classes and Alternations: A Preliminary Investigation. Technical report, Department of Linguistics, Northwestern University, 2016 Sheridan Road, Evanston, IL 60208.

Nirenburg, S., Carbonell, J., Tomita, M., & Goodman, K. (1992). Machine Translation: A Knowledge-Based Approach. Morgan Kaufmann Publishers.

Ravin, Y. (1990). Lexical Semantics without Thematic Roles. Clarendon Press, Oxford.

Resnik, P. (1993). Selection and Information: A Class-Based Approach to Lexical Relationships. PhD thesis, Department of Information and Computer Science, University of Pennsylvania.

Wu, D. (1991). On Serial Verb Constructions. PhD thesis, Department of Information and Computer Science, University of Maryland.
MULTI-PARAGRAPH SEGMENTATION OF EXPOSITORY TEXT

Marti A. Hearst
Computer Science Division, 571 Evans Hall
University of California, Berkeley
Berkeley, CA 94720
and Xerox Palo Alto Research Center
marti@cs.berkeley.edu

Abstract

This paper describes TextTiling, an algorithm for partitioning expository texts into coherent multi-paragraph discourse units which reflect the subtopic structure of the texts. The algorithm uses domain-independent lexical frequency and distribution information to recognize the interactions of multiple simultaneous themes. Two fully-implemented versions of the algorithm are described and shown to produce segmentation that corresponds well to human judgments of the major subtopic boundaries of thirteen lengthy texts.

INTRODUCTION

The structure of expository texts can be characterized as a sequence of subtopical discussions that occur in the context of a few main topic discussions. For example, a popular science text called Stargazers, whose main topic is the existence of life on earth and other planets, can be described as consisting of the following subdiscussions (numbers indicate paragraph numbers):

1-3   Intro - the search for life in space
4-5   The moon's chemical composition
6-8   How early proximity of the moon shaped it
9-12  How the moon helped life evolve on earth
13    Improbability of the earth-moon system
14-16 Binary/trinary star systems make life unlikely
17-18 The low probability of non-binary/trinary systems
19-20 Properties of our sun that facilitate life
21    Summary

Subtopic structure is sometimes marked in technical texts by headings and subheadings which divide the text into coherent segments; Brown & Yule (1983:140) state that this kind of division is one of the most basic in discourse. However, many expository texts consist of long sequences of paragraphs with very little structural demarcation.
This paper presents fully-implemented algorithms that use lexical cohesion relations to partition expository texts into multi-paragraph segments that reflect their subtopic structure. Because the model of discourse structure is one in which text is partitioned into contiguous, nonoverlapping blocks, I call the general approach TextTiling. The ultimate goal is to not only identify the extents of the subtopical units, but to label their contents as well. This paper focusses only on the discovery of subtopic structure, leaving determination of subtopic content to future work.

Most discourse segmentation work is done at a finer granularity than that suggested here. However, for lengthy written expository texts, multi-paragraph segmentation has many potential uses, including the improvement of computational tasks that make use of distributional information. For example, disambiguation algorithms that train on arbitrary-size text windows, e.g., Yarowsky (1992) and Gale et al. (1992b), and algorithms that use lexical co-occurrence to determine semantic relatedness, e.g., Schütze (1993), might benefit from using windows with motivated boundaries instead.

Information retrieval algorithms can use subtopic structuring to return meaningful portions of a text if paragraphs are too short and sections are too long (or are not present). Motivated segments can also be used as a more meaningful unit for indexing long texts. Salton et al. (1993), working with encyclopedia text, find that comparing a query against sections and then paragraphs is more successful than comparing against full documents alone. I have used the results of TextTiling in a new paradigm for information access on full-text documents (Hearst 1994).

The next section describes the discourse model that motivates the approach.
This is followed by a description of two algorithms for subtopic structuring that make use only of lexical cohesion relations, the evaluation of these algorithms, and a summary and discussion of future work.

THE DISCOURSE MODEL

Many discourse models assume a hierarchical segmentation model, e.g., attentional/intentional structure (Grosz & Sidner 1986) and Rhetorical Structure Theory (Mann & Thompson 1987). Although many aspects of discourse analysis require such a model, I choose to cast expository text into a linear sequence of segments, both for computational simplicity and because such a structure is sufficient for the coarse-grained tasks of interest here.1

[Figure 1: Skorochod'ko's text structure types (Chained, Ringed, Monolith, Piecewise). Nodes correspond to sentences and edges between nodes indicate strong term overlap between the sentences.]

Skorochod'ko (1972) suggests discovering a text's structure by dividing it up into sentences and seeing how much word overlap appears among the sentences. The overlap forms a kind of intra-structure; fully connected graphs might indicate dense discussions of a topic, while long spindly chains of connectivity might indicate a sequential account (see Figure 1). The central idea is that of defining the structure of a text as a function of the connectivity patterns of the terms that comprise it. This is in contrast with segmenting guided primarily by fine-grained discourse cues such as register change, focus shift, and cue words. From a computational viewpoint, deducing textual topic structure from lexical connectivity alone is appealing, both because it is easy to compute, and also because discourse cues are sometimes misleading with respect to the topic structure (Brown & Yule 1983) (§3).

1 Additionally, Passonneau & Litman (1993) concede the difficulty of eliciting hierarchical intentional structure with any degree of consistency from their human judges.
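Skorochod'ko's connectivity idea can be sketched directly: build a graph whose nodes are sentences and whose edges mark word overlap between sentence pairs. The threshold, the bag-of-words tokenization, and the toy sentences below are illustrative, not from the paper:

```python
def overlap_graph(sentences, threshold=1):
    """Skorochod'ko-style connectivity: an edge joins two sentences when
    they share at least `threshold` words."""
    bags = [set(s.lower().split()) for s in sentences]
    edges = []
    for i in range(len(bags)):
        for j in range(i + 1, len(bags)):
            if len(bags[i] & bags[j]) >= threshold:
                edges.append((i, j))
    return edges

sents = [
    "the moon shaped early life",
    "life evolved under the moon",
    "binary stars are common",
]
print(overlap_graph(sents))  # → [(0, 1)]
```

A fully connected cluster of nodes would suggest a dense topic discussion, while a long chain of single edges would suggest a sequential account.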
The topology most of interest to this work is the final one in the diagram, the Piecewise Monolithic Structure, since it represents sequences of densely interrelated discussions linked together, one after another. This topology maps nicely onto that of viewing documents as a sequence of densely interrelated subtopical discussions, one following another. This assumption, as will be seen, is not always valid, but is nevertheless quite useful.

This theoretical stance bears a close resemblance to Chafe's notion of The Flow Model of discourse (Chafe 1979), in description of which he writes (pp 179-180):

    Our data ... suggest that as a speaker moves from focus to focus (or from thought to thought) there are certain points at which there may be a more or less radical change in space, time, character configuration, event structure, or, even, world .... At points where all of these change in a maximal way, an episode boundary is strongly present. But often one or another will change considerably while others will change less radically, and all kinds of varied interactions between these several factors are possible.2

Although Chafe's work concerns narrative text, the same kind of observation applies to expository text. The TextTiling algorithms are designed to recognize episode boundaries by determining where thematic components like those listed by Chafe change in a maximal way.

Many researchers have studied the patterns of occurrence of characters, setting, time, and the other thematic factors that Chafe mentions, usually in the context of narrative. In contrast, I attempt to determine where a relatively large set of active themes changes simultaneously, regardless of the type of thematic factor. This is especially important in expository text in which the subject matter tends to structure the discourse more so than characters, setting, etc.
For example, in the Stargazers text, a discussion of continental movement, shoreline acreage, and habitability gives way to a discussion of binary and unary star systems. This is not so much a change in setting or character as a change in subject matter. Therefore, to recognize where the subtopic changes occur, I make use of lexical cohesion relations (Halliday & Hasan 1976) in a manner similar to that suggested by Skorochod'ko.

Morris and Hirst's pioneering work on computing discourse structure from lexical relations (Morris & Hirst 1991), (Morris 1988) is a precursor to the work reported on here. Influenced by Halliday & Hasan's (1976) theory of lexical coherence, Morris developed an algorithm that finds chains of related terms via a comprehensive thesaurus (Roget's Fourth Edition).3

2 Interestingly, Chafe arrived at the Flow Model after working extensively with, and then becoming dissatisfied with, a hierarchical model of paragraph structure like that of Longacre (1979).
3 The algorithm is executed by hand since the thesaurus is not generally available online.

[Figure 2 (see caption below): a grid showing, by sentence number, the frequencies of the terms form, scientist, space, star, binary, trinary, astronomer, orbit, pull, planet, galaxy, lunar, life, moon, move, continent, shoreline, time, water, say, and species.]
Figure 2: Distribution of selected terms from the Stargazer text, with a single digit frequency per sentence number (blanks indicate a frequency of zero).

For example, the words residential and apartment both index the same thesaural category and can thus be considered to be in a coherence relation with one another. The chains are used to structure texts according to the attentional/intentional theory of discourse structure (Grosz & Sidner 1986), and the extent of the chains corresponds to the extent of a segment. The algorithm also incorporates the notion of "chain returns" - repetition of terms after a long hiatus - to close off an intention that spans over a digression.

Since the Morris & Hirst (1991) algorithm attempts to discover attentional/intentional structure, their goals are different than those of TextTiling. Specifically, the discourse structure they attempt to discover is hierarchical and more fine-grained than that discussed here. Thus their model is not set up to take advantage of the fact that multiple simultaneous chains might occur over the same intention. Furthermore, chains tend to overlap one another extensively in long texts.

Figure 2 shows the distribution, by sentence number, of selected terms from the Stargazers text. The first two terms have fairly uniform distribution and so should not be expected to provide much information about the divisions of the discussion. The next two terms occur mainly at the beginning and the end of the text, while terms binary through planet have considerable overlap from sentences 58 to 78. There is a somewhat well-demarked cluster of terms between sentences 35 and 50, corresponding to the grouping together of paragraphs 10, 11, and 12 by human judges who have read the text.
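A minimal chainer in the spirit of Morris and Hirst's algorithm, with their thesaural relatedness replaced by plain term repetition and the `max_gap` parameter invented for illustration, might look like:

```python
def lexical_chains(tokens, max_gap=3):
    """Group repetitions of a term into a chain as long as successive
    occurrences are at most `max_gap` tokens apart; a larger gap closes
    the chain and starts a new one.  Returns (term, start, end) tuples."""
    chains = []
    open_chains = {}  # term -> (start position, last position)
    for pos, tok in enumerate(tokens):
        if tok in open_chains and pos - open_chains[tok][1] <= max_gap:
            open_chains[tok] = (open_chains[tok][0], pos)  # extend chain
        else:
            if tok in open_chains:
                chains.append((tok,) + open_chains[tok])  # close old chain
            open_chains[tok] = (pos, pos)  # start a new chain
    chains.extend((t,) + span for t, span in open_chains.items())
    return sorted(chains)

toks = ["moon", "rock", "moon", "star", "star", "orbit", "orbit", "moon"]
print(lexical_chains(toks))
```

Note the two separate moon chains: the final occurrence is too far from the earlier ones, so repetition alone splits what a thesaural "chain return" might have rejoined.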
From the diagram it is evident that simply looking for chains of repeated terms is not sufficient for determining subtopic breaks. Even combining terms that are closely related semantically into single chains is insufficient, since often several different themes are active in the same segment. For example, sentences 37-51 contain dense interaction among the terms move, continent, shoreline, time, species, and life, and all but the latter occur only in this region. However, it is the case that the interlinked terms of sentences 57-71 (space, star, binary, trinary, astronomer, orbit) are closely related semantically, assuming the appropriate senses of the terms have been determined.

ALGORITHMS FOR DISCOVERING SUBTOPIC STRUCTURE

Many researchers (e.g., Halliday & Hasan (1976), Tannen (1989), Walker (1991)) have noted that term repetition is a strong cohesion indicator. I have found in this work that term repetition alone is a very useful indicator of subtopic structure, when analyzed in terms of multiple simultaneous information threads. This section describes two algorithms for discovering subtopic structure using term repetition as a lexical cohesion indicator.

The first method compares, for a given window size, each pair of adjacent blocks of text according to how similar they are lexically. This method assumes that the more similar two blocks of text are, the more likely it is that the current subtopic continues, and, conversely, if two adjacent blocks of text are dissimilar, this implies a change in subtopic flow. The second method, an extension of Morris & Hirst's (1991) approach, keeps track of active chains of repeated terms, where membership in a chain is determined by location in the text. The method determines subtopic flow by recording where in the discourse the bulk of one set of chains ends and a new set of chains begins.

The core algorithm has three main parts:

1. Tokenization
2. Similarity Determination
3.
Boundary Identification

Tokenization refers to the division of the input text into individual lexical units. For both versions of the algorithm, the text is subdivided into pseudosentences of a pre-defined size w (a parameter of the algorithm) rather than actual syntactically-determined sentences, thus circumventing normalization problems. For the purposes of the rest of the discussion these groupings of tokens will be referred to as token-sequences. In practice, setting w to 20 tokens per token-sequence works best for many texts. The morphologically-analyzed token is stored in a table along with a record of the token-sequence number it occurred in, and how frequently it appeared in the token-sequence. A record is also kept of the locations of the paragraph breaks within the text. Closed-class and other very frequent words are eliminated from the analysis.

After tokenization, the next step is the comparison of adjacent pairs of blocks of token-sequences for overall lexical similarity. Another important parameter for the algorithm is the blocksize: the number of token-sequences that are grouped together into a block to be compared against an adjacent group of token-sequences. This value, labeled k, varies slightly from text to text; as a heuristic it is the average paragraph length (in token-sequences). In practice, a value of k = 6 works well for many texts. Actual paragraphs are not used because their lengths can be highly irregular, leading to unbalanced comparisons.

Similarity values are computed for every token-sequence gap number; that is, a score is assigned to token-sequence gap i corresponding to how similar the token-sequences from token-sequence i - k through i are to the token-sequences from i + 1 to i + k + 1. Note that this moving window approach means that each token-sequence appears in k * 2 similarity computations.
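The tokenization step can be sketched as follows; the stopword list and the regular expression are illustrative stand-ins for the paper's closed-class filtering and morphological analysis:

```python
import re

STOPWORDS = {"the", "a", "of", "and", "to", "in", "is", "it"}  # illustrative

def token_sequences(text, w=20):
    """Lowercase the text, strip closed-class words, and group the
    remaining tokens into pseudosentences of w tokens each."""
    tokens = [t for t in re.findall(r"[a-z]+", text.lower())
              if t not in STOPWORDS]
    return [tokens[i:i + w] for i in range(0, len(tokens), w)]

seqs = token_sequences("The moon shaped the early earth.", w=3)
print(seqs)  # → [['moon', 'shaped', 'early'], ['earth']]
```

In the algorithm proper, w would be 20 and a record of paragraph-break locations would be kept alongside the token table.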
Similarity between blocks is calculated by a cosine measure: given two text blocks b1 and b2, each with k token-sequences,

    sim(b_1, b_2) = \frac{\sum_t w_{t,b_1} w_{t,b_2}}{\sqrt{\sum_t w_{t,b_1}^2 \sum_t w_{t,b_2}^2}}

where t ranges over all the terms that have been registered during the tokenization step, and w_{t,b_1} is the weight assigned to term t in block b1. In this version of the algorithm, the weights on the terms are simply their frequency within the block.4 Thus if the similarity score between two blocks is high, then the blocks have many terms in common. This formula yields a score between 0 and 1, inclusive.

These scores can be plotted, token-sequence number against similarity score. However, since similarity is measured between blocks b1 and b2, where b1 spans token-sequences i - k through i and b2 spans i + 1 to i + k + 1, the measurement's x-axis coordinate falls between token-sequences i and i + 1. Rather than plotting a token-sequence number on the x-axis, we plot token-sequence gap number i. The plot is smoothed with average smoothing; in practice one round of average smoothing with a window size of three works best for most texts.

Boundaries are determined by changes in the sequence of similarity scores. The token-sequence gap numbers are ordered according to how steeply the slopes of the plot are to either side of the token-sequence gap, rather than by their absolute similarity score. For a given token-sequence gap i, the algorithm looks at the scores of the token-sequence gaps to the left of i as long as their values are increasing. When the values to the left peak out, the difference between the score at the peak and the score at i is recorded. The same procedure takes place with the token-sequence gaps to the right of i; their scores are examined as long as they continue to rise. The relative height of the peak to the right of i is added to the relative height of the peak to the left. (A gap occurring at a peak will have a score of zero since neither of its neighbors is higher than it.)
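The block cosine and the depth-score computation described above can be sketched as follows; this is a minimal illustration, with smoothing and the later boundary-selection cutoff omitted:

```python
import math
from collections import Counter

def cosine(b1, b2):
    """Cosine between two blocks, each a list of token-sequences, with
    within-block term frequency as the term weight."""
    w1 = Counter(t for seq in b1 for t in seq)
    w2 = Counter(t for seq in b2 for t in seq)
    num = sum(w1[t] * w2[t] for t in w1)
    den = math.sqrt(sum(v * v for v in w1.values()) *
                    sum(v * v for v in w2.values()))
    return num / den if den else 0.0

def depth_scores(sims):
    """For each gap, climb left and right while scores keep rising, and
    sum the two relative peak heights; peaks score zero."""
    depths = []
    for i, s in enumerate(sims):
        l = i
        while l > 0 and sims[l - 1] >= sims[l]:
            l -= 1
        r = i
        while r < len(sims) - 1 and sims[r + 1] >= sims[r]:
            r += 1
        depths.append((sims[l] - s) + (sims[r] - s))
    return depths

print(round(cosine([["moon", "life"]], [["moon", "star"]]), 2))  # → 0.5
print([round(d, 2) for d in depth_scores([0.5, 0.2, 0.6])])  # → [0.0, 0.7, 0.0]
```

The deepest valleys (here, the middle gap) are the candidate segment boundaries, later snapped to the nearest true paragraph break.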
These new scores, called depth scores, corresponding to how sharp a change occurs on both sides of the token-sequence gap, are then sorted. Segment boundaries are assigned to the token-sequence gaps with the largest corresponding scores, adjusted as necessary to correspond to true paragraph breaks. A proviso check is done that prevents assignment of very close adjacent segment boundaries. Currently there must be at least three intervening token-sequences between boundaries. This helps control for the fact that many texts have spurious header information and single-sentence paragraphs.

The algorithm must determine how many segments to assign to a document, since every paragraph is a potential segment boundary.

4 Earlier work weighted the terms according to their frequency times their inverse document frequency. In these more recent experiments, simple term frequencies seem to work better.

[Figure 3: Judgments of seven readers on the Stargazer text. Internal numbers indicate location of gaps between paragraphs; x-axis indicates token-sequence gap number, y-axis indicates judge number; a break in a horizontal line indicates a judge-specified segment break.]

[Figure 4: Results of the block similarity algorithm on the Stargazer text. Internal numbers indicate paragraph numbers, x-axis indicates token-sequence gap number, y-axis indicates similarity between blocks centered at the corresponding token-sequence gap.]
Vertical lines indicate boundaries chosen by the algorithm; for example, the leftmost vertical line represents a boundary after paragraph 3. Note how these align with the boundary gaps of Figure 3 above.

Any attempt to make an absolute cutoff is problematic since there would need to be some correspondence to the document style and length. A cutoff based on a particular valley depth is similarly problematic. I have devised a method for determining the number of boundaries to assign that scales with the size of the document and is sensitive to the patterns of similarity scores that it produces: the cutoff is a function of the average and standard deviation of the depth scores for the text under analysis. Currently a boundary is drawn only if the depth score exceeds the average depth score minus half of its standard deviation (s̄ - σ/2).

EVALUATION

One way to evaluate these segmentation algorithms is to compare against judgments made by human readers, another is to compare the algorithms against texts pre-marked by authors, and a third way is to see how well the results improve a computational task. This section compares the algorithm against reader judgments, since author markups are fallible and are usually applied to text types that this algorithm is not designed for, and Hearst (1994) shows how to use TextTiles in a task (although it does not show whether or not the results of the algorithms used here are better than some other algorithm with similar goals).

Reader Judgments

Judgments were obtained from seven readers for each of thirteen magazine articles which satisfied the length criteria (between 1800 and 2500 words)5 and which contained little structural demarcation.

5 One longer text of 2932 words was used since reader judgments had been obtained for it from an earlier experiment. Judges were technical researchers. Two texts had three or four short headers which were removed for consistency.
The judges were asked simply to mark the paragraph boundaries at which the topic changed; they were not given more explicit instructions about the granularity of the segmentation.

Figure 3 shows the boundaries marked by seven judges on the Stargazers text. This format helps illustrate the general trends made by the judges and also helps show where and how often they disagree. For instance, all but one judge marked a boundary between paragraphs 2 and 3. The dissenting judge did mark a boundary after 3, as did two of the concurring judges. The next three major boundaries occur after paragraphs 5, 9, 12, and 13. There is some contention in the later paragraphs; three readers marked both 16 and 18, two marked 18 alone, and two marked 17 alone. The outline in the Introduction gives an idea of what each segment is about.

Passonneau & Litman (1993) discuss at length considerations about evaluating segmentation algorithms according to reader judgment information. As Figure 3 shows, agreement among judges is imperfect, but trends can be discerned. In Passonneau & Litman's (1993) data, if 4 or more out of 7 judges mark a boundary, the segmentation is found to be significant using a variation of the Q-test (Cochran 1950). My data showed similar results. However, it isn't clear how useful this significance information is, since a simple majority does not provide overwhelming proof about the objective reality of the subtopic break. Since readers often disagree about where to draw a boundary marking for a topic shift, one can only use the general trends as a basis from which to compare different algorithms. Since the goals of TextTiling are better served by algorithms that produce more rather than fewer boundaries, I set the cutoff for "true" boundaries to three rather than four judges per paragraph.6 The remaining gaps are considered nonboundaries.

Results

Figure 4 shows a plot of the results of applying the block comparison algorithm to the Stargazer text.
When the lowermost portion of a valley is not located at a paragraph gap, the judgment is moved to the nearest paragraph gap.7 For the most part, the regions of strong similarity correspond to the regions of strong agreement among the readers. (The results for this text were fifth highest out of the 13 test texts.) Note however, that the similarity information around paragraph 12 is weak. This paragraph briefly summarizes the contents of the previous three paragraphs; much of the terminology that occurred in all of them reappears in this one location (in the spirit of a Grosz & Sidner (1986) "pop" operation). Thus it displays low similarity both to itself and to its neighbors. This is an example of a breakdown caused by the assumptions about the subtopic structure. It is possible that an additional pass through the text could be used to find structure of this kind.

The final paragraph is a summary of the entire text; the algorithm recognizes the change in terminology from the preceding paragraphs and marks a boundary; only two of the readers chose to differentiate the summary; for this reason the algorithm is judged to have made an error even though this sectioning decision is reasonable. This illustrates the inherent fallibility of testing against reader judgments, although in part this is because the judges were given loose constraints.

Following the advice of Gale et al. (1992a), I compare the algorithm against both upper and lower bounds. The upper bound in this case is the reader judgment data.

6 Paragraphs of three or fewer sentences were combined with their neighbor if that neighbor was deemed to follow a "true" boundary, as in paragraphs 2 and 3 of the Stargazers text.
7 This might be explained in part by (Stark 1988), who shows that readers disagree measurably about where to place paragraph boundaries when presented with texts with those boundaries removed.
The lower bound is a baseline algorithm that is a simple, reasonable approach to the problem that can be automated. A simple way to segment the texts is to place boundaries randomly in the document, constraining the number of boundaries to equal that of the average number of paragraph gaps assigned by judges. In the test data, boundaries are placed in about 41% of the paragraph gaps. A program was written that places a boundary at each potential gap 41% of the time (using a random number generator), and run 10,000 times for each text, and the average of the scores of these runs was found. These scores appear in Table 1 (results at 33% are also shown for comparison purposes).

The algorithms are evaluated according to how many true boundaries they select out of the total selected (precision) and how many true boundaries are found out of the total possible (recall) (Salton 1988). The recall measure implicitly signals the number of missed boundaries (false negatives, or deletion errors); the number of false positives, or insertion errors, is indicated explicitly.

In many cases the algorithms are almost correct but off by one paragraph, especially in the texts that the algorithm performs poorly on. When the block similarity algorithm is allowed to be off by one paragraph, there is dramatic improvement in the scores for those texts, as seen in the lower part of Table 2, yielding an overall precision of 83% and recall of 78%. As in Figure 4, it is often the case that where the algorithm is incorrect, e.g., paragraph gap 11, the overall blocking is very close to what the judges intended.

Table 1 shows that both the blocking algorithm and the chaining algorithm are sandwiched between the upper and lower bounds. Table 2 shows some of these results in more detail. The block similarity algorithm seems to work slightly better than the chaining algorithm, although the difference may not prove significant over the long run. Furthermore, in both versions of the algorithm, changes to the parameters of the algorithm perturb the resulting boundary markings.
Furthermore, in both versions of the algorithm, changes to the parameters of the algorithm perturb the resulting boundary markings. This is an undesirable property and perhaps could be remedied with some kind of information-theoretic formulation of the problem.

                Precision     Recall
Baseline 33%    .44 (.08)     .37 (.04)
Baseline 41%    .43 (.08)     .42 (.03)
Chains          .64 (.17)     .58 (.17)
Blocks          .66 (.18)     .61 (.13)
Judges          .81 (.06)     .71 (.06)

Table 1: Precision and Recall values for 13 test texts (second figure in each cell is the standard deviation).

SUMMARY AND FUTURE WORK

This paper has described algorithms for the segmentation of expository texts into discourse units that reflect the subtopic structure of expository text. I have introduced the notion of the recognition of multiple simultaneous themes, which bears some resemblance to Chafe's Flow Model of discourse and Skorochod'ko's text structure types. The algorithms are fully implemented: term repetition alone, without use of thesaural relations, knowledge bases, or inference mechanisms, works well for many of the experimental texts. The structure it obtains is coarse-grained but generally reflects human judgment data. Earlier work (Hearst 1993) incorporated thesaural information into the algorithms; surprisingly, the latest experiments find that this information degrades the performance. This could very well be due to problems with the algorithm used. A simple algorithm that just posits relations among terms that are a small distance apart according to WordNet (Miller et al. 1990) or Roget's 1911 thesaurus (from Project Gutenberg), modeled after Morris and Hirst's heuristics, might work better. Therefore I do not feel the issue is closed, and instead consider successful grouping of related words as future work. As another possible alternative, Kozima (1993) has suggested using a (computationally expensive) semantic similarity metric to find similarity among terms within a small window of text (5 to 7 words).
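The small-distance term-grouping idea sketched above can be made concrete as a windowed pass over the token stream, with the relatedness predicate (a WordNet or thesaurus lookup, say) left as a caller-supplied function. This is a minimal sketch; the function names and the window size are illustrative, not from the paper:

```python
def link_related_terms(tokens, is_related, max_dist=3):
    """Posit a relation between any two tokens at most `max_dist`
    positions apart, per a caller-supplied relatedness predicate
    (e.g., a WordNet- or Roget-based lookup). Returns linked
    index pairs."""
    links = []
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + max_dist, len(tokens))):
            if is_related(w, tokens[j]):
                links.append((i, j))
    return links
```

The resulting links could then feed the same chaining machinery that term repetition feeds in the implemented algorithm.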
This work does not incorporate the notion of multiple simultaneous themes but instead just tries to find breaks in semantic similarity among a small number of terms. A good strategy may be to substitute this kind of similarity information for term repetition in algorithms like those described here. Another possibility would be to use semantic similarity information as computed in Schütze (1993), Resnik (1993), or Dagan et al. (1993). The use of discourse cues for detection of segment boundaries and other discourse purposes has been extensively researched, although predominantly on spoken text (see Hirschberg & Litman (1993) for a summary of six research groups' treatments of 64 cue words). It is possible that incorporation of such information may provide a relatively simple way to improve the cases where the algorithm is off by one paragraph.

Acknowledgments

This paper has benefited from the comments of Graeme Hirst, Jan Pedersen, Penni Sibun, and Jeff Siskind. I would like to thank Anne Fontaine for her interest and help in the early stages of this work, and Robert Wilensky for supporting this line of research. This work was sponsored in part by the Advanced Research Projects Agency under Grant No. MDA972-92-J-1029 with the Corporation for National Research Initiatives (CNRI), and by the Xerox Palo Alto Research Center.

References

BROWN, GILLIAN, & GEORGE YULE. 1983. Discourse Analysis. Cambridge Textbooks in Linguistics Series. Cambridge University Press.
CHAFE, WALLACE L. 1979. The flow of thought and the flow of language. In Syntax and Semantics: Discourse and Syntax, ed. by Talmy Givón, volume 12, 159-182. Academic Press.
COCHRAN, W. G. 1950. The comparison of percentages in matched samples. Biometrika 37.256-266.
DAGAN, IDO, SHAUL MARCUS, & SHAUL MARKOVITCH. 1993. Contextual word similarity and estimation from sparse data. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, 164-171.
GALE, WILLIAM A., KENNETH W. CHURCH, & DAVID YAROWSKY. 1992a. Estimating upper and lower bounds on the performance of word-sense disambiguation programs. In Proceedings of the 30th Meeting of the Association for Computational Linguistics, 249-256.
——, ——, & ——. 1992b. A method for disambiguating word senses in a large corpus. Computers and the Humanities 5-6.415-439.
GROSZ, BARBARA J., & CANDACE L. SIDNER. 1986. Attention, intention, and the structure of discourse. Computational Linguistics 12.172-204.
HALLIDAY, M. A. K., & R. HASAN. 1976. Cohesion in English. London: Longman.
HEARST, MARTI A. 1993. TextTiling: A quantitative approach to discourse segmentation. Technical Report Sequoia 93/24, Computer Science Department, University of California, Berkeley.
——, 1994. Context and Structure in Automated Full-Text Information Access. University of California at Berkeley dissertation. (Computer Science Division Technical Report).
HIRSCHBERG, JULIA, & DIANE LITMAN. 1993. Empirical studies on the disambiguation of cue phrases. Computational Linguistics 19.501-530.

[Table 2: per-text precision, recall, and correct/inserted boundary counts for the Baseline 41% (avg), Blocks, Chains, and Judges (avg) segmentations of the 13 test texts.]

Table 2: Scores by text, showing precision and recall.
(C) indicates the number of correctly placed boundaries, (I) indicates the number of inserted boundaries. The number of deleted boundaries can be determined by subtracting (C) from Total Possible.

KOZIMA, HIDEKI. 1993. Text segmentation based on similarity between words. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, 286-288, Columbus, OH.
LONGACRE, R. E. 1979. The paragraph as a grammatical unit. In Syntax and Semantics: Discourse and Syntax, ed. by Talmy Givón, volume 12, 115-134. Academic Press.
MANN, WILLIAM C., & SANDRA A. THOMPSON. 1987. Rhetorical structure theory: A theory of text organization. Technical Report ISI/RS 87-190, ISI.
MILLER, GEORGE A., RICHARD BECKWITH, CHRISTIANE FELLBAUM, DEREK GROSS, & KATHERINE J. MILLER. 1990. Introduction to WordNet: An on-line lexical database. Journal of Lexicography 3.235-244.
MORRIS, JANE. 1988. Lexical cohesion, the thesaurus, and the structure of text. Technical Report CSRI-219, Computer Systems Research Institute, University of Toronto.
——, & GRAEME HIRST. 1991. Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics 17.21-48.
PASSONNEAU, REBECCA J., & DIANE J. LITMAN. 1993. Intention-based segmentation: Human reliability and correlation with linguistic cues. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, 148-155.
RESNIK, PHILIP, 1993. Selection and Information: A Class-Based Approach to Lexical Relationships. University of Pennsylvania dissertation. (Institute for Research in Cognitive Science report IRCS-93-42).
SALTON, GERARD. 1988. Automatic text processing: the transformation, analysis, and retrieval of information by computer. Reading, MA: Addison-Wesley.
——, JAMES ALLAN, & CHRIS BUCKLEY. 1993. Approaches to passage retrieval in full text information systems.
In Proceedings of the 16th Annual International ACM/SIGIR Conference, 49-58, Pittsburgh, PA.
SCHÜTZE, HINRICH. 1993. Word space. In Advances in Neural Information Processing Systems 5, ed. by Stephen J. Hanson, Jack D. Cowan, & C. Lee Giles. San Mateo, CA: Morgan Kaufmann.
SKOROCHOD'KO, E. F. 1972. Adaptive method of automatic abstracting and indexing. In Information Processing 71: Proceedings of the IFIP Congress 71, ed. by C. V. Freiman, 1179-1182. North-Holland Publishing Company.
STARK, HEATHER. 1988. What do paragraph markers do? Discourse Processes 11.275-304.
TANNEN, DEBORAH. 1989. Talking Voices: Repetition, dialogue, and imagery in conversational discourse. Studies in Interactional Sociolinguistics 6. Cambridge University Press.
WALKER, MARILYN. 1991. Redundancy in collaborative dialogue. In AAAI Fall Symposium on Discourse Structure in Natural Language Understanding and Generation, ed. by Julia Hirschberg, Diane Litman, Kathy McCoy, & Candy Sidner, Pacific Grove, CA.
YAROWSKY, DAVID. 1992. Word sense disambiguation using statistical models of Roget's categories trained on large corpora. In Proceedings of the Fourteenth International Conference on Computational Linguistics, 454-460, Nantes, France.
Word-Sense Disambiguation Using Decomposable Models

Rebecca Bruce and Janyce Wiebe
Computing Research Lab and Department of Computer Science
New Mexico State University
Las Cruces, NM 88003
[email protected], [email protected]

Abstract

Most probabilistic classifiers used for word-sense disambiguation have either been based on only one contextual feature or have used a model that is simply assumed to characterize the interdependencies among multiple contextual features. In this paper, a different approach to formulating a probabilistic model is presented, along with a case study of the performance of models produced in this manner for the disambiguation of the noun interest. We describe a method for formulating probabilistic models that use multiple contextual features for word-sense disambiguation, without requiring untested assumptions regarding the form of the model. Using this approach, the joint distribution of all variables is described by only the most systematic variable interactions, thereby limiting the number of parameters to be estimated, supporting computational efficiency, and providing an understanding of the data.

Introduction

This paper presents a method for constructing probabilistic classifiers for word-sense disambiguation that offers advantages over previous approaches. Most previous efforts have not attempted to systematically identify the interdependencies among contextual features (such as collocations) that can be used to classify the meaning of an ambiguous word. Many researchers have performed disambiguation on the basis of only a single feature, while others who do consider multiple contextual features assume that all contextual features are either conditionally independent given the sense of the word or fully independent. Of course, all contextual features could be treated as interdependent, but, if there are several features, such a model could have too many parameters to estimate in practice.
We present a method for formulating probabilistic models that describe the relationships among all variables in terms of only the most important interdependencies, that is, models of a certain class that are good approximations to the joint distribution of contextual features and word meanings. This class is the set of decomposable models: models that can be expressed as a product of marginal distributions, where each marginal is composed of interdependent variables. The test used to evaluate a model gives preference to those that have the fewest number of interdependencies, thereby selecting models expressing only the most systematic variable interactions. To summarize the method, one first identifies informative contextual features (where "informative" is a well-defined notion, discussed in Section 2). Then, out of all possible decomposable models characterizing interdependency relationships among the selected variables, those that are found to produce good approximations to the data are identified (using the test mentioned above) and one of those models is used to perform disambiguation. Thus, we are able to use multiple contextual features without the need for untested assumptions regarding the form of the model. Further, approximating the joint distribution of all variables with a model identifying only the most important systematic interactions among variables limits the number of parameters to be estimated, supports computational efficiency, and provides an understanding of the data. The biggest limitation associated with this method is the need for large amounts of sense-tagged data. Because asymptotic distributions of the test statistics are used, the validity of the results obtained using this approach is compromised when it is applied to sparse data (this point is discussed further in Section 2).
To test the method of model selection presented in this paper, a case study of the disambiguation of the noun interest was performed. Interest was selected because it has been shown in previous studies to be a difficult word to disambiguate. We selected as the set of sense tags all non-idiomatic noun senses of interest defined in the electronic version of Longman's Dictionary of Contemporary English (LDOCE) ([23]). Using the models produced in this study, we are able to assign an LDOCE sense tag to every usage of interest in a held-out test set with 78% accuracy. Although it is difficult to compare our results to those reported for previous disambiguation experiments, as will be discussed later, we feel these results are encouraging. The remainder of the paper is organized as follows. Section 2 provides a more complete definition of the methodology used for formulating decomposable models, and Section 3 describes the details of the case study performed to test the approach. The results of the disambiguation case study are discussed and contrasted with similar efforts in Sections 4 and 5. Section 6 is the conclusion.

Decomposable Models

In this Section, we address the problem of finding the models that generate good approximations to a given discrete probability distribution, as selected from among the class of decomposable models. Decomposable models are a subclass of log-linear models and, as such, can be used to characterize and study the structure of data ([2]), that is, the interactions among variables as evidenced by the frequency with which the values of the variables co-occur. Given a data sample of objects, where each object is described by d discrete variables, let x = (x_1, x_2, ..., x_q) be a q-dimensional vector of counts, where each x_i is the frequency with which one of the possible combinations of the values of the d variables occurs in the data sample (and the frequencies of all such possible combinations are included in x).
The log-linear model expresses the logarithm of E[x] (the mean of x) as a linear sum of the contributions of the "effects" of the variables and the interactions among the variables. Assume that a random sample consisting of N independent and identical trials (i.e., all trials are described by the same probability density function) is drawn from a discrete d-variate distribution. In such a situation, the outcome of each trial must be an event corresponding to a particular combination of the values of the d variables. Let p_i be the probability that the ith event (i.e., the ith possible combination of the values of all variables) occurs on any trial, and let x_i be the number of times that the ith event occurs in the random sample. Then (x_1, x_2, ..., x_q) has a multinomial distribution with parameters N and p_1, ..., p_q. For a given sample size N, the likelihood of selecting any particular random sample is defined once the population parameters, that is, the p_i's or, equivalently, the E[x_i]'s (where E[x_i] is the mean frequency of event i), are known. Log-linear models express the value of the logarithm of each E[x_i] or p_i as a linear sum of a smaller (i.e., less than q) number of new population parameters that characterize the effects of individual variables and their interactions. The theory of log-linear models specifies the sufficient statistics (functions of x) for estimating the effects of each variable and of each interaction among variables on E[x]. The sufficient statistics are the sample counts from the highest-order marginals composed of only interdependent variables. These statistics are the maximum likelihood estimates of the mean values of the corresponding marginal distributions. Consider, for example, a random sample taken from a population in which four contextual features are used to characterize each occurrence of an ambiguous word.
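For concreteness, the standard log-linear form for the simplest case of two discrete variables can be written as below (this illustrative notation is mine, not the paper's):

```latex
% For two discrete variables A and B, with cell counts x_{ij}:
\log E[x_{ij}] \;=\; u \;+\; u_{A(i)} \;+\; u_{B(j)} \;+\; u_{AB(ij)}
% u is a grand mean, u_{A(i)} and u_{B(j)} are the main effects of A
% and B, and u_{AB(ij)} is their interaction; setting u_{AB(ij)} = 0
% gives the model in which A and B are independent.
```

Decomposable models are the log-linear models whose retained interaction terms correspond to a triangulated pattern of dependencies, which is what allows the closed-form product-of-marginals estimates used below.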
The sufficient statistics for the model describing contextual features one and two as independent but all other variables as interdependent are, for all i, j, k, m, n (in this and all subsequent equations, f is an abbreviation for feature):

  Ê[count(f2 = j, f3 = k, f4 = m, tag = n)] = Σ_i x_{f1=i, f2=j, f3=k, f4=m, tag=n}

and

  Ê[count(f1 = i, f3 = k, f4 = m, tag = n)] = Σ_j x_{f1=i, f2=j, f3=k, f4=m, tag=n}

Within the class of decomposable models, the maximum likelihood estimate for E[x] reduces to the product of the sufficient statistics divided by the sample counts defined in the marginals composed of the common elements in the sufficient statistics. As such, decomposable models are models that can be expressed as a product of marginals,¹ where each marginal consists of only interdependent variables. Returning to our previous example, the maximum likelihood estimate for E[x] is, for all i, j, k, m, n:

  Ê[x_{f1=i, f2=j, f3=k, f4=m, tag=n}] =

    Ê[count(f1 = i, f3 = k, f4 = m, tag = n)] × Ê[count(f2 = j, f3 = k, f4 = m, tag = n)]
    ─────────────────────────────────────────────────────────────────────────────────────
                         Ê[count(f3 = k, f4 = m, tag = n)]

Expressing the population parameters as probabilities instead of expected counts, the equation above can be rewritten as follows, where the sample marginal relative frequencies are the maximum likelihood estimates of the population marginal probabilities. For all i, j, k, m, n:

  P(f1 = i, f2 = j, f3 = k, f4 = m, tag = n) =
    P(f1 = i | f3 = k, f4 = m, tag = n) × P(f2 = j | f3 = k, f4 = m, tag = n) × P(f3 = k, f4 = m, tag = n)

The degree to which the data is approximated by a model is called the fit of the model. In this work, the likelihood ratio statistic, G², is used as the measure of the goodness-of-fit of a model. It is distributed asymptotically as χ² with degrees of freedom corresponding to the number of interactions (and/or variables) omitted from (unconstrained in) the model. Assessing the fit
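The product-of-marginals estimate for this example model can be computed directly from co-occurrence counts. A minimal sketch (function and variable names are my own) for the model in which f1 and f2 are conditionally independent given (f3, f4, tag):

```python
from collections import Counter

def fit_decomposable(samples):
    """MLE for the example decomposable model: f1 and f2 conditionally
    independent given (f3, f4, tag). Each sample is a tuple
    (f1, f2, f3, f4, tag). Returns a joint probability function."""
    n = len(samples)
    # Sufficient statistics: the two highest-order marginals of
    # interdependent variables, plus their common marginal.
    m134 = Counter((f1, f3, f4, t) for f1, f2, f3, f4, t in samples)
    m234 = Counter((f2, f3, f4, t) for f1, f2, f3, f4, t in samples)
    m34 = Counter((f3, f4, t) for f1, f2, f3, f4, t in samples)

    def prob(f1, f2, f3, f4, t):
        denom = m34[(f3, f4, t)]
        if denom == 0:
            return 0.0
        # (c134/N) * (c234/N) / (c34/N) = c134 * c234 / (c34 * N)
        return (m134[(f1, f3, f4, t)] * m234[(f2, f3, f4, t)]) / (denom * n)

    return prob
```

A model with a different dependency pattern would simply swap in a different set of marginal counters, which is what makes an exhaustive search over decomposable forms tractable.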
Accessing the fit 1The marginal distributions can be represented in terms of counts or relative frequencies, depending on whether the parameters are expressed as expected frequencies or proba- bilities, respectively. 140 of a model in terms of the significance of its G 2 statis- tic gives preference to models with the fewest number of interdependencies, thereby assuring the selection of a model specifying only the most systematic variable interactions. Within the framework described above, the process of model selection becomes one of hypothesis testing, where each pattern of dependencies among variables expressible in terms of a decomposable model is pos- tulated as a hypothetical model and its fit to the data is evaluated. The "best fitting" models are identified, in the sense that the significance of their reference X 2 values are large, and, from among this set, a conceptu- ally appealing model is chosen. The exhaustive search of decomposable models can be conducted as described in [12]. What we have just described is a method for approx- imating the joint distribution of all variables with a model containing only the most important systematic interactions among variables. This approach to model formulation limits the number of parameters to be esti- mated, supports computational efficiency, and provides an understanding of the data. The single biggest limita- tion remaining in this day of large memory, high speed computers results from reliance on asymptotic theory to describe the distribution of the maximum likelihood estimates and the likelihood ratio statistic. The effect of this reliance is felt most acutely when working with large sparse multinomials, which is exactly when this approach to model construction is most needed. When the data is sparse, the usual asymptotic properties of the distribution of the likelihood ratio statistic and the maximum likelihood estimates may not hold. 
In such cases, the fit of the model will appear to be too good, indicating that the model is in fact over-constrained for the data available. In this work, we have limited ourselves to considering only those models with sufficient statistics that are not sparse, where the significance of the reference χ² is not unreasonable; most such models have sufficient statistics that are lower-order marginal distributions. In the future, we will investigate other goodness-of-fit tests ([18], [1], [22]) that are perhaps more appropriate for sparse data.

The Experiment

Unlike several previous approaches to word sense disambiguation ([29], [5], [7], [10]), nothing in this approach limits the selection of sense tags to a particular number or type of meaning distinctions. In this study, our goal was to address a non-trivial case of ambiguity, but one that would allow some comparison of results with previous work. As a result of these considerations, the word interest was chosen as a test case, and the six non-idiomatic noun senses of interest defined in LDOCE were selected as the tag set. The only restriction limiting the choice of corpus is the need for large amounts of on-line data. Due to availability, the Penn Treebank Wall Street Journal corpus was selected. In total, 2,476 usages² of interest as a noun³ were automatically extracted from the corpus and manually assigned sense tags corresponding to the LDOCE definitions. During tagging, 107 usages were removed from the data set due to the authors' inability to classify them in terms of the set of LDOCE senses. Of the rejected usages, 43 are metonymic, and the rest are hybrid meanings specific to the domain, such as public interest group.
Because our sense distinctions are not merely between two or three clearly defined core senses of a word, the task of hand-tagging the tokens of interest required subtle judgments, a point that has also been observed by other researchers disambiguating with respect to the full set of LDOCE senses ([6], [28]). Although this undoubtedly degraded the accuracy of the manually assigned sense tags (and thus the accuracy of the study as well), this problem seems unavoidable when making semantic distinctions beyond clearly defined core senses of a word ([17], [11], [14], [15]). Of the 2,369 sentences containing the sense-tagged usages of interest, 600 were randomly selected and set aside to serve as the test set. The distribution of sense tags in the data set is presented in Table 1. We now turn to the selection of individually informative contextual features. In our approach to disambiguation, a contextual feature is judged to be informative (i.e., correlated with the sense tag of the ambiguous word) if the model for independence between that feature and the sense tag is judged to have an extremely poor fit using the test described in Section 2. The worse the fit, the more informative the feature is judged to be (similar to the approach suggested in [9]). Only features whose values can be automatically determined were considered, and preference was given to features that intuitively are not specific to interest (but see the discussion of collocational features below). An additional criterion was that the features not have too many possible values, in order to curtail sparsity in the resulting data matrix. We considered three different types of contextual features: morphological, collocation-specific, and class-based, with part-of-speech (POS) categories serving as the word classes. Within these classes, we chose a number of specific features, each of which was judged to be informative as described above.
We used one morphological feature: a dichotomous variable indicating the presence or absence of the plural form. The values of the class-based variables are a set of twenty-five POS tags formed, with one exception, from the first letter of the tags used in the Penn Treebank corpus. Two different sets of class-based variables were selected. The first set contained only the POS tags of the word immediately preceding and the word immediately succeeding the ambiguous word, while the second set was extended to include the POS tags of the two immediately preceding and two succeeding words. A limited number of collocation-specific variables were selected, where the term collocation is used loosely to refer to a specific spelling form occurring in the same sentence as the ambiguous word. All of our collocational variables are dichotomous, indicating the presence or absence of the associated spelling form. While collocation-specific variables are, by definition, specific to the word being disambiguated, the procedure used to select them is general. The search for collocation-specific variables was limited to the 400 most frequent spelling forms in a data sample composed of sentences containing interest. Out of these 400, the five spelling forms found to be the most informative using the test described above were selected as the collocational variables. It is not enough to know that each of the features described above is highly correlated with the meaning of the ambiguous word. In order to use the features in concert to perform disambiguation, a model describing the interactions among them is needed.

²For sentences with more than one usage, the tool used to automatically extract the test data ignored all but one of them. Thus, some usages were missed.
³The Penn Treebank corpus comes complete with POS tags.
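The collocation-selection step, ranking candidate spelling forms by how badly the feature-tag independence model fits, can be sketched as follows (names and data layout are my own; the statistic is the same G² used for model selection, here on a presence × tag contingency table):

```python
from collections import Counter
import math

def rank_collocations(instances, candidates, top_k=5):
    """Rank candidate spelling forms by the G^2 statistic of the
    independence model between presence-of-form and sense tag; a
    larger G^2 means a worse fit for independence, i.e., a more
    informative feature. Each instance is (set_of_forms, tag)."""
    n = len(instances)
    scores = []
    for form in candidates:
        joint = Counter()          # (form present?, tag) -> count
        for forms, tag in instances:
            joint[(form in forms, tag)] += 1
        present = Counter()        # marginal over presence
        tags = Counter()           # marginal over tags
        for (p, t), c in joint.items():
            present[p] += c
            tags[t] += c
        g2 = 0.0
        for (p, t), obs in joint.items():
            exp = present[p] * tags[t] / n   # expected under independence
            g2 += obs * math.log(obs / exp)
        scores.append((2.0 * g2, form))
    return [f for _, f in sorted(scores, reverse=True)[:top_k]]
```

Run over the 400 most frequent forms, keeping `top_k=5`, this would reproduce the selection procedure described above.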
Since we had no reason to prefer, a priori, one form of model over another, all models describing possible interactions among the features were generated, and a model with good fit was selected. Models were generated and tested as described in Section 2.

Results

Both the form and the performance of the model selected for each set of variables is presented in Table 2. Performance is measured in terms of the total percentage of the test set tagged correctly by a classifier using the specified model. This measure combines both precision and recall. Portions of the test set that are not covered by the estimates of the parameters made from the training set are not tagged and are, therefore, counted as wrong. The form of the model describes the interactions among the variables by expressing the joint distribution of the values of all contextual features and sense tags as a product of conditionally independent marginals, with each marginal being composed of non-independent variables. Models of this form describe a Markov field ([8], [21]) that can be represented graphically, as is shown in Figure 1 for Model 4 of Table 2. In both Figures 1 and 2, each of the variables short, in, pursue, rate(s), percent (i.e., the sign '%') is the presence or absence of that spelling form. Each of the variables r1pos, r2pos, l1pos, and l2pos is the POS tag of the word 1 or 2 positions to the left (l) or right (r). The variable ending is whether interest is in the singular or plural, and the variable tag is the sense tag assigned to interest. The graphical representation of Model 4 is such that there is a one-to-one correspondence between the nodes of the graph and the sets of conditionally independent variables in the model. The semantics of the graph topology is that all variables that are not directly connected in the graph are conditionally independent given the values of the variables mapping to the connecting nodes.
For example, if node a separates node b from node c in the graphical representation of a Markov field, then the variables mapping to node b are conditionally independent of the variables mapping to node c given the values of the variables mapping to node a. In the case of Model 4, Figure 1 graphically depicts the fact that the value of the morphological variable ending is conditionally independent of the values of all other contextual features given the sense tag of the ambiguous word.

[Figure 1: undirected graphical (Markov field) representation of Model 4.]

The Markov field depicted in Figure 1 is represented by an undirected graph because conditional independence is a symmetric relationship. But decomposable models can also be characterized by directed graphs and interpreted according to the semantics of a Bayesian network ([21]; also described as "recursive causal models" in [27] and [16]). In a Bayesian network, the notions of causation and influence replace the notion of conditional independence in a Markov field. The parents of a variable (or set of variables) V are those variables judged to be the direct causes or to have direct influence on the value of V; V is called a "response" to those causes or influences. The Bayesian network representation of a decomposable model embodies an explicit ordering of the n variables in the model such that variable i may be considered a response to some or all of variables {i + 1, ..., n}, but is not thought of as a response to any one of the variables {1, ..., i - 1}. In all models presented in this paper, the sense tag of the ambiguous word causes or influences the values of all other variables in the model. The Bayesian network representation of Model 4 is presented in Figure 2. In Model 4, the variables in and percent are treated as influencing the values of rate, short, and pursue in order to achieve an ordering of variables as described above.
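The separation criterion just described can be checked mechanically on the undirected graph: two node sets are conditionally independent given a separator set exactly when removing the separator's nodes disconnects them. A small sketch (the graph encoding and node names are illustrative):

```python
from collections import deque

def separated(adj, b_set, c_set, separator):
    """True iff, in the undirected graph `adj` (node -> neighbor list),
    every path from b_set to c_set passes through `separator` --
    i.e., b_set and c_set are conditionally independent given it."""
    blocked = set(separator)
    seen = set(v for v in b_set if v not in blocked)
    queue = deque(seen)
    while queue:
        v = queue.popleft()
        if v in c_set:
            return False   # reached c_set without crossing the separator
        for u in adj.get(v, ()):
            if u not in blocked and u not in seen:
                seen.add(u)
                queue.append(u)
    return True
```

On a Model-4-like graph in which ending attaches only to tag, this verifies that ending is independent of the other features given tag, but not unconditionally.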
[Figure 2: Bayesian network representation of Model 4.]

Comparison to Previous Work

Many researchers have avoided characterizing the interactions among multiple contextual features by considering only one feature in determining the sense of an ambiguous word. Techniques for identifying the optimum feature to use in disambiguating a word are presented in [7], [30] and [5]. Other works consider multiple contextual features in performing disambiguation without formally characterizing the relationships among the features. The majority of these efforts ([13], [31]) weight each feature in predicting the sense of an ambiguous word in accordance with frequency information, without considering the extent to which the features co-occur with one another. Gale, Church and Yarowsky ([10]) and Yarowsky ([29]) formally characterize the interactions that they consider in their model, but they simply assume that their model fits the data. Other researchers have proposed approaches to systematically combining information from multiple contextual features in determining the sense of an ambiguous word. Schütze ([26]) derived contextual features from a singular value decomposition of a matrix of letter four-gram co-occurrence frequencies, thereby assuring the independence of all features. Unfortunately, interpreting a contextual feature that is a weighted combination of letter four-grams is difficult. Further, the clustering procedure used to assign word meaning based on these features is such that the resulting sense clusters do not have known statistical properties. This makes it impossible to generalize the results to other data sets. Black ([3]) used decision trees ([4]) to define the relationships among a number of pre-specified contextual features, which he called "contextual categories", and the sense tags of an ambiguous word.
The tree construction process used by Black partitions the data according to the values of one contextual feature before considering the values of the next, thereby treating all features incorporated in the tree as interdependent. The method presented here for using information from multiple contextual features is more flexible and makes better use of a small data set by eliminating the need to treat all features as interdependent. The work that bears the closest resemblance to the work presented here is the maximum entropy approach to developing language models ([24], [25], [19] and [20]). Although this approach has not been applied to word-sense disambiguation, there is a strong similarity between that method of model formulation and our own. A maximum entropy model for multivariate data is the likelihood function with the highest entropy that satisfies a pre-defined set of linear constraints on the underlying probability estimates. The constraints describe interactions among variables by specifying the expected frequency with which the values of the constrained variables co-occur. When the expected frequencies specified in the constraints are linear combinations of the observed frequencies in the training data, the resulting maximum entropy model is equivalent to a maximum likelihood model, which is the type of model used here. To date, in the area of natural language processing, the principles underlying the formulation of maximum entropy models have been used only to estimate the parameters of a model. Although the method described in this paper for finding a good approximation to the joint distribution of a set of discrete variables makes use of maximum likelihood models, the scope of the technique we are describing extends beyond parameter estimation to include selecting the form of the model that approximates the joint distribution.
Several of the studies mentioned in this section have used interest as a test case, and all of them (with the exception of Schutze [26]) considered four possible meanings for that word. In order to facilitate comparison of our work with previous studies, we re-estimated the parameters of our best model and tested it using data containing only the four LDOCE senses corresponding to those used by others (usages not tagged as being one of these four senses were removed from both the test and training data sets). The results of the modified experiment along with a summary of the published results of previous studies are presented in Table 3.

While it is true that all of the studies reported in Table 3 used four senses of interest, it is not clear that any of the other experimental parameters were held constant in all studies. Therefore, this comparison is only suggestive. In order to facilitate more meaningful comparisons in the future, we are donating the data used in this experiment to the Consortium for Lexical Research (ftp site: clr.nmsu.edu) where it will be available to all interested parties.

Conclusions and Future Work

In this paper, we presented a method for formulating probabilistic models that use multiple contextual features for word-sense disambiguation without requiring untested assumptions regarding the form of the model. In this approach, the joint distribution of all variables is described by only the most systematic variable interactions, thereby limiting the number of parameters to be estimated, supporting computational efficiency, and providing an understanding of the data. Further, different types of variables, such as class-based and collocation-specific ones, can be used in combination with one another. We also presented the results of a study testing this approach. The results suggest that the models produced in this study perform as well as or better than previous efforts on a difficult test case.
We are investigating several extensions to this work. In order to reasonably consider doing large-scale word-sense disambiguation, it is necessary to eliminate the need for large amounts of manually sense-tagged data. In the future, we hope to develop a parametric model or models applicable to a wide range of content words and to estimate the parameters of those models from untagged data. To those ends, we are currently investigating a means of obtaining maximum likelihood estimates of the parameters of decomposable models from untagged data. The procedure we are using is a variant of the EM algorithm that is specific to models of the form produced in this study. Preliminary results are mixed, with performance being reasonably good on models with low-order marginals (e.g., 63% of the test set was tagged correctly with Model 1 using parameters estimated in this manner) but poorer on models with higher-order marginals, such as Model 4. Work is needed to identify and constrain the parameters that cannot be estimated from the available data and to determine the amount of data needed for this procedure.

We also hope to integrate probabilistic disambiguation models, of the type described in this paper, with a constraint-based knowledge base such as WordNet. In the past, there have been two types of approaches to word sense disambiguation: 1) a probabilistic approach such as that described here which bases the choice of sense tag on the observed joint distribution of the tags and contextual features, and 2) a symbolic knowledge based approach that postulates some kind of relational or constraint structure among the words to be tagged. We hope to combine these methodologies and thereby derive the benefits of both. Our approach to combining these two paradigms hinges on the network representations of our probabilistic models as described in Section 4 and will make use of the methods presented in [21].
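The EM variant mentioned above can be illustrated on a toy version of the problem. The code below is not the authors' procedure; it is a minimal EM loop for an unsupervised two-sense model with independent binary features, with a deliberately asymmetric initialization (a hypothetical choice) to break symmetry between the two senses:

```python
def em_two_sense(docs, iters=15):
    """Toy EM for a 2-sense model with independent binary features.
    docs: list of 0/1 feature vectors. Returns priors, feature
    probabilities, and per-document sense responsibilities."""
    n_feats = len(docs[0])
    p = [[0.7] * n_feats, [0.3] * n_feats]   # asymmetric init (hypothetical)
    prior = [0.5, 0.5]
    resp = []
    for _ in range(iters):
        # E-step: responsibility of each sense for each document
        resp = []
        for d in docs:
            like = []
            for c in (0, 1):
                l = prior[c]
                for f, v in enumerate(d):
                    l *= p[c][f] if v else 1.0 - p[c][f]
                like.append(l)
            z = like[0] + like[1]
            resp.append([like[0] / z, like[1] / z])
        # M-step: re-estimate priors and feature probabilities
        for c in (0, 1):
            w = sum(r[c] for r in resp)
            prior[c] = w / len(docs)
            for f in range(n_feats):
                p[c][f] = sum(r[c] * d[f] for r, d in zip(resp, docs)) / w
    return prior, p, resp
```

On clearly separable data the responsibilities converge to a hard clustering of the documents into two senses; with real untagged corpora the likelihood surface is far less benign, which is consistent with the mixed results reported above.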
Acknowledgements

The authors would like to thank Gerald Rogers for sharing his expertise in statistics, Ted Dunning for advice and support on software development, and the members of the NLP group in the CRL for helpful discussions.

References

[1] Baglivo, J., Olivier, D., and Pagano, M. (1992). Methods for Exact Goodness-of-Fit Tests. Journal of the American Statistical Association, Vol. 87, No. 418, June 1992.

[2] Bishop, Y. M., Fienberg, S., and Holland, P. (1975). Discrete Multivariate Analysis: Theory and Practice. Cambridge: The MIT Press.

[3] Black, Ezra (1988). An Experiment in Computational Discrimination of English Word Senses. IBM Journal of Research and Development, Vol. 32, No. 2, pp. 185-194.

[4] Breiman, L., Friedman, J., Olshen, R., and Stone, C. (1984). Classification and Regression Trees. Monterey, CA: Wadsworth & Brooks/Cole Advanced Books & Software.

[5] Brown, P., Della Pietra, S., Della Pietra, V., and Mercer, R. (1991). Word Sense Disambiguation Using Statistical Methods. Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics (ACL-91), pp. 264-270.

[6] Cowie, J., Guthrie, J., and Guthrie, L. (1992). Lexical Disambiguation Using Simulated Annealing. Proceedings of the 15th International Conference on Computational Linguistics (COLING-92), pp. 359-365.

[7] Dagan, I., Itai, A., and Schwall, U. (1991). Two Languages Are More Informative Than One. Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics (ACL-91), pp. 130-137.

[8] Darroch, J., Lauritzen, S., and Speed, T. (1980). Markov Fields and Log-Linear Interaction Models for Contingency Tables. The Annals of Statistics, Vol. 8, No. 3, pp. 522-539.

[9] Dunning, Ted (1993). Accurate Methods for the Statistics of Surprise and Coincidence. Computational Linguistics, Vol. 19, No. 1, pp. 61-74.

[10] Gale, W., Church, K., and Yarowsky, D. (1992a). A Method for Disambiguating Word Senses in a Large Corpus.
AT&T Bell Laboratories Statistical Research Report No. 104.

[11] Gale, W., Church, K., and Yarowsky, D. (1992b). Estimating Upper and Lower Bounds on the Performance of Word-Sense Disambiguation Programs. Proceedings of the 30th Annual Meeting of the ACL, 1992.

[12] Havranek, Tomas (1984). A Procedure for Model Search in Multidimensional Contingency Tables. Biometrics, 40, pp. 95-100.

[13] Hearst, Marti (1991). Toward Noun Homonym Disambiguation Using Local Context in Large Text Corpora. Proceedings of the Seventh Annual Conference of the UW Centre for the New OED and Text Research: Using Corpora, pp. 1-22.

[14] Jorgensen, Julia (1990). The Psychological Reality of Word Senses. Journal of Psycholinguistic Research, Vol. 19, pp. 167-190.

[15] Kelly, E. and Stone, P. (1979). Computer Recognition of English Word Senses, Vol. 3 of North Holland Linguistics Series. Amsterdam: North-Holland.

[16] Kiiveri, H., Speed, T., and Carlin, J. (1984). Recursive Causal Models. Journal Austral. Math. Soc. (Series A), 36, pp. 30-52.

[17] Kilgarriff, Adam (1993). Dictionary Word Sense Distinctions: An Enquiry Into Their Nature. Computers and the Humanities, 26, pp. 365-387.

[18] Koehler, K. (1986). Goodness-of-Fit Tests for Log-Linear Models in Sparse Contingency Tables. Journal of the American Statistical Association, Vol. 81, No. 394, June 1986.

[19] Lau, R., Rosenfeld, R., and Roukos, S. (1993a). Trigger-Based Language Models: a Maximum Entropy Approach. Proceedings of ICASSP-93, April 1993.

[20] Lau, R., Rosenfeld, R., and Roukos, S. (1993b). Adaptive Language Modeling Using the Maximum Entropy Principle. Proc. ARPA Human Language Technology Workshop, March 1993.

[21] Pearl, Judea (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Mateo, CA: Morgan Kaufmann.

[22] Pederson, S. and Johnson, M. (1990). Estimating Model Discrepancy. Technometrics, Vol. 32, No. 3, pp. 305-314.

[23] Procter, Paul et al. (1978).
Longman Dictionary of Contemporary English.

[24] Ratnaparkhi, A. and Roukos, S. (1994). A Maximum Entropy Model for Prepositional Phrase Attachment. Proc. ARPA Human Language Technology Workshop, March 1994.

[25] Rosenfeld, R. (1994). A Hybrid Approach to Adaptive Statistical Language Modeling. Proc. ARPA Human Language Technology Workshop, March 1994.

[26] Schutze, Hinrich (1992). Word Space. In S.J. Hanson, J.D. Cowan, and C.L. Giles (Eds.), Advances in Neural Information Processing Systems 5. San Mateo, CA: Morgan Kaufmann.

[27] Wermuth, N. and Lauritzen, S. (1983). Graphical and Recursive Models for Contingency Tables. Biometrika, Vol. 70, No. 3, pp. 537-552.

[28] Wilks, Y., Fass, D., Guo, C., McDonald, J., Plate, T., and Slator, B. (1990). Providing Machine Tractable Dictionary Tools. Computers and Translation 2. Also to appear in Theoretical and Computational Issues in Lexical Semantics (TCILS), edited by James Pustejovsky. Cambridge, MA: MIT Press.

[29] Yarowsky, David (1992). Word-Sense Disambiguation Using Statistical Models of Roget's Categories Trained on Large Corpora. Proceedings of the 15th International Conference on Computational Linguistics (COLING-92).

[30] Yarowsky, David (1993). One Sense Per Collocation. Proceedings of the Speech and Natural Language ARPA Workshop, March 1993, Princeton, NJ.

[31] Zernik, Uri (1990). Tagging Word Senses in Corpus: The Needle in the Haystack Revisited. Technical Report 90CRD198, GE Research and Development Center.
Table 1: Distribution of sense tags.

LDOCE sense                                          Total        Training     Test
sense 1: "readiness to give attention"               361 (15%)    271 (15%)    90 (15%)
sense 2: "quality of causing attention to be given"   11 (<1%)      9 (<1%)     2 (<1%)
sense 3: "activity, subject, etc., which one
          gives time and attention to"                66 (3%)      50 (3%)     16 (3%)
sense 4: "advantage, advancement, or favor"          178 (8%)     130 (7%)     48 (8%)
sense 5: "a share (in a company, business, etc.)"    500 (21%)    378 (21%)   122 (20%)
sense 6: "money paid for the use of money"          1253 (53%)    931 (53%)   322 (54%)

Table 2: The form and performance on the test data of the model found for each set of variables.

Model 1 (73% correct):
  P(r1pos, l1pos, ending, tag) =
      P(r1pos|tag) x P(l1pos|tag) x P(ending|tag) x P(tag)

Model 2 (76% correct):
  P(r1pos, r2pos, l1pos, l2pos, ending, tag) =
      P(r1pos, r2pos|tag) x P(l1pos, l2pos|tag) x P(ending|tag) x P(tag)

Model 3 (61% correct):
  P(percent, pursue, short, in, rate, tag) =
      P(short|percent, in, tag) x P(rate|percent, in, tag) x
      P(pursue|percent, in, tag) x P(percent, in|tag) x P(tag)

Model 4 (78% correct):
  P(percent, pursue, short, in, rate, r1pos, r2pos, l1pos, l2pos, ending, tag) =
      P(short|percent, in, tag) x P(rate|percent, in, tag) x
      P(pursue|percent, in, tag) x P(percent, in|tag) x
      P(r1pos, r2pos|tag) x P(l1pos, l2pos|tag) x P(ending|tag) x P(tag)

Each of the variables short, in, pursue, rate(s), percent (i.e., the sign '%') is the presence or absence of that spelling form. Each of the variables r1pos, r2pos, l1pos, and l2pos is the POS tag of the word 1 or 2 positions to the left (l) or right (r). The variable ending is whether interest is in the singular or plural, and the variable tag is the sense tag assigned to interest.

Table 3: Comparison to previous results.

Model                                           Percent Correct
Black (1988)                                    72%
Zernik (1990)                                   70%
Yarowsky (1992)                                 72%
Bruce & Wiebe model 4 using only four senses    79%
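Model 1 in Table 2 treats each contextual feature as conditionally independent given the sense tag, so classification reduces to picking the tag that maximizes P(tag) times the product of the per-feature conditionals. A minimal sketch (relative-frequency estimates with a small hypothetical smoothing constant; not the authors' implementation):

```python
from collections import defaultdict

def train(samples):
    """Relative-frequency estimates of P(tag) and P(feature=value | tag).
    samples: list of (feature_dict, tag) pairs (hypothetical data format)."""
    tag_counts = defaultdict(int)
    cond = defaultdict(lambda: defaultdict(int))   # cond[(feat, val)][tag]
    for feats, tag in samples:
        tag_counts[tag] += 1
        for f, v in feats.items():
            cond[f, v][tag] += 1
    return tag_counts, cond

def classify(feats, tag_counts, cond, smooth=0.5):
    """Model 1: argmax over tags of P(tag) * prod_f P(f=v | tag),
    with a small smoothing constant to avoid zero probabilities."""
    total = sum(tag_counts.values())
    best, best_p = None, -1.0
    for tag, n in tag_counts.items():
        p = n / total
        for f, v in feats.items():
            p *= (cond[f, v][tag] + smooth) / (n + 2 * smooth)
        if p > best_p:
            best, best_p = tag, p
    return best
```

Models 2 through 4 differ only in which groups of variables are kept jointly (the higher-order marginals), which changes the counts collected at training time but not the overall argmax scheme.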
Constraint-Based Categorial Grammar

Gosse Bouma and Gertjan van Noord
Alfa-informatica and Behavioral and Cognitive Neurosciences, Rijksuniversiteit Groningen
{gosse,vannoord}@let.rug.nl

Abstract

We propose a generalization of Categorial Grammar in which lexical categories are defined by means of recursive constraints. In particular, the introduction of relational constraints allows one to capture the effects of (recursive) lexical rules in a computationally attractive manner. We illustrate the linguistic merits of the new approach by showing how it accounts for the syntax of Dutch cross-serial dependencies and the position and scope of adjuncts in such constructions. Delayed evaluation is used to process grammars containing recursive constraints.

1 Introduction

Combinations of Categorial Grammar (CG) and unification naturally lead to the introduction of polymorphic categories. Thus, Karttunen (1989) categorizes NP's as X/X, where X is a verbal category, Zeevat et al. (1987) assign the category X/(NP\X) to NP's, and Emms (1993) extends the Lambek-calculus with polymorphic categories to account for coordination, quantifier scope, and extraction.

The role of polymorphism has been restricted, however, by the fact that in previous work categories were defined as feature structures using the simple, non-recursive, constraints familiar from feature description languages such as PATR. Relational constraints can be used to define a range of polymorphic categories that are beyond the expressive capabilities of previous approaches.

In particular, the introduction of relational constraints captures the effects of (recursive) lexical rules in a computationally attractive manner. The addition of such rules makes it feasible to consider truly 'lexicalist' grammars, in which a powerful lexical component is accompanied by a highly restricted syntactic component, consisting of application only.
2 Recursive Constraints

In CG, many grammatical concepts can only be defined recursively. Dowty (1982) defines grammatical functions such as subject and object as being the ultimate and penultimate 'argument-in' of a verbal category. Hoeksema (1984) defines verbs as exocentric categories reducible to s. Lexical rules frequently refer to such concepts. For instance, a categorial lexical rule of passive applies to verbs selecting an object and must remove the subject. In standard unification-based formalisms, these concepts and the rules referring to such concepts cannot be expressed directly.

2.1 Subject-verb agreement

Consider a categorial treatment of subject-verb agreement with intransitive (NP[NOM]\S) and transitive ((NP[NOM]\S)/NP[ACC]) verbs defined as follows:

(1) lex(walks, X) :- iv(X).
    lex(kisses, X) :- tv(X).

    iv([ val [cat s]
         dir '\'
         arg [cat np, case nom] ]).

    tv([ val [ val [cat s]
               dir '\'
               arg [cat np, case nom] ]
         dir '/'
         arg [cat np, case acc] ]).

Subject-verb agreement can be incorporated easily if one reduces agreement to a form of subcategorization. If, however, one wishes to distinguish these two pieces of information (to avoid a proliferation of subcategorization types or for morphological reasons, for instance), it is not obvious how this could be done without recursive constraints. For intransitive verbs one needs the constraint that (arg agr) = Agr (where Agr is some agreement value), for transitive verbs that (val arg agr) = Agr, and for ditransitive verbs that (val val arg agr) = Agr. The generalization is captured using the recursive constraint sv_agreement (2). In (2) and below, we use definite clauses to define lexical entries and constraints. Note that lexical entries relate words to feature structures that are defined indirectly as a combination of simple constraints (evaluated by means of unification) and recursive constraints.1

(2) lex(walks, X) :- iv(X), sv_agreement(sg3, X).
    lex(kisses, X) :- tv(X), sv_agreement(sg3, X).

    sv_agreement(Agr, NP[agr Agr]\S).
    sv_agreement(Agr, Y\X) :- sv_agreement(Agr, X).

Relational constraints can also be used to capture the effect of lexical rules. In a lexicalist theory such as CG, in which syntactic rules are considered to be universally valid schemes of functor-argument combination, lexical rules are an essential tool for capturing language-specific generalizations. As Carpenter (1991) observes, some of the rules that have been proposed must be able to operate recursively. Predicative formation in English, for instance, uses a lexical rule turning a category reducible to VP into a category reducing to a VP-modifier (VP\VP). As a VP-modifier is reducible to VP, the rule can (and sometimes must) be applied recursively.

2.2 Adjuncts as arguments

Miller (1992) proposes a lexical rule for French nouns which adds a (modifying) adjective to the list of arguments that the noun subcategorizes for. Since a noun can be modified by any number of adjectives, the rule must be optional as well as recursive. The advantages of using a lexical rule in this case are that it simplifies accounting for agreement between nouns and adjectives and that it enables an account of word order constraints between arguments and modifiers of a noun in terms of obliqueness.

The idea that modifiers are introduced by means of a lexical rule can be extended to verbs. That is, adjuncts could be introduced by means of a recursive rule that optionally adds these elements to verbal categories. Such a rule would be an alternative for the standard categorial analysis of adjuncts as (endocentric) functors. There is reason to consider this alternative. In Dutch, for instance, the position of verb modifiers is not fixed.

1 We use X/Y and Y\X as shorthand for [val X, dir '/', arg Y] and [val X, dir '\', arg Y], respectively, and S, NP, and Adj as 'typed variables' of type [cat s], [cat np], and [cat adj], respectively.
Adjuncts can in principle occur anywhere to the left of the verb:2

(3) a. dat Johan opzettelijk een ongeluk veroorzaakt
       that J. deliberately an accident causes
       that J. deliberately causes an accident
    b. dat Johan Marie opzettelijk geen cadeau geeft
       that J. M. deliberately no present gives
       that J. deliberately gave M. no present

There are several ways to account for this fact. One can assign multiple categories to adjuncts or one can assign a polymorphic category X/X to adjuncts, with X restricted to 'verbal projections' (Bouma, 1988). Alternatively, one can assume that adjuncts are not functors, but arguments of the verb. Since adjuncts are optional, can be iterated, and can occur in several positions, this implies that verbs must be polymorphic. The constraint add_adjuncts has this effect, as it optionally adds one or more adjuncts as arguments to the 'initial' category of a verb:

(4) lex(veroorzaken, X) :- add_adjuncts(X, NP\(NP\S)).
    lex(geven, X) :- add_adjuncts(X, NP\(NP\(NP\S))).

    add_adjuncts(S, S).
    add_adjuncts(Adj\X, Y) :- add_adjuncts(X, Y).
    add_adjuncts([val X, dir D, arg A], [val Y, dir D, arg A]) :-
        add_adjuncts(X, Y).

2 As we want to abstract away from the effects of 'verb-second', we present only examples of subordinate clauses.
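The effect of the three clauses of add_adjuncts in (4) can be simulated by enumerating, up to a bound, the argument sequences they license. The sketch below encodes a category simply as the list of its argument labels, outermost first, reducing to S; the bound and the encoding are our own simplifications, not part of the grammar:

```python
def add_adjuncts(initial, max_adj=1):
    """Enumerate the argument lists derivable from `initial` (outermost
    first, reducing to S) by inserting up to max_adj 'adj' arguments at
    any position, mirroring the three clauses of constraint (4)."""
    def derive(args, budget):
        if not args:                 # clause 1: category reduces to S
            yield []
        if budget:                   # clause 2: add an adjunct argument
            for rest in derive(args, budget - 1):
                yield ['adj'] + rest
        if args:                     # clause 3: keep the next argument
            for rest in derive(args[1:], budget):
                yield [args[0]] + rest
    return [tuple(c) for c in derive(list(initial), max_adj)]
```

For the initial transitive category NP\(NP\S) of veroorzaken, one insertable adjunct yields the original category plus the three positions an ADJ argument can occupy, including the NP\(ADJ\IV) instance used in derivation (6).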
The present proposal is an alternative for such assignments which avoids introducing new categorial operators and which does not lead to semantic complications (the semantics of add_adjuncts is presented in section 3.3). Below we argue that this analysis also allows for a straightforward explanation of the distribution and scope of adjuncts in verb phrases headed by a verbal complex. 3 Cross-Serial Dependencies In Dutch, verbs selecting an infinitival complement (e.g. modals and perception verbs) give rise to so called cross- serial dependencies. The arguments of the verbs in- volved appear in the same order as the verbs in the 'verb cluster': (7) a. b. dat An1 Bea2 will kussen~. dat An Bea wants to kiss that An wants to kiss Bea dat An1 Bea2 Cor3 Will dat An Bea Cor wants zien2 kussen3. to see kiss that An wants to see Bea kiss Cor The property of forming cross-serial dependencies is a lexical property of the matrix verb. If this verb is a 'trigger' for cross-serial word order, this order is obliga- tory, whereas if it is not, the infinitival complement will follow the verb: (8) a. *dat An wil Bea kussen. b. dat An zich voornam Bea that An Refl. planned Bea te kussen. to kiss that An. planned to kiss Bea e. *dat An zich Bea voornam te kussen. 3.1 Generalized Division Categorial accounts of cross-serial dependencies ini- tially made use of a syntactic rule of composition (Steedman, 1985). Recognizing the lexical nature of the process, more recent proposals have used either a lexical rule of composition (Moortgat, 1988) or a lexical rule of 'division' (Hoeksema, 1991). Division is a rule which enables a functor to inherit the arguments of its argument :3 X/Y ::¢, (X/Z, . . . IZ,,)I(Y/Z. . . IZ,) To generate cross-serial dependencies, a 'dishar- monic' version of this rule is needed: (9) x/v (zA... 
z.\x)/(zA.., z.\Y) Hoeksema proposes that verbs which trigger cross- serial word order are subject to (9): (10) ...An Bea wil kussen NP NP IV/IV NP\IV # (NP\IV)/(NP\IV) NP\IV IV In a framework using recursive constraints, gener- alized disharmonic division can be implemented as a recursive constraint connecting the initial category of such verbs with a derived category: (11) lez(willen,X) :- cross_serial(X, (NP\S)/(NP\S)). lez(zien, X) :- cross_serial(X, (NP\(NPkS))/(NP\S)). lez(voornemen, (NPre fl \(NP\S))/(NP \S)). aArgument inheritance is used in HPSG to account for verb clustering in German (Hinrichs and Nakazawa, 1989). The rlPSG analysis is essentially equivalent to Hoeksema's account. 149 (12) cross_serial(Out,In) :- division(Out, In), verb_cluster(Out). division(X, X). division( ( Z\X ) / ( Z\ Y ), X' /Y') :- division(X/Y, X ' / Y'). [ [ + ] ] ) Only verbs that trigger the cross-serial order are sub- ject to the division constraint. This accounts immedi- ately for the fact that cross-serial orders do not arise with all verbs selecting infinitival complements. 3.2 Verb Clusters The verb_cluster constraint ensures that cross-serial word order is obligatory for verbs subject to cross_serial. To rule out the ungrammatical (8a), for instance, we assume that Bea kussen is not a verb clus- ter. The verb kussen by itself, however, is unspecified for vc, and thus (7a) is not excluded. We do not assume that cross-serial verbs take lexical arguments (as has sometimes been suggested), as that would rule out the possibility of complex constituents to the right of cross-serial verbs altogether. If one assumes that a possible bracketing of the verb cluster in (7b) is [wil [zien kussen]] (coordination and fronting data have been used as arguments that this is indeed the case), a cross-serial verb must be able to combine with non- lexical verb clusters. 
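The generative content of disharmonic division (9) can be simulated by repeatedly prefixing the same argument to both halves of a functor category. In the toy encoding below, categories are plain strings and the candidate inherited arguments are supplied explicitly; this is an illustration of the rule schema, not the grammar's actual feature-structure encoding:

```python
def divisions(x, y, inherit, max_args=2):
    """Instances of (9): from X/Y derive (Z1\\...\\Zn\\X)/(Z1\\...\\Zn\\Y),
    the Zs drawn from `inherit`, up to max_args inherited arguments."""
    pairs = [(x, y)]
    frontier = [(x, y)]
    for _ in range(max_args):
        # prepend the same argument Z to functor result and argument
        nxt = [(z + '\\' + fx, z + '\\' + fy)
               for fx, fy in frontier for z in inherit]
        pairs.extend(nxt)
        frontier = nxt
    return ['(%s)/(%s)' % (fx, fy) for fx, fy in pairs]
```

Applied to the initial category IV/IV of wil with NP as a candidate inherited argument, this produces the (NP\IV)/(NP\IV) category used in derivation (10), alongside the undivided category and deeper divisions.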
Furthermore, if a verb selects a particle, the particle can optionally be included in the verb cluster, and thus can appear either to the right or to the left of a governing cross-serial verb. For a verb cluster containing two cross-serial verbs, for instance, we have the following possibilities: (13) a. dat An Bea heeft durven aan that An Bea has dared part. te spreken to speak that An has dared to speak to Bea. b. dat An Bea heeft aan durven te spreken. c. dat An Bea aan heeft durven te spreken. A final piece of evidence for the the fact that cross- serial verbs may take complex phrases as argument stems from the observation that certain adjectival and prepositional arguments can also appear as part of the verb cluster: (14) dat An dit aan Bea had duidelijk that An this to Bea has clear gemaakt made thai An had made this clear to Bea Cross-serial verbs select a +vc argument. Therefore, all phrases that are not verb clusters must be marked - vc. In general, in combining a (verbal) functor with its argument, it is the argument that determines whether the resulting phrase is -vc. For instance, NP-arguments always give rise to -VC phrases, whereas particles and verbal arguments do not give rise to -vc phrases. This suggests that NP's must be marked -vc, that particles and verbs can remain unspecified for this feature, and that in the syntactic rule for application the value of the feature vc must be reentrant between argument and resultant. 3.3 The distribution and scope of adjuncts The analysis of cross-serial dependencies in terms of argument inheritance interacts with the analysis of ad- juncts presented in section 2.2. If a matrix verb inherits the arguments of the verb it governs, it should be pos- sible to find modifiers of the matrix verb between this verb and one of its inherited arguments. This prediction is borne out (15a). However, we also find structurally similar examples in which the adjunct modifies the gov- erned verb (15b). 
Finally, there are examples that are ambiguous between a wide and a narrow scope reading (15c). We take it that the latter case is actually what needs to be accounted for, i.e. examples such as (15a) and (15b) are cases in which there is a strong preference for a wide and narrow scope reading, respectively, but we will remain silent about the (semantic) factors determining such preferences.

(15) a. dat Frits Marie volgens mij lijkt te ontwijken.
        that F. M. to me seems to avoid
        It seems to me that F. avoids M.
     b. dat Frits Marie opzettelijk lijkt te ontwijken.
        that F. M. deliberately seems to avoid
        It seems that F. deliberately avoids M.
     c. dat Frits Marie de laatste tijd lijkt te ontwijken.
        that F. M. lately seems to avoid
        It seems lately as if F. avoids M.
        It seems as if F. avoids M. lately

On the assumption that the lexical entries for lijken and ontwijken are as in (16), example (15c) has two possible derivations ((17) and (18)). Procedurally speaking, the rule that adds adjuncts can be applied either to the matrix verb (after division has taken place) or to the governed verb. In the latter case, the adjunct is 'inherited' by the matrix verb. Assuming that adjuncts take scope over the verbs introducing them, this accounts for the ambiguity observed above.

(16) lex(lijken, Verb) :-
         add_adjuncts(Verb, Verb'),
         cross_serial(Verb', (NP\S)/(NP\S)).
     lex(ontwijken, Verb) :-
         add_adjuncts(Verb, NP\(NP\S)).

(17) ... de laatste tijd   lijkt                    te ontwijken
         ADJ               IV/IV => TV/TV           TV
                                 => (ADJ\TV)/TV
                           ------------------------------
                                     ADJ\TV
         ----------------------------------
                           TV

(18) ... de laatste tijd   lijkt                        te ontwijken
         ADJ               IV/IV                        TV
                           => (ADJ\TV)/(ADJ\TV)         => ADJ\TV
                           ----------------------------------
                                     ADJ\TV
         ----------------------------------
                           TV

The assumption that adjuncts scope over the verbs introducing them can be implemented as follows. We use a unification-based semantics in the spirit of Pereira and Shieber (1987). Furthermore, the semantics is head-driven, i.e. the semantics of a complex constituent is reentrant with the semantics of its head (i.e. the functor).
The feature structure for a transitive verb including semantics (taking two NP's of the generalized quantifier type ((e,t),t) as argument and assigning wide scope to the subject) is:

(19) [ val [ val [ cat s
                   sem Ssubj ]
             dir '\'
             arg [ cat np
                   sem (X^Sobj)^Ssubj ] ]
       dir '/'
       arg [ cat np
             sem (Y^kiss(X,Y))^Sobj ] ]

Thus, a lexical entry for a transitive verb can be defined as follows (where TV refers to the feature structure in (19)):

(20) lex(kussen, X) :- add_adjuncts(X, TV).

The lexical rule for adding adjuncts can now be extended with a semantics:

(21) add_adjuncts(X [sem Sx], Y [sem Sy]) :-
         add_adj(X, Y, Sx, Sy).

     add_adj(S, S, Sem, Sem).
     add_adj([ val X
               dir '\'
               arg [ cat adj
                     sem Sy^Sa ] ], Y, Sx, Sa) :-
         add_adj(X, Y, Sx, Sy).
     add_adj([val X, dir D, arg A], [val Y, dir D, arg A], Sx, Sy) :-
         add_adj(X, Y, Sx, Sy).

Each time an adjunct is added to the subcategorization frame of a verb, the semantics of the adjunct is 'applied' to the semantics as it has been built up so far (Sy), and the result (Sa) is passed on. The final step in the recursion unifies the semantics that is constructed in this way with the semantics of the 'output' category. As an adjunct A1 that appears to the left of an adjunct A2 in the string will be added to the subcategorization frame of the governing verb after A2 is added, this orders the (sentential) scope of adjuncts according to left-to-right word order. Furthermore, since the scope of adjuncts is now part of a verb's lexical semantics, any functor taking such a verb as argument (e.g. verbs selecting for an infinitival complement) will have the semantics of these adjuncts in its scope.

Note that the alternative treatments of adjuncts mentioned in section 2.2 cannot account for the distribution or scope of adjuncts in cross-serial dependency constructions. Multiple (i.e.
a finite number of) categorizations cannot account for all possible word orders, since division implies that a trigger for cross-serial word order may have any number of arguments, and thus, that the number of 'subcategorization frames' for such verbs is not fixed. The polymorphic solution (assigning adjuncts the category X/X) does account for word order, but cannot account for narrow scope readings, as the adjunct will always modify the whole verb cluster (i.e. the matrix verb) and cannot be made to modify an embedded verb only.

4 Processing

The introduction of recursive lexical rules has repercussions for processing as they lead to an infinite number of lexical categories for a given lexical item or, if one considers lexical rules as unary syntactic rules, to non-branching derivations of unbounded length. In both cases, a parser may not terminate. One of the main advantages of modeling lexical rules by means of constraints is that it suggests a solution for this problem. A control strategy which delays the evaluation of constraints until certain crucial bits of information are filled in avoids non-termination and in practice leads to grammars in which all constraints are fully evaluated at the end of the parse-process.

Consider a grammar in which the only recursive constraint is add_adjuncts, as defined in section 2.2. The introduction of recursive constraints in itself does not solve the non-termination problem. If all solutions for add_adjuncts are simply enumerated during lexical look-up an infinite number of categories for any given verb will result.

During processing, however, it is not necessarily the case that we need to consider all solutions. Syntactic processing can lead to a (partial) instantiation of the arguments of a constraint. If the right pieces of information are instantiated, the constraint will only have a finite number of solutions. Consider, for instance, a parse for the following string.

(22) ... J.   opzettelijk   een ongeluk   veroorzaakt
         NP   ADJ           NP            Verb = NP\(ADJ\IV)
                            --------------------------------
                                      ADJ\IV
              ------------------------------
                            NP\S
         -----------------------
                   S

Even if the category of the verb is left completely open initially, there is only one derivation for this string that reduces to S (remember that the syntax uses application only). This derivation provides the information that the variable Verb must be a transitive verb selecting one additional adjunct, and with this information it is easy to check whether the following constraint is satisfied:

add_adjuncts(NP\(ADJ\(NP\S)), NP\(NP\S)).

This suggests that recursive constraints should not be evaluated during lexical look-up, but that their evaluation should be delayed until the arguments are sufficiently instantiated.

To implement this delayed evaluation strategy, we used the block facility of SICStus Prolog. For each recursive constraint, a block declaration defines what the conditions are under which it may be evaluated. The definition of add_adjuncts (with semantics omitted for readability), for instance, now becomes:

(23) add_adjuncts(X [arg Arg], Y) :-
         add_adjuncts(X, Y, Arg).

     :- block add_adjuncts(?, ?, -).

     add_adjuncts(S, S, _).
     add_adjuncts(Adj\X, Y, _) :- add_adjuncts(X, Y).
     add_adjuncts([val X, dir D, arg A], [val Y, dir D, arg A], _) :-
         add_adjuncts(X, Y).

We use add_adjuncts/2 to extract the information that determines when add_adjuncts/3 is to be evaluated. The block declaration states that add_adjuncts/3 may only be evaluated if the third argument (i.e. the argument of the 'output' category) is not a variable. During lexical look-up, this argument is uninstantiated, and thus, no evaluation takes place. As soon as a verb combines with an argument, the argument category of the verb is instantiated and add_adjuncts/3 will be evaluated. Note, however, that calls to add_adjuncts/3 are recursive, and thus one evaluation step may lead to another call to add_adjuncts/3, which in its turn will be blocked until the argument has been instantiated sufficiently.
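The control regime behind the block declaration in (23) can be mimicked outside Prolog with a one-shot variable that queues goals until it is bound. The sketch below is an analogy, not SICStus code: when_bound corresponds to a blocked call, and bind to the instantiation performed by a syntactic application step:

```python
class Var:
    """A one-shot 'logic variable' with delayed goals: a toy analogue
    of a blocked Prolog goal waiting on instantiation."""
    def __init__(self):
        self.value = None
        self.delayed = []

    def bind(self, value):
        """Instantiate the variable and resume every goal blocked on it."""
        self.value = value
        goals, self.delayed = self.delayed, []
        for goal in goals:
            goal(value)

    def when_bound(self, goal):
        """Run goal now if the variable is bound, otherwise suspend it."""
        if self.value is not None:
            goal(self.value)
        else:
            self.delayed.append(goal)
```

A suspended goal may itself suspend a new goal on a deeper variable when it resumes, which reproduces the incremental, clause-by-clause evaluation of the recursive constraint described above.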
Thus, the recursive constraint is evaluated incrementally, with each syntactic application step leading to a new evaluation step of the blocked constraint. The recursion will stop if an atomic category s is found. Delayed evaluation leads to a processing model in which the evaluation of lexical constraints and the construction of derivational structure is completely intertwined.

4.1 Other strategies

The delayed evaluation techniques discussed above can be easily implemented in parsers which rely on backtracking for their search. For the grammars that we have worked with, a simple bottom-up (shift-reduce) parser combined with delayed evaluation guarantees termination of the parsing process.

To obtain an efficient parser more complicated search strategies are required. However, chart-based search techniques are not easily generalized for grammars which make use of complex constraints. Even if the theoretical problems can be solved (Johnson, 1993; Dörre, 1993), severe practical problems might surface if the constraints are as complex as the ones proposed here.

As an alternative we have implemented chart-based parsers using the 'non-interleaved pruning' strategy (terminology from (Maxwell III and Kaplan, 1994)). Using this strategy the parser first builds a parse forest for a sentence on the basis of the context-free backbone of the grammar. In a second processing phase parses are recovered on the basis of the parse forest and the corresponding constraints are applied. This may be advantageous if the context-free backbone of the grammar is 'informative' enough to filter many unsuccessful partial derivations that the parser otherwise would have to check. As clearly a CUG grammar does not contain such an informative context-free backbone, a further step is to use 'selective feature movement' (cf. again (Maxwell III and Kaplan, 1994)).
In this approach the base grammar is compiled into an equivalent modified grammar in which certain constraints from the base grammar are converted to a more complex context-free backbone in the modified grammar. Again, this technique does not easily give good results for grammars of the type described. It is not clear at all where we should begin extracting appropriate features for such a modified grammar, because most information passing is simply too 'indirect' to be easily compiled into a context-free backbone.

We achieved the best results by using a 'hand-fabricated' context-free grammar as the first phase of parsing. This context-free grammar builds a parse forest that is then used by the 'real' grammar to obtain appropriate representation(s) for the input sentence. This turned out to reduce parsing times considerably.

Clearly such a strategy raises questions on the relation between this context-free grammar and the CUG grammar. The context-free grammar is required to produce a superset of the derivations allowed by the CUG. Given the problems mentioned above it is difficult to show that this is indeed the case (if it were easy, then it probably would also be easy to obtain such a context-free grammar automatically).

The strategy can be described in somewhat more detail as follows. The context-free phase of processing builds a number of items defining the parse forest, in a format that can be used by the second processing phase. Such items are four-tuples (R, P0, P, D) where R is a rule name (consistent with the rule names from the CUG), P0 and P are string positions, and D describes the string positions associated with each daughter of the rule (indicating which part of the string is covered by that daughter). Through a head-driven recursive descent the second processing phase recovers derivations on the basis of these items. Note that the delayed evaluation technique for complex constraints is essential here.
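The item format and the recovery phase can be sketched as follows. The rule names, the three-word string, and the item set below are invented for illustration, and the application of the delayed constraints during recovery is omitted.

```python
from collections import defaultdict
from itertools import product

# Parse-forest items (R, P0, P, D): rule name, start/end string positions,
# and D, the sub-span covered by each daughter.  Lexical items have no daughters.
items = [
    ('s -> np vp', 0, 3, [(0, 1), (1, 3)]),
    ('np -> kim',  0, 1, []),
    ('vp -> v np', 1, 3, [(1, 2), (2, 3)]),
    ('v -> sees',  1, 2, []),
    ('np -> lee',  2, 3, []),
]

index = defaultdict(list)
for rule, p0, p, daughters in items:
    index[(p0, p)].append((rule, daughters))

def derivations(span):
    """Recover derivation trees over a span; in the real second phase the
    delayed constraints would be evaluated alongside each step."""
    for rule, daughters in index[span]:
        if not daughters:
            yield rule
        else:
            for kids in product(*[list(derivations(d)) for d in daughters]):
                yield (rule, *kids)

print(list(derivations((0, 3))))
```

Because the forest is indexed by spans, the recovery phase only explores analyses the first phase did not prune, and alternative derivations fall out by backtracking through the generator.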
Alternative solutions are obtained by backtracking. If the first phase has done a good job in pruning many failing search branches, then this is not too expensive, and we do not have to worry about the interaction of caching and complex constraints.

5 Final Remarks

In sections 2 and 3 we have sketched an analysis of cross-serial dependency constructions and its interaction with the position and scope of adjuncts. The rules given there are actually part of a larger fragment that covers the syntax of Dutch verb clusters in more detail. The fragment accounts for cross-serial dependencies and extraposition constructions (including cases of 'partial' extraposition), infinitivus-pro-participio, modal and participle inversion, the position of particles in verb clusters, clitic climbing, partial vp-topicalization, and verb second. In the larger fragment, additional recursive constraints are introduced, but the syntax is still restricted to application only.

The result of Carpenter (1991) emphasizes the importance of lexical rules. There is a tendency in both CG and HPSG to rely more and more on mechanisms (such as inheritance and lexical rules or recursive constraints) that operate in the lexicon. The unrestricted generative capacity of recursive lexical rules implies that the remaining role of syntax can be extremely simple. In the examples above we have stressed this by giving an account for the syntax of cross-serial dependencies (a construction that is, given some additional assumptions, not context-free) using application only. In general, such an approach seems promising, as it locates the sources of complexity for a given grammar in one place, namely the lexicon.

References

Guy Barry and Martin Pickering. 1990. Dependency and coordination in categorial grammar. In G. Barry and G. Morrill, editors, Studies in Categorial Grammar. University of Edinburgh. Edinburgh Working Papers in Cognitive Science 5.

Gosse Bouma. 1988.
Modifiers and specifiers in categorial unification grammar. Linguistics, 26:21-46.

Bob Carpenter. 1991. The generative power of categorial grammars and head-driven phrase structure grammars with lexical rules. Computational Linguistics, 17(3):301-313.

Jochen Dörre. 1993. Generalizing Earley deduction for constraint-based grammars. In Jochen Dörre, editor, Computational Aspects of Constraint-Based Linguistic Description. Dyana-2 Deliverable R1.2.A.

David R. Dowty. 1982. Grammatical relations and Montague grammar. In Pauline Jacobson and Geoffrey Pullum, editors, The Nature of Syntactic Representation, pages 79-130. Reidel, Dordrecht.

Martin Emms. 1993. Parsing with polymorphism. In Sixth Conference of the European Chapter of the Association for Computational Linguistics, pages 120-129.

Erhard Hinrichs and Tsuneko Nakazawa. 1989. Flipped out: AUX in German. In Papers from the 25th Annual Regional Meeting of the Chicago Linguistic Society, pages 187-202. Chicago Linguistics Society, Chicago.

Jack Hoeksema. 1984. Categorial Morphology. Ph.D. thesis, Groningen University.

Jack Hoeksema. 1991. A categorial theory of reanalysis phenomena. ms. RUG Groningen.

Mark Johnson. 1993. Memoization in constraint logic programming. Paper presented at the First Workshop on Principles and Practice of Constraint Programming, April 28-30, Newport, Rhode Island.

Lauri Karttunen. 1989. Radical lexicalism. In Mark R. Baltin and Anthony S. Kroch, editors, Alternative Conceptions of Phrase Structure, pages 43-65. University of Chicago Press.

John T. Maxwell III and Ronald M. Kaplan. 1994. The interface between phrasal and functional constraints. Computational Linguistics, 19(4):571-90.

Philip Miller. 1992. Clitics and Constituents in Phrase Structure Grammar. Garland, New York.

Michael Moortgat. 1988. Categorial Investigations. Ph.D. thesis, University of Amsterdam.

Fernando C.N. Pereira and Stuart M. Shieber. 1987. Prolog and Natural Language Analysis.
Center for the Study of Language and Information, Stanford.

Mark Steedman. 1985. Dependency and coordination in the grammar of Dutch and English. Language, 61:523-68.

Henk Zeevat, Ewan Klein, and Jo Calder. 1987. Unification categorial grammar. In Nicholas Haddock, Ewan Klein, and Glyn Morrill, editors, Categorial Grammar, Unification Grammar and Parsing. Centre for Cognitive Science, University of Edinburgh. Volume 1 of Working Papers in Cognitive Science.
Capturing CFLs with Tree Adjoining Grammars

James Rogers*
Dept. of Computer and Information Sciences
University of Delaware
Newark, DE 19716, USA
jrogers@cis.udel.edu

Abstract

We define a decidable class of TAGs that is strongly equivalent to CFGs and is cubic-time parsable. This class serves to lexicalize CFGs in the same manner as the LCFGs of Schabes and Waters but with considerably less restriction on the form of the grammars. The class provides a normal form for TAGs that generate local sets in much the same way that regular grammars provide a normal form for CFGs that generate regular sets.

Introduction

We introduce the notion of Regular Form for Tree Adjoining Grammars (TAGs). The class of TAGs that are in regular form is equivalent in strong generative capacity¹ to the Context-Free Grammars, that is, the sets of trees generated by TAGs in this class are the local sets--the sets of derivation trees generated by CFGs.² Our investigations were initially motivated by the work of Schabes, Joshi, and Waters in lexicalization of CFGs via TAGs (Schabes and Joshi, 1991; Joshi and Schabes, 1992; Schabes and Waters, 1993a; Schabes and Waters, 1993b; Schabes, 1990). The class we describe not only serves to lexicalize CFGs in a way that is more faithful and more flexible in its encoding than earlier work, but provides a basis for using the more expressive TAG formalism to define Context-Free Languages (CFLs).

In Schabes et al. (1988) and Schabes (1990) a general notion of lexicalized grammars is introduced. A grammar is lexicalized in this sense if each of the basic structures it manipulates is associated with a lexical item, its anchor. The set of structures relevant to a particular input string, then, is selected by the lexical

* The work reported here owes a great deal to extensive discussions with K. Vijay-Shanker.
¹ We will refer to equivalence of the sets of trees generated by two grammars or classes of grammars as strong equivalence.
Equivalence of their string languages will be referred to as weak equivalence.
² Technically, the sets of trees generated by TAGs in the class are recognizable sets. The local and recognizable sets are equivalent modulo projection. We discuss the distinction in the next section.

items that occur in that string. There are a number of reasons for exploring lexicalized grammars. Chief among these are linguistic considerations--lexicalized grammars reflect the tendency in many current syntactic theories to have the details of the syntactic structure be projected from the lexicon. There are also practical advantages. All lexicalized grammars are finitely ambiguous and, consequently, recognition for them is decidable. Further, lexicalization supports strategies that can, in practice, improve the speed of recognition algorithms (Schabes et al., 1988).

One grammar formalism is said to lexicalize another (Joshi and Schabes, 1992) if for every grammar in the second formalism there is a lexicalized grammar in the first that generates exactly the same set of structures. While CFGs are attractive for efficiency of recognition, Joshi and Schabes (1992) have shown that an arbitrary CFG cannot, in general, be converted into a strongly equivalent lexicalized CFG. Instead, they show how CFGs can be lexicalized by LTAGs (Lexicalized TAGs). While the LTAG that lexicalizes a given CFG must be strongly equivalent to that CFG, both the languages and sets of trees generated by LTAGs as a class are strict supersets of the CFLs and local sets. Thus, while this gives a means of constructing a lexicalized grammar from an existing CFG, it does not provide a direct method for constructing lexicalized grammars that are known to be equivalent to (unspecified) CFGs. Furthermore, the best known recognition algorithm for LTAGs runs in O(n⁶) time.
Schabes and Waters (1993a; 1993b) define Lexicalized Context-Free Grammars (LCFGs), a class of lexicalized TAGs (with restricted adjunction) that not only lexicalizes CFGs, but is cubic-time parsable and is weakly equivalent to CFGs. These LCFGs have a couple of shortcomings. First, they are not strongly equivalent to CFGs. Since they are cubic-time parsable this is primarily a theoretical rather than practical concern. More importantly, they employ structures of a highly restricted form. Thus the restrictions of the formalism, in some cases, may override linguistic considerations in constructing the grammar. Clearly any class of TAGs that are cubic-time parsable, or that are equivalent in any sense to CFGs, must be restricted in some way. The question is what restrictions are necessary.

In this paper we directly address the issue of identifying a class of TAGs that are strongly equivalent to CFGs. In doing so we define such a class--TAGs in regular form--that is decidable, cubic-time parsable, and lexicalizes CFGs. Further, regular form is essentially a closure condition on the elementary trees of the TAG. Rather than restricting the form of the trees that can be employed, or the mechanisms by which they are combined, it requires that whenever a tree with a particular form can be derived then certain other related trees must be derivable as well. The algorithm for deciding whether a given grammar is in regular form can produce a set of elementary trees that will extend a grammar that does not meet the condition to one that does.³ Thus the grammar can be written largely on the basis of the linguistic structures that it is intended to capture. We show that, while the LCFGs that are built by Schabes and Waters's algorithm for lexicalization of CFGs are in regular form, the restrictions they employ are unnecessarily strong.

Regular form provides a partial answer to the more general issue of characterizing the TAGs that generate local sets.
It serves as a normal form for these TAGs in the same way that regular grammars serve as a normal form for CFGs that generate regular languages. While for every TAG that generates a local set there is a TAG in regular form that generates the same set, and every TAG in regular form generates a local set (modulo projection), there are TAGs that are not in regular form that generate local sets, just as there are CFGs that generate regular languages that are not regular grammars.

The next section of this paper briefly introduces notation for TAGs and the concept of recognizable sets. Our results on regular form are developed in the subsequent section. We first define a restricted use of the adjunction operation--derivation by regular adjunction--which we show derives only recognizable sets. We then define the class of TAGs in regular form and show that the set of trees derivable in a TAG of this form is derivable by regular adjunction in that TAG and is therefore recognizable. We next show that every local set can be generated by a TAG in regular form and that Schabes and Waters's construction for LCFGs in fact produces TAGs in regular form. Finally, we provide an algorithm for deciding if a given TAG is in regular form. We close with a discussion of the implications of this work with respect to the lexicalization of CFGs and the use of TAGs to define languages that are strictly context-free, and raise the question of whether our results can be strengthened for some classes of TAGs.

³ Although the result of this process is not, in general, equivalent to the original grammar.

Preliminaries

Tree Adjoining Grammars

Formally, a TAG is a five-tuple ⟨Σ, NT, I, A, S⟩ where:

Σ is a finite set of terminal symbols,
NT is a finite set of non-terminal symbols,
I is a finite set of elementary initial trees,
A is a finite set of elementary auxiliary trees,
S is a distinguished non-terminal, the start symbol.
Every non-frontier node of a tree in I ∪ A is labeled with a non-terminal. Frontier nodes may be labeled with either a terminal or a non-terminal. Every tree in A has exactly one frontier node that is designated as its foot. This must be labeled with the same non-terminal as the root. The auxiliary and initial trees are distinguished by the presence (or absence, respectively) of a foot node. Every other frontier node that is labeled with a non-terminal is considered to be marked for substitution. In a lexicalized TAG (LTAG) every tree in I ∪ A must have some frontier node designated the anchor, which must be labeled with a terminal. Unless otherwise stated, we include both elementary and derived trees when referring to initial trees and auxiliary trees.

A TAG derives trees by a sequence of substitutions and adjunctions in the elementary trees. In substitution an instance of an initial tree in which the root is labeled X ∈ NT is substituted for a frontier node (other than the foot) in an instance of either an initial or auxiliary tree that is also labeled X. Both trees may be either an elementary tree or a derived tree.

In adjunction an instance of an auxiliary tree in which the root and foot are labeled X is inserted at a node, also labeled X, in an instance of either an initial or auxiliary tree as follows: the subtree at that node is excised, the auxiliary tree is substituted at that node, and the excised subtree is substituted at the foot of the auxiliary tree. Again, the trees may be either elementary or derived.

The set of objects ultimately derived by a TAG G is T(G), the set of completed initial trees derivable in G. These are the initial trees derivable in G in which the root is labeled S and every frontier node is labeled with a terminal (thus no nodes are marked for substitution). We refer to the set of all trees, both initial and auxiliary, with or without nodes marked for substitution, that are derivable in G as T′(G).
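The excision-and-substitution description of adjunction can be made concrete in a few lines of Python. The tuple encoding of trees and the '*' convention for marking the foot are invented here for illustration; this is a sketch, not the paper's formal definition.

```python
def adjoin(tree, path, aux):
    """Adjoin auxiliary tree aux at the node reached from the root of tree
    by path (a list of child indices): excise the subtree at that node,
    substitute aux there, and substitute the excised subtree at aux's foot."""
    if path:
        label, children = tree
        i, rest = path[0], path[1:]
        return (label,
                children[:i] + [adjoin(children[i], rest, aux)] + children[i+1:])
    excised = tree
    def plant(node):
        label, children = node
        if label.endswith('*'):          # the foot: re-attach the excised subtree
            return excised
        return (label, [plant(c) for c in children])
    return plant(aux)

initial = ('S', [('X', [('a', [])])])
aux     = ('X', [('b', []), ('X*', [])])          # foot marked 'X*'
print(adjoin(initial, [0], aux))
# ('S', [('X', [('b', []), ('X', [('a', [])])])])
```

The same function applies whether the host tree is initial or auxiliary, matching the definition above in which both trees may be elementary or derived.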
The language derived by G is L(G), the set of strings in Σ* that are the yields of trees in T(G).

In this paper, all TAGs are pure TAGs, i.e., without adjoining constraints. Most of our results go through for TAGs with adjoining constraints as well, but there is much more to say about these TAGs and the implications of this work in distinguishing the pure TAGs from TAGs in general. This is a part of our ongoing research.

The path between the root and foot (inclusive) of an auxiliary tree is referred to as its spine. Auxiliary trees in which no node on the spine other than the foot is labeled with the same non-terminal as the root we call proper auxiliary trees.

Lemma 1 For any TAG G there is a TAG G′ that includes no improper elementary trees such that T(G) is a projection of T(G′).

Proof (Sketch): The grammar G can be relabeled with symbols in {⟨x, i⟩ | x ∈ Σ ∪ NT, i ∈ {0, 1}} to form G′. Every auxiliary tree is duplicated, with the root and foot labeled ⟨X, 0⟩ in one copy and ⟨X, 1⟩ in the other. Improper elementary auxiliary trees can be avoided by appropriate choice of labels along the spine. □

The labels in the trees generated by G′ are a refinement of the labels of the trees generated by G. Thus G′ partitions the categories assigned by G into subcategories on the basis of (a fixed amount of) context. While the use here is technical rather than natural, the approach is familiar, as in the use of slashed categories to handle movement.

Recognizable Sets

The local sets are formally very closely related to the recognizable sets, which are somewhat more convenient to work with. These are sets of trees that are accepted by finite-state tree automata (Gécseg and Steinby, 1984). If Σ is a finite alphabet, a Σ-valued tree is a finite, rooted, left-to-right ordered tree, the nodes of which are labeled with symbols in Σ.
We will denote such a tree in which the root is labeled σ and in which the subtrees at the children of the root are t₁, …, tₙ as σ(t₁, …, tₙ). The set of all Σ-valued trees is denoted T_Σ.

A (non-deterministic) bottom-up finite state tree automaton over Σ-valued trees is a tuple ⟨Σ, Q, M, F⟩ where:

Σ is a finite alphabet,
Q is a finite set of states,
F is a subset of Q, the set of final states, and
M is a partial function from Σ × Q* to P(Q) (the powerset of Q) with finite domain, the transition function.

The transition function M associates sets of states with alphabet symbols. It induces a function that associates sets of states with trees, M̂ : T_Σ → P(Q), such that:

q ∈ M̂(t) iff t is a leaf labeled σ and q ∈ M(σ, ε), or
t = σ(t₀, …, tₙ) and there is a sequence of states q₀, …, qₙ such that qᵢ ∈ M̂(tᵢ), for 0 ≤ i ≤ n, and q ∈ M(σ, q₀⋯qₙ).

An automaton A = ⟨Σ, Q, M, F⟩ accepts a tree t ∈ T_Σ iff, by definition, F ∩ M̂(t) is not empty. The set of trees accepted by an automaton A is denoted T(A). A set of trees is recognizable iff, by definition, it is T(A) for some automaton A.

Lemma 2 (Thatcher, 1967) Every local set is recognizable. Every recognizable set is the projection of some local set.

The projection is necessary because the automaton can distinguish between nodes labeled with the same symbol while the CFG cannot. The set of trees (with bounded branching) in which exactly one node is labeled A, for instance, is recognizable but not local. It is, however, the projection of a local set in which the labels of the nodes that dominate the node labeled A are distinguished from the labels of those that don't. As a corollary of this lemma, the path set of a recognizable (or local) set, i.e., the set of strings that label paths in the trees in that set, is regular.
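The induced function M̂ is just a bottom-up traversal, and the definition above can be prototyped directly. The example automaton below, which accepts trees with an even number of 'a'-labeled nodes ('a' unary, 'b' nullary or binary), is our own illustration, not one from the paper.

```python
from itertools import product

def run(M, tree):
    """Compute M-hat(t), the set of states reachable at the root of tree t."""
    sym, kids = tree
    if not kids:
        return M.get((sym, ()), set())
    state_sets = [run(M, k) for k in kids]
    result = set()
    for combo in product(*state_sets):   # nondeterminism over subtree states
        result |= M.get((sym, combo), set())
    return result

def accepts(M, F, tree):
    return bool(F & run(M, tree))

# A toy automaton: state 'e'/'o' tracks an even/odd count of 'a' nodes below.
M = {('a', ()): {'o'}, ('b', ()): {'e'},
     ('a', ('e',)): {'o'}, ('a', ('o',)): {'e'},
     ('b', ('e', 'e')): {'e'}, ('b', ('e', 'o')): {'o'},
     ('b', ('o', 'e')): {'o'}, ('b', ('o', 'o')): {'e'}}

t = ('b', [('a', []), ('a', [])])
print(accepts(M, {'e'}, t))   # True: t contains two 'a' nodes
```

Because M has finite domain, missing entries simply contribute the empty set of states, which is how the partiality of the transition function surfaces in the sketch.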
TAGs in Regular Form

Regular Adjunction

The fact that the path sets of recognizable sets must be regular provides our basic approach to defining a class of TAGs that generate only recognizable sets. We start with a restricted form of adjunction that can generate only regular path sets and then look for a class of TAGs that do not generate any trees that cannot be generated with this restricted form of adjunction.

Definition 1 Regular adjunction is ordinary adjunction restricted to the following cases:

• any auxiliary tree may be adjoined into any initial tree or at any node that is not on the spine of an auxiliary tree,
• any proper auxiliary tree may be adjoined into any auxiliary tree at the root or foot of that tree,
• any auxiliary tree γ₁ may be adjoined at any node along the spine of any auxiliary tree γ₂ provided that no instance of γ₂ can be adjoined at any node along the spine of γ₁.

In Figure 1, for example, this rules out adjunction of β₁ into the spine of β₃, or vice versa, either directly or indirectly (by adjunction of β₃, say, into β₂ and then adjunction of the resulting auxiliary tree into β₁). Note that, in the case of TAGs with no improper elementary auxiliary trees, the requirement that only proper auxiliary trees may be adjoined at the root or foot is not actually a restriction. This is because the only way to derive an improper auxiliary tree in such a TAG without violating the other restrictions on regular adjunction is by adjunction at the root or foot. Any sequence of such adjunctions can always be re-ordered in a way which meets the requirement.

We denote the set of completed initial trees derivable by regular adjunction in G as T_R(G). Similarly, we denote the set of all trees that are derivable by regular adjunction in G as T′_R(G). As intended, we can show that T_R(G) is always a recognizable set. We are looking, then, for a class of TAGs for which T(G) = T_R(G) for every G in the class. Clearly, this will be the case if T′(G) = T′_R(G) for every such G.

[Figure 1: Regular Adjunction]
[Figure 2: Regular Form]

Proposition 1 If G is a TAG and T′(G) = T′_R(G), then T(G) is a recognizable set.

Proof (Sketch): This follows from the fact that in regular adjunction, if one treats adjunction at the root or foot as substitution, there is a fixed bound, dependent only on G, on the depth to which auxiliary trees can be nested. Thus the nesting of the auxiliary trees can be tracked by a fixed depth stack. Such a stack can be encoded in a finite set of states. It's reasonably easy to see, then, how G can be compiled into a bottom-up finite state tree automaton. □

Since regular adjunction generates only recognizable sets, and thus (modulo projection) local sets, and since CFGs can be parsed in cubic time, one would hope that TAGs that employ only regular adjunction can be parsed in cubic time as well. In fact, such is the case.

Proposition 2 If G is a TAG for which T(G) = T_R(G) then there is an algorithm that recognizes strings in L(G) in time proportional to the cube of the length of the string.⁴

Proof (Sketch): This, again, follows from the fact that the depth of nesting of auxiliary trees is bounded in regular adjunction. A CKY-style parsing algorithm for TAGs (the one given in Vijay-Shanker and Weir (1993), for example) can be modified to work with a two-dimensional array, storing in each slot [i, j] a set of structures that encode a node in an elementary tree that can occur at the root of a subtree spanning the input from position i through j in some tree derivable in G, along with a stack recording the nesting of elementary auxiliary trees around that node in the derivation of that tree.

⁴ This result was suggested by K. Vijay-Shanker.

Since the stacks
are bounded, the amount of data stored in each node is independent of the input length and the algorithm executes in time proportional to the cube of the length of the input. □

Regular Form

We are interested in classes of TAGs for which T′(G) = T′_R(G). One such class is the TAGs in regular form.

Definition 2 A TAG is in regular form iff whenever a completed auxiliary tree of the form γ₁ in Figure 2 is derivable, where x₀ ≠ x₁ ≠ x₂ and no node labeled X occurs properly between x₀ and x₁, then trees of the form γ₂ and γ₃ are derivable as well.

Effectively, this is a closure condition on the elementary trees of the grammar. Note that it immediately implies that every improper elementary auxiliary tree in a regular form TAG is redundant. It is also easy to see, by induction on the number of occurrences of X along the spine, that any auxiliary tree γ for X that is derivable in G can be decomposed into the concatenation of a sequence of proper auxiliary trees for X, each of which is derivable in G. We will refer to the proper auxiliary trees in this sequence as the proper segments of γ.

Lemma 3 Suppose G is a TAG in regular form. Then T′(G) = T′_R(G).

Proof: Suppose γ is any non-elementary auxiliary tree derivable by unrestricted adjunction in G and that any smaller tree derivable in G is derivable by regular adjunction in G. If γ is proper, then it is clearly derivable from two strictly smaller trees by regular adjunction, each of which, by the induction hypothesis, is in T′_R(G). If γ is improper, then it has the form of γ₁ in Figure 2 and it is derivable by regular adjunction of γ₂ at the root of γ₃. Since both of these are derivable and strictly smaller than γ, they are in T′_R(G). It follows that γ is in T′_R(G) as well. □
Proofi Suppose some 7 with the form of 7l in Fig- ure 2 is derivable in G and that for all trees 7' that are smaller than 7 every proper segment of 7' is derivable in G'. By assumption 7 is not elementary since it is im- proper. Thus, by hypothesis, 7 is derivable by regular adjunction of some 7" into some 7' both of which are derivable in (/. Suppose 7" adjoins into the spine of 7' and that a node labeled X occurs along the spine of 7". Then, by the definition of regular adjunction, 7" must be ad- joined at. either tile root or foot of 7'. Thus both 7' and 7" consist of sequences of consecutive proper seg- ments of 7 with 7" including t and the initial (possibly empty) portion of u and 7' including the remainder of u or vice versa. In either case, by the induction hypoth- esis, every proper segment of both 7' and 7", and thus every proper segment of 7 is derivable in G. Then trees of the forrn 72 and 73 are derivable from these proper segments. Suppose, on the other hand, that 7" does not adjoin along the spine of 7 ~ or that no node labeled X occurs along tile spine of 7"- Note that 7" must occur entirely within a proper segment of 7. Then 7' is a tree with the form of 71 that is smaller than 7. From the induc- tion hypothesis every proper segment of 7 ~ is derivable in (;. It follows then that every proper segment of 7 is derivable in G, either because it is a proper segment of 7' or because it is derivable by a¢0unction of 7" into a proper segment of 7'- Again, trees of the form "r2 and 7a are derivable from these 1)roper segments. [] Regular Form and Local Sets The class of TAGs in regular form is related to the lo- cal sets in much the same way that the class of regular grammars is related to regular languages. Every TAG in regular form generates a recognizable set. This fol- lows from Lemma 3 and Proposition 1. Thus, modulo projection, every TAG in regular form generates a local set. 
Conversely, the next proposition establishes that every local set can be generated by a TAG in regular form. Thus regular form provides a normal form for TAGs that generate local sets. It is not the case, however, that all TAGs that generate local sets are in regular form.

Proposition 3 For every CFG G there is a TAG G′ in regular form such that the set of derivation trees for G is exactly T(G′).

Proof: This is nearly immediate, since every CFG is equivalent to a Tree Substitution Grammar (in which all trees are of depth one) and every Tree Substitution Grammar is, in the definition we use here, a TAG with no elementary auxiliary trees. It follows that this TAG can derive no auxiliary trees at all, and is thus vacuously in regular form. □

This proof is hardly satisfying, depending as it does on the fact that TAGs, as we define them, can employ substitution. The next proposition yields, as a corollary, the more substantial result that every CFG is strongly equivalent to a TAG in regular form in which substitution plays no role.

Proposition 4 The class of TAGs in regular form can lexicalize CFGs.

Proof: This follows directly from the equivalent lemma in Schabes and Waters (1993a). The construction given there builds a left-corner derivation graph (LCG). Vertices in this graph are the terminals and non-terminals of G. Edges correspond to the productions of G in the following way: there is an edge from X to Y labeled X → Yα iff X → Yα is a production in G. Paths through this graph that end on a terminal characterize the left-corner derivations in G.

The construction proceeds by building a set of elementary initial trees corresponding to the simple (acyclic) paths through the LCG that end on terminals. These capture the non-recursive left-corner derivations in G. The set of auxiliary trees is built in two steps. First, an auxiliary tree is constructed for every simple cycle in the graph. This gives a set of auxiliary trees that is sufficient, with the initial trees, to derive every tree generated by the CFG. This set of auxiliary trees, however, may include some which are not lexicalized, that is, in which every frontier node other than the foot is marked for substitution. These can be lexicalized by substituting every corresponding elementary initial tree at one of those frontier nodes.

Call the LCFG constructed for G by this method G′. For our purposes, the important point of the construction is that every simple cycle in the LCG is represented by an elementary auxiliary tree. Since the spines of auxiliary trees derivable in G′ correspond to cycles in the LCG, every proper segment of an auxiliary tree derivable in G′ is a simple cycle in the LCG. Thus every such proper segment is derivable in G′ and G′ is in regular form. □

The use of a graph which captures left-corner derivations as the foundation of this construction guarantees that the auxiliary trees it builds will be left-recursive (will have the foot as the left-most leaf). It is a requirement of LCFGs that all auxiliary trees be either left- or right-recursive. Thus, while other derivation strategies may be employed in constructing the graph, these must always expand either the left- or right-most child at each step. All that is required for the construction to produce a TAG in regular form, though, is that every simple cycle in the graph be realized in an elementary tree. The resulting grammar will be in regular form no matter what (complete) derivation strategy is captured
This gives a set of auxiliary trees that is suffi- cient, with the initial trees, to derive every tree gener- ated by the CFG. This set of auxiliary trees, however, may include some which are not lexicalized, that is, in which every frontier node other than the foot is marked for substitution. These can be lexicalized by substitut- ing every corresponding elementary initial tree at one of those frontier nodes. Call the LCFG constructed for G by this method G'. For our purposes, the important point of the construction is that every simple cycle in the LCG is represented by an elementary auxiliary tree. Since the spines of auxiliary trees derivable in G' cor- respond to cycles in the LCG, every proper segment of an auxiliary tree derivable in G' is a simple cycle in the LCG. Thus every such proper segment is derivable in G' and G' is in regular form. [] The use of a graph which captures left-corner deriva- tions as the foundation of this construction guarantees that the auxiliary trees it builds will be left-recursive (will have the foot as the left-most leaf.) It is a require- ment of LCFGs that all auxiliary trees be either left- or right-recursive. Thus, while other derivation strate- gies may be employed in constructing the graph, these must always expand either the left- or right-most child at each step. All that is required for the construction to produce a TAG in regular form, though, is that every simple cycle in the graph be realized in an elementary tree. The resulting grammar will be in regular form no 159 matter what (complete) derivation strategy is captured ill the graph. In particular, this admits the possibility of generating an LTAG in which the anchor of each el- ementary tree is some linguistically motivated "head". Corollary 1 For every CFG G there is a TAG G ~ in regular form in which no node is marked for substitu- tion, such that the set of derivation trees for G is exactly T(G'). 
This follows from the fact that the step used to lexicalize the elementary auxiliary trees in Schabes and Waters's construction can be applied to every node (in both initial and auxiliary trees) which is marked for substitution. Paradoxically, to establish the corollary it is not necessary for every elementary tree to be lexicalized. In Schabes and Waters's lemma G is required to be finitely ambiguous and to not generate the empty string. These restrictions are only necessary if G' is to be lexicalized. Here we can accept TAGs which include elementary trees in which the only leaf is the foot node or which yield only the empty string. Thus the corollary applies to all CFGs without restriction.

Regular Form is Decidable

We have established that regular form gives a class of TAGs that is strongly equivalent to CFGs (modulo projection), and that LTAGs in this class lexicalize CFGs. In this section we provide an effective procedure for deciding if a given TAG is in regular form. The procedure is based on a graph that is not unlike the LCG of the construction of Schabes and Waters. If G is a TAG, the Spine Graph of G is a directed multi-graph on a set of vertices, one for each non-terminal in G. If βi is an elementary auxiliary tree in G and the spine of βi is labeled with the sequence of non-terminals (X0, X1, ..., Xn) (where X0 = Xn and the remaining Xj are not necessarily distinct), then there is an edge in the graph from each Xj to Xj+1 labeled (βi, j, ti,j), where ti,j is that portion of βi that is dominated by Xj but not properly dominated by Xj+1. There are no other edges in the graph except those corresponding to the elementary auxiliary trees of G in this way. The intent is for the spine graph of G to characterize the set of auxiliary trees derivable in G by adjunction along the spine.
Clearly, any vertex that is labeled with a non-terminal for which there is no corresponding auxiliary tree plays no active role in these derivations and can be replaced, along with the pairs of edges incident on it, by single edges. Without loss of generality, then, we assume spine graphs of this reduced form. Thus every vertex has at least one edge labeled with a 0 in its second component incident from it. A well-formed-cycle (wfc) in this graph is a (non-empty) path traced by the following non-deterministic automaton:

• The automaton consists of a single push-down stack. Stack contents are labels of edges in the graph.
• The automaton starts on any vertex of the graph with an empty stack.
• At each step, the automaton can move as follows:
  - If there is an edge incident from the current vertex labeled (βi, 0, ti,0) the automaton can push that label onto the stack and move to the vertex at the far end of that edge.
  - If the top of stack contains (βi, j, ti,j) and there is an edge incident from the current vertex labeled (βi, j+1, ti,j+1) the automaton may pop the top of stack, push (βi, j+1, ti,j+1) and move to the vertex at the end of that edge.
  - If the top of stack contains (βi, j, ti,j) but there is no edge incident from the current vertex labeled (βi, j+1, ti,j+1) then the automaton may pop the top of stack and remain at the same vertex.
• The automaton may halt if its stack is empty.
• A path through the graph is traced by the automaton if it starts at the first vertex in the path and halts at the last vertex in the path visiting each of the vertices in the path in order.

Each wfc in a spine graph corresponds to the auxiliary tree built by concatenating the third components of the labels on the edges in the cycle in order. Then every wfc in the spine graph of G corresponds to an auxiliary tree that is derivable in G by adjunction along the spine only. Conversely, every such auxiliary tree corresponds to some wfc in the spine graph.
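The automaton just described can be simulated directly. The following rough sketch (an illustrative encoding, not from the paper) enumerates the vertex sequences of well-formed cycles up to a bound on the number of edge traversals; an edge from u to v labeled (βi, j, ti,j) is stored as (v, (i, j)) in edges[u], with the tree segments ti,j omitted.

```python
# Illustrative sketch: enumerate vertex sequences traced by the wfc
# automaton, up to max_len edge traversals.  An edge labeled
# (beta_i, j, t_ij) from u to v is stored as edges[u] = [(v, (i, j)), ...].
def wfcs(edges, max_len):
    found = set()
    def step(v, stack, path):
        if len(path) > max_len:
            return
        if path and not stack:
            found.add(tuple(path) + (v,))   # halt move: stack is empty
        for dest, (b, j) in edges.get(v, []):
            if j == 0:                      # push move: enter tree b
                step(dest, stack + [(b, 0)], path + [v])
        if stack:
            b, j = stack[-1]
            nxt = [(dest, lab) for dest, lab in edges.get(v, [])
                   if lab == (b, j + 1)]
            for dest, lab in nxt:           # pop-then-push move
                step(dest, stack[:-1] + [lab], path + [v])
            if not nxt:                     # pop move: stay at v
                step(v, stack[:-1], path)
    for v in list(edges):
        step(v, [], [])
    return found

# One auxiliary tree beta_0 with spine A, B, A: edges A->B and B->A.
g = {"A": [("B", (0, 0))], "B": [("A", (0, 1))]}
```

In the full construction each recorded sequence would correspond to the auxiliary tree obtained by concatenating the omitted segments ti,j along the traversed edges.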
A simple cycle in the spine graph, by definition, is any minimal cycle in the graph that ignores the labels of the edges but not their direction. Simple cycles correspond to auxiliary trees in the same way that wfcs do. Say that two cycles in the graph are equivalent iff they correspond to the same auxiliary tree. The simple cycles in the spine graph for G correspond to the minimal set of elementary auxiliary trees in any presentation of G that is closed under the regular form condition in the following way.

Lemma 5 A TAG G is in regular form iff every simple cycle in its spine graph is equivalent to a wfc in that graph.

Proof: (If every simple cycle is equivalent to a wfc then G is in regular form.) Suppose every simple cycle in the spine graph of G is equivalent to a wfc and some tree of the form γ1 in Figure 2 is derivable in G. Wlog, assume the tree is derivable by adjunction along the spine only. Then there is a wfc in the spine graph of G corresponding to that tree that is of the form (X0, ..., Xk, ..., Xn) where X0 = Xk = Xn, 0 ≠ k ≠ n, and Xi ≠ X0 for all 0 < i < k. Thus (X0, ..., Xk) is a simple cycle in the spine graph. Further, (Xk, ..., Xn) is a sequence of one or more such simple cycles. It follows that both (X0, ..., Xk) and (Xk, ..., Xn) are wfcs in the spine graph and thus both γ2 and γ3 are derivable in G.

[Figure 3: Regular Form is Decidable]

(If G is in regular form then every simple cycle corresponds to a wfc.) Assume, wlog, the spine graph of G is connected. (If it is not we can treat G as a union of grammars.) Since the spine graph is a union of wfcs it has an Eulerian wfc (in the usual sense of Eulerian). Further, since every vertex is the initial vertex of some wfc, every vertex is the initial vertex of some Eulerian wfc. Suppose there is some simple cycle X0 (β0, l0, t0) X1 (β1, l1, t1) ···
Xn (βn, ln, tn) X0 where the Xj are the vertices and the tuples are the labels on the edges of the cycle. Then there is a wfc starting at X0 that includes the edge (β0, l0, t0), although not necessarily initially. In particular the Eulerian wfc starting at X0 is such a wfc. This corresponds to a derivable auxiliary tree that includes a proper segment beginning with t0. Since G is in regular form, that proper segment is a derivable auxiliary tree. Call this γ0 (see Figure 3). The spine of that tree is labeled X0, X1, ..., X0, where anything (other than X0) can occur in the ellipses. The same cycle can be rotated to get a simple cycle starting at each of the Xj. Thus for each Xj there is a derivable auxiliary tree starting with tj. Call it γj. By a sequence of adjunctions of each γj at the second node on the spine of γj−1 an auxiliary tree for X0 is derivable in which the first proper segment is the concatenation of t0, t1, ..., tn. Again, by the fact that G is in regular form, this proper segment is derivable in G. Hence there is a wfc in the spine graph corresponding to this tree. []

Proposition 5 For any TAG G the question of whether G is in regular form is decidable. Further, there is an effective procedure that, given any TAG, will extend it to a TAG that is in regular form.

Proof: Given a TAG G we construct its spine graph. Since the TAG is finite, the graph is as well. The TAG is in regular form iff every simple cycle is equivalent to a wfc. This is clearly decidable. Further, the set of elementary trees corresponding to simple cycles that are not equivalent to wfcs is effectively constructible. Adding that set to the original TAG extends it to regular form. []

Of course the set of trees generated by the extended TAG may well be a proper superset of the set generated by the original TAG.

Discussion

The LCFGs of Schabes and Waters employ a restricted form of adjunction and a highly restricted form of elementary auxiliary tree.
The auxiliary trees of LCFGs can only occur in left- or right-recursive form, that is, with the foot as either the left- or right-most node on the frontier of the tree. Thus the structures that can be captured in these trees are restricted by the mechanism itself, and Schabes and Waters (in (1993a)) cite two situations where an existing LTAG grammar for English (Abeillé et al., 1990) fails to meet this restriction. But while it is sufficient to assure that the language generated is context-free and cubic-time parsable, this restriction is stronger than necessary. TAGs in regular form, in contrast, are ordinary TAGs utilizing ordinary adjunction. While it is developed from the notion of regular adjunction, regular form is just a closure condition on the elementary trees of the grammar. Although that closure condition assures that all improper elementary auxiliary trees are redundant, the form of the elementary trees themselves is unrestricted. Thus the structures they capture can be driven primarily by linguistic considerations. As we noted earlier, the restrictions on the form of the trees in an LCFG significantly constrain the way in which CFGs can be lexicalized using Schabes and Waters's construction. These constraints are eliminated if we require only that the result be in regular form, and the lexicalization can then be structured largely on linguistic principles.

On the other hand, regular form is a property of the grammar as a whole, while the restrictions of LCFG are restrictions on individual trees (and the manner in which they are combined). Consequently, it is immediately obvious if a grammar meets the requirements of LCFG, while it is less apparent if it is in regular form. In the case of the LTAG grammar for English, neither of the situations noted by Schabes and Waters violate regular form themselves. As regular form is decidable, it is reasonable to ask whether the grammar as a whole is in regular form.
A positive result would identify the large fragment of English covered by this grammar as strongly context-free and cubic-time parsable. A negative result is likely to give insight into those structures covered by the grammar that require context-sensitivity.

One might approach defining a context-free language within the TAG formalism by developing a grammar with the intent that all trees derivable in the grammar be derivable by regular adjunction. This condition can then be verified by the algorithm of the previous section. In the case that the grammar is not in regular form, the algorithm proposes a set of additional auxiliary trees that will establish that form. In essence, this is a prediction about the strings that would occur in a context-free language extending the language encoded by the original grammar. It is then a linguistic issue whether these additional strings are consistent with the intent of the grammar.

If a grammar is not in regular form, it is not necessarily the case that it does not generate a recognizable set. The main unresolved issue in this work is whether it is possible to characterize the class of TAGs that generate local sets more completely. It is easy to show, for TAGs that employ adjoining constraints, that this is not possible. This is a consequence of the fact that one can construct, for any CFG, a TAG in which the path language is the image, under a bijective homomorphism, of the string language generated by that CFG. Since it is undecidable if an arbitrary CFG generates a regular string language, and since the path language of every recognizable set is regular, it is undecidable if an arbitrary TAG (employing adjoining constraints) generates a recognizable set. This ability to capture CFLs in the string language, however, seems to depend crucially on the nature of the adjoining constraints.
It does not appear to extend to pure TAGs, or even TAGs in which the adjoining constraints are implemented as monotonically growing sets of simple features. In the case of TAGs with these limited adjoining constraints, then, the questions of whether there is a class of TAGs which includes all and only those which generate recognizable sets, or if there is an effective procedure for reducing any such TAG which generates a recognizable set to one in regular form, are open.

References

Anne Abeillé, Kathleen M. Bishop, Sharon Cote, and Yves Schabes. 1990. A lexicalized tree adjoining grammar for English. Technical Report MS-CIS-90-24, Department of Computer and Information Science, University of Pennsylvania.

Ferenc Gécseg and Magnus Steinby. 1984. Tree Automata. Akadémiai Kiadó, Budapest.

Aravind K. Joshi and Yves Schabes. 1992. Tree-adjoining grammars and lexicalized grammars. In M. Nivat and A. Podelski, editors, Tree Automata and Languages, pages 409-431. Elsevier Science Publishers B.V.

Yves Schabes and Aravind K. Joshi. 1991. Parsing with lexicalized tree adjoining grammar. In Masaru Tomita, editor, Current Issues in Parsing Technology, chapter 3, pages 25-47. Kluwer Academic Publishers.

Yves Schabes and Richard C. Waters. 1993a. Lexicalized context-free grammars. In 31st Annual Meeting of the Association for Computational Linguistics (ACL'93), pages 121-129, Columbus, OH. Association for Computational Linguistics.

Yves Schabes and Richard C. Waters. 1993b. Lexicalized context-free grammar: A cubic-time parsable, lexicalized normal form for context-free grammar that preserves tree structure. Technical Report 93-04, Mitsubishi Electric Research Laboratories Cambridge Research Center, Cambridge, MA, June.

Yves Schabes, Anne Abeillé, and Aravind K. Joshi. 1988. Parsing strategies with 'lexicalized' grammars: Application to tree adjoining grammars.
In Proceedings of the 12th International Conference on Computational Linguistics (COLING'88), Budapest, Hungary. Association for Computational Linguistics.

Yves Schabes. 1990. Mathematical and Computational Aspects of Lexicalized Grammars. Ph.D. thesis, Department of Computer and Information Science, University of Pennsylvania.

J. W. Thatcher. 1967. Characterizing derivation trees of context-free grammars through a generalization of finite automata theory. Journal of Computer and System Sciences, 1:317-322.

K. Vijay-Shanker and David Weir. 1993. Parsing some constrained grammar formalisms. Computational Linguistics, 19(4):591-636.
1994
ON DETERMINING THE CONSISTENCY OF PARTIAL DESCRIPTIONS OF TREES

Thomas L. Cornell
Cognitive Science Program
University of Arizona
Tucson, AZ 85721
[email protected]

Abstract¹

We examine the consistency problem for descriptions of trees based on remote dominance, and present a consistency-checking algorithm which is polynomial in the number of nodes in the description, despite disjunctions inherent in the theory of trees. The resulting algorithm allows for descriptions which go beyond sets of atomic formulas to allow certain types of disjunction and negation.

INTRODUCTION

In Marcus, Hindle & Fleck (1983), the authors proposed an approach to syntactic tree structures which took the primary structural relation to be remote dominance rather than immediate dominance. Recently, researchers have shown a revived interest in variants of Marcus et al.'s D-Theory, most likely due to the availability of approaches and techniques developed in the study of feature structures and their underlying logics. For example, both Rogers & Vijay-Shanker (1992) and Cornell (1992) present formal treatments of many notions which Marcus et al. (1983) treated only informally and incompletely. Furthermore, work on the psycholinguistic implications of this approach has continued apace (Weinberg 1988; Gorrell 1991; Marcus & Hindle 1990), making all the more necessary sustained foundational work in the theory of description-based tree-building applications (parsers, generators, etc.).

This paper addresses one particular problem that arises in this approach to tree building. As with feature-structures, the essential operation here is the combination of two collections of partial information about the syntactic structure of an expression. It may happen that the two collections to be combined contain contradictory information.

¹ Many thanks to Dick Oehrle, Ed Stabler, Drew Moshier and Mark Johnson for comments, discussion and encouragement. Theirs the gratitude, mine the fault.
For example one might contain the assertion that "node 7 dominates node 12" while the other claims that "node 12 precedes node 7". No tree structure can satisfy both these constraints. The operation of description combination is thus not simple set union, but, like unification, involves taking a least upper bound in a semi-lattice where lub's are not everywhere defined. Both Rogers & Vijay-Shanker (1992) and Cornell (1992) propose to solve the D-Theoretic consistency problem by using essentially Tableau-based approaches. This can lead to combinatorial explosion in the face of disjunctions inherent in the theory of trees. But as it happens, proof techniques designed to handle general disjunctions are more powerful than we need; the disjunctions that arise from the theory of trees are of a restricted kind which can be handled by strictly polynomial means. We will see that we can efficiently handle richer notions of description than those in the "classical" D-Theory of Marcus, et al. (1983).

D-THEORY AND TREE THEORY

DESCRIPTION LANGUAGE

We will make use of the following description language ℒ. Define the set of basic relation names, R, as:

b -- "below" (i.e., dominated-by)
d -- "dominates"
e -- "equals"
f -- "follows" (i.e., preceded-by)
p -- "precedes"

We define an algebra on relation names as follows.

(S1 ∨ S2)(x,y) =def the collection of relation names in either S1 or S2.
(S1 ∧ S2)(x,y) =def the collection of relation names in both S1 and S2.
S′(x,y) =def the collection of relation names not in S.

We then define the full set of compound relation name expressions R* as the closure of the basic relation names under ∧, ∨ and ′. A formula of ℒ is then an element of R* applied to a pair of node names. We will often refer to the compound relation name expression in a formula S(x,y) as a constraint on the pair x, y. Semantically, we treat S(x,y) as satisfiable if we can assign the denotata of the pair x, y to at least one of the relations denoted by members of S.
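Concretely, under the natural assumption that compound relation names are just sets of basic names, this algebra amounts to union, intersection and complement over {b, d, e, f, p}. A minimal sketch (the function names are hypothetical):

```python
# Compound relation names as subsets of the five basic names; the
# operations v, ^ and ' are set union, intersection and complement.
R = frozenset("bdefp")

def join(s1, s2):        # (S1 v S2)
    return s1 | s2

def meet(s1, s2):        # (S1 ^ S2)
    return s1 & s2

def comp(s):             # S'
    return R - s

# e'(x,y), i.e. bdfp(x,y): the pair stands in some relation other than
# equality.  The meet of incompatible atoms is the empty, unsatisfiable
# constraint.
assert comp(frozenset("e")) == frozenset("bdfp")
assert meet(frozenset("d"), frozenset("p")) == frozenset()
```

On this encoding S ≤ T is just the subset relation, matching the semantic observation that enlarging a constraint preserves satisfiability.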
On this semantics, if S(x,y) is satisfiable and S ≤ T, then T(x,y) is satisfiable as well. Clearly the empty constraint (x,y) is never satisfiable. (Atoms of the form e(x,y) are satisfiable if and only if x and y denote identical members of the domain of discourse. Atoms of the form b(x,y) and f(x,y) are to be considered equivalent to d(y,x) and p(y,x), respectively.) A description is a finite set of formulas. If a description contains only formulas with a basic relation name, we will call it classical, since this is the type of description considered in Marcus et al. (1983).

AXIOMS

Note that such structures are not guaranteed to be trees. Therefore we make use of the following fragment of an axiomatization of tree structures, which we will assume in the background of all that follows, and formalize in the next section.

Strictness. Dominance and precedence are strict partial orders, i.e., transitive and irreflexive relations.
Equality. We assume that equality is reflexive, and that we can freely substitute equals for equals.
Exhaustiveness. Every pair of nodes in a tree stand in at least one of the five possible relations. I.e. R(x,y) for all x and y.
Inheritance. All nodes inherit the precedence properties of their ancestors. So if p(x,y) and d(y,z), then p(x,z) as well.

A number of familiar properties of trees follow from the above system. Inheritance assures both the non-tangling of tree branches and the impossibility of upward branching ('V-shaped') configurations. Inheritance, Transitivity, Substitution of Equals and Exhaustiveness jointly derive the property of Exclusiveness, which states that every pair of nodes is related in at most one way. (Note that it is Exclusiveness which assures the soundness of our use of ∧.) A less familiar property, which we will make some use of, is roughly parallel to Inheritance; Upwards Inheritance states that if x dominates y and y precedes (follows) z, then x dominates or precedes (follows) z.
Note that this system is not meant to be an axiomatic definition of trees; it lacks a Rootedness condition, and it allows infinite and densely ordered structures. It is specifically adapted to the satisfiability problem, rather than the validity problem. It is relatively straightforward to show that, from any finite atomic ℒ-description satisfying these conditions, we can construct a finite tree or a precedence ordered finite forest of finite trees (which can be extended to a finite tree by the addition of a root node). So this system is complete as far as satisfiability is concerned. Briefly, if a set of formulas satisfies all of the above constraints, then we can (1) construct a new description over the quotient node-space modulo e; (2) list the dominance chains; (3) add a root if necessary; (4) noting that the dominance maximal elements under the root must be totally precedence ordered (they must be ordered and they cannot be dominance ordered or they would not be maximal), number them accordingly; (5) apply the same procedure to the dominance ideals generated by each of the root's daughters. From the resulting numbering we can construct a "tree domain" straightforwardly. The Inheritance property assures us that dominance chains are non-tangled, so that the ideal generated by any node will be disjoint from the ideal generated by any node precedence-ordered with respect to the first. Therefore no node will receive two numbers, and, by Exhaustiveness, every node will receive a number.

DEDUCTION WITH DESCRIPTIONS

There is a strong formal parallel among the axioms of Transitivity, Substitution of Equals, and Inheritance: each allows us to reason from a pair of atomic formulas to a single atomic formula. Thus they allow us to reason from classical descriptions to (slightly larger) classical descriptions. Let us refer to these axioms as generators. The reason for adopting ℒ as a description language, rather than the simpler language of Marcus et al.
(1983), is that we can now treat the No Upward Branching property ("if x and z both dominate y then x dominates z or z dominates x or they are equal") and the Upwards Inheritance property as generators. They allow us to reason from pairs of atomic formulas (e.g., d(x,y) and p(y,z)) to compound formulas (e.g., dp(x,z)). This means that we can express the consequences of any pair of atomic formulas as a formula of ℒ, though possibly a compound formula. They are exhibited in Figure 1. Cells corresponding to axioms in the theory are boxed.

[Figure 1. Generator Table.]

For doing formal deductions we will employ a sequent calculus adapted to our description language ℒ. We assume that sequents are pairs of finite sets of formulas, and we can make the further restriction that formulas on the right of the sequent arrow ("succedents") contain at most a single member. The axioms of the calculus we employ are exhibited in Figure 2, and the connective rules in Figure 3.

Structural Axioms:  Γ, A → A
Generator Axioms:   Γ, S1(x,y), S2(y,z) → S3(x,z)  for all instances of the generators
Exhaustiveness:     → R(x,y)  for all x, y

Figure 2. D-Theory Axioms.

A sequent [Γ → Δ] is interpreted as an implication from conjunctions to disjunctions: if everything in Γ is true, then something in Δ must be true. It follows that [→ A] is invariably true, and [Γ →] is invariably false. A sequent calculus proof is a tree (written right side up, with its root on the bottom) labeled with sequents. The theorem to be proved labels its root, the leaves are labeled with axioms, and all the local subtrees must be accepted by some inference rule. A proof that a description Γ0 is inconsistent is a proof of the sequent [Γ0 →].
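As an illustration, the generator table can be viewed as a map from a pair of basic relation names on (x,y) and (y,z) to a relation name expression on (x,z). The sketch below encodes only the generators named in the text (the full Figure 1 covers all twenty-five pairs of basic names); the dictionary encoding is my own.

```python
# A fragment of the generator table.  Keys pair basic relation names on
# (x,y) and (y,z); values are relation name expressions on (x,z).
# Only the generators named in the text are encoded here.
GEN = {
    ("d", "d"): "d",    # Transitivity of dominance
    ("p", "p"): "p",    # Transitivity of precedence
    ("p", "d"): "p",    # Inheritance: p(x,y), d(y,z) |- p(x,z)
    ("f", "d"): "f",    # Inheritance, mirror image
    ("d", "p"): "dp",   # Upwards Inheritance: d(x,y), p(y,z) |- dp(x,z)
    ("d", "f"): "df",   # Upwards Inheritance, mirror image
    ("d", "b"): "bde",  # No Upward Branching: x and z both dominate y
}
for r in "bdefp":       # Substitution of Equals
    GEN[("e", r)] = r
    GEN[(r, "e")] = r
```

For instance, from d(1,2) and p(2,3) the table yields dp(1,3), whose meet with e(1,3) is the empty, unsatisfiable constraint — the pattern exploited in the inconsistency proofs below.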
Note that from Γ → (x,y) we may validly infer Γ →, essentially since (x,y) and the empty succedent both express the empty disjunction.

LΛ (S2 ≤ S1): from Γ, S1(x,y) → Δ infer Γ, S2(x,y) → Δ
RΛ: from Γ → S1(x,y) and Γ → S2(x,y) infer Γ → (S1 ∧ S2)(x,y)
LV: from Γ, S1(x,y) → Δ and Γ, S2(x,y) → Δ infer Γ, (S1 ∨ S2)(x,y) → Δ
RV (S1 ≤ S2): from Γ → S1(x,y) infer Γ → S2(x,y)
L′: from Γ → S(x,y) infer Γ, S′(x,y) →
R′: from Γ, S(x,y) → infer Γ → S′(x,y)

Figure 3. D-Theory Inference Rules.

TWO ALGORITHMS

Suppose we are given an input description Γ0 to check for satisfiability. If it is unsatisfiable, then it contradicts one of the axioms of the tree theory: Exhaustiveness, Reflexivity of Equals, Irreflexivity of Dominance and Precedence, and the Generators. A complete consistency checker must be able to exhaust the consequences of these axioms for Γ0, monitoring for the false formula (x,y).

Both algorithms take as input a description and an integer indicating the number of node-names constrained by the description. In the Prolog implementations that follow, the description is expected to be a list of constraints in normal order, that is, with the first argument lexicographically less than or equal to the second. Thus, assuming we are using integers as node names, the normal order form of d(2,1) will be b(1,2). Furthermore, the description is assumed to be sorted by its node-pairs. This will allow us to use efficient ordered-set manipulations.

For any given set of nodes of size N, we can construct a description which is a filter for violations of Reflexivity, Irreflexivity and Exhaustiveness. We construct ΓN to contain for every pair of nodes xi, xj, i, j ≤ N, e(xi,xj) if i = j, and e′(xi,xj) (i.e., bdfp(xi,xj)) if i ≠ j. We can
e(1,3), dp(1,2), p(2,3) LV f(1,3),d(1,2),p(2,3)~ f(1,3),p(1,2),p(2,3)~ Lv f(1,3), dp(1,2), p(2,3) --, el(l,3), dp(1,2), p(2,3) --, Figure 5. {ef(1,3),dp(1,2), p(2,3)} is Inconsistent (A RA-LV Proof). determine that a description F0 violates either Reflexivity, lrreflexivity or Exhaustiveness simply by taking its pointwise meet F0 I"1 FN .2 If a description is in violation of the Exhaustiveness condition, then it contains some formula S(x,21) with some part of S not given in R*. In that case, taking its meet with anything ~< R(x,21) will prune away the offending part. Similarly, if a constraint on a reflexive pair of nodes S(x,x) fails to have eE S, then taking its meet with e(x,x) will yield (x,x). Finally, taking the meet of S(x,y), x # 2t, with e'(x,21) will yield (x,21) if S = e; in arty case it will have the useful effect of pruning e out of S. Therefore both algorithms begin by constructing F N and then taking its meet with the input description. This has the extra side effect that any pair of nodes x and 21 not explicitly constrained in the input will now be explicitly constrained. EXTEND : TOP-DOWN BACKTRACKING SEARCH FOR CONSISTENT MAXIMAL EXTENSIONS Given that we have begun by taking F0 I-I F N, we have only the generators left to check F0 against. We can think of the generator table as defining a function from pairs of atomic formulas to consequences. To use it we must first have atomic formulas. Def'n: A description r' is a maximal extension of a description F if, for every formula S(x,y) in 2 We can assume that any pair of nodes x, y not explicitly constrained in the input is implicitly constrained by R(x,y). Of course, (RAe')= e', so this assumption just amounts to setting unmentioned pairs of (distinct) nodes to e'(x,y). FRFN, r' contains s(x,y) for some sES. An obvious solution is to enumerate the maximal extensions of F0 F'I F N and feed them to the generators. 
If any such extension passes the generators, then it is satisfiable, and therefore it is a witness to the satisfiability of F0. If the extension is unsatisfiable, then it must violate at least one of the generators. Because a maximal extension is a total assignment of node-pairs to relations, a single application of a well-chosen generator will suffice to derive a contradiction. And so a single pass through the complete set of applicable generators should be sufficient to decide if a given maximal extension is consistent. Thus, if the input description F0 is inconsistent, then there is a proof of F0--, in which every branch of the proof ends in a sub- proof like that in Figure 4. There we have the simple description {d(1,2),e(1,3),p(2,3)}, which gives us dp(1,3), by a generator (Upwards Inheritance, in this case), and e(1,3), by a structural axiom. Combining these by an invocation of RA we get the false formula (1,3). The roots of these sub-proofs can be combined using LV until we eventually build up the input description on the left, proving F0-', as in Figure 5. The following fragment of a Prolog implementation of max_extension/3 can be seen as implementing a backwards chaining search for such a "RA-LV" proof. The input to both extend and to close (see below, next section) is assumed to be an ~-description together with an integer giving the number of node-names subject to the description. The node-count is used to construct the appropriate F N for this description. Note 166 that, aside from implementing pointwise /-1, merge descs/3 checks for the derivation of an empty constraint, and fails if that happens. The real work is then done by extend/3, which is a recursion on an Agenda. The agenda is initialized to the input description. As individual constraints are narrowed, they are added to the agenda so as to implement constraint propagation. 
max_extension( D0, N, Extension ) :-
    reflexive_rule( N, Filter_N ),
    merge_descs( D0, Filter_N, D1 ),
    Agenda = D1,
    extend( Agenda, D1, Extension ).

extend( [], X, X ).
extend( [C0|Cs], D0, X ) :-
    consequences( C0, D0, Conseqs ),
    meet_rule( Conseqs, D0, D1, NewCons ),
    merge_descs( NewCons, Cs, Agenda1 ),
    extend( Agenda1, D1, X ).

that meet_rule/4, in the second clause of extend/3, differs from merge_descs/3 only in (a) sorting its first argument and (b) deriving both the merged description (D1) and a list of those consequences which actually had some effect on D0. Both merge_descs/3 and meet_rule/4 are based on routines for ordered set union from O'Keefe (1990). The main difference is that ordering is defined on the node-pairs of the constraint, rather than on the term expressing the constraint as a whole; equality is defined so that two constraints are equal if they constrain the same node pair, and if two formulas are 'equal' in this sense, then the output contains the meet of their respective relation name expressions. The truly new consequences derived by meet_rule/4 are then added to the remaining agenda (Cs) with another call to merge_descs/3. (If NewCons were merely appended to Cs, we could have two constraints on the same pair of nodes in the agenda at once, either of which may be less tightly constrained than the result of merging the two instances.) Extend/3 thus both consumes items off the agenda (C0) and adds new items (NewCons). However, each new consequence, if it is truly novel, represents the narrowing of a constraint; since each pair starts with a maximum of four options, clearly we will eventually run out of options to remove; NewCons will be empty, the remaining agenda will eventually be consumed, and the program will halt. The core of extend/3 is consequences/3, which determines for any given constraint what consequences it has when paired with each of the constraints in the description.
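The behavior of the filter description ΓN and of the pointwise meet performed by merge_descs/3 can be sketched outside Prolog. The following is a hypothetical Python rendering, not the paper's code: a description maps normal-ordered node pairs to sets of relation names, the filter forces e on reflexive pairs and e′ elsewhere, and merging takes pointwise meets, failing when an empty constraint is derived.

```python
# Hypothetical rendering of the filter Gamma_N and the pointwise meet.
# A description maps pairs (i, j) with i <= j to subsets of "bdefp";
# an empty subset signals inconsistency.
def filter_desc(n):
    return {(i, j): frozenset("e") if i == j else frozenset("bdfp")
            for i in range(1, n + 1) for j in range(i, n + 1)}

def merge_descs(d1, d2):
    merged = dict(d1)
    for pair, rels in d2.items():
        merged[pair] = merged.get(pair, frozenset("bdefp")) & rels
        if not merged[pair]:
            return None        # empty constraint derived: merge fails
    return merged

# e(1,2) conflicts with the filter's e'(1,2), so the merge fails.
d = merge_descs({(1, 2): frozenset("e")}, filter_desc(2))
```

Unmentioned pairs default to the full set R, matching the footnoted assumption that they are implicitly constrained by R(x,y).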
Consequences/3 has two clauses; the first handles compound formulas, while the second handles atomic formulas. The second clause of consequences/3 invokes the Splitting Rule, which implements LV. Note that, instead of exhausting the consequences of the Splitting Rule and then applying the Generator Rule, we apply the Generator Rule whenever we can. This is because it can act to prune away options from its consequents, thus minimizing the combinatorial explosion lurking behind the Splitting Rule. Furthermore, if an application of the Generator Rule does lead to the discovery of an inconsistency, then the program backtracks to its last application of the Splitting Rule, in effect pruning away from its search tree all further consequences of its inconsistent choice.

consequences( C, _D, Consequences ) :-
    compound_formula( C ),
    splitting_rule( C, Consequences ).
consequences( C, D, Consequences ) :-
    atomic_formula( C ),
    generator_rule( D, C, Consequences ).

atomic_formula( [_]:(_,_) ).
compound_formula( [_,_|_]:(_,_) ).

splitting_rule( C, [Assumption] ) :-
    C = Rels:Nodes,
    member( R, Rels ),
    Assumption = [R]:Nodes.

The heart of consequences/3 is the Generator Rule, implemented as generator_rule/3. It scans the current description for formulas which form a connected pair with its second argument. Note that in all our examples, we have carefully presented inputs to the generators as S1(x,y), S2(y,z). Such a combination can be looked up directly in the generator table. However, note that S1(x,y), S2(z,y) is no less a connected pair. In order to match it to the generator table, though, we need to invert the second member, giving S2⁻¹(y,z). This is done by connected_order/4, which succeeds, returning the connected form of the formulas, if they have a connected form, and fails otherwise. If it succeeds, then there is an entry in the generator table which gives the consequence of that connected pair. This consequence (XZ) is then placed in normal order (C3), and added to the output list of consequences. If C2 is an unconnected atom, or a compound formula, it is skipped. Note that skipping compound formulas does not affect the completeness of the algorithm. Every agenda item leads a dual life: as an agenda item, and as a member of the current description. The ignored compound formula will eventually be subjected to the Splitting Rule, the result being placed on the agenda. It will then eventually be paired with C2's entry in the description by the Generator Rule. The only difference will be in which formula is the left antecedent and which the right; but that doesn't matter, since they'll be converted to connected form in any case, and their result will be converted to normal order.³
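The inversion used by connected_order and the normal-order convention can be sketched as follows. This is a hypothetical rendering, not the paper's code; it relies only on the stated equivalences that b(x,y) and f(x,y) abbreviate d(y,x) and p(y,x), and that e is its own inverse.

```python
# Hypothetical sketch of constraint inversion and normal order.  A
# constraint is (rels, (x, y)) with rels a set drawn from "bdefp".
INV = {"b": "d", "d": "b", "f": "p", "p": "f", "e": "e"}

def invert(c):
    rels, (x, y) = c
    return frozenset(INV[r] for r in rels), (y, x)

def normal_order(c):
    rels, (x, y) = c
    return c if x <= y else invert(c)

# d(2,1) in normal order is b(1,2), as in the text.
assert normal_order((frozenset("d"), (2, 1))) == (frozenset("b"), (1, 2))
```

Since the relation list is bounded by five names, inversion is a constant-time operation, as the complexity discussion below assumes.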
This consequence (XZ) is then placed in normal order (C3), and added to the output list of consequences. If C2 is an unconnected atom, or a compound formula, it is skipped. Note that skipping compound formulas does not affect the completeness of the algorithm. Every agenda item leads a dual life: as an agenda item, and as a member of the current description. The ignored compound formula will eventually be subjected to the Splitting Rule, the result being placed on the agenda. It will then eventually be paired with C2's entry in the description by the Generator Rule. The only difference will be in which formula is the left antecedent and which the right; but that doesn't matter, since they'll be converted to connected form in any case, and their result will be converted to normal order (see footnote 3).

generator_rule( [], _C, [] ).
generator_rule( [C2|Rest], C1, [C3|Conseqs] ) :-
    atomic_formula( C2 ),
    connected_order( C1, C2, XY, YZ ),
    gen( XY, YZ, XZ ),
    normal_order( XZ, C3 ),
    generator_rule( Rest, C1, Conseqs ).
generator_rule( [C2|Rest], C1, Conseqs ) :-
    atomic_formula( C2 ),
    \+ connected_order( C1, C2, _, _ ),
    generator_rule( Rest, C1, Conseqs ).
generator_rule( [C2|Rest], C1, Conseqs ) :-
    compound_formula( C2 ),
    generator_rule( Rest, C1, Conseqs ).

Every rule applied in this procedure is based on a rule in the associated sequent calculus. The Splitting Rule is just LV; the Meet Rule is RA; and the Generator Rule is just the application of an axiom. So there can be little doubt that the algorithm is a sound implementation of a search for an LV-RA proof of F0 →. That it is complete follows from the fact that consistent maximal extensions are Hintikka sets. In particular, every generator F, A, B → C has the same truth conditions as the set of formulas F ∪ {¬A ∨ ¬B ∨ C}. So a maximal extension is a Hintikka set if it contains either ¬A or ¬B or C for every generator.
The exhaustiveness of our search assures this: every pair of constraints is checked at least once to see if it matches a generator. If it does not, then the extension must contain either ¬A or ¬B. If it does, then the extension contains A and B, and so it must also contain C, or be found inconsistent by the Meet Rule/RA.

[Footnote 3: In fact, every connected pair has two connected forms: S1(x,y), S2(y,z) and T2(z,y), T1(y,z). Unsurprisingly, in this case the output of the generator table for T2 and T1 will be the inverse of what it is for S1 and S2. In either case, the output will be placed in normal order before being entered into the description, so we have the required commutativity.]

However, completeness is purchased at the cost of the complexities of exhaustive search. Note that the Splitting Rule is the only source of non-determinism in the program. All of the routines whose definitions were left out are deterministic. The ordered set manipulations are linear in the size of the combined input lists; the sort called by the Meet Rule is just a variant of merge-sort, and so of O(N log N) complexity; the many inversions which may have to be done are linear in the length of the constraint list, which is bounded from above by 4, so they can be treated as constant-time operations. It is only the Splitting Rule that causes us trouble. The second algorithm attempts to address this problem.

CLOSE: POLYNOMIAL SEARCH FOR AN LV-RA PROOF

The basic design problem to be solved is that the generator table accepts only atomic formulas as inputs, while the description whose consistency is at issue may contain any number of compound formulas. Extend solved this problem by 'bringing the description to the generators'; close solves it by 'bringing the generators to the description.' Figure 6 represents a proof that {dp(1,2), bf(1,3), dp(2,3)} is inconsistent. Here the leaves are almost entirely drawn from the generator axioms. Only the rightmost leaf invokes a structural axiom.
The initial stages of the proof involve combining generators by means of RV and LV until the two antecedent atoms match a pair of compound formulas found in the input description (in this case dp(1,2) and dp(2,3)). Then this 'compound generator' is fed into the RA rule together with the corresponding structural axiom, generating our inconsistency. Close, like extend, implements a backwards-chaining search for a proof of the relevant sort. The code for the two algorithms has been made almost identical, for the purposes of this paper. The sole essential difference is that now consequences/3 has only one clause, which invokes the New Generator Rule. The input to new_generator_rule/3 is the same as the input to generator_rule/3: the current description, a constraint looking to be the left antecedent of a generator, and the output consequences. Like the old rule, the new rule searches the current description for a connected formula (now not necessarily atomic).

[Figure 6. An LV-RA Proof that {dp(1,2), bf(1,3), dp(2,3)} is Inconsistent. The proof tree itself is not legible in this copy.]

From the resulting connected pair it constructs a compound generator by taking the cross product of the atomic relations in the compound formulas (in a double loop implemented in distribute/4 and distrib_1/4), feeding the atomic pairs so constructed to the generator table (in distrib_1/4), and joining each successive generator output.
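The double loop can be sketched in Python as follows. The generator table below is a hypothetical two-relation fragment, invented for illustration; it is not the paper's actual generator axioms.

```python
# Sketch of the distribute/distrib_1 double loop: cross every atomic
# relation of the left constraint with every atomic relation of the
# right one through the generator table, and join (union) the outputs.
# GEN is a hypothetical fragment, not the paper's generator axioms.

GEN = {
    ('d', 'd'): {'d'},       # e.g. dominance composed with dominance
    ('d', 'p'): {'d', 'p'},  # illustrative entries
    ('p', 'd'): {'p'},
    ('p', 'p'): {'p'},
}

def compound_generator(s1, s2):
    """Consequent of the compound generator for a connected pair s1:(x,y), s2:(y,z)."""
    out = set()
    for r1 in s1:                 # outer loop: distribute/4
        for r2 in s2:             # inner loop: distrib_1/4
            out |= GEN[(r1, r2)]  # gen/3 lookup, joined as by ord_union/3
    return out
```

The consequent is thus the join over all atomic pairs, exactly as the Prolog's accumulator threads S3a through S3c.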
The result is a compound generator whose consequent represents the join of all the atomic generators that went into its construction.

new_generator_rule( [], _C, [] ).
new_generator_rule( [C2|Rest], C1, [C3|Cons] ) :-
    connected_order( C1, C2, S1:(X,Y), S2:(Y,Z) ),
    distribute( S1:(X,Y), S2:(Y,Z), []:(X,Z), S3:(X,Z) ),
    normal_order( S3:(X,Z), C3 ),
    new_generator_rule( Rest, C1, Cons ).
new_generator_rule( [C2|Rest], C1, Cons ) :-
    \+ connected_order( C1, C2, _, _ ),
    new_generator_rule( Rest, C1, Cons ).

distribute( []:_, _C2, Cons, Cons ).
distribute( [R1|S1]:XY, S2:YZ, S3a:XZ, S3:XZ ) :-
    distrib_1( S2:YZ, [R1]:XY, S3a:XZ, S3b:XZ ),
    distribute( S1:XY, S2:YZ, S3b:XZ, S3:XZ ).

distrib_1( []:_, _C1, Cons, Cons ).
distrib_1( [R2|S2]:YZ, S1:XY, S3a:XZ, S3:XZ ) :-
    gen( S1:XY, [R2]:YZ, S3b:XZ ),
    ord_union( S3a, S3b, S3c ),
    distrib_1( S2:YZ, S1:XY, S3c:XZ, S3:XZ ).

On completion of the double loop, control works its way back to consequences/3 and thence to the Meet Rule, as usual. Unlike extend, close is deterministic. Each agenda item is compared to each item in the current description, and that is that. Furthermore, the complexity of the New Generator Rule is not much greater than before: the double loop we have added can only be executed a maximum of 4 × 4 = 16 times, so we have increased the complexity of the algorithm, considered apart from the Splitting Rule, by at most a constant factor. The question is: at what cost? Before we turn to the analysis of close, however, note that its output is different from that of extend. Extend returns a maximal extension, selected non-deterministically. Close returns the input description, but with values that could not be part of any solution removed. Essentially, close returns the pointwise join of all of F0's consistent maximal extensions. This action, of joining all the atomic consequences of a pair of constraints, does not preserve all of the information present in the atomic consequences. Consider the following description.
F0 = {d(1,2), dp(1,3), dp(2,3)}

F0 is its own closure, and is consistent. However, if we examine its maximal extensions, we note that one of them,

F3 = {d(1,2), p(1,3), d(2,3)},

is inconsistent. There is nothing in F0 to tell us that one combination of the values it presents is impossible. Note that this may not be essential to proving inconsistency: for F0 to be inconsistent, it would have to be the case that all values in some constraint were ruled out in all maximal extensions.

ANALYSIS OF CLOSE

We first argue that close is indeed finding an LV-RA proof of F0 →. Note that in our toy example of Figure 6 only a single 'compound generator' was required to derive the empty solution. In general it may take several compound generators to build a proof of F0 →. Each one functions to remove some of the possibilities from a constraint, until eventually no possibilities are left. Thus we have an LV-RA proof of F0 → if and only if we have a proof of F0 → []:(x,y), for some x and y. Let us call such a (not necessarily unique) pair a critical pair in the proof of F0 →, and its associated constraint in F0 a critical constraint. It is not at all obvious how to choose a critical constraint beforehand, so close must search for it. Every time it calls the New Generator Rule and then calls the Meet Rule to merge in its consequence, it constructs a fragment of an LV-RA proof. We could then take the constraint which it finally succeeds in emptying out as the critical constraint, collect the proof fragments having that constraint as their succedent, and plug them together in the order they were generated to supply us with an LV-RA proof of F0 →. So close will find an LV-RA proof of F0 →, if one exists. It is not clear, however, that such a proof always exists when F0 is unsatisfiable. Close is essentially a variant of the path-consistency algorithms frequently discussed in the Constraint Satisfaction literature (Mackworth, 1977; Allen, 1983).
It is known that path-consistency is not in general a strong enough condition to ensure completeness. There are, however, special cases where path-consistency techniques are complete (Montanari, 1974). So far, close appears to be complete (two years of work have failed to turn up a counterexample), but it is unlikely to yield an easy completeness proof. The algorithm presented here is strongly reminiscent of the algorithm in Allen (1983), which is demonstrably incomplete for the temporal reasoning problems to which he applied it. Therefore, if close is complete for D-theory, it can only be due to a property of the generator axioms, that is, to properties of trees, as contrasted with properties of temporal intervals. Standard approaches of any generality will almost certainly generalize to the temporal reasoning case.

REFERENCES

Allen, James F. 1983. Maintaining Knowledge about Temporal Intervals. Communications of the ACM 26(11): 832-843.

Cornell, Thomas L. 1992. Description Theory, Licensing Theory and Principle-Based Grammars and Parsers. UCLA Ph.D. thesis.

Gorrell, P. 1991. Subcategorization and Sentence Processing. In Berwick, R., S. Abney & C. Tenney, eds., Principle-Based Parsing: Computation and Psycholinguistics. Kluwer, Dordrecht.

Mackworth, Alan K. 1977. Consistency in Networks of Relations. Artificial Intelligence 8: 99-118.

Marcus, Mitchell P., & Donald Hindle. 1990. Description Theory and Intonation Boundaries. In G. T. M. Altmann, ed., Cognitive Models of Speech Processing (pp. 483-512). Cambridge, MA: MIT Press.

Marcus, Mitchell P., Donald Hindle & Margaret M. Fleck. 1983. D-Theory: Talking about Talking about Trees. Proceedings of the 21st Mtg. of the ACL.

Montanari, Ugo. 1974. Networks of Constraints: Fundamental Properties and Applications to Picture Processing. Information Sciences 7: 95-132.

O'Keefe, Richard A. 1990. The Craft of Prolog. Cambridge, MA: MIT Press.

Rogers, James & K. Vijay-Shanker. 1992. Reasoning with Descriptions of Trees.
Proceedings of the 30th Mtg. of the ACL.

Weinberg, A. 1988. Locality Principles in Syntax and in Parsing. MIT Ph.D. dissertation.
A MARKOV LANGUAGE LEARNING MODEL FOR FINITE PARAMETER SPACES

Partha Niyogi and Robert C. Berwick
Center for Biological and Computational Learning
Massachusetts Institute of Technology
E25-201, Cambridge, MA 02139, USA
Internet: [email protected], [email protected]

Abstract

This paper shows how to formally characterize language learning in a finite parameter space as a Markov structure. Important new language learning results follow directly: explicitly calculated sample complexity learning times under different input distribution assumptions (including CHILDES database language input) and learning regimes. We also briefly describe a new way to formally model (rapid) diachronic syntax change.

BACKGROUND MOTIVATION: TRIGGERS AND LANGUAGE ACQUISITION

Recently, several researchers, including Gibson and Wexler (1994), henceforth GW, Dresher and Kaye (1990), and Clark and Roberts (1993), have modeled language learning in a (finite) space whose grammars are characterized by a finite number of parameters, or n-length Boolean-valued vectors. Many current linguistic theories now employ such parametric models explicitly or in spirit, including Lexical-Functional Grammar and versions of HPSG, besides GB variants. With all such models, key questions about sample complexity, convergence time, and alternative modeling assumptions are difficult to assess without a precise mathematical formalization. Previous research has usually addressed only the question of convergence in the limit, without probing the equally important question of sample complexity: it is of little use that a learner can acquire a language if sample complexity is extraordinarily high, hence psychologically implausible. This remains a relatively undeveloped area of language learning theory. The current paper aims to fill that gap. We choose as a starting point the GW Triggering Learning Algorithm (TLA).
Our central result is that the performance of this algorithm and others like it is completely modeled by a Markov chain. We explore the basic computational consequences of this, including some surprising results about sample complexity and convergence time, the dominance of random walk over gradient ascent, and the applicability of these results to actual child language acquisition and possibly language change.

Background. Following Gold (1967) the basic framework is that of identification in the limit. We assume some familiarity with Gold's assumptions. The learner receives an (infinite) sequence of (positive) example sentences from some target language. After each, the learner either (i) stays in the same state; or (ii) moves to a new state (changes its parameter settings). If after some finite number of examples the learner converges to the correct target language and never changes its guess, then it has correctly identified the target language in the limit; otherwise, it fails.

In the GW model (and others) the learner obeys two additional fundamental constraints: (1) the single-value constraint--the learner can change only 1 parameter value each step; and (2) the greediness constraint--if the learner is given a positive example it cannot recognize and changes one parameter value, finding that it can accept the example, then the learner retains that new value. The TLA essentially simulates this; see Gibson and Wexler (1994) for details.

THE MARKOV FORMULATION

Previous parameter models leave open key questions addressable by a more precise formalization as a Markov chain. The correspondence is direct. Each point i in the Markov space is a possible parameter setting. Transitions between states stand for the probabilities that the learner will move from hypothesis state i to state j. As we show below, given a distribution over L(G), we can calculate these transition probabilities themselves. Thus, we can picture the TLA learning space as a directed, labeled graph V with 2^n vertices. See figure 1 for an example in a 3-parameter system.(1) We can now use Markov theory to describe TLA parameter spaces, as in Isaacson and Madsen (1976).

[Footnote 1: GW construct an identical transition diagram in the description of their computer program for calculating local maxima. However, this diagram is not explicitly presented as a Markov structure and does not include transition probabilities.]

By the single-value hypothesis, the system can only move 1 Hamming bit at a time, either toward the target language or 1 bit away. Surface strings can force the learner from one hypothesis state to another. For instance, if state i corresponds to a grammar that generates a language that is a proper subset of another grammar hypothesis j, there can never be a transition from j to i, and there must be one from i to j. Once we reach the target grammar there is nothing that can move the learner from this state, since all remaining positive evidence will not cause the learner to change its hypothesis: an Absorbing State (AS) in the Markov literature. Clearly, one can conclude at once the following important learnability result:

Theorem 1 Given a Markov chain C corresponding to a GW TLA learner, there exists exactly 1 AS (corresponding to the target grammar/language) iff C is learnable.

Proof. (<=) By assumption, C is learnable. Now assume for the sake of contradiction that there is not exactly one AS. Then there must be either 0 AS or > 1 AS. In the first case, by the definition of an absorbing state, there is no hypothesis in which the learner will remain forever. Therefore C is not learnable, a contradiction. In the second case, without loss of generality, assume there are exactly two absorbing states, the first S corresponding to the target parameter setting, and the second S' corresponding to some other setting.
By the definition of an absorbing state, in the limit C will with some nonzero probability enter S', and never exit S'. Then C is not learnable, a contradiction. Hence our assumption that there is not exactly 1 AS must be false.

(=>) Assume that there exists exactly 1 AS i in the Markov chain M. Then, by the definition of an absorbing state, after some number of steps n, no matter what the starting state, M will end up in state i, corresponding to the target grammar. ∎

Corollary 0.1 Given a Markov chain corresponding to a (finite) family of grammars in a GW learning system, if there exist 2 or more AS, then that family is not learnable.

DERIVATION OF TRANSITION PROBABILITIES FOR THE MARKOV TLA STRUCTURE

We now derive the transition probabilities for the Markov TLA structure, the key to establishing sample complexity results. Let the target language Lt be Lt = {s1, s2, s3, ...} and P a probability distribution on these strings. Suppose the learner is in a state corresponding to language Ls. With probability P(sj), it receives a string sj. There are two cases given current parameter settings.

Case I. The learner can syntactically analyze the received string sj. Then parameter values are unchanged. This is so only when sj ∈ Ls. The probability of remaining in the state s is P(sj).

Case II. The learner cannot syntactically analyze the string. Then sj ∉ Ls; the learner is in state s, and has n neighboring states (Hamming distance of 1). The learner picks one of these uniformly at random. If nj of these neighboring states correspond to languages which contain sj and the learner picks any one of them (with probability nj/n), it stays in that state. If the learner picks any of the other states (with probability (n - nj)/n) then it remains in state s. Note that nj could take values between 0 and n. Thus the probability that the learner remains in state s is P(sj)((n - nj)/n).
The probability that the learner will remain in its original state s is the sum of the probabilities of these two cases:

    Σ_{sj ∈ Ls} P(sj) + Σ_{sj ∉ Ls} (1 - nj/n) P(sj).

To compute the transition probability from s to k, note that this transition will occur with probability 1/n for all the strings sj ∈ Lk but not in Ls. These strings occur with probability P(sj) each, and so the transition probability is:

    P[s → k] = Σ_{sj ∈ Lt, sj ∉ Ls, sj ∈ Lk} (1/n) P(sj).

Summing over all strings sj ∈ (Lt ∩ Lk) \ Ls (set difference), it is easy to see that sj ∈ (Lt ∩ Lk) \ Ls ⟺ sj ∈ (Lt ∩ Lk) \ (Lt ∩ Ls). Rewriting, we have

    P[s → k] = Σ_{sj ∈ (Lt ∩ Lk) \ (Lt ∩ Ls)} (1/n) P(sj).

Now we can compute the transition probabilities between any two states. Thus the self-transition probability can be given as

    P[s → s] = 1 - Σ_{k a neighboring state of s} P[s → k].

Example. Consider the 3-parameter natural language system described by Gibson and Wexler (1994), designed to cover basic word orders (X-bar structures) plus the verb-second phenomena of Germanic languages. Its binary parameters are: (1) Spec(ifier) initial (0) or final (1); (2) Compl(ement) initial (0) or final (1); and (3) Verb Second (V2) does not exist (0) or does exist (1). Possible "words" in this language include S(ubject), V(erb), O(bject), D(irect) O(bject), Adv(erb) phrase, and so forth. Given these alternatives, Gibson and Wexler (1994) show that there are 12 possible surface strings for each (-V2) grammar and 18 possible surface strings for each (+V2) grammar, restricted to unembedded or "degree-0" examples for reasons of psychological plausibility (see Gibson and Wexler for discussion). For instance, the parameter setting [0 1 0] = Specifier initial, Complement final, and -V2 works out to the possible basic English surface phrase order of Subject-Verb-Object (SVO). As in figure 1 below, suppose SVO ("English", setting #5 = [0 1 0]) is the target grammar.
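The derivation just given is mechanical, so it can be turned into code directly. The sketch below is hypothetical (the toy languages, neighbor relation, and distribution in the test are invented, not GW's 8-state system); it implements P[s → k] as the sum of (1/n) P(sj) over sj ∈ (Lt ∩ Lk) \ Ls, with the self-loop as the leftover mass.

```python
# Hypothetical sketch of the transition-probability computation just
# derived: given each state's language (a set of strings), the chain's
# neighbor relation, a target state, and a distribution P over the
# target language, build the row-stochastic matrix T.

def transition_matrix(langs, neighbors, target, P):
    n_states = len(langs)
    T = [[0.0] * n_states for _ in range(n_states)]
    for s in range(n_states):
        n = len(neighbors[s])                 # number of 1-bit neighbors
        for k in neighbors[s]:
            for sj, p in P.items():
                # sj triggers a move s -> k iff it is in the target
                # language and in Lk, but the learner's Ls rejects it
                if sj in langs[target] and sj in langs[k] and sj not in langs[s]:
                    T[s][k] += p / n
        T[s][s] = 1.0 - sum(T[s])             # everything else stays put
    return T
```

Under this construction every row sums to 1, and the target state is absorbing whenever its language contains the support of P.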
The figure's shaded rings represent increasing Hamming distances from the target. Each labeled circle is a Markov state. Surrounding the bulls-eye target are the 3 other parameter arrays that differ from [0 1 0] by one binary digit: e.g., [0, 0, 0], or Spec-first, Comp-first, -V2, basic order SOV or "Japanese".

[Figure 1: The 8 parameter settings in the GW example, shown as a Markov structure, with transition probabilities omitted. Directed arrows between circles (states) represent possible nonzero (possible learner) transitions. The target grammar (in this case, number 5, setting [0 1 0]) lies at dead center. Around it are the three settings that differ from the target by exactly one binary digit; surrounding those are the 3 hypotheses two binary digits away from the target; the third ring out contains the single hypothesis that differs from the target by 3 binary digits.]

Plainly there are exactly 2 absorbing states in this Markov chain. One is the target grammar (by definition); the other is state 2. State 4 is also a sink that leads only to state 4 or state 2. GW call these two nontarget states local maxima because local gradient ascent will converge to these without reaching the desired target. Hence this system is not learnable. More importantly though, in addition to these local maxima, we show (see below) that there are other states (not detected in GW or described by Clark) from which the learner will never reach the target with (high) positive probability. Example: we show that if the learner starts at hypothesis VOS-V2, then with probability 0.33 in the limit, the learner will never converge to the SVO target. Crucially, we must use set differences to build the Markov figure straightforwardly, as indicated in the next section.
In short, while it is possible to reach "English" from some source languages like "Japanese," this is not possible for other starting points (exactly 4 other initial states). It is easy to imagine alternatives to the TLA that avoid the local maxima problem. As it stands the learner only changes a parameter setting if that change allows the learner to analyze the sentence it could not analyze before. If we relax this condition so that under unanalyzability the learner picks a random parameter to change, then the problem with local maxima disappears, because there can be only 1 Absorbing State, the target grammar. All other states have exit arcs. Thus, by our main theorem, such a system is learnable. We discuss other alternatives below.

CONVERGENCE TIMES FOR THE MARKOV CHAIN MODEL

Perhaps the most significant advantage of the Markov chain formulation is that one can calculate the number of examples needed to acquire a language. Recall it is not enough to demonstrate convergence in the limit; learning must also be feasible. This is particularly true in the case of finite parameter spaces, where convergence might not be as much of a problem as feasibility. Fortunately, given the transition matrix of a Markov chain, the problem of how long it takes to converge has been well studied.

SOME TRANSITION MATRICES AND THEIR CONVERGENCE CURVES

Consider the example in the previous section. The target grammar is SVO-V2 (grammar #5 in GW). For simplicity, assume a uniform distribution on L5. Then the probability of a particular string sj in L5 is 1/12, because there are 12 (degree-0) strings in L5. We directly compute the transition matrix (0 entries elsewhere):

[8 × 8 transition matrix, rows and columns indexed L1-L8; the individual entries are not legible in this copy.]

States 2 and 5 are absorbing; thus this chain contains local maxima. Also, state 4 exits only to either itself or to state 2, and hence is also a local maximum.
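Theorem 1's criterion is mechanical to check once a transition matrix is in hand: find the absorbing states (rows i with T[i][i] = 1) and verify that every state can reach the target along nonzero-probability arcs. The sketch below uses a toy 3-state matrix, since the 8-state matrix's entries are not reproduced here.

```python
# Sketch of Theorem 1 as a check on a transition matrix. The toy
# matrices in the test are illustrative, not the paper's computed TLA
# matrices.

def absorbing_states(T):
    """States i with T[i][i] = 1, i.e., no exit arcs."""
    return [i for i, row in enumerate(T) if row[i] == 1.0]

def reaches(T, i, target):
    """Can state i reach `target` along nonzero-probability arcs?"""
    seen, stack = set(), [i]
    while stack:
        s = stack.pop()
        if s == target:
            return True
        if s in seen:
            continue
        seen.add(s)
        stack.extend(j for j, p in enumerate(T[s]) if p > 0)
    return False

def learnable(T, target):
    """Theorem 1: exactly one AS (the target), reachable from everywhere."""
    return (absorbing_states(T) == [target]
            and all(reaches(T, i, target) for i in range(len(T))))
```

On a chain with two sinks, such as the one above with states 2 and 5 both absorbing, this check fails immediately, matching Corollary 0.1.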
If T is the transition probability matrix of a chain, then the corresponding i, j element of T^m is the probability that the learner moves from state i to state j in m steps. For learnability to hold irrespective of starting state, the probability of reaching state 5 should approach 1 as m goes to infinity, i.e., column 5 of T^m should contain all 1's, and 0's elsewhere. Direct computation shows this to be false:

[Limit of T^m as m goes to infinity; the individual entries are not legible in this copy.]

We see that if the learner starts out in states 2 or 4, it will certainly end up in state 2 in the limit. These two states correspond to local maxima grammars in the GW framework. We also see that if the learner starts in states 5 through 8, it will certainly converge in the limit to the target grammar. States 1 and 3 are much more interesting, and constitute new results about this parameterization. If the learner starts in either of these states, it reaches the target grammar with probability 2/3 and state 2 with probability 1/3. Thus, local maxima are not the only problem for parameter space learnability. To our knowledge, GW and other researchers have focused exclusively on local maxima. However, while it is true that states 2 and 4 will, with probability 1, not converge to the target grammar, it is also true that states 1 and 3 will not converge to the target, with probability 1/3. Thus, the number of "bad" initial hypotheses is significantly larger than generally realized (in fact, 12 out of 56 of the possible source-target grammar pairs in the 3-parameter system). This difference is again due to the new probabilistic framework introduced in the current paper.

Figure 2 shows a plot of the quantity p(m) = min{pi(m)} as a function of m, the number of examples. Here pi denotes the probability of being in the target state at the end of m examples in the case where the learner started in state i. Naturally we want lim pi(m) = 1, and for this example this is indeed the case.
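The limiting-matrix computation just described can be sketched by raising T to a large power with plain matrix multiplication. The toy 3-state absorbing chain below stands in for the paper's 8-state matrix, whose exact entries are not reproduced here.

```python
# Sketch of the T^m computation: repeated matrix multiplication, then
# read off absorption probabilities from the limiting columns. The
# 3-state chain in the test (one absorbing state) is illustrative only.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(T, m):
    """T^m, starting from the identity matrix."""
    n = len(T)
    R = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(m):
        R = mat_mul(R, T)
    return R
```

For a learnable chain the column of the single absorbing state tends to all 1's, which is exactly the column-5 test described above.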
The next figure shows a plot of the following quantity as a function of m, the number of examples:

    p(m) = min{pi(m)}.

The quantity p(m) is easy to interpret. Thus p(m) = 0.95 means that for every initial state of the learner the probability that it is in the target state after m examples is at least 0.95. Further, there is one initial state (the worst initial state with respect to the target, which in our example is L8) for which this probability is exactly 0.95. We find on looking at the curve that the learner converges with high probability within 100 to 200 (degree-0) example sentences, a psychologically plausible number.

We can now compare the convergence time of TLA to other algorithms. Perhaps the simplest is random walk: start the learner at a random point in the 3-parameter space, and then, if an input sentence cannot be analyzed, move 1 bit randomly from state to state. Note that this regime cannot suffer from the local maxima problem, since there is always some finite probability of exiting a non-target state. Computing the convergence curves for a random walk algorithm (RWA) on the 8-state space, we find that the convergence times are actually faster than for the TLA; see figure 2. Since the RWA is also superior in that it does not suffer from the same local maxima problem as TLA, the conceptual support for the TLA is by no means clear. Of course, it may be that the TLA has empirical support, in the sense of independent evidence that children do use this procedure (given by the pattern of their errors, etc.), but this evidence is lacking, as far as we know.

DISTRIBUTIONAL ASSUMPTIONS: PART I

In the earlier section we assumed that the data was uniformly distributed. We computed the transition matrix for a particular target language and showed that convergence times were of the order of 100-200 samples. In this section we show that the convergence times depend crucially upon the distribution.
In particular, we can choose a distribution which will make the convergence time as large as we want. Thus the distribution-free convergence time for the 3-parameter system is infinite. As before, we consider the situation where the target language is L1. There are no local maxima problems for this choice. We begin by letting the distribution be parametrized by the variables a, b, c, d, where

    a = P(A = {Adv(erb) Phrase V S})
    b = P(B = {Adv V O S, Adv Aux V S})
    c = P(C = {Adv V O1 O2 S, Adv Aux V O S, Adv Aux V O1 O2 S})
    d = P(D = {V S})

Thus each of the sets A, B, C and D contains different degree-0 sentences of L1. Clearly the probability of the set L1 \ (A ∪ B ∪ C ∪ D) is 1 - (a + b + c + d). The elements of each defined subset of L1 are equally likely with respect to each other. Setting positive values for a, b, c, d such that a + b + c + d < 1 now defines a unique probability for each degree-0 sentence in L1. For example, the probability of Adv V O S is b/2, the probability of Adv Aux V O S is c/3, that of V O S is (1 - (a + b + c + d))/6, and so on; see figure 3. We can now obtain the transition matrix corresponding to this distribution. Comparing this matrix with that obtained with a uniform distribution on the sentences of L1 in the earlier section, the new matrix has non-zero elements (transition probabilities) exactly where the earlier matrix had non-zero elements. However, the value of each transition probability now depends upon a, b, c, and d. In particular, if we choose a = 1/12, b = 2/12, c = 3/12, d = 1/12 (this is equivalent to assuming a uniform distribution), we obtain the appropriate transition matrix as before. Looking more closely at the general transition matrix, we see that the transition probability from state 2 to state 1 is (1 - (a + b + c))/3. Clearly if we make a arbitrarily close to 1, then this transition probability is arbitrarily close to 0, so that the number of samples needed to converge can be made arbitrarily large.
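The argument can be made concrete with a one-line calculation: if the decisive exit arc has per-example probability eps (for instance eps = (1 - (a + b + c))/3 above), the chance of having taken it within m examples is 1 - (1 - eps)^m, so the number of samples needed for any fixed confidence grows like 1/eps. This is a deliberately simplified one-arc sketch, not the full 8-state computation.

```python
# Simplified sketch of why an unfavorable distribution blows up the
# sample complexity: with exit probability eps per example, solve
# 1 - (1 - eps)**m >= confidence for the smallest integer m.

import math

def samples_needed(eps, confidence=0.99):
    """Smallest m with 1 - (1 - eps)**m >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - eps))
```

As eps shrinks toward 0 (a close to 1), the required m grows without bound, which is the distribution-free infinite convergence time claimed above.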
Thus choosing large values for a and small values for b will result in large convergence times. This means that the sample complexity cannot be bounded in a distribution-free sense, because by choosing a highly unfavorable distribution the sample complexity can be made as high as possible. For example, we now give the convergence curves calculated for different choices of a, b, c, d. We see that for a uniform distribution the convergence occurs within 200 samples. By choosing a distribution with a = 0.9999 and b = c = d = 0.000001, the convergence time can be pushed up to as much as 50 million samples. (Of course, this distribution is presumably not psychologically realistic.) For a = 0.99, b = c = d = 0.0001, the sample complexity is on the order of 100,000 positive examples.

Remark. The preceding calculation provides a worst-case convergence time. We can also calculate average convergence times using standard results from Markov chain theory (see Isaacson and Madsen, 1976), as in table 2. These support our previous results.

There are also well-known convergence theorems derived from a consideration of the eigenvalues of the transition matrix. We state without proof a convergence result for transition matrices stated in terms of its eigenvalues.

Table 1: Complete list of problem states, i.e., all combinations of starting grammar and target grammar which result in non-learnability of the target. The items marked with an asterisk are those listed in the original paper by Gibson and Wexler (1994).
[Table 1 body: each row pairs an initial grammar with a target grammar (combinations of SVO, SOV, VOS, and OVS, each with -V2 or +V2; the asterisked target grammars are those listed by Gibson and Wexler (1994)), states whether the initial state is a sink in the Markov structure ("Sink" / "Not Sink"), and gives the probability of not converging to the target; the extracted column values (0.08, 0.15, 0.33, 0.5, 1.0) cannot be reliably realigned with their rows.]

Figure 2: Convergence as a function of number of examples. The probability of converging to the target state after m examples is plotted against m. The data from the target is assumed to be distributed uniformly over degree-0 sentences. The solid line represents TLA convergence times and the dotted line is a random walk learning algorithm (RWA) which actually converges faster than the TLA in this case.

Figure 3: Rates of convergence for TLA with L1 as the target language for different distributions. The probability of converging to the target after m samples is plotted against log(m). The three curves show how unfavorable distributions can increase convergence times. The dashed line assumes a uniform distribution and is the same curve as plotted in figure 2.

Table 2: Mean and standard deviation of convergence times to target 5 (English) given different distributions over the target language, and a uniform distribution over initial states. The first distribution is uniform over the target language; the other distributions alter the value of a as discussed in the main text.

Learning scenario     Mean abs. time    Std. dev. of abs. time
TLA (uniform)         34.8              22.3
TLA (a = 0.99)        45000             33000
TLA (a = 0.9999)      4.5 × 10^6        3.3 × 10^6
RW                    9.6               10.1
Theorem 2 Let T be an n × n transition matrix with n linearly independent left eigenvectors x_1, ..., x_n corresponding to eigenvalues λ_1, ..., λ_n. Let x_0 (an n-dimensional vector) represent the starting probability of being in each state of the chain and π be the limiting probability of being in each state. Then after k transitions, the probability of being in each state, x_0 T^k, can be described by

|| x_0 T^k - π || = || ∑_{i=2}^{n} λ_i^k x_0 y_i x_i || ≤ max_{2≤i≤n} |λ_i|^k ∑_{i=2}^{n} || x_0 y_i x_i ||

where the y_i's are the right eigenvectors of T.

This theorem bounds the convergence rate to the limiting distribution π (in cases where there is only one absorbing state, π will have a 1 corresponding to that state and 0 everywhere else). Using this result we can bound the rate of convergence in terms of the number k of samples. It should be plain that these results could be used to establish standard errors and confidence bounds on convergence times in the usual way, another advantage of our new approach; see table 3.

DISTRIBUTIONAL ASSUMPTIONS, PART II

The Markov model also allows us to easily determine the effect of distributional changes in the input. This is important for either computer or child acquisition studies, since we can use corpus distributions to compute convergence times in advance. For instance, it can easily be shown that convergence times depend crucially upon the distribution chosen (so in particular the TLA learning model does not follow any distribution-free PAC results). Specifically, we can choose a distribution that will make the convergence time as large as we want. For example, in the situation where the target language is L1, we can increase the convergence time arbitrarily by increasing the probability of the string {Adv V S}. By choosing a more unfavorable distribution the convergence time can be pushed up to as much as 50 million samples.
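The eigenvalue bound of Theorem 2 can be checked numerically. The 3-state absorbing chain below is invented for illustration (it is not one of the paper's TLA matrices): the distance of x_0 T^k from the limiting distribution shrinks at least as fast as |λ_2|^k, the second-largest eigenvalue modulus.

```python
# Sketch of the eigenvalue bound: for an absorbing Markov chain, x0 T^k
# approaches the limit pi at a rate governed by the second-largest
# eigenvalue modulus |lambda_2|. The matrix T is a made-up example.

import numpy as np

T = np.array([
    [1.0, 0.0, 0.0],   # state 1 is absorbing (the target grammar)
    [0.3, 0.5, 0.2],
    [0.0, 0.4, 0.6],
])

eigvals = np.linalg.eigvals(T)
lam2 = sorted(abs(eigvals))[-2]          # second-largest eigenvalue modulus

x0 = np.array([0.0, 1.0, 0.0])           # start in state 2
pi = np.array([1.0, 0.0, 0.0])           # limiting distribution
for k in (10, 50, 100):
    dist = np.linalg.norm(x0 @ np.linalg.matrix_power(T, k) - pi, 1)
    print(f"k={k:3d}  ||x0 T^k - pi|| = {dist:.3e}  bound ~ O({lam2:.3f}^k)")
```

Since |λ_2| < 1 here, the printed distances decay geometrically in k, which is exactly what makes it possible to convert the bound into confidence intervals on convergence times.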
While not surprising in itself, the specificity of the model allows us to be precise about the required sample size.

CHILDES DISTRIBUTIONS

It is of interest to examine the fidelity of the model using real language distributions, namely, the CHILDES database. We have carried out preliminary direct experiments using the CHILDES caretaker English input to "Nina" and German input to "Katrin"; these consist of 43,612 and 632 sentences each, respectively. We note, following well-known results by psycholinguists, that both corpora contain a much higher percentage of aux-inversion and wh-questions than "ordinary" text (e.g., the LOB): 25,890 questions and 11,775 wh-questions; 201 and 99 in the German corpus; but only 2,506 questions or 3.7% out of 53,495 LOB sentences. To test convergence, an implemented system using a newer version of deMarcken's partial parser (see deMarcken, 1990) analyzed each degree-0 or degree-1 sentence as falling into one of the input patterns SVO, S Aux V, etc., as appropriate for the target language. Sentences not parsable into these patterns were discarded (presumably "too complex" in some sense, following a tradition established by many other researchers; see Wexler and Culicover (1980) for details). Some examples of caretaker inputs follow:

this is a book ? what do you see in the book ? how many rabbits ? what is the rabbit doing ? (...) is he hopping ? oh . and what is he playing with ?

red mir doch nicht alles nach! ja , die schwätzen auch immer alles nach (...)

When run through the TLA, we discover that convergence falls roughly along the TLA convergence time displayed in figure 1, roughly 100 examples to asymptote. Thus, the feasibility of the basic model is confirmed by actual caretaker input, at least in this simple case, for both English and German. We are continuing to explore this model with other languages and distributional assumptions.
However, there is one very important new complication that must be taken into account: we have found that one must (obviously) add patterns to cover the predominance of auxiliary inversions and wh-questions. However, that largely begs the question of whether the language is verb-second or not. Thus, as far as we can tell, we have not yet arrived at a satisfactory parameter-setting account for V2 acquisition.

VARIANTS OF THE LEARNING MODEL AND EXTENSIONS

The Markov formulation allows one to more easily explore algorithm variants. Besides the TLA, we consider the three possible simple learning algorithm regimes obtained by dropping either or both of the Single Value and Greediness constraints. The key result is that almost any other regime works faster than local gradient ascent and avoids problems with local maxima. See figure 4 for a representative result. Thus, most interestingly, parameterized language learning appears particularly robust under algorithmic changes.

EXTENSIONS, DIACHRONIC CHANGE AND CONCLUSIONS

We remark here that the "batch" phonological parameter learning system of Dresher and Kaye (1990) is susceptible to a more direct PAC-type analysis, since their system sets parameters in an "off-line" mode. We state without proof some results that can be given in such cases.

Table 3: Convergence rates derived from eigenvalue calculations.

Learning scenario     Rate of Convergence
TLA (uniform)         O(0.94^k)
TLA (a = 0.99)        O((1 - 10^-4)^k)
TLA (a = 0.9999)      O((1 - 10^-6)^k)
RW                    O(0.89^k)

Figure 4: Convergence rates for different learning algorithms when L1 is the target language. The curve with the slowest rate (large dashes) represents the TLA, the one with the fastest rate (small dashes) is the Random Walk (RWA) with no greediness or single value constraints. Random walks with exactly one of the greediness and single value constraints have performances in between.
Theorem 3 If the learner draws more than M = ln(1/δ) / ln(1/(1 - b_t)) samples, then it will identify the target with confidence greater than 1 - δ. (Here b_t = P(L_t \ ∪_{j≠t} L_j).)

Finally, the Markov model also points to an intriguing new model for syntactic change. One simply has to introduce two or more target languages that emit positive example strings with (probably different) frequencies, each corresponding to a different language source. If the model is run as before, then there can be a large probability for a learner to converge to a state different from the highest frequency emitting target state: that is, the learner can acquire a different parameter setting, for example a -V2 setting, even in a predominantly +V2 environment. This is of course one of the historical changes that occurred in the development of English. Space does not permit us to explore all the consequences of this new Markov model; we remark here that once again we can compute convergence times and stability under different distributions of target frequencies, combining it with the usual dynamical models of genotype fixation. In this case, the interesting result is that the TLA actually boosts diachronic change by orders of magnitude, since, as observed earlier, it can permit the learner to arrive at a different convergent state even when there is just one target language emitter. In contrast, the local maxima targets are stable, and never undergo change. Whether this powerful "boost" effect plays a role in diachronic change remains a topic for future investigation. As far as we know, the possibility of formally modeling the kind of saltation indicated by the Markov model has not been noted previously and has only been vaguely stated by authors such as Lightfoot (1990).
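The sample-size bound of Theorem 3 is directly computable. The numeric example below is an assumption made here for illustration (the paper does not give a value for b_t):

```python
# Theorem 3: drawing more than M = ln(1/delta) / ln(1/(1 - b_t)) samples
# identifies the target with confidence at least 1 - delta, where b_t is the
# probability mass that distinguishes L_t from every other language.

import math

def sample_bound(b_t, delta):
    return math.log(1.0 / delta) / math.log(1.0 / (1.0 - b_t))

# e.g. if 5% of the target's strings are unique to it, 95% confidence needs
# about 58 examples:
print(sample_bound(0.05, 0.05))
```

As expected, the bound grows as the distinguishing mass b_t shrinks, which is the "off-line" analogue of the unfavorable-distribution effect discussed for the TLA.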
In conclusion, by introducing a formal mathematical model for language acquisition, we can provide rigorous results on parameter learning, algorithmic variation, sample complexity, and diachronic syntax change. These results are of interest for corpus-based acquisition and investigations of child acquisition, as well as pointing the way to a more rigorous bridge between modern computational learning theory and computational linguistics.

ACKNOWLEDGMENTS

We would like to thank Ken Wexler, Ted Gibson, and an anonymous ACL reviewer for valuable discussions and comments on this work. Dr. Leonardo Topa provided invaluable programming assistance. All residual errors are ours. This research is supported by NSF grant 9217041-ASC and ARPA under the HPCC program.

REFERENCES

Clark, Robin and Roberts, Ian (1993). "A Computational Model of Language Learnability and Language Change." Linguistic Inquiry, 24(2):299-345.

deMarcken, Carl (1990). "Parsing the LOB Corpus." Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics. Pittsburgh, PA: Association for Computational Linguistics, 243-251.

Dresher, Elan and Kaye, Jonathan (1990). "A Computational Learning Model For Metrical Phonology." Cognition, 34(1):137-195.

Gibson, Edward and Wexler, Kenneth (1994). "Triggers." Linguistic Inquiry, to appear.

Gold, E. M. (1967). "Language Identification in the Limit." Information and Control, 10(4):447-474.

Isaacson, David and Madsen, John (1976). Markov Chains. New York: John Wiley.

Lightfoot, David (1990). How to Set Parameters. Cambridge, MA: MIT Press.

Wexler, Kenneth and Culicover, Peter (1980). Formal Principles of Language Acquisition. Cambridge, MA: MIT Press.
PART-OF-SPEECH TAGGING USING A VARIABLE MEMORY MARKOV MODEL

Hinrich Schütze
Center for the Study of Language and Information
Stanford, CA 94305-4115
Internet: schuetze@csli.stanford.edu

Yoram Singer
Institute of Computer Science and Center for Neural Computation
Hebrew University, Jerusalem 91904
Internet: [email protected]

Abstract

We present a new approach to disambiguating syntactically ambiguous words in context, based on Variable Memory Markov (VMM) models. In contrast to fixed-length Markov models, which predict based on fixed-length histories, variable memory Markov models dynamically adapt their history length based on the training data, and hence may use fewer parameters. In a test of a VMM based tagger on the Brown corpus, 95.81% of tokens are correctly classified.

INTRODUCTION

Many words in English have several parts of speech (POS). For example "book" is used as a noun in "She read a book." and as a verb in "She didn't book a trip." Part-of-speech tagging is the problem of determining the syntactic part of speech of an occurrence of a word in context. In any given English text, most tokens are syntactically ambiguous since most of the high-frequency English words have several parts of speech. Therefore, a correct syntactic classification of words in context is important for most syntactic and other higher-level processing of natural language text.

Two stochastic methods have been widely used for POS tagging: fixed order Markov models and Hidden Markov models. Fixed order Markov models are used in (Church, 1989) and (Charniak et al., 1993). Since the order of the model is assumed to be fixed, a short memory (small order) is typically used, since the number of possible combinations grows exponentially. For example, assuming there are 184 different tags, as in the Brown corpus, there are 184^3 = 6,229,504 different order 3 combinations of tags (of course not all of these will actually occur, see (Weischedel et al., 1993)).
Because of the large number of parameters, higher-order fixed length models are hard to estimate. (See (Brill, 1993) for a rule-based approach to incorporating higher-order information.) In a Hidden Markov Model (HMM) (Jelinek, 1985; Kupiec, 1992), a different state is defined for each POS tag and the transition probabilities and the output probabilities are estimated using the EM (Dempster et al., 1977) algorithm, which guarantees convergence to a local minimum (Wu, 1983). The advantage of an HMM is that it can be trained using untagged text. On the other hand, the training procedure is time consuming, and a fixed model (topology) is assumed. Another disadvantage is due to the local convergence properties of the EM algorithm. The solution obtained depends on the initial setting of the model's parameters, and different solutions are obtained for different parameter initialization schemes. This phenomenon discourages linguistic analysis based on the output of the model.

We present a new method based on variable memory Markov models (VMM) (Ron et al., 1993; Ron et al., 1994). The VMM is an approximation of an unlimited order Markov source. It can incorporate both the static (order 0) and dynamic (higher-order) information systematically, while keeping the ability to change the model due to future observations. This approach is easy to implement, the learning algorithm and classification of new tags are computationally efficient, and the results achieved, using simplified assumptions for the static tag probabilities, are encouraging.

VARIABLE MEMORY MARKOV MODELS

Markov models are a natural candidate for language modeling and temporal pattern recognition, mostly due to their mathematical simplicity. However, it is obvious that finite memory Markov models cannot capture the recursive nature of language, nor can they be trained effectively with long memories.
The notion of variable context length also appears naturally in the context of universal coding (Rissanen, 1978; Rissanen and Langdon, 1981). This information theoretic notion is now known to be closely related to efficient modeling (Rissanen, 1988). The natural measure that appears in information theory is the description length, as measured by the statistical predictability via the Kullback-Leibler (KL) divergence.

The VMM learning algorithm is based on minimizing the statistical prediction error of a Markov model, measured by the instantaneous KL divergence of the following symbols, the current statistical surprise of the model. The memory is extended precisely when such a surprise is significant, until the overall statistical prediction of the stochastic model is sufficiently good. For the sake of simplicity, a POS tag is termed a symbol and a sequence of tags is called a string. We now briefly describe the algorithm for learning a variable memory Markov model. See (Ron et al., 1993; Ron et al., 1994) for a more detailed description of the algorithm.

We first introduce notational conventions and define some basic concepts. Let Σ be a finite alphabet. Denote by Σ* the set of all strings over Σ. A string s over Σ of length n is denoted by s = s_1 s_2 ... s_n. We denote by e the empty string. The length of a string s is denoted by |s| and the size of an alphabet Σ is denoted by |Σ|. Let Prefix(s) = s_1 s_2 ... s_{n-1} denote the longest prefix of a string s, and let Prefix*(s) denote the set of all prefixes of s, including the empty string. Similarly, Suffix(s) = s_2 s_3 ... s_n and Suffix*(s) is the set of all suffixes of s. A set of strings S is called a suffix (prefix) free set if, for all s ∈ S, S ∩ Suffix*(s) = {s} (S ∩ Prefix*(s) = {s}). We call a probability measure P over the strings in Σ* proper if P(e) = 1, and for every string s, ∑_{σ∈Σ} P(sσ) = P(s).
Hence, for every prefix free set S, ∑_{s∈S} P(s) ≤ 1, and specifically for every integer n ≥ 0, ∑_{s∈Σ^n} P(s) = 1.

A prediction suffix tree T over Σ is a tree of degree |Σ|. The edges of the tree are labeled by symbols from Σ, such that from every internal node there is at most one outgoing edge labeled by each symbol. The nodes of the tree are labeled by pairs (s, γ_s) where s is the string associated with the walk starting from that node and ending in the root of the tree, and γ_s : Σ → [0,1] is the output probability function of s satisfying ∑_{σ∈Σ} γ_s(σ) = 1. A prediction suffix tree induces probabilities on arbitrarily long strings in the following manner. The probability that T generates a string w = w_1 w_2 ... w_n in Σ^n, denoted by P_T(w), is ∏_{i=1}^n γ_{s^{i-1}}(w_i), where s^0 = e, and for 1 ≤ i ≤ n-1, s^i is the string labeling the deepest node reached by taking the walk corresponding to w_1 ... w_i starting at the root of T. By definition, a prediction suffix tree induces a proper measure over Σ*, and hence for every prefix free set of strings {w^1, ..., w^m}, ∑_{i=1}^m P_T(w^i) ≤ 1, and specifically for n ≥ 1, ∑_{s∈Σ^n} P_T(s) = 1.

A Probabilistic Finite Automaton (PFA) A is a 5-tuple (Q, Σ, τ, γ, π), where Q is a finite set of n states, Σ is an alphabet of size k, τ : Q × Σ → Q is the transition function, γ : Q × Σ → [0,1] is the output probability function, and π : Q → [0,1] is the probability distribution over the start states. The functions γ and π must satisfy the following requirements: for every q ∈ Q, ∑_{σ∈Σ} γ(q, σ) = 1, and ∑_{q∈Q} π(q) = 1. The probability that A generates a string s = s_1 s_2 ... s_n ∈ Σ^n is P_A(s) = ∑_{q^0∈Q} π(q^0) ∏_{i=1}^n γ(q^{i-1}, s_i), where q^{i+1} = τ(q^i, s_i). τ can be extended to be defined on Q × Σ* as follows: τ(q, s_1 s_2 ... s_l) = τ(τ(q, s_1 ... s_{l-1}), s_l) = τ(τ(q, Prefix(s)), s_l). The distribution over the states, π, can be replaced by a single start state, denoted by e, such that τ(e, s) = q, where s is the label of the state q.
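The way a prediction suffix tree assigns a probability to a string can be sketched as follows: walk down the tree following ever-longer suffixes of the history and multiply the output probabilities γ_s(w_i). The two-symbol tree below is invented for illustration.

```python
# Minimal sketch of P_T(w) = prod_i gamma_{s^{i-1}}(w_i) for a toy
# prediction suffix tree over the alphabet {'a', 'b'}.

# tree: maps a suffix string to its output distribution gamma_s
tree = {
    '':   {'a': 0.5, 'b': 0.5},
    'a':  {'a': 0.3, 'b': 0.7},
    'b':  {'a': 0.8, 'b': 0.2},
    'ba': {'a': 0.1, 'b': 0.9},   # deeper memory only after the suffix 'ba'
}

def deepest_suffix(history):
    """Longest suffix of the history that labels a node of the tree."""
    for start in range(len(history)):
        if history[start:] in tree:
            return history[start:]
    return ''

def prob(w):
    p, history = 1.0, ''
    for symbol in w:
        p *= tree[deepest_suffix(history)][symbol]
        history += symbol
    return p

print(prob('ab'))   # 0.5 * 0.7
```

Note that each node's distribution sums to 1, so the induced measure is proper in the sense defined above.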
Therefore, π(e) = 1 and π(q) = 0 if q ≠ e.

For POS tagging, we are interested in learning a sub-class of finite state machines which have the following property. Each state in a machine M belonging to this sub-class is labeled by a string of length at most L over Σ, for some L ≥ 0. The set of strings labeling the states is suffix free. We require that for every two states q^1, q^2 ∈ Q and for every symbol σ ∈ Σ, if τ(q^1, σ) = q^2 and q^1 is labeled by a string s^1, then q^2 is labeled by a string s^2 which is a suffix of s^1·σ. Since the set of strings labeling the states is suffix free, if there exists a string having this property then it is unique. Thus, in order for τ to be well defined on a given set of strings S, not only must the set be suffix free, but it must also have the property that for every string s in the set and every symbol σ, there exists a string which is a suffix of sσ. For our convenience, from this point on, if q is a state in Q then q will also denote the string labeling that state.

A special case of these automata is the case in which Q includes all |Σ|^L strings of length L. These automata are known as Markov processes of order L. We are interested in learning automata for which the number of states, n, is much smaller than |Σ|^L, which means that few states have long memory and most states have a short one. We refer to these automata as variable memory Markov (VMM) processes. In the case of Markov processes of order L, the identity of the states (i.e. the identity of the strings labeling the states) is known and learning such a process reduces to approximating the output probability function.

Given a sample consisting of m POS tag sequences of lengths l_1, l_2, ..., l_m we would like to find a prediction suffix tree that will have the same statistical properties as the sample and thus can be used to predict the next outcome for sequences generated by the same source. At each
At each 182 stage we can transform the tree into a variable memory Markov process. The key idea is to iter- atively build a prediction tree whose probability measure equals the empirical probability measure calculated from the sample. We start with a tree consisting of a single node and add nodes which we have reason to be- lieve should be in the tree. A node as, must be added to the tree if it statistically differs from its parent node s. A natural measure to check the statistical difference is the relative entropy (also known as the Kullback-Leibler (KL) divergence) (Kullback, 1959), between the conditional proba- bilities P(.Is) and P(.las). Let X be an obser- vation space and P1, P2 be probability measures over X then the KL divergence between P1 and P1 x P2 is, D L(PIlIP )= • In our case, the KL divergence measures how much additional information is gained by using the suf- fix ~rs for prediction instead of the shorter suffix s. There are cases where the statistical difference is large yet the probability of observing the suffix as itself is so small that we can neglect those cases. Hence we weigh the statistical error by the prior probability of observing as. The statistical error measure in our case is, Err(as, s) = P(crs)DgL (P(.las)llP(.ls)) = P(as) P(a'las) log : ~,0,~ P(asa')log p(P/s°;p'() ) Therefore, a node as is added to the tree if the sta- tistical difference (defined by Err(as, s)) between the node and its parrent s is larger than a prede- termined accuracy e. The tree is grown level by level, adding a son of a given leaf in the tree when- ever the statistical error is large. The problem is that the requirement that a node statistically dif- fers from its parent node is a necessary condition for belonging to the tree, but is not sufficient. The leaves of a prediction suffix tree must differ from their parents (or they are redundant) but internal nodes might not have this property. 
Therefore, we must continue testing further potential descen- dants of the leaves in the tree up to depth L. In order to avoid exponential grow in the number of strings tested, we do not test strings which belong to branches which are reached with small prob- ability. The set of strings, tested at each step, is denoted by S, and can be viewed as a kind of frontier of the growing tree T. USING A VMM FOR POS TAGGING We used a tagged corpus to train a VMM. The syntactic information, i.e. the probability of a spe- 183 cific word belonging to a tag class, was estimated using maximum likelihood estimation from the in- dividual word counts. The states and the transi- tion probabilities of the Markov model were de- termined by the learning algorithm and tag out- put probabilities were estimated from word counts (the static information present in the training cor- pus). The whole structure, for two states, is de- picted in Fig. 1. Si and Si+l are strings of tags cor- responding to states of the automaton. P(ti[Si) is the probability that tag ti will be output by state Si and P(ti+l]Si+l) is the probability that the next tag ti+l is the output of state Si+l. P(Si+llSi) V 7 P(TilSi) P(Ti+IlSi+I) Figure 1: The structure of the VMM based POS tagger. When tagging a sequence of words Wl,,, we want to find the tag sequence tl,n that is most likely for Wl,n. We can maximize the joint proba- bility of wl,, and tl,n to find this sequence: 1 T(Wl,n) = arg maxt,, P(tl,nlWl,n) P(t,..,~,,.) = arg maxt~,. P(wl,.) = arg maxt~,.P(tl,.,wl,. ) P(tl,., Wl,.) can be expressed as a product of con- ditional probabilities as follows: P(tl,., Wl,.) = P(ts)P(wl Itl)P(t~ltl, wl)e(w21tl,2, wl) ... P(t. It 1,._ 1, Wl,.-1)P(w. It1,., wl,.- 1) = fi P(tiltl,i-1, wl,i-1)P(wiltl,i, Wl,/-1) i=1 With the simplifying assumption that the proba- bility of a tag only depends on previous tags and that the probability of a word only depends on its tags, we get: P(tl,n, wl,.) 
= ∏_{i=1}^n P(t_i|t_{1,i-1}) P(w_i|t_i)

(Part of this derivation is adapted from Charniak et al., 1993.) Given a variable memory Markov model M, P(t_i|t_{1,i-1}) is estimated by P(t_i|S_{i-1}, M) where S_i = τ(e, t_{1,i}), since the dynamics of the sequence are represented by the transition probabilities of the corresponding automaton. The tags t_{1,n} for a sequence of words w_{1,n} are therefore chosen according to the following equation using the Viterbi algorithm:

T_M(w_{1,n}) = arg max_{t_{1,n}} ∏_{i=1}^n P(t_i|S_{i-1}, M) P(w_i|t_i)

We estimate P(w_i|t_i) indirectly from P(t_i|w_i) using Bayes' Theorem:

P(w_i|t_i) = P(w_i) P(t_i|w_i) / P(t_i)

The terms P(w_i) are constant for a given sequence w_i and can therefore be omitted from the maximization. We perform a maximum likelihood estimation for P(t_i) by calculating the relative frequency of t_i in the training corpus. The estimation of the static parameters P(t_i|w_i) is described in the next section.

We trained the variable memory Markov model on the Brown corpus (Francis and Kučera, 1982), with every tenth sentence removed (a total of 1,022,462 tags). The four stylistic tag modifiers "FW" (foreign word), "TL" (title), "NC" (cited word), and "HL" (headline) were ignored, reducing the complete set of 471 tags to 184 different tags. The resulting automaton has 49 states: the null state (e), 43 first order states (one symbol long) and 5 second order states (two symbols long). This means that 184-43=141 states were not (statistically) different enough to be included as separate states in the automaton. An analysis reveals two possible reasons. Frequent symbols such as "ABN" ("half", "all", "many" used as pre-quantifiers, e.g. in "many a younger man") and "DTI" (determiners that can be singular or plural, "any" and "some") were not included because they occur in a variety of diverse contexts or often precede unambiguous words. For example, when tagged as "ABN",
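The Viterbi maximization over the automaton's states can be sketched as follows. The toy automaton, transition probabilities, and lexical probabilities below are all invented for illustration, not taken from the trained model.

```python
# Sketch of Viterbi search maximizing prod_i P(t_i | S_{i-1}) * P(w_i | t_i)
# over the states of a tiny (invented) variable memory automaton.

# transition: state -> {tag: (probability, next state)}
trans = {
    'e':  {'AT': (0.4, 'AT'), 'NN': (0.3, 'NN'), 'VB': (0.3, 'e')},
    'AT': {'NN': (0.8, 'NN'), 'JJ': (0.2, 'e')},
    'NN': {'VB': (0.6, 'e'), 'NN': (0.4, 'NN')},
}

def tag(words, lex):
    # chart: state -> (probability of best path ending here, tag sequence)
    chart = {'e': (1.0, [])}          # start in the null state
    for w in words:
        new = {}
        for state, (p, tags) in chart.items():
            for t, (pt, nxt) in trans[state].items():
                cand = (p * pt * lex.get((w, t), 0.0), tags + [t])
                if cand[0] > new.get(nxt, (0.0, None))[0]:
                    new[nxt] = cand
        chart = new
    return max(chart.values())[1]

# invented static parameters P(w|t) up to a constant factor
lex = {('the', 'AT'): 1.0, ('book', 'NN'): 0.6, ('book', 'VB'): 0.4}
print(tag(['the', 'book'], lex))   # ['AT', 'NN']
```

Keeping only the best path per state at each position is what makes the search linear in sentence length rather than exponential in the number of tag sequences.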
"half", "all", and "many" tend to occur before the unambiguous determiners "a", "an" and "the". Some rare tags were not included because they did not improve the optimization criterion, minimum description length (measured by the KL divergence). For example, "HVZ*" ("hasn't") is not a state although a following "-ed" form is always disambiguated as belonging to class "VBN" (past participle). But since this is a rare event, describing all "HVZ* VBN" sequences separately is cheaper than the added complexity of an automaton with state "HVZ*". We in fact lost some accuracy in tagging because of the optimization criterion: Several "-ed" forms after forms of "have" were mistagged as "VBD" (past tense).

transition to   one-symbol state   two-symbol state
NN              JJ: 0.45           AT JJ: 0.69
IN              JJ: 0.06           AT JJ: 0.004
IN              NN: 0.27           AT NN: 0.35
NN              NN: 0.14           AT NN: 0.10
NN              VBN: 0.08          AT VBN: 0.48
IN              VBN: 0.35          AT VBN: 0.003
NN              CC: 0.12           JJ CC: 0.04
JJ              CC: 0.09           JJ CC: 0.58
VB              RB: 0.05           MD RB: 0.48
VBN             RB: 0.08           MD RB: 0.0009

Table 1: States for which the statistical prediction is significantly different when using a longer suffix for prediction. Those states are identified automatically by the VMM learning algorithm. A better prediction and classification of POS tags is achieved by adding those states with only a small increase in the computation time.

The two-symbol states were "AT JJ", "AT NN", "AT VBN", "JJ CC", and "MD RB" (article adjective, article noun, article past participle, adjective conjunction, modal adverb). Table 1 lists two of the largest differences in transition probabilities for each state. The varying transition probabilities are based on differences between the syntactic constructions in which the two competing states occur.
For example, adjectives after articles ("AT JJ") are almost always used attributively, which makes a following preposition impossible and a following noun highly probable, whereas a predicative use favors modifying prepositional phrases. Similarly, an adverb preceded by a modal ("MD RB") is followed by an infinitive ("VB") half the time, whereas other adverbs occur less often in pre-infinitival position. On the other hand, a past participle is virtually impossible after "MD RB" whereas adverbs that are not preceded by modals modify past participles quite often. While it is known that Markov models of order 2 give a slight improvement over order-1 models (Charniak et al., 1993), the number of parameters in our model is much smaller than in a full order-2 Markov model (49 × 184 = 9,016 vs. 184 × 184 × 184 = 6,229,504).

ESTIMATION OF THE STATIC PARAMETERS

We have to estimate the conditional probabilities P(t^i|w^j), the probability that a given word w^j will appear with tag t^i, in order to compute the static parameters P(w^j|t^i) used in the tagging equations described above. A first approximation would be to use the maximum likelihood estimator:
As a consequence, the probability that a new word is an article is zero, whereas it is high for verbs and nouns. We need a smoothing scheme that takes this fact into account. Extending an idea in (Charniak et al., 1993), we estimate the probability of tag conversion to find an adequate smoothing scheme. Open and closed classes differ in that words often add a tag from an open class, but rarely from a closed class. For example, a word that is first used as a noun will often be used as a verb subsequently, but closed classes such as possessive pronouns ("my", "her", "his") are rarely used with new syntactic categories after the first few thousand words of the Brown corpus. We only have to take stock of these "tag conversions" to make informed predictions on new tags when confronted with unseen text. For- mally, let W] ''~ be the set of words that have been seen with t i, but not with t k in the training text up to word wt. Then we can estimate the probability that a word with tag t i will later be seen with tag t ~ as the proportion of words allowing tag t i but not t k that later add tk: P~m(i --* k) = I{nll<n<m ^ i ~k , ~k wnEW I" OW,,- t ^t~=t~}l iw~'.-kl This formula also applies to words we haven't seen so far, if we regard such words as having occurred with a special tag "U" for "unseen". (In this case, W~ '-'k is the set of words that haven't occurred up to l.) PI,n(U ---* k) then estimates the probability that an unseen word has tag t k. Table 2 shows the estimates of tag conversion we derived from our training text for 1 = 1022462- 100000, m = 1022462, where 1022462 is the number of words in the training text. To avoid sparse data problems we assumed zero probability for types of tag con- version with less than 100 instances in the training set. 
tag conversion    estimated probability
U -> NN           0.29
U -> JJ           0.13
U -> NNS          0.12
U -> NP           0.08
U -> VBD          0.07
U -> VBG          0.07
U -> VBN          0.06
U -> VB           0.05
U -> RB           0.05
U -> VBZ          0.01
U -> NP$          0.01
VBD -> VBN        0.09
VBN -> VBD        0.05
VB -> NN          0.05
NN -> VB          0.01

Table 2: Estimates for tag conversion

Our smoothing scheme is then the following heuristic modification of Good-Turing:

P(t^i | w^j) = (C(t^i, w^j) + sum_{k1 in T_j} P_{l,m}(k1 -> i)) / (C(w^j) + sum_{k1 in T_j, k2 in T} P_{l,m}(k1 -> k2))

where T_j is the set of tags that w^j has in the training set and T is the set of all tags. This scheme has the following desirable properties:

- As with Good-Turing, smoothing has a small effect on estimates that are based on large counts.
- The difference between closed-class and open-class words is respected: the probability for conversion to a closed class is zero and is not affected by smoothing.
- Prior knowledge about the probabilities of conversion to different tag classes is incorporated. For example, an unseen word w^j is five times as likely to be a noun as an adverb. Our estimate for P(t^i | w^j) is correspondingly five times higher for "NN" than for "RB".

ANALYSIS OF RESULTS

Our result on the test set of 114392 words (the tenth of the Brown corpus not used for training) was 95.81%. Table 3 shows the 20 most frequent errors.

Table 3: Most common errors (a confusion table of VMM tags against correct tags; its layout did not survive extraction; the most frequent confusions are JJ for NN, VBN for VBD, and VBD for VBN).

Three typical examples for the most common error (tagging nouns as adjectives) are "Communist", "public" and "homerun" in the following sentences:

- the Cuban fiasco and the Communist military victories in Laos
- to increase public awareness of the movement
- the best homerun hitter

The words "public" and "communist" can be used as adjectives or nouns. Since in the above sentences an adjective is syntactically more likely, this was the tagging chosen by the VMM.
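The smoothing heuristic above can be sketched as follows, with a hypothetical three-tag set and a toy conversion table; `p_smoothed` is an invented name, not the authors' implementation.

```python
# Toy tag set; AT (article) is closed-class, so no conversion ever targets it.
T = ["NN", "JJ", "AT"]
conv = {("U", "NN"): 0.29, ("U", "JJ"): 0.13}   # P_{l,m}(k1 -> k2); 0 if absent

def p_smoothed(tag_counts, word_count, word_tags, tag):
    """Smoothed P(t|w) for one word: tag_counts maps tag -> C(t, w),
    word_count is C(w), word_tags is T_j (["U"] for an unseen word)."""
    num = tag_counts.get(tag, 0) + sum(conv.get((k1, tag), 0.0)
                                       for k1 in word_tags)
    den = word_count + sum(conv.get((k1, k2), 0.0)
                           for k1 in word_tags for k2 in T)
    return num / den
```

For an unseen word this gives P(NN) = 0.29/0.42 and P(JJ) = 0.13/0.42 but P(AT) = 0: no probability mass ever leaks into the closed class, and the NN/JJ ratio reproduces the 0.29 vs. 0.13 conversion priors.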
The noun "homerun" didn't occur in the training set, therefore the priors for unknown words biased the tagging towards adjectives, again because the position is more typical of an adjective than of a noun.

Two examples of the second most common error (tagging past tense forms ("VBD") as past participles ("VBN")) are "called" and "elected" in the following sentences:

- the party called for government operation of all utilities
- When I come back here after the November election you'll think, you're my man - elected.

Most of the VBD/VBN errors were caused by words that have a higher prior for "VBN", so that in a situation in which both forms are possible according to local syntactic context, "VBN" is chosen. More global syntactic context is necessary to find the right tag "VBD" in the first sentence. The second sentence is an example of one of the tagging mistakes in the Brown corpus: "elected" is clearly used as a past participle, not as a past tense form.

Comparison with other Results

Charniak et al.'s result of 95.97% (Charniak et al., 1993) is slightly better than ours. This difference is probably due to the omission of rare tags that permit reliable prediction of the following tag (the case of "HVZ*" for "hasn't"). Kupiec achieves up to 96.36% correctness (Kupiec, 1992), without using a tagged corpus for training as we do. But the results are not easily comparable with ours since a lexicon is used that lists only possible tags. This can result in increasing the error rate when tags are listed in the lexicon that do not occur in the corpus. But it can also decrease the error rate when errors due to bad tags for rare words are avoided by looking them up in the lexicon. Our error rate on words that do not occur in the training text is 57%, since only the general priors are used for these words in decoding. This error rate could probably be reduced substantially by incorporating outside lexical information.
DISCUSSION

While the learning algorithm of a VMM is efficient and the resulting tagging algorithm is very simple, the accuracy achieved is rather moderate. This is due to several reasons. As mentioned in the introductory sections, any finite memory Markov model cannot capture the recursive nature of natural language. The VMM can accommodate longer statistical dependencies than a traditional full-order Markov model, but due to its Markovian nature long-distance statistical correlations are neglected. Therefore, a VMM based tagger can be used for pruning many of the tagging alternatives using its prediction probability, but not as a complete tagging system. Furthermore, the VMM power can be better utilized in low level language processing tasks such as cleaning up corrupted text, as demonstrated in (Ron et al., 1993).

We currently investigate other stochastic models that can accommodate long distance statistical correlations (see (Singer and Tishby, 1994) for preliminary results). However, there are theoretical clues that those models are much harder to learn (Kearns et al., 1993), including HMM based models (Abe and Warmuth, 1992).

Another drawback of the current tagging scheme is the independence assumption of the underlying tags and the observed words, and the ad-hoc estimation of the static probabilities. We are pursuing a systematic scheme to estimate those probabilities based on Bayesian statistics, by assigning a discrete probability distribution, such as the Dirichlet distribution (Berger, 1985), to each tag class. The a-posteriori probability estimation of the individual words can be estimated from the word counts and the tag class priors. Those priors can be modeled as a mixture of Dirichlet distributions (Antoniak, 1974), where each mixture component would correspond to a different tag class. Currently we estimate the state transition probabilities from the conditional counts assuming a uniform prior.
The same technique can be used to estimate those parameters as well.

ACKNOWLEDGMENT

Part of this work was done while the second author was visiting the Department of Computer and Information Sciences, University of California, Santa Cruz, supported by NSF grant IRI-9123692. We would like to thank Jan Pedersen and Naftali Tishby for helpful suggestions and discussions of this material. Yoram Singer would like to thank the Charles Clore foundation for supporting this research. We express our appreciation to faculty and students for the stimulating atmosphere at the 1993 Connectionist Models Summer School at which the idea for this paper took shape.

References

N. Abe and M. Warmuth, On the computational complexity of approximating distributions by probabilistic automata, Machine Learning, Vol. 9, pp. 205-260, 1992.
C. Antoniak, Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems, Annals of Statistics, Vol. 2, pp. 1152-1174, 1974.
J. Berger, Statistical decision theory and Bayesian analysis, New York: Springer-Verlag, 1985.
E. Brill, Automatic grammar induction and parsing free text: A transformation-based approach, Proceedings of ACL 31, pp. 259-265, 1993.
E. Charniak, Curtis Hendrickson, Neil Jacobson, and Mike Perkowitz, Equations for Part-of-Speech Tagging, Proceedings of the Eleventh National Conference on Artificial Intelligence, pp. 784-789, 1993.
K. W. Church, A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text, Proceedings of ICASSP, 1989.
A. Dempster, N. Laird, and D. Rubin, Maximum Likelihood estimation from Incomplete Data via the EM algorithm, J. Roy. Statist. Soc., Vol. 39(B), pp. 1-38, 1977.
W. N. Francis and F. Kučera, Frequency Analysis of English Usage, Houghton Mifflin, Boston MA, 1982.
F. Jelinek, Robust part-of-speech tagging using a hidden Markov model, IBM Tech. Report, 1985.
M. Kearns, Y. Mansour, D. Ron, R. Rubinfeld, R. Schapire, and L.
Sellie, On the Learnability of Discrete Distributions, The 25th Annual ACM Symposium on Theory of Computing, 1994.
S. Kullback, Information Theory and Statistics, New York: Wiley, 1959.
J. Kupiec, Robust part-of-speech tagging using a hidden Markov model, Computer Speech and Language, Vol. 6, pp. 225-242, 1992.
L. R. Rabiner and B. H. Juang, An Introduction to Hidden Markov Models, IEEE ASSP Magazine, Vol. 3, No. 1, pp. 4-16, 1986.
J. Rissanen, Modeling by shortest data description, Automatica, Vol. 14, pp. 465-471, 1978.
J. Rissanen, Stochastic complexity and modeling, The Annals of Statistics, Vol. 14, No. 3, pp. 1080-1100, 1986.
J. Rissanen and G. G. Langdon, Universal modeling and coding, IEEE Trans. on Info. Theory, IT-27, No. 3, pp. 12-23, 1981.
D. Ron, Y. Singer, and N. Tishby, The power of Amnesia, Advances in Neural Information Processing Systems 6, 1993.
D. Ron, Y. Singer, and N. Tishby, Learning Probabilistic Automata with Variable Memory Length, Proceedings of the 1994 Workshop on Computational Learning Theory, 1994.
Y. Singer and N. Tishby, Inferring Probabilistic Acyclic Automata Using the Minimum Description Length Principle, Proceedings of IEEE Intl. Symp. on Info. Theory, 1994.
R. Weischedel, M. Meteer, R. Schwartz, L. Ramshaw, and J. Palmucci, Coping with ambiguity and unknown words through probabilistic models, Computational Linguistics, 19(2):359-382, 1993.
J. Wu, On the convergence properties of the EM algorithm, Annals of Statistics, Vol. 11, pp. 95-103, 1983.
GRAMMAR SPECIALIZATION THROUGH ENTROPY THRESHOLDS

Christer Samuelsson
Swedish Institute of Computer Science
Box 1263, S-164 28 Kista, Sweden
Internet: christer@sics.se

Abstract

Explanation-based generalization is used to extract a specialized grammar from the original one using a training corpus of parse trees. This allows very much faster parsing and gives a lower error rate, at the price of a small loss in coverage. Previously, it has been necessary to specify the tree-cutting criteria (or operationality criteria) manually; here they are derived automatically from the training set and the desired coverage of the specialized grammar. This is done by assigning an entropy value to each node in the parse trees and cutting in the nodes with sufficiently high entropy values.

BACKGROUND

Previous work by Manny Rayner and the author, see [Samuelsson & Rayner 1991], attempts to tailor an existing natural-language system to a specific application domain by extracting a specialized grammar from the original one using a large set of training examples. The training set is a treebank consisting of implicit parse trees that each specify a verified analysis of an input sentence. The parse trees are implicit in the sense that each node in the tree is the (mnemonic) name of the grammar rule resolved on at that point, rather than the syntactic category of the LHS of the grammar rule as is the case in an ordinary parse tree. Figure 1 shows five examples of implicit parse trees. The analyses are verified in the sense that each analysis has been judged to be the preferred one for that input sentence by a human evaluator using a semi-automatic evaluation method. A new grammar is created by cutting up each implicit parse tree in the treebank at appropriate points, creating a set of new rules that consist of chunks of original grammar rules.
The LHS of each new rule will be the LHS phrase of the original grammar rule at the root of the tree chunk and the RHS will be the RHS phrases of the rules in the leaves of the tree chunk. For example, cutting up the first parse tree of Figure 1 at the NP of the rule vp_v_np yields rules 2 and 3 of Figure 3.

The idea behind this is to create a specialized grammar that retains a high coverage but allows very much faster parsing. This has turned out to be possible: speedups of in median 60 times compared to using the original grammar were achieved at a cost in coverage of about ten percent, see [Samuelsson 1994a].¹ Another benefit from the method is a decreased error rate when the system is required to select a preferred analysis. In these experiments the scheme was applied to the grammar of a version of the SRI Core Language Engine [Alshawi ed. 1992] adapted to the ATIS domain for a speech-translation task [Rayner et al 1993] and large corpora of real user data collected using Wizard-of-Oz simulation. The resulting specialized grammar was compiled into LR parsing tables, and a special LR parser exploited their special properties, see [Samuelsson 1994b].

The technical vehicle previously used to extract the specialized grammar is explanation-based generalization (EBG), see e.g. [Mitchell et al 1986]. Very briefly, this consists of redoing the derivation of each training example top-down by letting the implicit parse tree drive a rule expansion process, and aborting the expansion of the specialized rule currently being extracted if the current node of the implicit parse tree meets a set of tree-cutting criteria². In this case the extraction process is invoked recursively to extract subrules rooted in the current node.
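The cutting-up step can be sketched as follows. This is a minimal illustration, not the Core Language Engine implementation; the tree encoding and the function name are invented, and for simplicity a cut daughter is labelled with the name of the rule at its root rather than with its phrase category (NP).

```python
def extract_rules(tree, cutrules, rules=None):
    """Cut `tree` at subtrees rooted in `cutrules`; collect flattened rules.
    A tree is (rule, [daughters]) or ("lex", word)."""
    if rules is None:
        rules = []
    rhs = []
    def walk(node, top):
        rule, daughters = node
        if rule == "lex":
            rhs.append("lex")
        elif not top and rule in cutrules:
            rhs.append(rule)                  # cut here: start a new subrule
            extract_rules(node, cutrules, rules)
        else:
            for d in daughters:
                walk(d, False)
    walk(tree, True)
    rules.append((tree[0], tuple(rhs)))
    return rules

# First training example of Figure 1: "I want a ticket".
tree1 = ("s_np_vp",
         [("np_pron", [("lex", "I")]),
          ("vp_v_np", [("lex", "want"),
                       ("np_det_n", [("lex", "a"), ("lex", "ticket")])])])

# Cutting at the NP (np_det_n) under vp_v_np yields analogues of
# rules 2 ("S => Pron V NP") and 3 ("NP => Det N") of Figure 3.
rules = extract_rules(tree1, {"np_det_n"})
```

The outer rule flattens everything down to the cut, which is why no separate VP rule appears: the tree chunk from s_np_vp to the cut becomes one specialized rule.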
The tree-cutting criteria can be local ("The LHS of the original grammar rule is an NP,") or dependent on the rest of the parse tree ("that doesn't dominate the empty string only,") and previous choices of nodes to cut at ("and there is no cut above the current node that is also labelled NP."). A problem not fully explored yet is how to arrive at an optimal choice of tree-cutting criteria. In the previous scheme, these must be specified manually, and the choice is left to the designer's intuitions. This article addresses the problem of automating this process and presents a method where the nodes to cut at are selected automatically using the information-theoretical concept of entropy. Entropy is well-known from physics, but the concept of perplexity is perhaps better known in the speech-recognition and natural-language communities. For this reason, we will review the concept of entropy at this point, and discuss its relation to perplexity.

¹Other more easily obtainable publications about this are in preparation.
²These are usually referred to as "operationality criteria" in the EBG literature.

Entropy

Entropy is a measure of disorder. Assume for example that a physical system can be in any of N states, and that it will be in state s_i with probability p_i. The entropy S of that system is then

S = sum_{i=1}^{N} -p_i ln p_i

If each state has equal probability, i.e. if p_i = 1/N for all i, then

S = sum_{i=1}^{N} -(1/N) ln(1/N) = ln N

In this case the entropy is simply the logarithm of the number of states the system can be in. To take a linguistic example, assume that we are trying to predict the next word in a word string from the previous ones. Let the next word be w_k and the previous word string w_1, ..., w_{k-1}. Assume further that we have a language model that estimates the probability of each possible next word (conditional on the previous word string). Let these probabilities be p_i for i = 1, ..., N for the N possible next words w_k^i, i.e. p_i = P(w_k^i | w_1, ..., w_{k-1}).
The entropy is then a measure of how hard this prediction problem is:

S(w_1, ..., w_{k-1}) = sum_{i=1}^{N} -P(w_k^i | w_1, ..., w_{k-1}) ln P(w_k^i | w_1, ..., w_{k-1})

If all words have equal probability, the entropy is the logarithm of the branching factor at this point in the input string.

Perplexity

Perplexity is related to entropy as follows. The observed perplexity P_o of a language model with respect to an (imaginary) infinite test sequence w_1, w_2, ... is defined through the formula (see [Jelinek 1990])

ln P_o = lim_{n -> inf} -(1/n) ln P(w_1, ..., w_n)

Here P(w_1, ..., w_n) denotes the probability of the word string w_1, ..., w_n. Since we cannot experimentally measure infinite limits, we terminate after a finite test string w_1, ..., w_M, arriving at the measured perplexity P_m:

ln P_m = -(1/M) ln P(w_1, ..., w_M)

Rewriting P(w_1, ..., w_k) as P(w_k | w_1, ..., w_{k-1}) * P(w_1, ..., w_{k-1}) gives us

ln P_m = (1/M) sum_{k=1}^{M} -ln P(w_k | w_1, ..., w_{k-1})

Let us call the exponential of the expectation value of -ln P(w | String) the local perplexity P_l(String), which can be used as a measure of the information content of the initial String:

ln P_l(w_1, ..., w_{k-1}) = E(-ln P(w_k | w_1, ..., w_{k-1})) = sum_{i=1}^{N} -P(w_k^i | w_1, ..., w_{k-1}) ln P(w_k^i | w_1, ..., w_{k-1})

Here E(q) is the expectation value of q and the summation is carried out over all N possible next words w_k^i. Comparing this with the last equation of the previous section, we see that this is precisely the entropy S at point k in the input string. Thus, the entropy is the logarithm of the local perplexity at a given point in the word string. If all words are equally probable, then the local perplexity is simply the branching factor at this point. If the probabilities differ, the local perplexity can be viewed as a generalized branching factor that takes this into account.

Tree entropy

We now turn to the task of calculating the entropy of a node in a parse tree. This can be done in many different ways; we will only describe two different ones here.
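The entropy and perplexity definitions above translate directly into code; this small sketch (invented function names) computes an entropy, the corresponding local perplexity, and a measured perplexity from per-word conditional probabilities.

```python
from math import exp, log

def entropy(probs):
    """S = sum_i -p_i ln p_i (natural logarithm, as in the text)."""
    return sum(-p * log(p) for p in probs if p > 0)

def local_perplexity(probs):
    """exp(S): a generalized branching factor for the next-word distribution."""
    return exp(entropy(probs))

def measured_perplexity(cond_probs):
    """P_m from the per-word conditional probabilities p(w_k | w_1..w_{k-1})."""
    return exp(sum(-log(p) for p in cond_probs) / len(cond_probs))
```

With four equally likely next words the entropy is ln 4 and the local perplexity is exactly the branching factor 4, as the text observes.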
Consider the small test and training sets of Figure 1. Assume that we wish to calculate the entropy of the phrases of the rule PP -> Prep NP, which is named pp_prep_np. In the training set, the LHS PP is attached to the RHS PP of the rule np_np_pp in two cases and to the RHS PP of the rule vp_vp_pp in one case, giving it the entropy -(2/3) ln(2/3) - (1/3) ln(1/3) ~ 0.64. The RHS preposition Prep is always a lexical lookup, and the entropy is thus zero³, while the RHS NP in one case attaches to the LHS of rule np_det_n, in one case to the LHS of rule np_num, and in one case is a lexical lookup, and the resulting entropy is thus -ln(1/3) ~ 1.10. The complete table is given here:

Rule        LHS   1st RHS  2nd RHS
s_np_vp     0.00  0.56     0.56
np_np_pp    0.00  0.00     0.00
np_det_n    1.33  0.00     0.00
np_pron     0.00  0.00     -
np_num      0.00  0.00     -
vp_vp_pp    0.00  0.00     0.00
vp_v_np     0.00  0.00     0.64
vp_v        0.00  0.00     -
pp_prep_np  0.64  0.00     1.10

³Since there is only one alternative, namely a lexical lookup. In fact, the scheme could easily be extended to encompass including lexical lookups of particular words into the specialized rules by distinguishing lexical lookups of different words; the entropy would then determine whether or not to cut in a node corresponding to a lookup, just as for any other node, as is described in the following.

If we want to calculate the entropy of a particular node in a parse tree, we can either simply use the phrase
Training examples:

  s_np_vp(np_pron(lex "I"),
          vp_v_np(lex "want",
                  np_det_n(lex "a", lex "ticket")))

  s_np_vp(np_pron(lex "I"),
          vp_v_np(lex "need",
                  np_np_pp(np_det_n(lex "a", lex "flight"),
                           pp_prep_np(lex "to", lex "Boston"))))

  s_np_vp(np_det_n(lex "The", lex "flight"),
          vp_vp_pp(vp_v(lex "departs"),
                   pp_prep_np(lex "at", np_num(lex "ten"))))

  s_np_vp(np_pron(lex "We"),
          vp_v_np(lex "have",
                  np_np_pp(np_det_n(lex "a", lex "departure"),
                           pp_prep_np(lex "in",
                                      np_det_n(lex "the", lex "morning")))))

Test example:

  s_np_vp(np_pron(lex "He"),
          vp_v_np(lex "booked",
                  np_np_pp(np_det_n(lex "a", lex "ticket"),
                           pp_prep_np(lex "for",
                                      np_np_pp(np_det_n(lex "a", lex "flight"),
                                               pp_prep_np(lex "to", lex "Dallas"))))))

Figure 1: A tiny training set

entropy of the RHS node, or take the sum of the entropies of the two phrases that are unified in this node. For example, the entropy when the RHS NP of the rule pp_prep_np is unified with the LHS of the rule np_det_n will in the former case be 1.10 and in the latter case be 1.10 + 1.33 = 2.43.

SCHEME OVERVIEW

In the following scheme, the desired coverage of the specialized grammar is prescribed, and the parse trees are cut up at appropriate places without having to specify the tree-cutting criteria manually:

1. Index the treebank in an and-or tree where the or-nodes correspond to alternative choices of grammar rules to expand with and the and-nodes correspond to the RHS phrases of each grammar rule. Cutting up the parse trees will involve selecting a set of or-nodes in the and-or tree. Let us call these nodes "cutnodes".

2. Calculate the entropy of each or-node. We will cut at each node whose entropy exceeds a threshold value. The rationale for this is that we wish to cut up the parse trees where we can expect a lot of variation, i.e. where it is difficult to predict which rule will be resolved on next.
This corresponds exactly to the nodes in the and-or tree that exhibit high entropy values.

3. The nodes of the and-or tree must be partitioned into equivalence classes dependent on the choice of cutnodes in order to avoid redundant derivations at parse time.⁴ Thus, selecting some particular node as a cutnode may cause other nodes to also become cutnodes, even though their entropies are not above the threshold.

4. Determine a threshold entropy that yields the desired coverage. This can be done using for example interval bisection.

5. Cut up the training examples by matching them against the and-or tree and cutting at the determined cutnodes.

It is interesting to note that a textbook method for constructing decision trees for classification from attribute-value pairs is to minimize the (weighted average of the) remaining entropy⁵ over all possible choices of root attribute, see [Quinlan 1986].

⁴This can most easily be seen as follows: Imagine two identical, but different portions of the and-or tree. If the roots and leaves of these portions are all selected as cutnodes, but the distribution of cutnodes within them differ, then we will introduce multiple ways of deriving the portions of the parse trees that match any of these two portions of the and-or tree.
⁵Defined slightly differently, as described below.

DETAILED SCHEME

First, the treebank is partitioned into a training set and a test set. The training set will be indexed in an and-or tree and used to extract the specialized rules. The test set will be used to check the coverage of the set of extracted rules.

Indexing the treebank

Then, the set of implicit parse trees is stored in an and-or tree. The parse trees have the general form of a rule identifier Id dominating a list of subtrees or a word of the training sentence. From the current or-node of the and-or tree there will be arcs labelled with rule identifiers corresponding to previously stored parse trees.
From this or-node we follow an arc labelled Id, or add a new one if there is none. We then reach (or add) an and-node indicating the RHS phrases of the grammar rule named Id. Here we follow each arc leading out from this and-node in turn to accommodate all the subtrees in the list. Each such arc leads to an or-node. We have now reached a point of recursion and can index the corresponding subtree. The recursion terminates if Id is the special rule identifier lex and thus dominates a word of the training sentence, rather than a list of subtrees. Indexing the four training examples of Figure 1 will result in the and-or tree of Figure 2.

Finding the cutnodes

Next, we find the set of nodes whose entropies exceed a threshold value. First we need to calculate the entropy of each or-node. We will here describe three different ways of doing this, but there are many others. Before doing this, though, we will discuss the question of redundancy in the resulting set of specialized rules.

We must equate the cutnodes that correspond to the same type of phrase. This means that if we cut at a node corresponding to e.g. an NP, i.e. where the arcs incident from it are labelled with grammar rules whose left-hand-sides are NPs, we must allow all specialized NP rules to be potentially applicable at this point, not just the ones that are rooted in this node. This requires that we by transitivity equate the nodes that are dominated by a cutnode in a structurally equivalent way; if there is a path from a cutnode c1 to a node n1 and a path from a cutnode c2 to a node n2 with an identical sequence of labels, the two nodes n1 and n2 must be equated. Now if n1 is a cutnode, then n2 must also be a cutnode even if it has a low entropy value. The following iterative scheme accomplishes this:

Function N*(N^0)
1. i := 0;
2. Repeat i := i + 1; N^i := N(N^{i-1});
3. Until N^i = N^{i-1}
4. Return N^i;
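Function N* above is a straightforward fixpoint iteration; a minimal sketch, with a hypothetical one-step closure standing in for N(.):

```python
def n_star(n0, induce):
    """Iterate N^i := induce(N^{i-1}) until a fixpoint is reached."""
    prev, cur = None, frozenset(n0)
    while cur != prev:
        prev, cur = cur, frozenset(induce(cur))
    return cur

# Hypothetical one-step closure: cutting 'a' forces the structurally
# equivalent node 'b', and 'b' in turn forces 'c'.
forced = {"a": "b", "b": "c"}
result = n_star({"a"}, lambda s: s | {forced[x] for x in s if x in forced})
```

Since the closure only ever adds nodes and the and-or tree is finite, the iteration is guaranteed to terminate.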
Return N~; root I s_np_vp /\ / k / \ 1/ \2 / X / X / X ni(0.89) n2(0.56) IX IX np_pronl \np_det_n I \ / \ / \ 11 1/\2 / \ n n n I \ lex I lex I I lex / \ / \ vp_v_np/ \vp_vp_pp / \ / \ / \ /\ /k il X2 il \2 / X / \ n n3(1.08) (O.O0)n7 n8(0.64) lex I /\ vp_v I I pp_prep_np / \ II 11 \2 np_det_n/ \np_np_pp n n n9(l. I0) / \ lex~ lex~ I np_num I \ ll ilk2 I\ n n n / \ llex lexl Ilex 1/ \2 / \ / \ (1.33)n4 np_de¢_n I 11 \2 n lexl n5(0.64) Ipp_prep_np /\ n 11 \2 flex / \ n n6(1.76) lexl /\ lex/ \np_det_n / \ 1/\2 nn lexl Jlex Figure 2: The resulting and-or tree 191 Here N(N j) is the set of cutnodes NJ augmented with those induced in one step by selecting N~ as the set of cutnodes. In ~ practice this was accomplished by compil- ing an and-or graph from the and-or tree and the set of selected cutnodes, where each set of equated nodes constituted a vertex of the graph, and traversing it. In the simplest scheme for calculating the entropy of an or-node, only the RHS phrase of the parent rule, i.e. the dominating and-node, contributes to the en- tropy, and there is in fact no need to employ an and-or tree at all, since the tree-cutting criterion becomes local to the parse tree being cut up. In a slightly more elaborate scheme, we sum over the entropies of the nodes of the parse trees that match this node of the and-or tree. However, instead of letting each daughter node contribute with the full entropy of the LHS phrase of the corresponding grammar rule, these entropies are weighted with the relative frequency of use of each alternative choice of grammar rule. For example, the entropy of node n3 of the and- or tree of Figure 2 will be calculated as follows: The mother rule vp_v_np will contribute the entropy asso- ciated with the RHS NP, which is, referring to the table above, 0.64. There are 2 choices of rules to resolve on, namely np_det_n and np_np_pp with relative frequen- cies ½ and ~ respectively. 
Again referring to the entropy table above, we find that the LHS phrases of these rules have entropy 1.33 and 0.00 respectively. This results in the following entropy for node n3:

S(n3) = 0.64 + (1/3) * 1.33 + (2/3) * 0.00 = 1.08

The following function determines the set of cutnodes N that either exceed the entropy threshold, or are induced by structural equivalence:

Function N(S_min)
1. N := {n : S(n) > S_min};
2. Return N*(N);

Here S(n) is the entropy of node n.

In a third version of the scheme, the relative frequencies of the daughters of the or-nodes are used directly to calculate the node entropy:

S(n) = sum_{n_i : (n,n_i) in A} -P(n_i | n) ln P(n_i | n)

Here A is the set of arcs, and (n, n_i) is an arc from n to n_i. This is basically the entropy used in [Quinlan 1986]. Unfortunately, this tends to promote daughters of cutnodes to in turn become cutnodes, and also results in a problem with instability, especially in conjunction with the additional constraints discussed in a later section, since the entropy of each node is now dependent on the choice of cutnodes. We must redefine the function N(S) accordingly:

Function N(S_min)
1. N^0 := {}; i := 0;
2. Repeat i := i + 1; N := {n : S(n | N^{i-1}) > S_min}; N^i := N*(N);
3. Until N^i = N^{i-1}
4. Return N^i;

Here S(n | N^j) is the entropy of node n given that the set of cutnodes is N^j. Convergence can be ensured⁶ by modifying the termination criterion to be

3. Until exists j in [0, i-1] : rho(N^i, N^j) < delta(N^i, N^j)

for some appropriate set metric rho(N1, N2) (e.g. the size of the symmetric difference) and norm-like function delta(N1, N2) (e.g. ten percent of the sum of the sizes), but this is to little avail, since we are not interested in solutions far away from the initial assignment of cutnodes.

Finding the threshold

We will use a simple interval-bisection technique for finding the appropriate threshold value. We operate with a range where the lower bound gives at least the desired coverage, but where the higher bound doesn't.
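The interval-bisection search just outlined can be sketched as follows. The coverage function here is a hypothetical monotone stand-in; the real C(N) is computed by parsing the test set with the grammar cut at the candidate threshold.

```python
def find_threshold(coverage, target, s_low=0.0, s_high=100.0, eps=1e-3):
    """Return the largest threshold (within eps) whose cutnodes still give
    at least the target coverage, assuming coverage falls as the
    threshold rises."""
    while s_high - s_low > eps:
        s_mid = (s_low + s_high) / 2
        if coverage(s_mid) >= target:
            s_low = s_mid        # coverage still sufficient: raise threshold
        else:
            s_high = s_mid
    return s_low

# Toy coverage model echoing the running example: full coverage below 1.08.
threshold = find_threshold(lambda s: 1.0 if s < 1.08 else 0.5, 0.9)
```

The lower bound always satisfies the coverage requirement, so returning it is safe even when the loop stops early.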
We will take the midpoint of the range, find the cut- nodes corresponding to this value of the threshold, and check if this gives us the desired coverage. If it does, this becomes the new lower bound, otherwise it becomes the new upper bound. If the lower and upper bounds are close to each other, we stop and return the nodes corresponding to the lower bound. This termination cri- terion can of course be replaced with something more elaborate. This can be implemented as follows: Function N(Co) 1. Stow := 0; Shigh := largenumber; Nc := N(0); 2. If Shigh - Sto~o < 6s then goto 6 Sto,,, + Sh i•h . else Staid := 2 ' 3. N := N(Smla); 4. If c(g) < Co then Shiflh :: Srnid else Sio~, := Smld; N¢ := N; 5. Goto 2; 6. Return Arc; Here C(N) is the coverage on the test set of the spe- cialized grammar determined by the set of cutnodes N. Actually, we also need to handle the boundary case where no assignment of cutnodes gives the required cov- erage. Likewise, the coverages of the upper and lower bound may be far apart even though the entropy dif- ference is small, and vice versa. These problems can readily be taken care of by modifying the termination criterion, but the solutions have been omitted for the sake of clarity. 6albeit in exponential time 192 1) "S => Det N V Prep ~IP" s_np_vp /\ / \ np_det_n vp_vp_pp /\ /\ lex lex vp_v pp_prep_np I /\ lex lex NP 2) "S => Pron V NP" s_npvp /\ np_pron vp_v_np I /\ lex lex NP 3) "NP => Det N" npdet_n /\ lex lex 4) "NP => NP Prep NP" np_np_pp /\ NP pp_prep_np /\ lex NP 5) "NP => Nu~" np _hUm I lex Figure 3: The specialized rules In the running example, using the weighted sum of the phrase entropies as the node entropy, if any thresh- old value less than 1.08 is chosen, this will yield any desired coverage, since the single test example of Fig- ure 1 is then covered. Retrieving the specialized rules When retrieving the specialized rules, we will match each training example against the and-or tree. 
If the current node is a cutnode, we will cut at this point in the training example. The resulting rules will be the set of cut-up training examples. A threshold value of say 1.00 in our example will yield the set of cutnodes {u3, n4, n6, ng} and result in the set of specialized rules of Figure 3. If we simply let the and-or tree determine the set of specialized rules, instead of using it to cut up the training examples, we will in general arrive at a larger number of rules, since some combinations of choices in 6) "S => Det N V NP" s_np_vp /\ np_det_n vp_vnp /\ /\ lex lex lex NP 7) "S => Pron Y Prep NP" s_np_vp /\ np_pren vpvp_pp I /\ lex vp_v pp_prep_np I /\ lex lex NP Figure 4: Additional specialized rules the and-or tree may not correspond to any training ex- ample. If this latter strategy is used in our example, this will give us the two extra rules of Figure 4. Note that they not correspond to any training example. ADDITIONAL CONSTRAINTS As mentioned at the beginning, the specialized gram- mar is compiled into LR parsing tables. Just finding any set of cutnodes that yields the desired coverage will not necessarily result in a grammar that is well suited for LP~ parsing. In particular, LR parsers, like any other parsers employing a bottom-up parsing strat- egy, do not blend well with empty productions. This is because without top-down filtering, any empty produc- tion is applicable at any point in the input string, and a naive bottom-up parser will loop indefinitely. The LR parsing tables constitute a type of top-down filtering, but this may not be sufficient to guarantee termination, and in any case, a lot of spurious applications of empty productions will most likely take place, degrading per- formance. For these reasons we will not allow learned rules whose RHSs are empty, but simply refrain from cutting in nodes of the parse trees that do not dominate at least one lexical lookup. 
Even so, the scheme described this far is not totally successful, the performance is not as good as using hand-coded tree-cutting criteria. This is conjectured to be an effect of the reduction lengths being far too short. The first reason for this is that for any spurious rule reduction to take place, the corresponding RHS phrases must be on the stack. The likelihood for this to happen by chance decreases drastically with increased rule length. A second reason for this is that the number of states visited will decrease with increasing reduction length. This can most easily be seen by noting that the number of states visited by a deterministic LR parser equals the number of shift actions plus the number of reductions, and equals the number of nodes in the cot- 193 responding parse tree, and the longer the reductions, the more shallow the parse tree. The hand-coded operationality criteria result in an average rule length of four, and a distribution of reduc- tion lengths that is such that only 17 percent are of length one and 11 percent are of length two. This is in sharp contrast to what the above scheme accomplishes; the corresponding figures are about 20 or 30 percent each for lengths one and two. An attempted solution to this problem is to impose restrictions on neighbouring cutnodes. This can be done in several ways; one that has been tested is to select for each rule the RHS phrase with the least en- tropy, and prescribe that if a node corresponding to the LHS of the rule is chosen as a cutnode, then no node corresponding to this RHS phrase may be chosen as a cutnode, and vice versa. In case of such a conflict, the node (class) with the lowest entropy is removed from the set of cutnodes. We modify the function N* to handle this: 2. Repeat i := i+ 1; N i := N(N i-1) \ B(Ni-1); Here B(NJ) is the set of nodes in NJ that should be re- moved to avoid violating the constraints on neighbour- ing cutnodes. 
It is also necessary to modify the termination criterion, as was done for the function N(S_min) above. Now we can no longer safely assume that the coverage increases with decreased entropy, and we must also modify the interval-bisection scheme to handle this. It has proved reasonable to assume that the coverage is monotone on both sides of some maximum, which simplifies this task considerably.

EXPERIMENTAL RESULTS

A module realizing this scheme has been implemented and applied to the very setup used for the previous experiments with the hand-coded tree-cutting criteria, see [Samuelsson 1994a]. 2100 of the verified parse trees constituted the training set, while 230 of them were used for the test set. The tables below summarize the results for some grammars of different coverage extracted using:

1. Hand-coded tree-cutting criteria.
2. Induced tree-cutting criteria where the node entropy was taken to be the phrase entropy of the RHS phrase of the dominating grammar rule.
3. Induced tree-cutting criteria where the node entropy was the sum of the phrase entropy of the RHS phrase of the dominating grammar rule and the weighted sum of the phrase entropies of the LHSs of the alternative choices of grammar rules to resolve on.

In the latter two cases experiments were carried out both with and without the restrictions on neighbouring cutnodes discussed in the previous section.

Hand-coded tree-cutting criteria
Coverage   Reduction lengths (%)       Times (ms)
           1     2     3     >= 4      Ave.   Med.
90.2 %     17.3  11.3  21.6  49.8      72.6   48.0

RHS phrase entropy. Neighbour restrictions
Coverage   Reduction lengths (%)       Times (ms)
           1     2     3     >= 4      Ave.   Med.
75.8 %     11.8  26.1  17.7  44.4      128    38.5
80.5 %     11.5  27.4  20.0  41.1      133    47.2
85.3 %     14.0  37.3  24.3  24.4      241    70.5

RHS phrase entropy. No neighbour restrictions
Coverage   Reduction lengths (%)       Times (ms)
           1     2     3     >= 4      Ave.   Med.
75.8 %     8.3   12.4  25.6  53.7      76.7   37.0
79.7 %     9.0   16.2  26.9  47.9      99.1   49.4
85.3 %     8.4   17.3  31.1  43.2      186    74.0
90.9 %     18.2  27.5  21.7  32.6      469    126

Mixed phrase entropies. Neighbour restrictions
Coverage   Reduction lengths (%)       Times (ms)
           1     2     3     >= 4      Ave.   Med.
75.3 %     6.1   11.7  30.8  51.4      115.4  37.5

Mixed phrase entropies. No neighbour restrictions
Coverage   Reduction lengths (%)       Times (ms)
           1     2     3     >= 4      Ave.   Med.
75 %       16.1  13.8  19.8  50.3      700    92.0
80 %       18.3  16.3  20.1  45.3      842    108

With the mixed entropy scheme it seems important to include the restrictions on neighbouring cutnodes, while this does not seem to be the case with the RHS phrase entropy scheme. A potential explanation for the significantly higher average parsing times of all grammars extracted using the induced tree-cutting criteria is that these are in general recursive, while the hand-coded criteria do not allow recursion, and thus only produce grammars that generate finite languages.

Although the hand-coded tree-cutting criteria are substantially better than the induced ones, we must remember that the former produce a grammar that in median allows 60 times faster processing than the original grammar and parser do. This means that even if the induced criteria produce grammars that are a factor two or three slower than this, they are still approximately one and a half orders of magnitude faster than the original setup. Also, this is by no means a closed research issue, but merely a first attempt to realize the scheme, and there is no doubt in my mind that it can be improved on most substantially.

SUMMARY

This article proposes a method for automatically finding the appropriate tree-cutting criteria in the EBG scheme, rather than having to hand-code them. The EBG scheme has previously proved most successful for tuning a natural-language grammar to a specific application domain and thereby achieving very much faster parsing, at the cost of a small reduction in coverage.
Instruments have been developed and tested for controlling the coverage and for avoiding a large number of short reductions, which is argued to be the main source of poor parser performance. Although these instruments are currently slightly too blunt to enable producing grammars with the same high performance as the hand-coded tree-cutting criteria, they can most probably be sharpened by future research, and in particular refined to achieve the delicate balance between high coverage and a distribution of reduction lengths that is sufficiently biased towards long reductions. Also, banning recursion by category specialization, i.e. by distinguishing, for example, NPs that dominate other NPs from those that do not, will be investigated, since this is believed to be an important ingredient in the version of the scheme employing hand-coded tree-cutting criteria.

ACKNOWLEDGEMENTS

This research was made possible by the basic research programme at the Swedish Institute of Computer Science (SICS). I wish to thank Manny Rayner of SRI International, Cambridge, for help and support in matters pertaining to the treebank, and for enlightening discussions of the scheme as a whole. I also wish to thank the NLP group at SICS for contributing to a very conducive atmosphere to work in, and in particular Ivan Bretan for valuable comments on draft versions of this article. Finally, I wish to thank the anonymous reviewers for their comments.

References

[Alshawi ed. 1992] Hiyan Alshawi, editor. The Core Language Engine, MIT Press 1992.
[Jelinek 1990] Fred Jelinek. "Self-Organizing Language Models for Speech Recognition", in Readings in Speech Recognition, pp. 450-506, Morgan Kaufmann 1990.
[Mitchell et al 1986] Tom M. Mitchell, Richard M. Keller and Smadar T. Kedar-Cabelli. "Explanation-Based Generalization: A Unifying View", in Machine Learning 1, No. 1, pp. 47-80, 1986.
[Quinlan 1986] J. Ross Quinlan. "Induction of Decision Trees", in Machine Learning 1, No. 1, pp.
81-107, 1986.
[Rayner et al 1993] M. Rayner, H. Alshawi, I. Bretan, D. Carter, V. Digalakis, B. Gambäck, J. Kaja, J. Karlgren, B. Lyberg, P. Price, S. Pulman and C. Samuelsson. "A Speech to Speech Translation System Built From Standard Components", in Procs. ARPA Workshop on Human Language Technology, Princeton, NJ 1993.
[Samuelsson 1994a] Christer Samuelsson. Fast Natural-Language Parsing Using Explanation-Based Learning, PhD thesis, Royal Institute of Technology, Stockholm, Sweden 1994.
[Samuelsson 1994b] Christer Samuelsson. "Notes on LR Parser Design", to appear in Procs. 15th International Conference on Computational Linguistics, Kyoto, Japan 1994.
[Samuelsson and Rayner 1991] Christer Samuelsson and Manny Rayner. "Quantitative Evaluation of Explanation-Based Learning as an Optimization Tool for a Large-Scale Natural Language System", in Procs. 12th International Joint Conference on Artificial Intelligence, pp. 609-615, Sydney, Australia 1991.
1994
OPTIMIZING THE COMPUTATIONAL LEXICALIZATION OF LARGE GRAMMARS

Christian JACQUEMIN
Institut de Recherche en Informatique de Nantes (IRIN)
IUT de Nantes - 3, rue du Maréchal Joffre
F-44041 NANTES Cedex 01 - FRANCE
e-mail: [email protected]

Abstract

The computational lexicalization of a grammar is the optimization of the links between lexicalized rules and lexical items in order to improve the quality of the bottom-up filtering during parsing. This problem is NP-complete and intractable on large grammars. An approximation algorithm is presented. The quality of the suboptimal solution is evaluated on real-world grammars as well as on randomly generated ones.

Introduction

Lexicalized grammar formalisms, and more specifically Lexicalized Tree Adjoining Grammars (LTAGs), give a lexical account of phenomena which cannot be considered as purely syntactic (Schabes et al., 1990). A formalism is said to be lexicalized if it is composed of structures or rules associated with each lexical item, and of operations to derive new structures from these elementary ones. The choice of the lexical anchor of a rule is supposed to be determined on purely linguistic grounds. This is the linguistic side of lexicalization, which links to each lexical head a set of minimal and complete structures. But lexicalization also has a computational aspect, because parsing algorithms for lexicalized grammars can take advantage of lexical links through a two-step strategy (Schabes and Joshi, 1990). The first step is the selection of the set of rules or elementary structures associated with the lexical items in the input sentence¹. In the second step, the parser uses the rules filtered by the first step. The two kinds of anchors corresponding to these two aspects of lexicalization can be considered separately:

• The linguistic anchors are used to access the grammar, update the data, gather together items with similar structures, organize the grammar into a hierarchy...
• The computational anchors are used to select the relevant rules during the first step of parsing and to improve the computational and conceptual tractability of the parsing algorithm.

Unlike linguistic lexicalization, computational anchoring concerns any of the lexical items found in a rule and is only motivated by the quality of the induced filtering. For example, the systematic linguistic anchoring of the rules describing "Nmetal alloy" to their head noun "alloy" should be avoided and replaced by a more distributed lexicalization. Then, only a few rules "Nmetal alloy" will be activated when encountering the word "alloy" in the input.

In this paper, we investigate the problem of the optimization of computational lexicalization. We study how to choose the computational anchors of a lexicalized grammar so that the distribution of the rules onto the lexical items is the most uniform possible with respect to rule weights. Although introduced with reference to LTAGs, this optimization concerns any portion of a grammar where rules include one or more potential lexical anchors, such as Head Driven Phrase Structure Grammar (Pollard and Sag, 1987) or Lexicalized Context-Free Grammar (Schabes and Waters, 1993). This algorithm is currently used to good effect in FASTR, a unification-based parser for terminology extraction from large corpora (Jacquemin, 1994). In this framework, terms are represented by rules in a lexicalized constraint-based formalism. Due to the large size of the grammar, the quality of the lexicalization is a determining factor for the computational tractability of the application. FASTR is applied to automatic indexing on industrial data and lays a strong emphasis on the handling of term variations (Jacquemin and Royauté, 1994).

¹ The computational anchor of a rule should not be optional (viz. included in a disjunction), to make sure that it will be encountered in any string derived from this rule.

The remainder of this paper is organized as follows.
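As a concrete aside before the formal development, the two-step filtering just described can be written down in a few lines (the rule and anchor tables below are a toy sketch using the paper's idiom examples, not FASTR's actual representation):

```python
# Step 1: select the rules whose computational anchor occurs in the input.
# Step 2 (preprocessing): discard selected rules containing a lexical item
# absent from the input.

def filter_rules(rules, anchors, sentence):
    """rules: rule name -> set of lexical items; anchors: rule name -> anchor."""
    words = set(sentence.split())
    selected = [r for r in rules if anchors[r] in words]
    return [r for r in selected if rules[r] <= words]

rules = {"from_time_to_time": {"from", "time", "to"},
         "high_grade": {"high", "grade"},
         "high_grade_steel": {"high", "grade", "steel"}}
# A distributed lexicalization: not every rule is anchored to its head.
anchors = {"from_time_to_time": "time",
           "high_grade": "grade",
           "high_grade_steel": "steel"}
print(filter_rules(rules, anchors, "a high grade alloy"))  # ['high_grade']
```

With "alloy"-style head anchoring, every "high grade ..." rule would wake up on the word "grade" or "alloy"; the distributed anchoring activates only the rules that can actually match.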
In the following part, we prove that the problem of the Lexicalization of a Grammar is NP-complete, and hence that no better algorithm is known to solve it than an exponential exhaustive search. As this solution is intractable on large data, an approximation algorithm is presented which has a computational-time complexity proportional to the cubic size of the grammar. In the last part, an evaluation of this algorithm on real-world grammars of 6,622 and 71,623 rules, as well as on randomly generated ones, confirms its computational tractability and the quality of the lexicalization.

The Problem of the Lexicalization of a Grammar

Given a lexicalized grammar, this part describes the problem of the optimization of the computational lexicalization. The solution to this problem is a lexicalization function (henceforth a lexicalization) which associates to each grammar rule one of the lexical items it includes (its lexical anchor). A lexicalization is optimized in our sense if it induces an optimal preprocessing of the grammar. Preprocessing is intended to activate the rules whose lexical anchors are in the input and to perform all the possible filtering of these rules before the parsing algorithm proper. Mainly, preprocessing discards those rules selected through lexicalization that include at least one lexical item which is not found in the input.

The first step of the optimization of the lexicalization is to assign a weight to each rule. The weight is assumed to represent the cost of the corresponding rule during the preprocessing. For a given lexicalization, the weight of a lexical item is the sum of the weights of the rules linked to it. The weights are chosen so that a uniform distribution of the rules onto the lexical items ensures an optimal preprocessing. Thus, the problem is to find an anchoring which achieves such a uniform distribution. The weights depend on the physical constraints of the system.
For example, the weight is the number of nodes if the memory size is the critical point. In this case, a uniform distribution ensures that the rules linked to an item will not require more than a given memory space. The weight is the number of terminal or non-terminal nodes if the computational cost has to be minimized. Experimental measures can be performed on a test set of rules in order to determine the most accurate weight assignment. Two simplifying assumptions are made:

• The weight of a rule does not depend on the lexical item to which it is anchored.
• The weight of a rule does not depend on the other rules simultaneously activated.

The second assumption is essential for settling a tractable problem. The first assumption can be avoided at the cost of a more complex representation. In this case, instead of having a unique weight, a rule must have as many weights as potential lexical anchors. Apart from this modification, the algorithm that will be presented in the next part remains much the same as in the case of a single weight. If the first assumption is removed, data about the frequency of the items in corpora can be accounted for. Assigning smaller weights to rules when they are anchored to rare items will make the algorithm favor the anchoring to these items. Thus, due to their rareness, the corresponding rules will be rarely selected.

Illustration

Terms, compounds and more generally idioms require a lexicalized syntactic representation such as LTAGs to account for the syntax of these lexical entries (Abeillé and Schabes, 1989). The grammars chosen to illustrate the problem of the optimization of the lexicalization and to evaluate the algorithm consist of idiom rules such as I:

I = {from time to time, high time, high grade, high grade steel}

Each rule is represented by a pair (w_i, A_i) where w_i is the weight and A_i the set of potential anchors.
If we choose the total number of words in an idiom as its weight and its non-empty words as its potential anchors, I is represented by the following grammar:

G_1 = {a = (4, {time}), b = (2, {high, time}), c = (2, {grade, high}), d = (3, {grade, high, steel})}

We call vocabulary the union V of all the sets of potential anchors A_i. Here, V = {grade, high, steel, time}. A lexicalization is a function λ associating a lexical anchor to each rule. Given a threshold θ, the membership problem called the Lexicalization of a Grammar (LG) is to find a lexicalization so that the weight of any lexical item in V is less than or equal to θ. If θ ≥ 4 in the preceding example, LG has a solution λ:

λ(a) = time, λ(b) = λ(c) = high, λ(d) = steel

If θ < 4, LG has no solution.

Definition of the LG Problem

G = {(w_i, A_i)} (w_i ∈ Q⁺, A_i finite sets); V = {v_i} = ∪ A_i; θ ∈ Q⁺

(1) LG ≡ { (V, G, θ, λ) | where λ: G → V is a total function anchoring the rules so that (∀(w, A) ∈ G) λ((w, A)) ∈ A and (∀v ∈ V) Σ_{λ((w, A)) = v} w ≤ θ }

The associated optimization problem is to determine the lowest value θ_opt of the threshold θ such that there exists a solution (V, G, θ_opt, λ) to LG. The solution of the optimization problem for the preceding example is θ_opt = 4.

Lemma. LG is in NP.

It is evident that checking whether a given lexicalization is indeed a solution to LG can be done in polynomial time. The relation R defined by (2) is polynomially decidable:

(2) R(V, G, θ, λ) ≡ [if λ: G → V and (∀v ∈ V) Σ_{λ((w, A)) = v} w ≤ θ then true else false]

The weights of the items can be computed through matrix products: a matrix for the grammar and a matrix for the lexicalization. The size of any lexicalization λ is linear in the size of the grammar. As (V, G, θ, λ) ∈ LG if and only if [R(V, G, θ, λ)] is true, LG is in NP. ∎

Theorem. LG is NP-complete.

Bin Packing (BP), which is NP-complete, is polynomial-time Karp reducible to LG. BP (Baase, 1986) is the problem defined by (3):

(3) BP ≡ { (R, {R_1, …,
R_k}) | where R = {r_1, …, r_n} is a set of n positive rational numbers less than or equal to 1, and {R_1, …, R_k} is a partition of R (k bins in which the r_j are packed) such that (∀i ∈ {1, …, k}) Σ_{r ∈ R_i} r ≤ 1 }

First, any instance of BP can be represented as an instance of LG. Let (R, {R_1, …, R_k}) be an instance of BP; it is transformed into the instance (V, G, θ, λ) of LG as follows:

(4) V = {v_1, …, v_k} a set of k symbols, θ = 1, G = {(r_1, V), …, (r_n, V)}, and (∀i ∈ {1, …, k}) (∀j ∈ {1, …, n}) λ((r_j, V)) = v_i ⇔ r_j ∈ R_i

For all i ∈ {1, …, k} and j ∈ {1, …, n}, we consider the assignment of r_j to the bin R_i of BP as the anchoring of the rule (r_j, V) to the item v_i of LG. If (R, {R_1, …, R_k}) ∈ BP then:

(5) (∀i ∈ {1, …, k}) Σ_{r ∈ R_i} r ≤ 1 ⇔ (∀i ∈ {1, …, k}) Σ_{λ((r, V)) = v_i} r ≤ 1

Thus (V, G, 1, λ) ∈ LG. Conversely, given a solution (V, G, 1, λ) of LG, let R_i ≡ {r_j ∈ R | λ((r_j, V)) = v_i} for all i ∈ {1, …, k}. Clearly {R_1, …, R_k} is a partition of R because the lexicalization is a total function, and the preceding formula ensures that each bin is correctly loaded. Thus (R, {R_1, …, R_k}) ∈ BP. It is also simple to verify that the transformation from BP to LG can be performed in polynomial time. ∎

The optimization version of an NP-complete problem is NP-complete (Sommerhalder and van Westrhenen, 1988); hence the optimization version of LG is NP-complete.

An Approximation Algorithm for LG

This part presents and evaluates an n³-time approximation algorithm for the LG problem which yields a suboptimal solution close to the optimal one. The first step is the 'easy' anchoring of rules including at least one rare lexical item to one of these items. The second step handles the 'hard' lexicalization of the remaining rules, which include only common items found in several other rules and for which the decision is not straightforward.
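As a baseline, the exponential exhaustive search is straightforward to write down for the toy grammar G_1 (a sanity check of θ_opt, not the approximation algorithm itself):

```python
from itertools import product

# Exhaustive search for the optimal threshold theta_opt of a grammar given
# as (weight, potential-anchors) pairs; feasible only on tiny instances.
G1 = [(4, ("time",)),
      (2, ("high", "time")),
      (2, ("grade", "high")),
      (3, ("grade", "high", "steel"))]

def optimal_threshold(grammar):
    best = float("inf")
    for anchoring in product(*(anchors for _, anchors in grammar)):
        load = {}
        for (w, _), v in zip(grammar, anchoring):
            load[v] = load.get(v, 0) + w
        best = min(best, max(load.values()))
    return best

print(optimal_threshold(G1))  # 4, the value of theta_opt stated above
```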
The discrimination between these two kinds of items is made on the basis of their global weight GW (6), which is the sum of the weights of the rules that are not yet anchored and have this lemma as a potential anchor. V_λ and G_λ are the subsets of V and G which denote the items and the rules not yet anchored. The weights w and the threshold θ are assumed to be integers, multiplying them by their lowest common denominator if necessary.

(6) (∀v ∈ V_λ) GW(v) = Σ_{(w, A) ∈ G_λ, v ∈ A} w

Step 1: 'Easy' Lexicalization of Rare Items

This first step of the optimization algorithm is also the first step of the exhaustive search. The value of the minimal threshold θ_min given by (7) is computed by dividing the sum of the rule weights by the number of lemmas (⌈x⌉ stands for the smallest integer greater than or equal to x, and |V_λ| stands for the size of the set V_λ):

(7) θ_min = ⌈ Σ_{(w, A) ∈ G_λ} w / |V_λ| ⌉ where |V_λ| ≠ 0

All the rules which include a lemma with a global weight less than or equal to θ_min are anchored to this lemma. When this linking is achieved in a non-deterministic manner, θ_min is recomputed. The algorithm loops on this lexicalization, starting it from scratch every time, until θ_min remains unchanged or until all the rules are anchored. The output value of θ_min is the minimal threshold such that LG has a solution, and therefore it is less than or equal to θ_opt. After Step 1, either each rule is anchored or all the remaining items in V_λ have a global weight strictly greater than θ_min. The algorithm is shown in Figure 1.

Step 2: 'Hard' Lexicalization of Common Items

During this step, the algorithm repeatedly removes an item from the remaining vocabulary and yields the anchoring of this item. The item with the lowest global weight is handled first because it has the smallest combination of anchorings, and hence the probability of making a wrong choice for the lexicalization is low.
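A simplified single-pass sketch of Step 1 on G_1 follows (the article restarts the pass from scratch until θ_min stabilizes; that restart loop is omitted here, so this is an illustration of equations (6) and (7), not the full procedure):

```python
import math

def global_weights(grammar, anchored):
    """GW(v): summed weight of the not-yet-anchored rules containing v."""
    gw = {}
    for i, (w, items) in enumerate(grammar):
        if i not in anchored:
            for v in items:
                gw[v] = gw.get(v, 0) + w
    return gw

def step1_pass(grammar):
    vocab = {v for _, items in grammar for v in items}
    theta_min = math.ceil(sum(w for w, _ in grammar) / len(vocab))  # eq. (7)
    anchored = {}
    gw = global_weights(grammar, anchored)
    for v, weight in sorted(gw.items(), key=lambda kv: kv[1]):
        if weight <= theta_min:
            # 'Easy' item: anchor to it every free rule that contains it.
            for i, (w, items) in enumerate(grammar):
                if v in items and i not in anchored:
                    anchored[i] = v
    return theta_min, anchored

G1 = [(4, ("time",)), (2, ("high", "time")),
      (2, ("grade", "high")), (3, ("grade", "high", "steel"))]
print(step1_pass(G1))  # (3, {3: 'steel'}) -- only 'steel' has GW <= 3
```

On G_1, θ_min = ⌈11/4⌉ = 3, and steel (GW = 3) is the only easy item, so the rule for "high grade steel" is anchored to it.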
Given an item, the candidate rules with this item as a potential anchor are ranked according to:

1. The highest priority is given to the rules whose set of potential anchors includes only the current item as a non-anchored item.
2. The remaining candidate rules taken first are the ones whose potential anchors have the highest global weights (items found in several other non-anchored rules).

The algorithm is shown in Figure 2. The output of Step 2 is the suboptimal computational lexicalization λ of the whole grammar and the associated threshold θ_subopt. Both steps can be optimized. Useless computation is avoided by watching the capital of weight C defined by (8), with θ = θ_min during Step 1 and θ = θ_subopt during Step 2:

(8) C = θ · |V_λ| − Σ_{(w, A) ∈ G_λ} w

C corresponds to the weight which can be lost by giving an item a weight W(σ) which is strictly less than the current threshold θ. Every time an anchoring to a unit σ is completed, C is reduced by θ − W(σ). If C becomes negative in either of the two steps, the algorithm will fail to make the lexicalization of the grammar and must be started again from Step 1 with a higher value for θ.

Step 1
Input:  V, G
Output: θ_min, V_λ, G_λ, λ: (G − G_λ) → (V − V_λ)

θ_min ← ⌈ Σ_{(w, A) ∈ G} w / |V| ⌉ ;
repeat
    G_λ ← G ; V_λ ← V ;
    for each v ∈ V such that GW(v) ≤ θ_min do
        for each (w, A) ∈ G such that v ∈ A and λ((w, A)) not yet defined do
            λ((w, A)) ← v ; G_λ ← G_λ − {(w, A)} ; update GW(v) ;
        end
        V_λ ← V_λ − {v} ;
    end
    θ'_min ← threshold (7) recomputed on the remaining rules G_λ ;
    if ( ( θ'_min ≤ θ_min and (∀v ∈ V_λ) GW(v) > θ_min ) or G_λ = ∅ )
        then exit repeat ;
    θ_min ← θ'_min ;
until ( false ) ;

Figure 1: Step 1 of the approximation algorithm.

Step 2
Input:  θ_min, V, G, V_λ, G_λ, λ: (G − G_λ) → (V − V_λ)
Output: θ_subopt, λ
: G → V

θ_subopt ← θ_min ;
repeat
    ;; anchor the rules with only σ as free potential anchor
    ;; (σ ∈ V_λ with the lowest global weight)
    G_σ,1 ← { (w, A) ∈ G_λ | A ∩ V_λ = {σ} } ;
    if ( Σ_{(w, A) ∈ G_σ,1} w > θ_subopt )
        then θ_min ← θ_min + 1 ; goto Step 1 ;
    for each (w, A) ∈ G_σ,1 do
        λ((w, A)) ← σ ; G_λ ← G_λ − {(w, A)} ;
    end
    G_σ,2 ← { (w, A) ∈ G_λ | A ∩ V_λ ⊃ {σ} } ;
    W(σ) ← Σ_{λ((w, A)) = σ} w ;
    ;; rank² G_σ,2 and anchor
    for ( i ← 1 ; i ≤ |G_σ,2| ; i ← i + 1 ) do
        (w, A) ← r⁻¹(i) ;  ;; i-th rule ranked by r
        if ( W(σ) + w > θ_subopt ) then exit for ;
        W(σ) ← W(σ) + w ; λ((w, A)) ← σ ; G_λ ← G_λ − {(w, A)} ;
    end
    V_λ ← V_λ − {σ} ;
until ( G_λ = ∅ ) ;

Figure 2: Step 2 of the approximation algorithm.

² The ranking function r: G_σ,2 → {1, …, |G_σ,2|} orders the rules of G_σ,2 according to the global weights of their other potential anchors in V_λ − {σ}: the rules whose remaining potential anchors have the highest global weights are ranked first.

Example³

The algorithm has been applied to a test grammar G_2 obtained from 41 terms with 11 potential anchors. The algorithm fails in making the lexicalization of G_2 with the minimal threshold θ_min = 12, but achieves it with θ_subopt = 13. This value of θ_subopt can be compared with the optimal one by running the exhaustive search. There are 2³² (≈ 4·10⁹) possible lexicalizations, among which 35,336 are optimal ones with a threshold of 13. This result shows that the approximation algorithm brings forth one of the optimal solutions, which represent a proportion of only 8·10⁻⁶ of the possible lexicalizations. In this case the optimal and the suboptimal threshold coincide.

Time-Complexity of the Approximation Algorithm

A grammar G on a vocabulary V can be represented by a |G| × |V| matrix of Boolean values for the sets of potential anchors and a 1 × |G| matrix for the weights. In order to evaluate the complexity of the algorithms as a function of the size of the grammar, we assume that |V| and |G| are of the same order of magnitude n. Step 1 of the algorithm corresponds to products and sums on the preceding matrices and takes O(n³) time.
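This matrix view can be made concrete on G_1 (plain Python lists stand in for the Boolean matrices; a toy illustration, not the article's implementation):

```python
# |G| x |V| Boolean matrix of potential anchors, a 1 x |G| weight vector,
# and a lexicalization as a 0/1 matrix with one 1 per row; the load of
# each item is then a (weights x lexicalization) product.
G1 = [(4, ("time",)), (2, ("high", "time")),
      (2, ("grade", "high")), (3, ("grade", "high", "steel"))]
V = ["grade", "high", "steel", "time"]

anchors = [[1 if v in items else 0 for v in V] for _, items in G1]
weights = [w for w, _ in G1]

lam = [[0, 0, 0, 1],   # rule a -> time
       [0, 1, 0, 0],   # rule b -> high
       [1, 0, 0, 0],   # rule c -> grade
       [0, 0, 1, 0]]   # rule d -> steel

# The lexicalization must be licensed by the anchor matrix.
assert all(lam[i][j] <= anchors[i][j] for i in range(4) for j in range(4))

loads = [sum(weights[i] * lam[i][j] for i in range(len(G1)))
         for j in range(len(V))]
print(loads)  # [2, 2, 3, 4] -> the maximal load is theta_opt = 4
```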
The worst-case time-complexity for Step 2 of the algorithm is also O(n³) when using a naive O(n²) algorithm to sort the items and the rules by decreasing priority. In all, the time required by the approximation algorithm is proportional to the cubic size of the grammar. This order of magnitude ensures that the algorithm can be applied to large real-world grammars such as terminological grammars. On a Sparc 2, the lexicalization of a terminological grammar composed of 6,622 rules and 3,256 words requires 3 seconds (real time), and the lexicalization of a very large terminological grammar of 71,623 rules and 38,536 single words takes 196 seconds. The two grammars used for these experiments were generated from two lists of terms provided by the documentation center INIST/CNRS.

³ The exhaustive grammar and more details about this example and the computations of the following section are in (Jacquemin, 1991).

Evaluation of the Approximation Algorithm

Bench Marks on Artificial Grammars

In order to check the quality of the lexicalization on different kinds of grammars, the algorithm has been tested on eight randomly generated grammars of 4,000 rules having from 2 to 10 potential anchors (Table 1). The lexicon of the first four grammars is 40 times smaller than the grammar, while the lexicon of the last four is 4 times smaller than the grammar (this proportion is close to that of the real-world grammar studied in the next subsection). The eight grammars differ in the distribution of the items onto the rules. The uniform distribution corresponds to a uniform random choice of the items which build the sets of potential anchors, while the Gaussian one corresponds to a choice that takes some items more frequently. The higher the parameter s, the flatter the Gaussian distribution. The last two columns of Table 1 give the minimal threshold θ_min after Step 1 and the suboptimal threshold θ_subopt found by the approximation algorithm.
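The benchmark's grammar generation can be sketched as follows. The exact parameters of the article's generator are not given, so the ones below (anchor counts, weight choice, bias center) are assumptions:

```python
import random

def random_grammar(n_rules, lexicon, dist="uniform", s=30, seed=0):
    """Rules with 2 to 10 potential anchors, drawn either uniformly or
    with a Gaussian bias towards the middle of the lexicon (higher s =
    flatter distribution, as the s parameter in Table 1)."""
    rng = random.Random(seed)
    grammar = []
    for _ in range(n_rules):
        k = rng.randint(2, 10)
        if dist == "uniform":
            items = set(rng.sample(lexicon, k))
        else:
            items = set()
            while len(items) < k:
                i = int(rng.gauss(len(lexicon) / 2, s)) % len(lexicon)
                items.add(lexicon[i])
        grammar.append((k, tuple(sorted(items))))  # toy weight = anchor count
    return grammar

lexicon = [f"w{i}" for i in range(100)]
g = random_grammar(4000, lexicon, dist="gaussian", s=10)
assert len(g) == 4000 and all(2 <= len(a) <= 10 for _, a in g)
```

A small s concentrates the anchors on a few lexical items, which is exactly the situation where Step 1 stops early and θ_subopt grows, as the Gaussian rows of Table 1 show.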
As mentioned when presenting Step 1, the optimal threshold θ_opt is necessarily greater than or equal to θ_min after Step 1. Table 1 reports that the suboptimal threshold θ_subopt is never more than 2 units greater than θ_min after Step 1. The suboptimal threshold yielded by the approximation algorithm on these examples thus has a high quality, because it is at worst 2 units greater than the optimal one.

Lexicon   Distribution of the     θ_min           θ_min          θ_subopt
size      items on the rules      before Step 1   after Step 1   (suboptimal threshold)
100       uniform                 143             143            143
100       Gaussian (s = 30)       141             143            144
100       Gaussian (s = 20)       141             260            261
100       Gaussian (s = 10)       141             466            468
1,000     uniform                 15              15             16
1,000     Gaussian (s = 30)       14              117            118
1,000     Gaussian (s = 20)       15              237            238
1,000     Gaussian (s = 10)       14              466            467

Table 1: Bench marks of the approximation algorithm on eight randomly generated grammars.

A Comparison with Linguistic Lexicalization on a Real-World Grammar

This evaluation consists in applying the algorithm to a natural language grammar composed of 6,622 rules (terms from the domain of metallurgy provided by INIST/CNRS) and a lexicon of 3,256 items. Figure 3 depicts the distribution of the weights with the natural linguistic lexicalization. The frequent head words such as alloy are heavily loaded because of the numerous terms in N-alloy, with N being a name of metal. Conversely, in Figure 4 the distribution of the weights from the approximation algorithm is much more uniform. The maximal weight of an item is 241 with the linguistic lexicalization, while it is only 34 with the optimized lexicalization. The threshold after Step 1 being 34, the suboptimal threshold yielded by the approximation algorithm is equal to the optimal one.

Figure 3: Distribution of the weights of the lexical items with the lexicalization on head words. (Histogram; y-axis: number of items, log scale; x-axis: weight.)
Figure 4: Distribution of the weights of the lexical items with the optimized lexicalization. (Histogram; y-axis: number of items, log scale; x-axis: weight.)

Conclusion

As mentioned in the introduction, the improvement of the lexicalization through an optimization algorithm is currently used in FASTR, a parser for terminological extraction through NLP techniques where terms are represented by lexicalized rules. In this framework, as in top-down parsing with LTAGs (Schabes and Joshi, 1990), the first phase of parsing is a filtering of the rules with their anchors in the input sentence. An unbalanced distribution of the rules onto the lexical items has the major computational drawback of selecting an excessive number of rules when the input sentence includes a common head word such as "alloy" (127 rules have "alloy" as head). The use of the optimized lexicalization allows us to filter out 57% of the rules selected by the linguistic lexicalization. This reduction is comparable to the filtering induced by linguistic lexicalization, which is around 85% (Schabes and Joshi, 1990). Correlatively, the parsing speed is multiplied by 2.6, confirming the computational saving of the optimization reported in this study.

There are many directions in which this work could be refined and extended. In particular, an optimization of this optimization could be achieved by testing different weight assignments in correlation with the parsing algorithm. Thus, the computational lexicalization would speed up both the preprocessing and the parsing algorithm.

Acknowledgments

I would like to thank Alain Colmerauer for his valuable comments and a long discussion on a draft version of my PhD dissertation. I also gratefully acknowledge Chantal Enguehard and two anonymous reviewers for their remarks on earlier drafts. The experiments on industrial data were done with term lists from the documentation center INIST/CNRS.

REFERENCES

Abeillé, Anne, and Yves Schabes. 1989.
Parsing Idioms in Tree Adjoining Grammars. In Proceedings, 4th Conference of the European Chapter of the Association for Computational Linguistics (EACL'89), Manchester, UK.

Baase, Sara. 1978. Computer Algorithms. Addison Wesley, Reading, MA.

Jacquemin, Christian. 1991. Transformations des noms composés. PhD Thesis in Computer Science, University of Paris 7. Unpublished.

Jacquemin, Christian. 1994. FASTR: A unification grammar and a parser for terminology extraction from large corpora. In Proceedings, IA-94, Paris, EC2, June 1994.

Jacquemin, Christian and Jean Royauté. 1994. Retrieving terms and their variants in a lexicalized unification-based framework. In Proceedings, 17th Annual International ACM SIGIR Conference (SIGIR'94), Dublin, July 1994.

Pollard, Carl and Ivan Sag. 1987. Information-Based Syntax and Semantics. Vol 1: Fundamentals. CSLI, Stanford, CA.

Schabes, Yves, Anne Abeillé, and Aravind K. Joshi. 1988. Parsing strategies with 'lexicalized' grammars: Application to tree adjoining grammar. In Proceedings, 12th International Conference on Computational Linguistics (COLING'88), Budapest, Hungary.

Schabes, Yves and Aravind K. Joshi. 1990. Parsing strategies with 'lexicalized' grammars: Application to tree adjoining grammar. In Masaru Tomita, editor, Current Issues in Parsing Technologies. Kluwer Academic Publishers, Dordrecht.

Schabes, Yves and Richard C. Waters. 1993. Lexicalized Context-Free Grammars. In Proceedings, 31st Meeting of the Association for Computational Linguistics (ACL'93), Columbus, Ohio.

Sommerhalder, Rudolph and S. Christian van Westrhenen. 1988. The Theory of Computability: Programs, Machines, Effectiveness and Feasibility. Addison-Wesley, Reading, MA.
COMPACT REPRESENTATIONS BY FINITE-STATE TRANSDUCERS

Mehryar Mohri
Institut Gaspard Monge-LADL
Université Marne-la-Vallée
2, rue de la Butte verte
93160 Noisy-le-Grand, FRANCE
Internet: [email protected]

Abstract

Finite-state transducers give efficient representations of many Natural Language phenomena. They allow one to account for complex lexicon restrictions encountered, without involving the use of a large set of complex rules difficult to analyze. We here show that these representations can be made very compact, indicate how to perform the corresponding minimization, and point out interesting linguistic side-effects of this operation.

1. MOTIVATION

Finite-state transducers constitute appropriate representations of Natural Language phenomena. Indeed, they have been shown to be sufficient tools to describe the morphological and phonetic forms of a language (Karttunen et al., 1992; Kay and Kaplan, 1994). Transducers can then be viewed as functions which map lexical representations to the surface forms, or inflected forms to their phonetic pronunciations, and vice versa. They allow one to avoid the use of a great set of complex rules, often difficult to check, handle, or even understand. Finite-state automata and transducers can also be used to represent the syntactic constraints of languages such as English or French (Koskenniemi, 1990; Mohri, 1993; Pereira, 1991; Roche, 1993). The syntactic analysis can then be reduced to performing the intersection of two automata, or to the application of a transducer to an automaton. However, whereas first results show that the size of the syntactic transducer exceeds several hundreds of thousands of states, no upper bound has been proposed for it, as the representation of all syntactic entries has not been done yet. Thus, one may ask whether such representations could succeed on a large scale. It is therefore crucial to control or to limit the size of these transducers in order to avoid a blow-up.
Classic minimization algorithms make it possible to reduce to the minimum the size of a deterministic automaton recognizing a given language (Aho et al., 1974). No similar algorithm has been proposed in the case of sequential transducers, namely transducers whose associated input automata are deterministic.

We here briefly describe an algorithm which allows one to compute a minimal transducer, namely one with the least number of states, from a given subsequential transducer. In addition to the desired property of minimization, the transducer obtained in such a way has interesting linguistic properties that we shall indicate. We have fully implemented and experimented with this algorithm in the case of large-scale dictionaries. In the last section, we shall describe experiments and corresponding results. They show this algorithm to be very efficient.

2. ALGORITHM

Our algorithm can be applied to any sequential transducer T = (V, i, F, A, B, δ, σ) where: V is the set of the states of T, i its initial state, F the set of its final states, A and B respectively the input and output alphabets of the transducer, δ the state transition function, which maps V × A to V, and σ the output function, which maps V × A to B*. With this definition, input labels are elements of the alphabet, whereas output labels can be words. Figure 1 gives an example of a sequential transducer.

Transducers can be considered as automata over the alphabet A × B*. Thus, considered as such, they can be submitted to minimization in the sense of automata. Notice however that the application of the minimization algorithm for automata does not permit a reduction in the number of states of the transducer T. We shall describe in the following how the algorithm we propose allows the number of states of this transducer to be reduced. This algorithm works in two stages. The first one modifies only the output automaton associated with the given sequential transducer T.
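As a concrete illustration, the tuple (V, i, F, A, B, δ, σ) above can be encoded directly as dictionaries. The following sketch is illustrative only (the class and method names are not from the paper):

```python
# A minimal sketch of a sequential (input-deterministic) transducer
# T = (V, i, F, A, B, delta, sigma), following the paper's definitions.
class SequentialTransducer:
    def __init__(self, initial, finals, delta, sigma):
        self.initial = initial        # i: initial state
        self.finals = set(finals)     # F: set of final states
        self.delta = delta            # delta: (state, letter) -> state
        self.sigma = sigma            # sigma: (state, letter) -> word in B*

    def apply(self, word):
        """Map an input word to its output word, or None if rejected."""
        state, out = self.initial, []
        for a in word:
            if (state, a) not in self.delta:
                return None           # no transition: input rejected
            out.append(self.sigma[(state, a)])
            state = self.delta[(state, a)]
        return "".join(out) if state in self.finals else None
```

Application of such a transducer to a string is linear in the length of the input, which is the key practical advantage of sequentiality discussed later in the paper.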
Thus, we can denote by (V, i, F, A, B, δ, σ2) the transducer T2 obtained after this first stage.

Figure 1. Transducer T.

Let P be the function mapping V to B* which associates with each state q of T the greatest common prefix of all the words which can be read on the output labels of T from q to a final state. The value of P(5) is for instance db, since this is the greatest common prefix of the labels of all output paths leaving state 5. In particular, if q is a final state then P(q) is the empty word ε. In order to simplify this presentation, we shall assume in the following that P(i) = ε. The output function σ2 of T2 is defined by:

for every q ∈ V and every a ∈ A,  σ2(q, a) = (P(q))^{-1} σ(q, a) P(δ(q, a)).

Namely, the output labels of T are modified in such a way that they include every letter which would necessarily be read later on the following transitions. Figure 2 illustrates these modifications.

Figure 2. Transducer T2.

Figure 2 shows the transducer T2 obtained from T by performing the operations described above. Notice that only the output labels of T have been modified. The output label a corresponding to the transition linking states 0 and 1 of the transducer has now become abcdb, as this is the longest word which is necessarily read from the initial state 0 of T if beginning with the transition (0, 1). The output label of the following transition of T2 is now empty. Indeed, anything which could be read from the transition (1, 2) on the output labels has now been included in the previous transition (0, 1).

It is easy to show that the transducer T2 obtained after the first stage is equivalent to T. Namely, these two transducers correspond to the same function mapping A* to B*. One may notice, however, that unlike T this transducer can be minimized in the sense of automata and that this leads to a transducer with only six states. Figure 3 indicates the transducer T3 obtained in such a way.

Figure 3. Transducer T3.

The second stage of our algorithm precisely consists of the application of minimization in the sense of automata, that is, of merging equivalent states of the transducer. It can be shown that the application of the two presented stages to a sequential transducer T systematically leads to an equivalent sequential transducer with the minimal number of states (Mohri, 1994). Indeed, the states of this minimal transducer can be characterized by the following equivalence relation: two states of a sequential transducer are equivalent if and only if one can read the same words from these states using the left automaton associated with this transducer (equivalence in the sense of automata) and if the corresponding outputs from these states differ by the same prefix for any word leading to a final state. Thus, the described algorithm can be considered as optimal.

Notice that we here only considered sequential transducers, but not all transducers representing sequential functions are sequential. However, transducers which are not sequential though representing a sequential function can be determinized using a procedure close to the one used for the determinization of automata. The algorithm above can then be applied to such determinized transducers. The complexity of the application of a non-sequential transducer to a string is not linear. This is not the case even for non-deterministic automata. Indeed, recognizing a word w with a non-deterministic automaton of |V| states each containing at most e outgoing transitions requires O(e|V||w|) time (see Aho et al., 1974). The application of a non-sequential transducer is even more time-consuming, so the determinization of transducers clearly improves their application. We have considered above sequential transducers, but transducers can be used in two ways.
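The first stage above (pushing each state's common output prefix P(q) toward the initial state) can be sketched as follows. This sketch assumes an acyclic transducer so that a memoized traversal suffices; the general cyclic case needs a fixpoint computation, and all names here are illustrative, not from the paper:

```python
import os

def common_prefix(words):
    # Greatest common prefix of a list of strings.
    return os.path.commonprefix(words) if words else ""

def push_outputs(states, finals, delta, sigma):
    """First stage on an ACYCLIC sequential transducer: compute P(q),
    the greatest common prefix of all outputs readable from q to a
    final state, and rewrite the output function as
        sigma2(q, a) = P(q)^{-1} sigma(q, a) P(delta(q, a))."""
    P = {}
    def p(q):
        if q in P:
            return P[q]
        # P(q) = epsilon for final states, per the paper.
        cands = [""] if q in finals else []
        for (s, a), t in delta.items():
            if s == q:
                cands.append(sigma[(s, a)] + p(t))
        P[q] = common_prefix(cands)
        return P[q]
    for q in states:
        p(q)
    sigma2 = {}
    for (q, a), t in delta.items():
        w = sigma[(q, a)] + P[t]
        assert w.startswith(P[q])     # P(q) is a prefix by construction
        sigma2[(q, a)] = w[len(P[q]):]
    return P, sigma2
```

After this rewriting, the second stage is ordinary automaton minimization over labels (input, output) treated as single symbols.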
These transducers, although they allow linear-time application on the left, are generally not sequential when considered as right-input transducers. However, the first stage of the presented algorithm constitutes a pseudo-determinization of right-input transducers. Indeed, as right labels (outputs) are brought as close to the initial state as possible, irrelevant paths are rejected sooner. Consider for example the string x = abcdbcdbe and compare the application of transducers T and T2 to this sequence on right input. Using the transducer T, the first three letters of this sequence lead to the single state 5, but then reading db leads to a set of states {1, 5, 6}. Thus, in order to proceed with the recognition, one needs to store this set and consider all possible transitions or paths from its states. Using the transducer T2 and reading abcdb gives the single state 1. Hence, although the right-input transducer is not sequential, it still permits a reduction in the number of paths and states to visit. This can be considered as another advantage of the method proposed for the minimization of sequential transducers: not only is the transducer sequential and minimal on one side, it is also pseudo-sequential on the other side.

The representation of language often reveals ambiguities. The sequential transducers we have just described do not allow them. However, real ambiguities encountered in Natural Language Processing can be assumed to be finite and bounded by an integer p. The use of the algorithm above can be easily extended to the case of subsequential transducers and even to a larger category of transducers which can represent ambiguities and which we shall call p-subsequential transducers. These transducers are provided with p final functions φi (i ∈ [1, p]) mapping F, the set of final states, to B*. Figure 4 gives an example of a 2-subsequential transducer.

Figure 4. 2-subsequential transducer T4.
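Applying a p-subsequential transducer follows the unique input path and then completes the accumulated output with each final-function value at the reached state. A sketch (the function name and the toy transition tables in the test are assumptions, not Figure 4's actual transducer):

```python
def apply_p_subsequential(initial, delta, sigma, final_outputs, word):
    """Apply a p-subsequential transducer to a word: follow the unique
    path for the input, then complete the output with each final
    function value phi_i(q) at the reached state q, without reading
    any additional input letter. Returns the (possibly empty) list
    of outputs."""
    state, out = initial, ""
    for a in word:
        if (state, a) not in delta:
            return []                 # input rejected
        out += sigma[(state, a)]
        state = delta[(state, a)]
    # final_outputs maps a final state to the list [phi_1(q), ..., phi_p(q)]
    return [out + tail for tail in final_outputs.get(state, [])]
```

With a toy transducer whose path output for abc is abc and whose reached final state carries the two final values a and b, this reproduces the paper's example outputs abca and abcb.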
The application of these transducers to a string x is similar to the one generally used for sequential ones. It outputs a string corresponding to the concatenation of the consecutive labels encountered. However, the output string obtained once state q is reached must here be completed by the φi(q), without reading any additional input letter. The application of the transducer T4 to the word abc, for instance, provides the two outputs abca and abcb.

The extension of the use of the algorithm above is easy. Indeed, in all cases p-subsequential transducers can be transformed into sequential transducers by adding p new letters to the alphabet A, and by replacing the p final functions by transitions labeled with these new letters on input and the corresponding values of the functions on output. These transitions would leave the final states and reach a newly created state, which would become the single final state of the transducer. The minimal transducer associated with the 2-subsequential transducer T4 is shown in figure 5. It results from T4 by merging the states 2 and 4 after the first stage of pseudo-determinization.

Figure 5. Minimal 2-subsequential transducer T5.

In the following section, we shall describe some of the experiments we carried out and the corresponding results. These experiments use the notion of p-subsequential transducers just developed, as they all deal with cases where ambiguities appear.

3. EXPERIMENTS, RESULTS, AND PROPERTIES

We have experimented with the algorithm described above by applying it to several large-scale dictionaries. We have applied it to the transducer which associates with each French word the set of its phonetic pronunciations. This transducer can be built from a dictionary (DELAPF) of inflected forms of French, each followed by its pronunciations (Laporte, 1988). It can be easily transformed into a sequential or p-subsequential transducer, where p, the maximum number of ambiguities for this transducer, is about four (about 30 words admit 4 different pronunciations). This requires that the transducer be kept deterministic while new associations are added to it.

The dictionary contains about 480.000 entries of words and phonetic pronunciations and its size is about 10 Mb. The whole minimization algorithm, including building the transducer from the dictionary and the compression of the final transducer, was quite fast: it took about 9 minutes using a HP 9000/755 with 128 Mb of RAM. The resulting transducer contains about 47.000 states and 130.000 transitions. Since it is sequential, it can be better compressed, as one only needs to store the set of its transitions. The minimal transducer obtained has been put in a compact form occupying about 1,1 Mb. Also, as the transducer is sequential, it allows faster recognition times.

In addition to the above results, the transducer obtained by this algorithm has interesting properties. Indeed, when applied to an input word w which may not be a French word, this transducer outputs the longest common prefix of the phonetic transcriptions of all words beginning with w. The input w = opio, for instance, though it does not constitute a French word, yields opjoman. Also, w = opht gives oftalm. This property of minimal transducers as defined above could be used in applications such as OCR or spellchecking, in order to restore the correct form of a word from its beginning, or from the beginning of its pronunciation.

Table 1. Results of minimization experiments

               DELAPF    FDELAF    EDELAF
Initial size   10 Mb     22 Mb
Final size     1,1 Mb    1,6 Mb    1 Mb
Ambiguities    4         15
States         47.000    66.000    47.000
Transitions    130.000   195.000   115.000
Alphabet       13.500    20.000
Time spent     9'        20'

We have also performed the same experiment using 2 other large dictionaries: French (FDELAF) (Courtois, 1989) and English (EDELAF) (Klarsfeld, 1991) dictionaries of inflected forms.
These dictionaries are made of associations of inflected forms and their corresponding canonical representations. It took about 20 minutes to construct the 15-subsequential transducer associated with the French dictionary of about 22 Mb. Here again, the properties of the obtained transducers seem interesting for various applications. Given the input w = transducte, for instance, the transducer provides the output transducteur.N1:m. Thus, although w is not a correct French word, it provides two additional letters completing this word, and indicates that it is a masculine noun. Notice that no information is given about the number of this noun, as it can be completed by an ending s or not. Analogous results were obtained using the English dictionary. A part of them is illustrated by the table above. It allows one to compare the initial size of the file representing these dictionaries and the size of the equivalent transducers in memory (final size). The third line of the table gives the maximum number of lexical ambiguities encountered in each dictionary. The following lines indicate the number of states and transitions of the transducers and also the size of the alphabet needed to represent the output labels. These experiments show that this size remains small compared to the number of transitions. Hence, the use of an additional alphabet does not noticeably increase the size of the transducer. Also notice that the time indicated corresponds to the entire process of transformation of the file dictionaries into transducers. This of course includes the time spent for I/O's. We have not tried to optimize these results. Several available methods should help both to reduce the size of the obtained transducers and the time spent for the algorithm.

4. CONCLUSION

We have informally described an algorithm which makes it possible to compact sequential transducers used in the description of language.
Experiments on large-scale dictionaries have proved this algorithm to be efficient. In addition to its use in several applications, it could help to limit the growth of the size of the representations of syntactic constraints.

REFERENCES

Aho, Alfred, John Hopcroft, and Jeffrey Ullman. 1974. The design and analysis of computer algorithms. Reading, Mass.: Addison Wesley.

Courtois, Blandine. 1989. DELAS: Dictionnaire Electronique du LADL pour les mots simples du français. Technical Report, LADL, Paris, France.

Karttunen, Lauri, Ronald M. Kaplan, and Annie Zaenen. 1992. Two-level Morphology with Composition. Proceedings of the fifteenth International Conference on Computational Linguistics (COLING'92), Nantes, France, August.

Kay, Martin, and Ronald M. Kaplan. 1994. Regular Models of Phonological Rule Systems. To appear in Computational Linguistics.

Klarsfeld, Gaby. 1991. Dictionnaire morphologique de l'anglais. Technical Report, LADL, Paris, France.

Koskenniemi, Kimmo. 1990. Finite-state Parsing and Disambiguation. Proceedings of the thirteenth International Conference on Computational Linguistics (COLING'90), Helsinki, Finland.

Laporte, Eric. 1988. Méthodes algorithmiques et lexicales de phonétisation de textes. Ph.D. thesis, Université Paris 7, Paris, France.

Mohri, Mehryar. 1993. Analyse et représentation par automates de structures syntaxiques composées. Ph.D. thesis, Université Paris 7, Paris, France.

Mohri, Mehryar. 1994. Minimization of Sequential Transducers. Proceedings of Combinatorial Pattern Matching (CPM'94), Springer-Verlag, Berlin Heidelberg New York. Also submitted to Theoretical Computer Science.

Pereira, Fernando C. N. 1991. Finite-State Approximation of Phrase Structure Grammars. Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics (ACL'91), Berkeley, California.

Roche, Emmanuel. 1993. Analyse syntaxique transformationnelle du français par transducteur et lexique-grammaire.
Ph.D. thesis, Université Paris 7, Paris, France.

Mehryar MOHRI
Institut Gaspard Monge
Université Marne-la-Vallée
2, Rue de la Butte Verte
93166 NOISY-LE-GRAND CEDEX
FRANCE
Ph: 33 (1) 49 32 60 54
Fax: 33 (1) 43 04 16 05
1994
28
AN EXTENDED THEORY OF HEAD-DRIVEN PARSING

Mark-Jan Nederhof*
University of Nijmegen
Department of Computer Science
Toernooiveld, 6525 ED Nijmegen
The Netherlands
markjan@cs.kun.nl

Abstract

We show that more head-driven parsing algorithms can be formulated than those occurring in the existing literature. These algorithms are inspired by a family of left-to-right parsing algorithms from a recent publication. We further introduce a more advanced notion of "head-driven parsing" which allows more detailed specification of the processing order of non-head elements in the right-hand side. We develop a parsing algorithm for this strategy, based on LR parsing techniques.

Introduction

According to the head-driven paradigm, parsing of a formal language is started from the elements within the input string that are most contentful, either from a syntactic or, more generally, from an information-theoretic point of view. This results in the weakening of the left-to-right feature of most traditional parsing methods. Following a pervasive trend in modern theories of grammar (consider for instance [5, 3, 11]), the computational linguistics community has paid considerable attention to the head-driven paradigm by investigating its applications to context-free language parsing.

Several methods have been proposed so far exploiting some nondeterministic head-driven strategy for context-free language parsing (see among others [6, 13, 2, 14]). All these proposals can be seen as generalizations to the head-driven case of parsing prescriptions originally conceived for the left-to-right case. The methods above suffer from deficiencies that are also noticeable in the left-to-right case. In fact, when more rules in the grammar share the same head element, or share some infix of their right-hand side including the head, the recognizer nondeterministically guesses a rule just after having seen the head.
In this way analyses that could have been shared are duplicated in the parsing process.

Interesting techniques have been proposed in the left-to-right deterministic parsing literature to overcome redundancy problems of the above kind, thus reducing the degree of nondeterminism of the resulting methods. These solutions range from predictive LR parsing to LR parsing [15, 1]. On the basis of work in [8] for nondeterministic left-to-right parsing, we trace here a theory of head-driven parsing going from crude top-down and head-corner to more sophisticated solutions, in the attempt to successively make the behaviour of head-driven methods more deterministic. Finally, we propose an original generalization of head-driven parsing, allowing a more detailed specification of the order in which elements of a right-hand side are to be processed. We study in detail a solution to such a head-driven strategy based on LR parsing. Other methods presented in this paper could be extended as well.

*Supported by the Dutch Organisation for Scientific Research (NWO), under grant 00-62-518.

Giorgio Satta
Università di Padova
Dipartimento di Elettronica e Informatica
via Gradenigo 6/A, 35131 Padova
Italy
satta@dei.unipd.it

Preliminaries

The notation used in the sequel is for the most part standard and is summarised below. Let D be an alphabet (a finite set of symbols); D+ denotes the set of all (finite) non-empty strings over D and D* denotes D+ ∪ {ε}, where ε denotes the empty string. Let R be a binary relation; R+ denotes the transitive closure of R and R* denotes the reflexive and transitive closure of R.

A context-free grammar G = (N, T, P, S) consists of two finite disjoint sets N and T of nonterminal and terminal symbols, respectively, a start symbol S ∈ N, and a finite set of rules P. Every rule has the form A → α, where the left-hand side (lhs) A is an element of N and the right-hand side (rhs) α is an element of V+, where V denotes (N ∪ T).
(Note that we do not allow rules with empty right-hand sides. This is for the sake of presentational simplicity.) We use symbols A, B, C, ... to range over N, symbols X, Y, Z to range over V, symbols α, β, γ, ... to range over V*, and v, w, x, ... to range over T*.

In the context-free grammars that we will consider, called head grammars, exactly one member of each rhs is distinguished as the head. We indicate the head by underlining it, e.g., we write A → αX̲β. An expression A → αγ̲β denotes a rule in which the head is some member within γ. We define a binary relation ◁ such that B ◁ A if and only if A → αB̲β for some α and β. The relation ◁* is called the head-corner relation.

For technical reasons we sometimes need the augmented set of rules P†, consisting of all rules in P plus the extra rule S′ → ⊥̲S, where S′ is a fresh nonterminal, and ⊥ is a fresh terminal acting as an imaginary zeroth input symbol. The set of rules P† induces a derivation relation → on V* × V* as usual. We write γ →p δ whenever γ → δ holds as an extension of p ∈ P†. We write γ →p1⋯pn δ if γ →p1 δ1 →p2 δ2 ⋯ δn−1 →pn δ.

For a fixed grammar, a head-driven recognition algorithm can be specified by means of a stack automaton A = (T, Alph, Init(n), ⊢, Fin(n)), parameterised with the length n of the input. In A, symbols T and Alph are the input and stack alphabets respectively, Init(n), Fin(n) ∈ Alph are two distinguished stack symbols, and ⊢ is the transition relation, defined on Alph+ × Alph+ and implicitly parameterised with the input. Such an automaton manipulates stacks Γ ∈ Alph+ (constructed from left to right) while consulting the symbols in the given input string. The initial stack is Init(n). Whenever Γ ⊢ Γ′ holds, one step of the automaton may, under some conditions on the input, transform a stack of the form Γ″Γ into the stack Γ″Γ′.
(i = i ~ and j = f are automatically satisfied) 4a [i, k, A ---* a • 7 * B[3, m,j][i', k', B --* •6,, m',j'] [i,k,A ~ c~ • 7B • fl, m',j] provided m = k' (m = i' and j = j' are automatically satisfied) 4b Symmetric to 4a We call a grammar head-recursive ifA ~+ A for some A. Head-driven TD parsing may loop exactly for the grammars which are head-recursive. Head reeursion is a generalization of left recursion for traditional TD pars- ing. In the ease of grammars with some parameter mech- anism, top-down parsing has the advantage over other kinds of parsing that top-down propagation of parame- ter values is possible in collaboration with context-free parsing (eft the standard evaluation of definite clause grammars), which may lead to more efficient process- ing. This holds for left-to-right parsing as well as for head-driven parsing [10]. Head-corner parsing The predictive steps from Algorithm 1, represented by Clause 0 and supported by Clauses 0a and 0b, can be compiled into the head-corner relation ~)*. This gives the head-corner (HC) algorithm below. The items from IT D are no longer needed now. We define I Hc = If D. Algorithm 2 (head-corner) A ~tc = (T, I Hc, Init(n), ~-*, Fin(n)), where Init(n) = [-1, -1, S' ~ • 3- • S, 0, n], Fin(n) = [-1, -1, S' --~ • IS =, n, hi, and ~ is given by the following clauses. (Clauses lb, 2b, 3b, 4b are omitted, since these are symmetric to la, 2a, 3a, 4a, respectively.) la [i,k,a--*aoT•B/3, m,j]~--~ [i, k, A ---* a • 7 • B/3, m, j][m, p- 1, C ~ rl.a.O, p, j] where there are C ~ r/a0 E pt and p such that m < p < j and ap = a and C ~* B 211 2a [i,k,A--* a * 7 * a~,m,j] ~-* [i,k,A--* a * -/a * 13, m+ 1,j] provided m < j and am+l ---- a 3a [i, k, D ~ a*7*A13, m,j][i', k', B ~ .6., ra',jq ~-. 
[i, k, D --~ o~.7.A13, rn, j][i', k', C --~ ~.B.8, rn', j'] provided m = ¢, where there is C --* r/_B0 E pt such that C <>* A (j = j' is automatically satisfied) 4a [i,k,A ~ a.7.B13, m,j][i',k',B ---* .6.,m',j'] [i, k,A ~ a *'IB * 13, m', j] provided m = k ~ (m = i' and j = j' are automatically satisfied) Head-corner parsing as well as all algorithms in the remainder of this paper may loop exactly for the gram- mars which are cyclic (where A ---~+ A for some A). The head-corner algorithm above is the only one in this paper which has already appeared in the literature, in different guises [6, 13, 2, 14]. Predictive HI parsing We say two rules A --* al and B --* ~2 have a common infix a if al = 1310¢-/1 and a2 = 132(:~-/2, for some 131,132, 71 and -/2. The notion of common infix is an adaptation of the notion of common prefix [8] to head grammars. If a grammar contains many common infixes, then HC parsing may be very nondeterministie; in particular, Clauses 1 or 3 may be applied with different rules C ---} 0_a0 E pt or C ---* r/__B0 E P~ for fixed a or B. In [15] an idea is described that allows reduction of nondeterminism in case of common prefixes and left- corner parsing. The resulting algorithm is called pre- dictive LR (PLR) parsing. The following is an adapta- tion of this idea to HC parsing. The resulting algorithm is called predictive HI (PHI) parsing. (HI parsing, to be discussed later, is a generalization of LR parsing to head grammars.) First, we need a different kind of item, viz. of the form [i, k,A --~ 7, re, j], where there is some rule A --* a_713. With such an item, we simulate computation of different items [i,k,A --* (~ * 7 * 13, re, j] E I Hc, for different a and 13, which would be treated individually by an HC parser. 
Formally, we have I Pm = {[i,k,A--,%m,j]lA--,ctT_13EP?A i<_k<m<_j} Algorithm 3 (Predictive HI) A PHI = (T, I PHI, lnit(n), ~-~, Fin(n)), where Init(n) = [-1, -1, S' ---* _1_, 0, n], Fin(n) = [-1,-1, S' ~ IS, n, n], and ~-* is given by the following (symmetric "b-clauses" omitted). la [i,k,A---*%m,j]~-* [i,k,A ~ % m,j][m,p- 1, C --* a,p,j] where there are C ~ y_a0, A ---* aTB13 ~ pt and p such that m < p < j and ap = a an(:] C O* B 2a [i,k,A--* 7, m,j] .-. [i,k,A--* Ta, m + 1,j] provided m < j and am+~ = a, where there is A --* aT_.a13 ~ pt 3a [i, k, o -~ % m,j][i', ~', B -~ ~, m', j'] [i,k,D ---* 7, m,j][i', k',C -+ B,m',j'] provided m = i' and B --* 6 E pt, where there are D --* a_TA13, C ---* q_B0 E pt such that C <>* A 4a [i,k,A ---* 7, m,j][i',k',B ~ 6, m',j'] [i, k, A ---, -/B, m', j] provided m = k' and B ~ _6 E pt, where there is A .---. ~7_.Bfl E pt Extended HI parsing The PHI algorithm can process simultaneously a com- mon infix a in two different rules A --* 131_~-/1 and A --* 132_~72, which reduces nondeterminism. We may however also specify an algorithm which suc- ceeds in simultaneously processing all common infixes, irrespective of whether the left-hand sides of the cor- responding rules are the same. This algorithm is in- spired by exlended LR (ELR) parsing [12, 7] for ex- tended context-free grammars (where right-hand sides consist of regular expressions over V). By analogy, it will be called extended HI (EHI) parsing. This algorithm uses yet another kind of item, viz. of the form [i,k,{A1,A~,...,Ap} --* -/,m,j], where there exists at least one rule A --* a_713 for each A E {A1,Au,...,Ap}. With such an item, we simu- late computation of different items [i, k, A --* a * -/ * 13, m, j] E I He which would be treated individually by an HC parser. 
Formally, we have I Em = {[i, k, zx --+ -/, m, j] I 0CAC{A I A ---, a-/13 E P t} A i<_k<m<j} Algorithm 4 (Extended HI) A EH1 = (T, I EHI, Init(n), ~-*, Fin(n)), where Init(n) = [-1,-1, {S'} --+ .1_, 0, n], Fin(n) = [-1,-1, {S'} ~ _l_S, n, n], and ~-~ is given by: la [i,k,A---*%m,j] [i, k, A --. % m, j] Ira, p - 1, A' ---. a, p, j] where there is p such that m < p < j and ap = a and A' = {C [ qc ~ 71a_O,A --~aT_Bfl E pI(A E A A C <~* B)} is not empty 2a [i,k,A---~ %m,j] ~ [i,k,A'---* Ta, m + 1,j] provided ra < j and am+a = a and A' = {A E A [ A --* a'/aft E pt} is not empty 3a [i, k, A --+ 7, m, j][i', k', A' ---* 6, m', j'] ~-+ [i,k,A --+ %m,j][i',k',A" ---, B,m',j'] provided rn = i' and B --* ti E pt for some B E A' such that A" = {C ] 3C ---+ yB6, D ---* aT_A~ E pt(D E A A C <>* A)} is not empty 4a [i, k,A -. %m,j][i',k',A' ~ a,,n',j'] [i,k,h"-+ -/B,m',j] provided m = k' and B ---+ 6_ E pl for some B E A' such that A" = {A E A I A ---+ crTB13 E pt} is not empty 212 This algorithm can be simplified by omitting the sets A from the items. This results in common infix (CI) parsing, which is a generalization of common pre- fix parsing [8]. CI parsing does not satisfy the correct subsequence property, to be discussed later. For space reasons, we omit further discussion of CI parsing. HI parsing If we translate the difference between ELR and LR pars- ing [8] to head-driven parsing, we are led to HI parsing, starting from EHI parsing, as described below. The al- gorithm is called HI because it computes head-inward derivations in reverse, in the same way as LR parsing computes rightmost derivations in reverse [1]. Head- inward derivations will be discussed later in this paper. Ill parsing uses items of the form [i, k, Q, m, j], where Q is a non-empty set of "double-dotted" rules A --* a * 3' * ft. The fundamental difference with the items in I EHl is that the infix 3' in the right-hand sides does not have to be fixed. 
Formally, we have IHI = {[i,k,Q,m,j] [ $c QC {A --a.7 .B ] A--*aT_~ E P~)^ i<k<m<j} We explain the difference in behaviour of Ill parsing with regard to EHI parsing by investigating Clauses la and 2a of Algorithm 4. (Clauses 3a and 4a would give rise to a similar discussion.) Clauses la and 2a both ad- dress some terminal ap, with m < p < j. In Clause la, the case is treated that ap is the head (which is not necessarily the leftmost member) of a rhs which the al- gorithm sets out to recognize; in Clause 2a, the case is treated that ap is the next member of a rhs of which some members have already been recognized, in which case we must of course have p = m + 1. By using the items from I t4r we may do both kinds of action simultaneously, provided p = m + 1 and ap is the leftmost member of some rhs of some rule, where it occurs as head) The lhs of such a rule should sat- isfy a requirement which is more specific than the usual requirement with regard to the head-corner relation. 2 We define the left head-corner relation (and the right head-corner relation, by symmetry) as a subrelation of the head-corner relation as follows. We define: B / A if and only if A ---* Bo~ for some a. The relation Z* now is called the left head-corner relation. We define gotorightl(Q, X) = {C ~ ~. x .o I c~,lEoePt^ 3A --* a * 7 * B~ E Q(C <>* B)} goloright 2( Q, X) = l If ap is not the leftmost member, then no successful parse will be found, due to the absence of rules with empty right-hand sides ( epsiion rules). 2Again, the absence of epsilon rules is of importance here. {C~.X.O[C~ X.OEPtA SA--.a .7 . Bfl E Q(C /* B)}U {A~a. TX .~ [A ~a.'r. X3E Q} and assume symmetric definitions for gotolefl 1 and gotoleft~. The above discussion gives rise to the new Clauses la and 2a of the algorithm below. The other clauses are derived analogously from the corresponding clauses of Algorithm 4. 
Note that in Clauses 2a and 4a the new item does not replace the existing item, but is pushed on top of it; this requires extra items to be popped off the stack in Clauses 3a and 4a.³

Algorithm 5 (HI) A^HI = (T, I^HI, Init(n), ↦, Fin(n)), where Init(n) = [−1, −1, {S′ → •⊥•S}, 0, n], Fin(n) = [−1, −1, {S′ → •⊥S•}, n, n], and ↦ defined:

1a [i, k, Q, m, j] ↦ [i, k, Q, m, j] [m, p−1, Q′, p, j]
   where there is p such that m + 1 < p ≤ j and a_p = a and Q′ = gotoright₁(Q, a) is not empty
2a [i, k, Q, m, j] ↦ [i, k, Q, m, j] [i, k, Q′, m+1, j]
   provided m < j and a_{m+1} = a and Q′ = gotoright₂(Q, a) is not empty
3a [i, k, Q, m, j] I₁ … I_{r−1} [i′, k′, Q′, m′, j′] ↦ [i, k, Q, m, j] [i′, k′, Q″, m′, j′]
   provided m < k′, where there is B → •X₁…X_r• ∈ Q′ such that Q″ = gotoright₁(Q, B) is not empty
4a [i, k, Q, m, j] I₁ … I_{r−1} [i′, k′, Q′, m′, j′] ↦ [i, k, Q, m, j] [i, k, Q″, m′, j]
   provided m = k′ or k = k′, where there is B → •X₁…X_r• ∈ Q′ such that Q″ = gotoright₂(Q, B) is not empty

We feel that this algorithm has only limited advantages over the EHI algorithm for other than degenerate head grammars, in which the heads occur either mostly leftmost or mostly rightmost in right-hand sides. In particular, if there are few sequences of rules of the form A → A̲₁α₁, A₁ → A̲₂α₂, …, A_{m−1} → A̲_mα_m, or of the form A → α₁A̲₁, A₁ → α₂A̲₂, …, A_{m−1} → α_mA̲_m, then the left and right head-corner relations are very sparse and HI parsing virtually simplifies to EHI parsing. In the following we discuss a variant of head grammars which may provide more opportunities to use the advantages of the LR technique.

A generalization of head grammars

The essence of head-driven parsing is that there is a distinguished member in each rhs which is recognized first. Subsequently, the other members to the right and to the left of the head may be recognized.
³ I₁ … I_{r−1} represent a number of items, as many as there are members in the rule recognized, minus one.

An artifact of most head-driven parsing algorithms is that the members to the left of the head are recognized strictly from right to left, and vice versa for the members to the right of the head (although recognition of the members in the left part and in the right part may be interleaved). This restriction does not seem to be justified, except by some practical considerations, and it prevents truly non-directional parsing.

We propose a generalization of head grammars in such a way that each of the two parts of a rhs on both sides of the head again has a head. The same holds recursively for the smaller parts of the rhs. The consequence is that a rhs can be seen as a binary tree, in which each node is labelled by a grammar symbol. The root of the tree represents the main head. The left son of the root represents the head of the part of the rhs to the left of the main head, etc.

We denote binary trees using a linear notation. For example, if α and β are binary trees, then (α)X(β) denotes the binary tree consisting of a root labelled X, a left subtree α and a right subtree β. The notation of empty (sub)trees (ε) may be omitted. The relation →* ignores the head information as usual.

Regarding generative capacity, generalized head grammars have no more power than traditional head grammars. This fact is demonstrated by a transformation τ_head from the former to the latter class of grammars. A transformed grammar τ_head(G) contains special nonterminals of the form [α], where α is a proper subtree of some rhs in the original grammar G = (N, T, P, S). The rules of the transformed grammar are given by:

A → [α] X̲ [β]            for each A → (α)X(β) ∈ P
[(α)X(β)] → [α] X̲ [β]    for each proper subtree (α)X(β) of a rhs in G

where we assume that each member of the form [ε] in the transformed grammar is omitted.
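The flattening performed by τ_head can be sketched as follows. This is our own illustration, not the paper's implementation: a tree is encoded as a tuple (left, X, right), with None for the empty tree, and the bracketed nonterminal names are built by flattening the subtree:

```python
# Illustration of tau_head (our sketch): a generalized rule
# A -> (alpha)X(beta) is flattened into head-grammar rules over
# bracketed nonterminals, one rule per proper subtree.

def name(tree):
    if tree is None:
        return None
    left, x, right = tree
    return '[' + (name(left) or '') + x + (name(right) or '') + ']'

def tau_head(lhs, tree, rules, top=True):
    left, x, right = tree
    # emit lhs -> [left] X [right], dropping empty members
    rhs = [m for m in (name(left), x, name(right)) if m is not None]
    rules.append((lhs if top else name(tree), rhs))
    if left:
        tau_head(None, left, rules, top=False)
    if right:
        tau_head(None, right, rules, top=False)
    return rules

# S -> ((c)A(b))s : main head s, with left subtree (c)A(b)
tree = (((None, 'c', None), 'A', (None, 'b', None)), 's', None)
rules = tau_head('S', tree, [])
```

This produces S → [[c]A[b]] s, then [[c]A[b]] → [c] A [b], [c] → c and [b] → b; in each resulting rule the middle member plays the role of the head.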
It is interesting to note that τ_head is a generalization of a transformation τ_two which can be used to transform a context-free grammar into two normal form (each rhs contains one or two symbols). A transformed grammar τ_two(G) contains special nonterminals of the form [α], where α is a proper suffix of a rhs in G. The rules of τ_two(G) are given by

A → X [α]        for each A → Xα ∈ P
[Xα] → X [α]     for each proper suffix Xα of a rhs in G

where we assume that each member of the form [ε] in the transformed grammar is omitted.

HI parsing revisited

Our next step is to show that generalized head grammars can be effectively handled with a generalization of HI parsing (generalized HI (GHI) parsing). This new algorithm exhibits a superficial similarity to the 2-dimensional LR parsing algorithm from [16]. For a set Q of trees and rules (it is interesting to compare the relation between trees and rules with the one between kernel and nonkernel items of LR parsing [1]), closure(Q) is defined to be the smallest set which satisfies

closure(Q) ⊇ Q ∪ {A → (α)X(β) ∈ P | (γ)A(δ) ∈ closure(Q) ∨ B → (γ)A(δ) ∈ closure(Q)}

The trees or rules of which the main head is some specified symbol X can be selected from a set Q by

goto(Q, X) = {t ∈ Q | t = (α)X(β) ∨ t = A → (α)X(β)}

In a similar way, we can select trees and rules according to a left or right subtree:

gotoleft(Q, α) = {t ∈ Q | t = (α)X(β) ∨ t = A → (α)X(β)}

We assume a symmetric definition for gotoright. When we set out to recognize the left subtrees from a set of trees and rules, we use the following function:

left(Q) = closure({α | (α)X(β) ∈ Q ∨ A → (α)X(β) ∈ Q})

We assume a symmetric definition for right.

The set I^GHI contains different kinds of item:

• Items of the form [i, k, Q, m, j], with i ≤ k ≤ m ≤ j, indicate that trees (α)X(β) and rules A → (α)X(β) in Q are needed deriving a substring of a_{i+1} … a_j, where X →* a_{k+1} … a_m has already been established.
• Items of the form [k, Q, m, j], with k ≤ m ≤ j, indicate that trees (α)X(β) and rules A → (α)X(β) in Q are needed deriving a substring of a_{k+1} … a_j, where αX →* a_{k+1} … a_m has already been established. Items of the form [i, k, Q, m] have a symmetric meaning.

• Items of the form [k, t, m], with k ≤ m, indicate that γ →* a_{k+1} … a_m has been established for tree t = γ or rule t = A → γ.

Algorithm 6 (Generalized HI parsing) A^GHI = (T, I^GHI, Init(n), ↦, Fin(n)), where Init(n) = [−1, {S′ → ⊥(S)}, 0, n], Fin(n) = [−1, S′ → ⊥(S), n], and ↦ defined:

1a [i, k, Q, m, j] ↦ [i, k, Q′, m] provided Q′ = gotoright(Q, ε) is not empty
1b [i, k, Q, m, j] ↦ [k, Q′, m, j] provided Q′ = gotoleft(Q, ε) is not empty
1c [k, Q, m, j] ↦ [k, t, m] provided t ∈ gotoright(Q, ε)
1d [i, k, Q, m] ↦ [k, t, m] provided t ∈ gotoleft(Q, ε)
2a [i, k, Q, m, j] ↦ [i, k, Q, m, j] [m, p−1, Q′, p, j]
   where there is p such that m < p ≤ j and Q′ = goto(right(Q), a_p) is not empty
2b [i, k, Q, m, j] ↦ [i, k, Q, m, j] [i, p−1, Q′, p, k]
   where there is p such that i < p ≤ k and Q′ = goto(left(Q), a_p) is not empty
[Figure 1: Generalized HI parsing. The figure gives, for the grammar and input of Example 1, the successive stack contents of the automaton together with the clause applied at each step.]

3a [k, Q, m, j] ↦ [k, Q, m, j] [m, p−1, Q′, p, j]
   where there is p such that m < p ≤ j and Q′ = goto(right(Q), a_p) is not empty
3b [i, k, Q, m] ↦ [i, k, Q, m] [i, p−1, Q′, p, k]
   where there is p such that i < p ≤ k and Q′ = goto(left(Q), a_p) is not empty
4a [i, k, Q, m, j] [k′, γ, m′] ↦ [i, k, Q′, m′] provided m = k′, where Q′ = gotoright(Q, γ)
4b Symmetric to 4a (cf. 2a and 2b)
5a [k, Q, m, j] [k′, γ, m′] ↦ [k, t, m′] provided m = k′, where t ∈ gotoright(Q, γ)
5b Symmetric to 5a (cf. 3a and 3b)
6a [i, k, Q, m, j] [k′, A → γ, m′] ↦ [i, k, Q, m, j] [m, k′, Q′, m′, j]
   provided m ≤ k′, where Q′ = goto(right(Q), A)
6b Symmetric to 6a
7a [k, Q, m, j] [k′, A → γ, m′] ↦ [k, Q, m, j] [m, k′, Q′, m′, j]
   provided m ≤ k′, where Q′ = goto(right(Q), A)
7b Symmetric to 7a

The algorithm above is based on the transformation τ_head. It is therefore not surprising that this algorithm is reminiscent of LR parsing [1] for a transformed grammar τ_two(G). For most clauses, a rough correspondence with actions of LR parsing can be found: Clauses 2 and 3 correspond with shifts. Clause 5 corresponds with reductions with rules of the form [Xα] → X [α] in τ_two(G). Clauses 6 and 7 correspond with reductions with rules of the form A → X [α] in τ_two(G).
For Clauses 1 and 4, corresponding actions are hard to find, since these clauses seem to be specific to generalized head-driven parsing.

The reason that we based Algorithm 6 on τ_head is twofold. Firstly, the algorithm above is more appropriate for presentational purposes than an alternative algorithm we have in mind which is not based on τ_head, and secondly, the resulting parsers need fewer sets Q. This is similar in the case of LR parsing.⁵

Example 1 Consider the generalized head grammar with the following rules:

S → ((c)A(b))s | (A(d))s | (B)s
A → a
B → A(b)

Assume the input is given by a₁a₂a₃a₄ = c a b s. The steps performed by the algorithm are given in Figure 1. □

Apart from HI parsing, also TD, HC, PHI, and EHI parsing can be adapted to generalized head-driven parsing.

Correctness

The head-driven stack automata studied so far differ from one another in their degree of nondeterminism. In this section we take a different perspective. For all these devices, we show that quite similar relations exist between stack contents and the way input strings are visited. Correctness results easily follow from such characterisations. (Proofs of statements in this section are omitted for reasons of space.)

Let G = (N, T, P, S) be a head grammar. To be used below, we introduce a special kind of derivation.

⁵ It is interesting to compare LR parsing for a context-free grammar G with LR parsing for the transformed grammar τ_two(G). The transformation has the effect that a reduction with a rule is replaced by a cascade of reductions with smaller rules; apart from this, the transformation does not affect the global run-time behaviour of LR parsing. More serious are the consequences for the size of the parser: the required number of LR states for the transformed grammar is smaller [9].

[Figure 2 diagram omitted.] Figure 2: A head-outward sentential form derived by the composition of σ-derivations ρᵢ, 1 ≤ i ≤ 3.
The starting place of each σ-derivation is indicated, each triangle representing the application of a single production.

Definition 1 A σ-derivation has the form

A ⇒(p₁p₂…p_{s−1}) γ₀Bγ₁ ⇒(p_s) γ₀αηβγ₁ ⇒(ρ) γ₀αzβγ₁    (1)

where p₁, p₂, …, p_s are productions in P†, s ≥ 1, pᵢ rewrites the unique nonterminal occurrence introduced as the head element of p_{i−1} for 2 ≤ i ≤ s, p_s = (B → αη̲β), and ρ ∈ P* rewrites η into z ∈ T⁺.

The indicated occurrence of string η in (1) is called the handle of the σ-derivation. When defined, the rightmost (leftmost) nonterminal occurrence in α (β, respectively) is said to be adjacent to the handle. The notions of handle and adjacent nonterminal occurrence extend in an obvious way to derivations of the form φAψ ⇒* φγ₀zγ₁ψ, where A ⇒* γ₀zγ₁ is a σ-derivation.

By composing σ-derivations, we can now define the class of sentential forms we are interested in. (Figure 2 shows a case example.)

Definition 2 A head-outward sentential form is obtained through a derivation

S ⇒(ρ₁) γ_{1,0} z_{1,1} γ_{1,1} ⇒(ρ₂) γ_{2,0} z_{2,1} γ_{2,1} z_{2,2} γ_{2,2} ⇒ ⋯ ⇒(ρ_q) γ_{q,0} z_{q,1} γ_{q,1} z_{q,2} γ_{q,2} ⋯ γ_{q,q−1} z_{q,q} γ_{q,q}    (2)

where q ≥ 1, each ρᵢ is a σ-derivation and, for 2 ≤ i ≤ q, only one string γ_{i−1,j} is rewritten, by applying ρᵢ at a nonterminal occurrence adjacent to the handle of ρ_{i−1}. The sequence ρ₁, ρ₂, …, ρ_q is said to derive the sentential form in (2).

The definition of head-outward sentential form suggests a corresponding notion of head-outward derivation. Informally, a head-outward derivation proceeds by recursively expanding to a terminal string first the head of a rule, and then the remaining members of the rhs, in an outward order. Conversely, we have head-inward (HI) derivations, where first the remaining members in the rhs are expanded, in an inward order (toward the head), after which the head itself is recursively expanded. Note that HI parsing recognizes a string by computing an HI derivation in reverse (cf. LR parsing).
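For illustration, one admissible head-outward visiting order of the members of a single rhs, and a head-inward order as its reverse, can be sketched as follows. The encoding is ours (rhs as a list, h the index of the head), and this shows only the order over one rule, not full derivations:

```python
# Illustration (our encoding): head-outward expansion starts at the head
# and works outward; head-inward expansion is one such order reversed,
# so the head comes last.

def head_outward(rhs, h):
    order = [rhs[h]]
    left, right = h - 1, h + 1
    while left >= 0 or right < len(rhs):
        if left >= 0:
            order.append(rhs[left])
            left -= 1
        if right < len(rhs):
            order.append(rhs[right])
            right += 1
    return order

def head_inward(rhs, h):
    # outside in, the head coming last
    return list(reversed(head_outward(rhs, h)))

members = ['a', 'b', 'H', 'c', 'd']   # H is the head
```

HI parsing computes such a head-inward order in reverse, just as LR parsing computes rightmost derivations in reverse.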
Let w = a₁a₂⋯a_n, n ≥ 1, be a string over T and let a₀ = ⊥. For −1 ≤ i ≤ j ≤ n, we write (i, j]_w to denote the substring a_{i+1}⋯a_j.

Theorem 1 For A one of A^HC, A^PHI or A^EHI, the following facts are equivalent:

(i) A reaches a configuration whose stack contents are I₁I₂⋯I_q, q ≥ 1, with I_t = [i_t, k_t, A_t → α_t•η_t•β_t, m_t, j_t] or I_t = [i_t, k_t, A_t → η_t, m_t, j_t] or I_t = [i_t, k_t, Δ_t → η_t, m_t, j_t] for the respective automata, 1 ≤ t ≤ q;

(ii) a sequence of σ-derivations ρ₁, ρ₂, …, ρ_q, q ≥ 1, derives a head-outward sentential form

γ₀ (k_{π(1)}, m_{π(1)}]_w γ₁ (k_{π(2)}, m_{π(2)}]_w γ₂ ⋯ γ_{q−1} (k_{π(q)}, m_{π(q)}]_w γ_q

where π is a permutation of {1, …, q}, ρ_t has handle η_t which derives (k_{π(t)}, m_{π(t)}]_w, 1 ≤ t ≤ q, and m_{π(t−1)} ≤ k_{π(t)}, 2 ≤ t ≤ q.

As an example, an accepting stack configuration [−1, −1, S′ → •⊥S•, n, n] corresponds to a σ-derivation (S′ → ⊥S)ρ, ρ ∈ P⁺, with handle ⊥S which derives the head-outward sentential form γ₀(−1, n]_wγ₁ = ⊥w, from which the correctness of the head-corner algorithm follows directly.

If we assume that G does not contain any useless symbols, then Theorem 1 has the following consequence: if the automaton at some point has consulted the symbols a_{i₁}, a_{i₂}, …, a_{i_m} from the input string, i₁, …, i_m increasing indexes, then there is a string in the language generated by G of the form v₀a_{i₁}v₁ ⋯ v_{m−1}a_{i_m}v_m. Such a statement may be called the correct subsequence property (a generalization of the correct prefix property [8]). Note that the order in which the input symbols are consulted is only implicit in Theorem 1 (the permutation π) but is severely restricted by the definition of head-outward sentential form. A more careful characterisation can be obtained, but would take us outside the scope of this paper.

The correct subsequence property is enforced by the (top-down) predictive feature of the automata, and it holds also for A^TD and A^HI. Characterisations similar to Theorem 1 can be provided for these devices.
We investigate below the GHI automaton. For an item I ∈ I^GHI of the form [i, k, Q, m, j], [k, Q, m, j], [i, k, Q, m] or [k, t, m], we say that k (m, respectively) is its left (right) component. Let N′ be the set of nonterminals of the head grammar τ_head(G). We need a function yld from reachable items in I^GHI into (N′ ∪ T)*, specified as follows. If we assume that (α)X(β) ∈ Q ∨ A → (α)X(β) ∈ Q and t = (α)X(β) ∨ t = A → (α)X(β), then

yld(I) = X        if I = [i, k, Q, m, j]
         [α]X     if I = [k, Q, m, j]
         X[β]     if I = [i, k, Q, m]
         [α]X[β]  if I = [k, t, m]

It is not difficult to show that the definition of yld is consistent (i.e. the particular choice of a tree or rule from Q is irrelevant).

Theorem 2 The following facts are equivalent:

(i) A^GHI reaches a configuration whose stack contents are I₁I₂⋯I_q, q ≥ 1, with k_t and m_t the left and right components, respectively, of I_t, and yld(I_t) = η_t, for 1 ≤ t ≤ q;

(ii) a sequence of σ-derivations ρ₁, ρ₂, …, ρ_q, q ≥ 1, derives in τ_head(G) a head-outward sentential form

γ₀ (k_{π(1)}, m_{π(1)}]_w γ₁ (k_{π(2)}, m_{π(2)}]_w γ₂ ⋯ γ_{q−1} (k_{π(q)}, m_{π(q)}]_w γ_q

where π is a permutation of {1, …, q}, ρ_t has handle η_t which derives (k_{π(t)}, m_{π(t)}]_w, 1 ≤ t ≤ q, and m_{π(t−1)} ≤ k_{π(t)}, 2 ≤ t ≤ q.

Discussion

We have presented a family of head-driven algorithms: TD, HC, PHI, EHI, and HI parsing. The existence of this family demonstrates that head-driven parsing covers a range of parsing algorithms wider than commonly thought.

The algorithms in this family are increasingly deterministic, which means that the search trees have a decreasing size, and therefore simple realizations, such as backtracking, are increasingly efficient. However, similar to the left-to-right case, this does not necessarily hold for tabular realizations of these algorithms. The reason is that the more refined an algorithm is, the more items represent the computation of a single subderivation, and therefore some subderivations may be computed more than once.
This is called redundancy. Redundancy has been investigated for the left-to-right case in [8], which solves this problem for ELR parsing. Head-driven algorithms have an additional source of redundancy, which has been solved for tabular HC parsing in [14]. The idea from [14] can also be applied to the other head-driven algorithms from this paper.

We have further proposed a generalization of head-driven parsing, and we have shown an example of such an algorithm based on LR parsing. Prospects to generalize the ideas from this paper even further seem promising.

References

[1] A.V. Aho, R. Sethi, and J.D. Ullman. Compilers: Principles, Techniques, and Tools. Addison-Wesley, 1986.
[2] G. Bouma and G. van Noord. Head-driven parsing for lexicalist grammars: Experimental results. In Sixth Conference of the European Chapter of the ACL, pages 71-80, April 1993.
[3] G. Gazdar, E. Klein, G. Pullum, and I. Sag. Generalized Phrase Structure Grammar. Harvard University Press, Cambridge, MA, 1985.
[4] Third International Workshop on Parsing Technologies (IWPT 3), Tilburg (The Netherlands) and Durbuy (Belgium), August 1993.
[5] R. Jackendoff. X-bar Syntax: A Study of Phrase Structure. The MIT Press, Cambridge, MA, 1977.
[6] M. Kay. Head-driven parsing. In International Parsing Workshop '89, pages 52-62, Pittsburgh, 1989.
[7] R. Leermakers. How to cover a grammar. In 27th Annual Meeting of the ACL, pages 135-142, June 1989.
[8] M.J. Nederhof. An optimal tabular parsing algorithm. In these proceedings.
[9] M.J. Nederhof and J.J. Sarbo. Increasing the applicability of LR parsing. In [4], pages 187-201.
[10] G. van Noord. Reversibility in Natural Language Processing. PhD thesis, University of Utrecht, 1993.
[11] C. Pollard and I. Sag. Information-Based Syntax and Semantics, Volume 1: Fundamentals. CSLI Lecture Notes Series No. 13, Center for the Study of Language and Information, Stanford University, Stanford, California, 1987.
[12] P.W. Purdom, Jr. and C.A. Brown.
Parsing extended LR(k) grammars. Acta Informatica, 15:115-127, 1981.
[13] G. Satta and O. Stock. Head-driven bidirectional parsing: A tabular method. In International Parsing Workshop '89, pages 43-51, Pittsburgh, 1989.
[14] K. Sikkel and R. op den Akker. Predictive head-corner chart parsing. In [4], pages 267-276.
[15] E. Soisalon-Soininen and E. Ukkonen. A method for transforming grammars into LL(k) form. Acta Informatica, 12:339-369, 1979.
[16] M. Tomita. Parsing 2-dimensional language. In M. Tomita, editor, Current Issues in Parsing Technology, chapter 18, pages 277-289. Kluwer Academic Publishers, 1991.
PRIORITY UNION AND GENERALIZATION IN DISCOURSE GRAMMARS

Claire Grover, Chris Brew, Suresh Manandhar, Marc Moens
HCRC Language Technology Group
The University of Edinburgh
2 Buccleuch Place
Edinburgh EH8 9LW, UK
Internet: C.Grover@ed.ac.uk

Abstract

We describe an implementation in Carpenter's typed feature formalism, ALE, of a discourse grammar of the kind proposed by Scha, Polanyi, et al. We examine their method for resolving parallelism-dependent anaphora and show that there is a coherent feature-structural rendition of this type of grammar which uses the operations of priority union and generalization. We describe an augmentation of the ALE system to encompass these operations and we show that an appropriate choice of definition for priority union gives the desired multiple output for examples of VP-ellipsis which exhibit a strict/sloppy ambiguity.

1 Discourse Grammar

Working broadly within the sign-based paradigm exemplified by HPSG (Pollard and Sag, in press), we have been exploring computational issues for a discourse-level grammar by using the ALE system (Carpenter 1993) to implement a discourse grammar. Our central model of a discourse grammar is the Linguistic Discourse Model (LDM) most often associated with Scha, Polanyi, and their co-workers (Polanyi and Scha 1984, Scha and Polanyi 1988, Prüst 1992, and most recently Prüst, Scha and van den Berg 1994). In the LDM, rules are defined which are, in a broad sense, unification grammar rules and which combine discourse constituent units (DCUs). These are simple clauses whose syntax and underresolved semantics have been determined by a sentence grammar but whose fully resolved final form can only be calculated by their integration into the current discourse and its context.
The rules of the discourse grammar act to establish the rhetorical relations between constituents and to perform resolution of those anaphors whose interpretation can be seen as a function of discourse coherence (as opposed to those whose interpretation relies on general knowledge). For illustrative purposes, we focus here on Prüst's rules for building one particular type of rhetorical relation, labelled "list" (Prüst 1992). His central thesis is that for DCUs to be combined into a list they must exhibit a degree of syntactic-semantic parallelism and that this parallelism will strongly determine the way in which some kinds of anaphor are resolved. The clearest example of this is VP-ellipsis as in (1a), but Prüst also claims that the subject and object pronouns in (1b) and (1c) are parallelism-dependent anaphors when they occur in list structures and must therefore be resolved to the corresponding fully referential subject/object in the first member of the list.

(1) a. Hannah likes beetles. So does Thomas.
    b. Hannah likes beetles. She also likes caterpillars.
    c. Hannah likes beetles. Thomas hates them.

(2) is Prüst's list construction rule. It is intended to capture the idea that a list can be constructed out of two DCUs, combined by means of connectives such as and and or. The categories in Prüst's rules have features associated with them. In (2) these features are sem (the unresolved semantic interpretation of the category), consem (the contextually resolved semantic interpretation), and schema (the semantic information that is common between the daughter categories).

(2) list [sem: S1 R ((C1 / S2) ⊓ S2), schema: C1 / S2]
        → DCU1 [sem: S1, consem: C1] + DCU2 [sem: R S2, consem: ((C1 / S2) ⊓ S2)]
    Conditions: C1 / S2 is a characteristic generalization of C1 and S2;
                R ∈ {and, or, …}.

Prüst calls the operation used to calculate the value for schema the most specific common denominator (MSCD, indicated here by the symbol /).
The MSCD of C1 and S2 is defined as the most specific generalization of C1 that can unify with S2. It is essential that the result should be contentful to a degree that confirms that the list structure is an appropriate analysis, and to this end Prüst imposes the condition that the value of schema should be a characteristic generalization of the information contributed by the two daughters. There is no formal definition of this notion; it would require knowledge from many sources to determine whether sufficient informativeness had been achieved. However, assuming that this condition is met, Prüst uses the common information as a source for resolution of underspecified elements in the second daughter by encoding as the value of the second daughter's consem the unification of the result of MSCD with its pre-resolved semantics (the formula ((C1 / S2) ⊓ S2)). So in Prüst's rule the MSCD operation plays two distinct roles, first as a test for parallelism (as the value of the mother's schema) and second as a basis for resolution (in the composite operation which is the value of the second daughter's consem). There are certain problems with MSCD which we claim stem from this attempt to use one operation for two purposes, and our primary concern is to find alternative means of achieving Prüst's intended analysis.
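A minimal sketch may make the MSCD operation, and in particular its asymmetry, concrete. The encoding below is our own illustration, not ALE code: feature structures are flat dicts of types, and a single-inheritance parent table stands in for the type hierarchy introduced in the next section:

```python
# Illustration (not ALE code): MSCD over flat typed feature structures.
# For each feature, the value of the first argument is weakened just
# enough (by climbing the hierarchy) to unify with the second argument.

parent = {
    'like': 'emot_att', 'hate': 'emot_att', 'emot_att': 'event',
    'hannah': 'female', 'jessy': 'female', 'female': 'human',
    'thomas': 'male', 'male': 'human', 'human': 'animate',
    'beetle': 'insect', 'insect': 'animal',
    'animal': 'animate', 'animate': 'entity',
}

def ancestors(t):
    chain = [t]
    while t in parent:
        t = parent[t]
        chain.append(t)
    return chain

def unifiable(a, b):
    # two types unify iff one subsumes the other (single inheritance)
    return a in ancestors(b) or b in ancestors(a)

def mscd_type(c, s):
    """Most specific generalization of c that still unifies with s."""
    for t in ancestors(c):
        if unifiable(t, s):
            return t
    raise ValueError('no common supertype')

def mscd(c1, s2):
    return {f: mscd_type(c1[f], s2[f]) for f in c1}

c1 = {'type': 'like', 'AGENT': 'hannah', 'PATIENT': 'beetle'}  # cf. (5a)
s2 = {'type': 'hate', 'AGENT': 'thomas', 'PATIENT': 'entity'}  # cf. (5d)
```

With these inputs, mscd(c1, s2) keeps the specific PATIENT value beetle, while swapping the arguments weakens PATIENT to entity, showing that the order of the arguments matters.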
2 An ALE Discourse Grammar

For our initial exploration into using ALE for discourse grammars we have developed a small discourse grammar whose lexical items are complete sentences (to circumvent the need for a sentence grammar) and which represents the semantic content of sentences using feature structures of type event, whose sub-types are indicated in the following part of the type hierarchy:

(3)                      event
              agentive        plus_patient
         prop-att         emot-att          action
     believe  assume      like  hate      kick  catch

In addition we have a very simplified semantics of noun phrases where we encode them as of type entity with the subtypes indicated below:

(4)                      entity
                        animate
              human                 animal
         female      male               insect
     hannah jessy  thomas sam brother   beetle bee caterpillar

Specifications of which features are appropriate for which type give us the following representations of the semantic content of the discourse units in (1):

(5) a. Hannah likes beetles
       [like, AGENT hannah, PATIENT beetle]
    b. So does Thomas
       [agentive, AGENT thomas]
    c. She also likes caterpillars
       [like, AGENT female, PATIENT caterpillar]
    d. Thomas hates them
       [hate, AGENT thomas, PATIENT entity]
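The hierarchies in (3) and (4) can be encoded, for illustration, as a parent table over which the generalization of two types is their least common supertype. The sketch below reflects our reading of the figures (flattened to single inheritance) and is not the ALE source:

```python
# Illustration only: a single-inheritance rendering of hierarchies (3)
# and (4). generalize(a, b) is the most specific type subsuming both.

parent = {
    'agentive': 'event', 'plus_patient': 'event',
    'emot_att': 'agentive',
    'like': 'emot_att', 'hate': 'emot_att',
    'animate': 'entity', 'human': 'animate', 'animal': 'animate',
    'female': 'human', 'male': 'human', 'insect': 'animal',
    'hannah': 'female', 'jessy': 'female',
    'thomas': 'male', 'sam': 'male', 'brother': 'male',
    'beetle': 'insect', 'bee': 'insect',
    'caterpillar': 'insect', 'ant': 'insect',
}

def ancestors(t):
    chain = [t]
    while t in parent:
        t = parent[t]
        chain.append(t)
    return chain

def generalize(a, b):
    """Most specific type that subsumes both a and b."""
    up = set(ancestors(b))
    for t in ancestors(a):
        if t in up:
            return t
    raise ValueError('no common supertype')
```

With this table, generalize('hannah', 'thomas') yields human and generalize('beetle', 'entity') yields entity, matching the types that appear in (6) and (7) below.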
Unlike MSCD, generalization is not asym- metric, i.e. the order in which the arguments are presented does not affect the result. The genera- lization of (5a) and (5d) is shown in (7). (7) [ AGENT human ] PATIENT entity emot_att It can be seen from this example that the MSCD result contains more information than the genera- lization result. Informally we can say that it seems to reflect the common information between the two inputs after the parallelism-dependent ana- phor in the second sentence has been resolved. The reason it is safe to use MSCD in this context is pre- cisely because its use in a list structure guarantees 18 that the pronoun in the second sentence will be resolved to beetle. In fact the result of MSCD in this case is exactly the result we would get if we were to perform the generalization of the resolved sentences and, as a representation of what the two have in common, it does seem that this is more de- sirable than the generalization of the pre-resolved forms. If we turn to other examples, however, we discover that MSCD does not always give the best results. The discourse in (8) must receive a constituent structure where the second and third clauses are combined to form a contrast pair and then this contrast pair combines with the first sentence to form a list. (Prfist has a separate rule to build contrast pairs but the use of MSCD is the same as in the list rule.) (8) Hannah likes ants. Thomas likes bees but Jessy hates them. (9) fAGENT hanna~ [._PATIENT insect_.J like AGENT hannah~ [AGENT human~ PATIENT ant _] PATIENT bee _J like e ~ ATIENT bee [ [-PATIENT entity I like hate The tree in (9) demonstrates the required struc- ture and also shows on the mother and interme- diate nodes what the results of MSCD would be. As we can see, where elements of the first argument of MSCD are more specific than the corresponding elements in the second, then the more specific one occurs in the result. 
Here, this has the effect that the structure [like, AGENT hannah, PATIENT insect] is somehow claimed to be common ground between all three constituents even though this is clearly not the case.

Our solution to this problem is to dispense with the MSCD operation and to use generalization instead. However, we do propose that generalization should take inputs whose parallelism-dependent anaphors have already been resolved.¹ In the case of the combination of (5a) and (5d), this will give exactly the same result as MSCD gave (i.e. (6)), but for the example in (8) we will get different results, as the tree in (10) shows. (Notice that the representation of the third sentence is one where the anaphor is resolved.) The resulting generalization, [emot_att, AGENT human, PATIENT insect], is a much more plausible representation of the common information between the three DCUs than the result of MSCD.

(10)          [emot_att, AGENT human, PATIENT insect]
              /                            \
   [like, AGENT hannah,          [emot_att, AGENT human,
    PATIENT ant]                  PATIENT bee]
                                 /                    \
                    [like, AGENT thomas,     [hate, AGENT jessy,
                     PATIENT bee]             PATIENT bee]

2.2 Resolution of Parallel Anaphors

We have said that MSCD plays two roles in Prüst's rules and we have shown how its function in calculating the value of SCHEMA can be better served by using the generalization operation instead. We turn now to the composite operation indicated in (2) by the formula ((C1 / S2) ⊓ S2). This composite operation calculates the MSCD and then unifies it back in with the second of its arguments in order to resolve any parallelism-dependent anaphors that might occur in the second DCU. In the discussion that follows, we will refer to the first DCU in the list rule as the source and to the second DCU as the target (because it contains a parallelism-dependent anaphor which is the target of our attempt to resolve that anaphor).

¹ As described in the next section, we use priority union to resolve these anaphors in both lists and contrasts. The use of generalization as a step towards checking that there is sufficient common ground is subsequent to the use of priority union as the resolution mechanism.
In the discus- sion that follows, we will refer to the first DcU in the list rule as the source and to the second DCU as the target (because it contains a parallelism- dependent anaphor which is the target of our at- tempt to resolve that anaphor). In our ALE implementation we replace Pr/ist's composite operation by an operation which has oc- casionally been proposed as an addition to feature- based unification systems and which is usually re- ferred to either as default unification or as priority union. 2 Assumptions about the exact definition of this operation vary but an intuitive description of it is that it is an operation which takes two feature structures and produces a result which is a merge of the information in the two inputs. However, the information in one of the feature structures is "strict" and cannot be lost or overridden while the information in the other is defensible. The opera- tion is a kind of union where the information in the strict structure takes priority over that in the ~See, for example, Bouma (1990), Calder (1990), Carpenter (1994), Kaplan (1987). 19 default structure, hence our preference to refer to it by the name priority union. Below we demon- strate the results of priority union for the exam- ples in (la)-(lc). Note that the target is the strict structure and the source is the defeasible one. (11) Hannah likes beetles. So does Thomas. Source: 5a Target: 5b Priority[ AGENT th°mas ] Union: PATIENT beetle like (12) Hannah likes beetles. She also likes caterpillars. Source: 5a Target: 5c [ AGENT hannah 1 Priority PATIENT caterpillar Union: like (13) Hannah likes beetles. Thomas hates them. Source: 5a Target: 5d AGENT thomas ] Priority PATIENT beetle Union: hate For these examples priority union gives us exactly the same results as Priist's composite operation. We use a definition of priority union provided by Carpenter (1994) (although note that his name for the operation is "credulous default unification"). 
It is discussed in more detail in Section 3. The priority union of a target T and a source S is defined as a two-step process: first calculate a maximal feature structure S' such that S' ⊑ S, and then unify the new feature structure with T. This is very similar to Prüst's composite operation, but there is a significant difference. For Prüst there is a requirement that there should always be a unique MSCD, since he also uses MSCD to calculate the common ground as a test for parallelism and there must only be one result for that purpose. By contrast, we have taken Carpenter's definition of credulous default unification, and this can return more than one result. We have strong reasons for choosing this definition even though Carpenter does define a "skeptical default unification" operation which returns only one result. Our reasons for preferring the credulous version arise from examples of VP-ellipsis which exhibit an ambiguity whereby both a "strict" and a "sloppy" reading are possible. For example, the second sentence in (14) has two possible readings, which can be glossed as "Hannah likes Jessy's brother" (the strict reading) and "Hannah likes her own brother" (the sloppy reading).

(14) Jessy likes her brother. So does Hannah.

The situations where the credulous version of the operation will return more than one result arise from structure sharing in the defeasible feature structure, and it turns out that these are exactly the places where we would need to get more than one result in order to get the strict/sloppy ambiguities. We illustrate below:

(15) Jessy likes her brother. So does Hannah.
     Source: [like, AGENT [1] jessy, PATIENT [brother, BROTHER-OF [1]]]
     Target: [agentive, AGENT hannah]
     Priority Union:
       [like, AGENT [1] hannah, PATIENT [brother, BROTHER-OF [1]]]
       [like, AGENT hannah, PATIENT [brother, BROTHER-OF jessy]]

Here priority union returns two results, one where the structure-sharing information in the source has been preserved and one where it has not.
As the example demonstrates, this gives the two readings required. By contrast, Carpenter's skeptical default unification operation and Prüst's composite operation return only one result.

2.3 Higher Order Unification

There are similarities between our implementation of Prüst's grammar and the account of VP-ellipsis described by Dalrymple, Shieber and Pereira (1991) (henceforth DSP). DSP gives an equational characterization of the problem of VP-ellipsis where the interpretation of the target phrase follows from an initial step of solving an equation with respect to the source phrase. If a function can be found such that applying that function to the source subject results in the source interpretation, then an application of that function to the target subject will yield the resolved interpretation for the target. The method for solving such equations is "higher order unification". (16) shows all the components of the interpretation of the example in (11).

(16) Hannah likes beetles. So does Thomas.
     Source:       like(hannah, beetle)
     Target (T):   P(thomas)
     Equation:     P(hannah) = like(hannah, beetle)
     Solution:     P = λx.like(x, beetle)
     Apply to T:   like(thomas, beetle)

A prerequisite to the DSP procedure is the establishment of parallelism between source and target and the identification of parallel subparts. For example, for (16) it is necessary both that the two clauses Hannah likes beetles and So does Thomas should be parallel and that the element hannah should be identified as a parallel element. DSP indicate parallel elements in the source by means of underlines, as shown in (16). An underlined element in the source is termed a 'primary occurrence' and DSP place a constraint on solutions to equations requiring that primary occurrences be abstracted. Without the identification of hannah as a primary occurrence in (16), other equations deriving from the source might be possible, for example (17):

(17) a. P(beetle) = like(hannah, beetle)
     b.
P(like) = like(hannah, beetle)

The DSP analysis of our strict/sloppy example in (14) is shown in (18). The ambiguity follows from the fact that there are two possible solutions to the equation on the source: the first solution involves abstraction of just the primary occurrence of jessy, while the second solution involves abstraction of both the primary and the secondary occurrences. When applied to the target, these solutions yield the two different interpretations:

(18) Jessy likes her brother. So does Hannah.
     Source:     like(jessy, brother-of(jessy))
     Target:     P(hannah)
     Equation:   P(jessy) = like(jessy, brother-of(jessy))
     Sol.1 (S1): P = λx.like(x, brother-of(jessy))
     Sol.2 (S2): P = λx.like(x, brother-of(x))
     Apply S1:   like(hannah, brother-of(jessy))
     Apply S2:   like(hannah, brother-of(hannah))

DSP claim that a significant attribute of their account is that they can provide the two readings in strict/sloppy ambiguities without having to postulate ambiguity in the source. They claim this as a virtue which is matched by few other accounts of VP-ellipsis. We have shown here, however, that an account which uses priority union also has no need to treat the source as ambiguous. Our results and DSP's also converge where the treatment of cascaded ellipsis is concerned. For the example in (19) both accounts find six readings, although two of these are either extremely implausible or even impossible.

(19) John revised his paper before the teacher did, and Bill did too.

DSP consider ways of reducing the number of readings and, similarly, we are currently exploring a potential solution whereby some of the re-entrancies in the source are required to be transmitted to the result of priority union. There are also similarities between our account and the DSP account with respect to the establishment of parallelism. In the DSP analysis the determination of parallelism is separate from and a prerequisite to the resolution of ellipsis.
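The two solutions in (18), and the way applying them to the target subject yields the two readings, can be replayed directly as lambda terms. This is only an illustration of applying the solutions, not of the higher-order unification that finds them; like and brother_of are hypothetical encodings of the logical-form terms as tuples.

```python
# Hypothetical term encodings for the logical forms in (18).
def brother_of(x):
    return ('brother-of', x)

def like(x, y):
    return ('like', x, y)

S1 = lambda x: like(x, brother_of('jessy'))   # strict: abstract the primary occurrence only
S2 = lambda x: like(x, brother_of(x))         # sloppy: abstract both occurrences

# Both solve the source equation P(jessy) = like(jessy, brother-of(jessy)):
assert S1('jessy') == S2('jessy') == like('jessy', brother_of('jessy'))

# Applied to the target subject they give the two readings:
print(S1('hannah'))  # ('like', 'hannah', ('brother-of', 'jessy'))
print(S2('hannah'))  # ('like', 'hannah', ('brother-of', 'hannah'))
```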
However, they do not actually formulate how parallelism is to be determined. In our modification of Prüst's account we have taken the same step as DSP in that we separate out the part of the feature structure used to determine parallelism from the part used to resolve ellipsis. In the general spirit of Prüst's analysis, however, we have taken one step further down the line towards determining parallelism by postulating that calculating the generalization of the source and target is a first step towards showing that parallelism exists. The further condition that Prüst imposes, that the common ground should be a characteristic generalization, would conclude the establishment of parallelism. We are currently not able to define the notion of characteristic generalization, so like DSP we do not have enough in our theory to fully implement the parallelism requirement. In contrast to the DSP account, however, our feature-structural approach does not involve us having to explicitly pair up the component parts of source and target, nor does it require us to distinguish primary from secondary occurrences.

2.4 Parallelism

In the DSP approach to VP-ellipsis, and in our approach too, the emphasis has been on semantic parallelism. It has often been pointed out, however, that there can be an additional requirement of syntactic parallelism (see, for example, Kehler 1993 and Asher 1993). Kehler (1993) provides a useful discussion of the issue and argues convincingly that whether syntactic parallelism is required depends on the coherence relation involved. As the examples in (20) and (21) demonstrate, semantic parallelism is sufficient to establish a relation like contrast but it is not sufficient for building a coherent list.

(20) The problem was looked into by John, but no-one else did.

(21) *This problem was looked into by John, and Bill did too.
For a list to be well-formed, both syntactic and semantic parallelism are required:

(22) John looked into this problem, and Bill did too.

In the light of Kehler's claims, it would seem that a more far-reaching implementation of our priority union account would need to specify how the constraint of syntactic parallelism might be implemented for those constructions which require it. An HPSG-style sign, containing as it does all types of linguistic information within the same feature structure, would lend itself well to an account of syntactic parallelism. If we consider that the DTRS feature in the sign for the source clause contains the entire parse tree, including the node for the VP which is the syntactic antecedent, then ways to bring together the source VP and the target begin to suggest themselves. We have at our disposal both unification to achieve re-entrancy and the option to use priority union over syntactic subparts of the sign. In the light of this, we are confident that it would be possible to articulate a more elaborate account of VP-ellipsis within our framework and that priority union would remain the operation of choice to achieve the resolution.

3 Extensions to ALE

In the previous sections we showed that Prüst's MSCD operation would more appropriately be replaced by the related operations of generalization and priority union. We have added generalization and priority union to the ALE system, and in this section we discuss our implementation. We have provided the new operations as a complement to the definite clause component of ALE. We chose this route because we wanted to give the grammar writer explicit control of the point at which the operations were invoked. ALE adopts a simple PROLOG-like execution strategy rather than the more sophisticated control schemes of systems like CUF and TFS (Manandhar 1993).
In principle it might be preferable to allow the very general deduction strategies which these other systems support, since they have the potential to support a more declarative style of grammar-writing. Unfortunately, priority union is a non-monotonic operation, and the consequences of embedding such operations in a system providing for flexible execution strategies are largely unexplored. At least at the outset it seems preferable to work within a framework in which the grammar writer is required to take some of the responsibility for the order in which operations are carried out. Ultimately we would hope that much of this load could be taken by the system, but as a tool for exploration ALE certainly suffices.

3.1 Priority Union in ALE

We use the following definition of priority union, based on Carpenter's definition of credulous default unification:

(23) punion(T,S) = {unify(T,S') | S' ⊑ S is maximal such that unify(T,S') is defined}

punion(T,S) computes the priority union of T (target; the strict feature structure) with S (source; the defeasible feature structure). This definition relies on Moshier's (1988) definition of atomic feature structures, and on the technical result that any feature structure can be decomposed into a unification of a unique set of atomic feature structures. Our implementation is a simple proceduralization of Carpenter's declarative definition. First we decompose the default feature structure into a set of atomic feature structures, then we search for the maximal subsets required by the definition. We illustrate our implementation of priority union in ALE with the example in (15): Source is the default input, and Target is the strict input. The hierarchy we assume is the same as shown in (3) and (4). Information about how features are associated with types is as follows:

• The type agentive introduces the feature AGENT with range type human.
• The type plus-patient introduces the feature PATIENT with range type human.
• The type brother introduces the feature BROTHER-OF with range type human.
• The types jessy and hannah introduce no features.

In order to show the decomposition into atomic feature structures we need a notation to represent paths and types. We show paths like this: PATIENT|BROTHER-OF, and in order to stipulate that the PATIENT feature leads to a structure of type brother, we include type information in this way: (PATIENT/brother)|(BROTHER-OF/human). We introduce a special feature (*) to allow specification of the top-level type of the structure. The structures in (15) decompose into the following atomic components.

(24) Default input:
     (AGENT/jessy)                           (D1)
     (PATIENT/brother)|(BROTHER-OF/jessy)    (D2)
     AGENT ≐ PATIENT|BROTHER-OF              (D3)
     (*/like)                                (D4)

     Strict input:
     (AGENT/hannah)                          (S1)
     (*/agentive)                            (S2)

Given the type hierarchy, the expressions above expand to the following typed feature structures:

(25) Default input:
     (D1) [agentive, AGENT jessy]
     (D2) [plus-patient, AGENT human, PATIENT [brother, BROTHER-OF jessy]]
     (D3) [plus-patient, AGENT [1] human, PATIENT [brother, BROTHER-OF [1]]]
     (D4) [like, AGENT human, PATIENT entity]

     Strict input:
     (S1,S2) [agentive, AGENT hannah]

We can now carry out the following steps in order to generate the priority union.

1. Add (D4) to the strict input. It cannot conflict.
2. Note that it is impossible to add (D1) to the strict input.
3. Non-deterministically add either (D2) or (D3) to the strict input.
4. Note that the results are maximal in each case because it is impossible to add both (D2) and (D3) without causing a clash between the disjoint atomic types hannah and jessy.
5. Assemble the results into feature structures. If we have added (D3) the result will be (26) and if we have added (D2) the result will be (27).
(26) Result 1: [like, AGENT [1] hannah, PATIENT [brother, BROTHER-OF [1]]]

(27) Result 2: [like, AGENT hannah, PATIENT [brother, BROTHER-OF jessy]]

In order to make this step-by-step description into an algorithm, we have used a breadth-first search routine with the property that the largest sets are generated first. We collect answers in the order in which the search comes upon them and carry out subsumption checks to ensure that all the answers which will be returned are maximal. These checks reduce to checks on subset inclusion, which can be reasonably efficient with suitable set representations. Consistency checking is straightforward because the ALE system manages type information in a manner which is largely transparent to the user. Unification of ALE terms is defined in such a way that if adding a feature to a term results in a term of a new type, then the representation of the structure is specialized to reflect this. Since priority union is non-deterministic, we will finish with a set of maximal consistent subsets. Each of these subsets can be converted directly into ALE terms using ALE's built-in predicate add_to/5. The resulting set of ALE terms is the (disjunctive) result of priority union. In general we expect priority union to be a computationally expensive operation, since we cannot exclude pathological cases in which the system has to search an exponential number of subsets in the search for the maximal consistent elements which are required. In the light of this it is fortunate that our current discourse grammars do not require frequent use of priority union. Because of the inherent complexity of the task we have favoured correctness and clarity at the possible expense of efficiency. Once it becomes established that priority union is a useful operation we can begin to explore the possibilities for faster implementations.
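The search just described can be sketched in miniature. The code below is a simplified stand-in for the ALE implementation: feature structures are bare nested dicts with no types and no structure sharing, so, unlike (26)-(27), the example yields only the single result corresponding to (27). It does, however, show the decomposition into atomic pieces and the largest-subsets-first search for maximal consistent subsets.

```python
from itertools import combinations

def atoms(fs, path=()):
    """Decompose a nested-dict feature structure into atomic (path, value) pieces."""
    if isinstance(fs, dict):
        for feat, val in fs.items():
            yield from atoms(val, path + (feat,))
    else:
        yield (path, fs)

def add_atom(fs, path, value):
    """Return a copy of fs with the atom added, or None on a clash."""
    if not path:
        return value if (fs is None or fs == value) else None
    if fs is None:
        fs = {}
    if not isinstance(fs, dict):
        return None
    new = dict(fs)
    sub = add_atom(new.get(path[0]), path[1:], value)
    if sub is None:
        return None
    new[path[0]] = sub
    return new

def punion(target, source):
    """Credulous default unification: add maximal consistent subsets of the
    source's atoms to the target, trying the largest subsets first."""
    src = list(atoms(source))
    results, kept = [], []   # kept: atom subsets behind each success, for maximality
    for size in range(len(src), -1, -1):
        for subset in combinations(src, size):
            if any(set(subset) <= k for k in kept):
                continue     # not maximal: a superset already succeeded
            fs = target
            for path, value in subset:
                fs = add_atom(fs, path, value)
                if fs is None:
                    break
            if fs is not None:
                kept.append(set(subset))
                results.append(fs)
    return results

target = {'AGENT': 'hannah'}
source = {'AGENT': 'jessy', 'PATIENT': {'BROTHER-OF': 'jessy'}}
print(punion(target, source))  # [{'AGENT': 'hannah', 'PATIENT': {'BROTHER-OF': 'jessy'}}]
```

Adding types and re-entrancies (the D3-style path equations) is what makes the real search non-deterministic and yields the two results of (15).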
3.2 Generalization in ALE

The abstract definition of generalization stipulates that the generalization of two categories is the largest category which subsumes both of them. Moshier (1988) has shown that generalization can be defined as the intersection of sets of atomic feature structures. In the previous section we outlined how an ALE term can be broken up into atomic feature structures. All that is now required is the set intersection operation, with the addition that we also need to cater for the possibility that atomic types may have a consistent generalization.

1. For P and Q complex feature structures,
   Gen(P,Q) =def {Path : C | Path : A ∈ P and Path : B ∈ Q}
   where C is the most specific type which subsumes both A and B.
2. For A and B atomic types,
   Gen(A,B) =def C
   where C is the most specific type which subsumes both A and B.

In ALE there is always a unique type for the generalization. We have made a small extension to the ALE compiler to generate a table of type generalizations to assist in the (relatively) efficient computation of generalization. To illustrate, we show how the generalization of the two feature structures in (28) and (29) is calculated.

(28) Hannah likes ants.
     [like, AGENT hannah, PATIENT ant]

(29) Jessy laughs.
     [laugh, AGENT jessy]

These decompose into the atomic components shown in (30) and (31) respectively.

(30) (*/like)
     (AGENT/hannah)
     (PATIENT/ant)

(31) (*/laugh)
     (AGENT/jessy)

These have only the AGENT path in common, although with different values, and therefore the generalization is the feature structure corresponding to this path, but with the generalization of the atomic types hannah and jessy as value:

(32) [agentive, AGENT female]

4 Conclusion

In this paper we have reported on an implementation of a discourse grammar in a sign-based formalism, using Carpenter's Attribute Logic Engine (ALE).
We extended the discourse grammar and ALE to incorporate the operations of priority union and generalization, operations which we use for resolving parallelism dependent anaphoric expres- sions. We also reported on a resolution mecha- nism for verb phrase ellipsis which yields sloppy and strict readings through priority union, and we claimed some advantages of this approach over the use of higher-order unification. The outstanding unsolved problem is that of esta- blishing parallelism. While we believe that gene- ralization is an appropriate formal operation to assist in this, we still stand in dire need of a con- vincing criterion for judging whether the genera- lization of two categories is sufficiently informative to successfully establish parMlelism. Acknowledgements This work was supported by the EC-funded project LRE-61-062 "Towards a Declarative Theory of Dis- course" and a longer version of the paper is available in Brew et al (1994). We have profited from discus- sions with Jo Calder, Dick Crouch, Joke Dorrepaal, Claire Gardent, Janet Hitzeman, David Millward and Hub Prfist. Andreas Schhter helped with the imple- mentation work. The Human Communication Rese- arch Centre (HCRC) is supported by the Economic and Social Research Council (UK). References Asher, N. (1993) Reference to Abstract Objects in Di- scourse. Dordrecht: Kluwer. Bouma, G. (1990) Defaults in Unification Grammar. In Proceedings of the 28th ACL, pp. 165-172, Uni- versity of Pittsburgh. Brew, C. et al (1994) Discourse Representation. De- liverable B+ of LRE-61-062: Toward a Declarative Theory of Discourse. Calder, J. H. R. (1990) An Interpretation of Paradig- matic Morphology. PhD thesis, Centre for Cognitive Science, University of Edinburgh. Carpenter, B. (1993) ALE. The Attribute Logic En- gine user's guide, version ~. Laboratory for Com- putational Linguistics, Carnegie Mellon University, Pittsburgh, Pa. Carpenter, B. 
(1994) Skeptical and credulous default unification with applications to templates and inheritance. In T. Briscoe et al., eds., Inheritance, Defaults, and the Lexicon, pp. 13-37. Cambridge: Cambridge University Press.
Dalrymple, M., S. Shieber and F. Pereira (1991) Ellipsis and higher-order unification. Linguistics and Philosophy 14(4), 399-452.
Kaplan, R. M. (1987) Three seductions of computational psycholinguistics. In P. J. Whitelock et al., eds., Linguistic Theory and Computer Applications, pp. 149-188. London: Academic Press.
Kehler, A. (1993) The effect of establishing coherence in ellipsis and anaphora resolution. In Proceedings of the 31st ACL, pp. 62-69, Ohio State University.
Manandhar, S. (1993) CUF in context. In J. Dörre, ed., Computational Aspects of Constraint-Based Linguistics Description. DYANA-2 Deliverable.
Moshier, D. (1988) Extensions to Unification Grammar for the Description of Programming Languages. PhD thesis, Department of Mathematics, University of California, Los Angeles.
Polanyi, L. and R. Scha (1984) A syntactic approach to discourse semantics. In Proceedings of the 10th Coling and the 22nd ACL, pp. 413-419, Stanford University.
Pollard, C. and I. A. Sag (in press) Head-Driven Phrase Structure Grammar. Chicago, Ill.: University of Chicago Press and CSLI Publications.
Prüst, H. (1992) On Discourse Structuring, VP Anaphora and Gapping. PhD thesis, Universiteit van Amsterdam, Amsterdam.
Prüst, H., R. Scha and M. van den Berg (1994) Discourse grammar and verb phrase anaphora. Linguistics and Philosophy. To appear.
Scha, R. and L. Polanyi (1988) An augmented context free grammar for discourse. In Proceedings of the 12th Coling, pp. 573-577, Budapest.
GENERALIZED CHART ALGORITHM: AN EFFICIENT PROCEDURE FOR COST-BASED ABDUCTION

Yasuharu Den
ATR Interpreting Telecommunications Research Laboratories
2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, JAPAN
Tel: +81-7749-5-1328, Fax: +81-7749-5-1308, e-mail: den@itl.atr.co.jp

Abstract

We present an efficient procedure for cost-based abduction, which is based on the idea of using chart parsers as proof procedures. We discuss in detail three features of our algorithm -- goal-driven bottom-up derivation, tabulation of the partial results, and agenda control mechanism -- and report the results of the preliminary experiments, which show how these features improve the computational efficiency of cost-based abduction.

Introduction

Spoken language understanding is one of the most challenging research areas in natural language processing. Since spoken language is incomplete in various ways, i.e., containing speech errors, ellipsis, metonymy, etc., spoken language understanding systems should have the ability to process incomplete inputs by hypothesizing the underlying information. The abduction-based approach (Hobbs et al., 1988) has provided a simple and elegant way to realize such a task. Consider the following Japanese sentence:

(1) Sōseki            katta
    (a famous writer) buy-PAST

This sentence contains two typical phenomena arising in spoken language, i.e., metonymy and the ellipsis of a particle. When this sentence is uttered in a situation where the speaker reports his experience, its natural interpretation is the speaker bought a Sōseki novel. To derive this interpretation, we need to resolve the following problems:

• The metonymy implied by the noun phrase Sōseki is expanded to a Sōseki novel, based on the pragmatic knowledge that the name of a writer is sometimes used to refer to his novel.
• The particle-less thematic relation between the verb katta and the noun phrase Sōseki is determined to be the object case relation, based on the semantic knowledge that the object case relation between a trading action and a commodity can be linguistically expressed as a thematic relation.

This interpretation is made by abduction. For instance, the above semantic knowledge is stated, in terms of the predicate logic, as follows:

(2) sem(e,x) ⊂ trade(e) ∧ commodity(x) ∧ obj(e,x)

Then, the inference process derives the consequent sem(e,x) by hypothesizing an antecedent obj(e,x), which is never proved from the observed facts. This process is called abduction. Of course, there may be several other possibilities that support the thematic relation sem(e,x). For instance, the thematic relation being determined to be the agent case relation, sentence (1) can have another interpretation, i.e., Sōseki bought something, which, under some other situations, might be more feasible than the first interpretation. To cope with feasibility, the abduction-based model usually manages the mechanism for evaluating the goodness of the interpretation. This is known as cost-based abduction (Hobbs et al., 1988). In cost-based abduction, each assumption bears a certain cost. For instance, the assumption obj(e,x), introduced by applying rule (2), is specified to have a cost of, say, $2. The goodness of the interpretation is evaluated by accumulating the costs of all the assumptions involved. The whole process of interpreting an utterance is depicted in the following schema:

1. Find all possible interpretations, and
2. Select the one that has the lowest cost.

In our example, the interpretation that assumes the thematic relation to be the object case relation, with the metonymy being expanded to a Sōseki novel, is cheaper than the interpretation that assumes the thematic relation to be the agent case relation; hence, the former is selected.
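The two-step schema can be sketched as follows. The candidate readings, assumption names, and most of the cost figures below are invented for illustration; the text fixes only the $2 cost of the obj(e,x) assumption.

```python
# Two hypothetical candidate proofs for sentence (1), each listing the
# assumptions it had to make and their costs (figures mostly invented).
candidates = [
    {'reading': 'the speaker bought a Soseki novel',
     'assumptions': {'obj(e,x)': 2.0, 'novel-metonymy': 1.0}},
    {'reading': 'Soseki bought something',
     'assumptions': {'agent(e,x)': 5.0, 'elided-object': 3.0}},
]

def cost(interpretation):
    """Goodness = accumulated cost of all assumptions in the proof."""
    return sum(interpretation['assumptions'].values())

# Step 1: find all interpretations (given here); Step 2: select the cheapest.
best = min(candidates, key=cost)
print(best['reading'], cost(best))  # the object-case reading wins at cost 3.0
```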
An apparent problem here is the high computational cost; because abduction allows many possibilities, the schema involves very heavy computation. Particularly in the spoken language understanding task, we need to consider a great number of possibilities when hypothesizing various underlying information. This makes the abduction process computationally demanding, and reduces the practicality of abduction-based systems. The existing models do not provide any basic solution to this problem. Charniak (Charniak and Husain, 1991; Charniak and Santos Jr., 1992) dealt with the problem, but those solutions are applicable only to the propositional case, where the search space is represented as a directed graph over ground formulas. In other words, they did not provide a way to build such graphs from rules, which, in general, contain variables and can be recursive. This paper provides a basic and practical solution to the computation problem of cost-based abduction. The basic idea comes from the natural language parsing literature. As Pereira and Warren (1983) pointed out, there is a strong connection between parsing and deduction. They showed that parsing of DCG can be seen as a special case of deduction of Horn clauses; conversely, deduction can be seen as a generalization of parsing. Their idea of using chart parsers as deductive-proof procedures can easily be extended to the idea of using chart parsers as abductive-proof procedures. Because chart parsers have many advantages from the viewpoint of computational efficiency, chart-based abductive-proof procedures are expected to nicely solve the computation problem. Our algorithm, proposed in this paper, has the following features, which considerably enhance the computational efficiency of cost-based abduction:

1. Goal-driven bottom-up derivation, which reduces the search space.
2. Tabulation of the partial results, which avoids the recomputation of the same goal.
3.
Agenda control mechanism, which realizes various search strategies to find the best solution efficiently.

The rest of the paper is organized as follows. First, we explain the basic idea of our algorithm, and then present the details of the algorithm along with simple examples. Next, we report the results of the preliminary experiments, which clearly show how the above features of our algorithm improve the computational efficiency. Then, we compare our algorithm with Pereira and Warren's algorithm, and finally conclude the paper.

Head-driven Derivation

Pereira and Warren showed that chart parsers can be used as proof procedures; they presented the Earley deduction proof procedure, which is a generalization of top-down chart parsers. However, they mentioned only top-down chart parsers, which are not always very efficient compared to bottom-up (left-corner) chart parsers. It seems that using left-corner parsers as proof procedures is not so easy, unless the rules given to the provers have a certain property. Here, we describe under what conditions left-corner parsers can be used as proof procedures.

[Figure 1: Concept of Head-driven Derivation]

Let us begin with the general problems of Horn clause deduction with naive top-down and bottom-up derivations:

• Deduction with top-down derivation is affected by the frequent backtracking necessitated by the inadequate selection of rules to be applied.
• Deduction with bottom-up derivation is affected by the extensive vacuous computation, which never contributes to the proof of the initial goal.

These are similar to the problems that typically arise in natural language parsing with naive top-down and bottom-up parsers. In natural language parsing, these problems are resolved by introducing a more sophisticated derivation mechanism, i.e., left-corner parsing. We have attempted to apply such a sophisticated mechanism to deduction.
Suppose that the proof of a goal g(x,y) can be represented in the manner in Figure 1; the first argument x of the goal g(x,y) is shared by all the formulas along the path from the goal g(x,y) to the left corner a_m(x,z_m). In such a case, we can think of a derivation process that is similar to left-corner parsing. We call this derivation head-driven derivation, which is depicted as follows:

Step 1 Find a fact a(w,z) whose first argument w unifies with the first argument x of the goal g(x,y), and place it on the left corner.

Step 2 Find a rule a_{m-1}(w,z_{m-1}) ⊂ a(w,z_m) ∧ B1 ∧ ... ∧ Bn whose leftmost antecedent a(w,z_m) unifies with the left-corner key a(x,z), and introduce the new goals B1, ..., and Bn. If all these goals are recursively derived, then create the consequent a_{m-1}(x,z_{m-1}), which dominates a(x,z_m), B1, ..., and Bn, and place it on the left corner instead of a(x,z).

Step 3 If the consequent a_{m-1}(x,z_{m-1}) unifies with the goal g(x,y), then finish the process. Otherwise, go back to Step 2 with a_{m-1}(x,z_{m-1}) being the new left-corner key.

Left-corner parsing of DCG is just a special case of head-driven derivation, in which the input string is shared along the left border, i.e., the path from a nonterminal to the leftmost word in the string that is dominated by that nonterminal. Also, semantic-head-driven generation (Shieber et al., 1989; van Noord, 1990) and head-corner parsing (van Noord, 1991; Sikkel and op den Akker, 1993) can be seen as head-driven derivation, when the semantic-head/syntactic-head is moved to the leftmost position in the body of each rule and the argument representing the semantic-feature/head-feature is moved to the first position in the argument list of each formula.

To apply the above procedures, all rules must be in chain form a_{m-1}(w,z_{m-1}) ⊂ a_m(w,z_m) ∧ B1 ∧ ... ∧ Bn; that is, in every rule, the first argument of the leftmost antecedent must be equal to the first argument of the consequent.
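For the chain-form case, the three steps reduce to a left-corner recognizer when the shared first argument is the remaining input string, as in DCG parsing. The sketch below uses an invented toy grammar; category names stand in for formulas and list positions for the shared first argument, so no real unification is needed.

```python
# Lexical entries play the role of non-chain rules (facts): they consume one token.
LEXICON = {'john': 'np', 'mary': 'np', 'likes': 'v'}
# Chain rules: (consequent, leftmost antecedent, remaining antecedents); the input
# position is the shared first argument, threaded along the left border.
CHAIN = [('s', 'np', ['vp']), ('vp', 'v', ['np'])]

def derive(goal, tokens):
    """Head-driven derivation, returning the possible remainders of the input."""
    results = []
    if not tokens:
        return results
    # Step 1: place a fact (here, a lexical entry) on the left corner.
    corner, rest = LEXICON.get(tokens[0]), tokens[1:]
    if corner is None:
        return results

    def climb(corner, rest):
        # Step 3: the left corner may already satisfy the goal.
        if corner == goal:
            results.append(rest)
        # Step 2: extend the left corner with a chain rule, recursively deriving
        # the remaining antecedents, then climb with the consequent.
        for consequent, left, others in CHAIN:
            if left == corner:
                remainders = [rest]
                for sub in others:
                    remainders = [r2 for r in remainders for r2 in derive(sub, r)]
                for r in remainders:
                    climb(consequent, r)

    climb(corner, rest)
    return results

print(derive('s', ['john', 'likes', 'mary']))  # [[]] : the whole string parses as s
```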
This is the condition under which left-corner parsers can be used as proof procedures. Because this condition is overly restrictive, we extend the procedures so that they allow non-chain rules, i.e., rules not in chain form. Step 1 is replaced by the following:

Step 1 Find a non-chain rule a(w,z) ⊂ B1 ∧ ... ∧ Bn such that the first argument w of the consequent a(w,z) unifies with the first argument x of the goal g(x,y), and introduce the new goals B1, ..., and Bn. A fact is regarded as a non-chain rule with an empty antecedent. If all these goals are recursively derived, then create the consequent a(x,z), which dominates B1, ..., and Bn, and place it on the left corner.

Generalized Chart Algorithm

The idea given in the previous section realizes the goal-driven bottom-up derivation, which is the first feature of our algorithm. Then, we present a more refined algorithm based upon the idea, which realizes the other two features as well as the first one.

Chart Parsing and its Generalization

Like left-corner parsing, which has the drawback of repeatedly recomputing partial results, head-driven derivation will face the same problem when it is executed in a depth-first manner with backtracking. In the case of left-corner parsing, the problem is resolved by using the tabulation method, known as chart parsing (Kay, 1980). A recent study by Haruno et al. (1993) has shown that the same method is applicable to semantic-head-driven generation. The method is also applicable to head-driven derivation, which is more general than semantic-head-driven generation.

[Figure 2: Example of Generalized Charts. The chart diagram in the original, with edges such as m(<[A],[B]>,[A,B]), m(<[A],[_]>,[A]) and m(<[A],[B]>,[B,A]) connected by labelled pointers, is not reproducible here; some labels are omitted.]

To generalize charts to use in proof procedures,
[Figure 2: Example of Generalized Charts]

we first define the chart lexicons. In chart parsing, lexicons are the words in the input string, each of which is used as the index for a subset of the edges in the chart; each edge incident from (the start-point of) lexicon w represents the substructure dominating the substring starting from w. In our case, from the similarity between left-corner parsing and head-driven derivation, lexicons are the terms that occur in the first argument position of any formula; each edge incident from (the start-point of) lexicon x represents the substructure dominating the successive sequence of the derived formulas starting from the fact in which x occupies the first argument position. For example, in the chart representing the proof in Figure 1, all the edges corresponding to the formulas on the left border, i.e., a_m(x,z_m), a_{m-1}(x,z_{m-1}), ..., a_1(x,z_1) and g(x,y), are incident from (the start-point of) lexicon x, and, hence, x is the index for these edges.

Following this definition of the chart lexicons, there are two major differences between chart parsing and proof procedures, which Haruno also showed to be the differences between chart parsing and semantic-head-driven generation.

1. In contrast to chart parsing, where lexicons are determined immediately upon input, in proof procedures lexicons should be incrementally introduced.

2. In contrast to chart parsing, where lexicons are connected one by one in a linear sequence, in proof procedures lexicons should be connected in many-to-many fashion.

In proof procedures, the chart lexicons are not determined at the beginning of the proof (because we don't know which formulas are actually used in the proof); rather, they are dynamically extracted from the subgoals as the process goes.
In addition, if the rules are nondeterministic, it sometimes happens that there are introduced, from one left-corner key a(x,z), two or more distinct successive subgoals, b1(w1,y1), b2(w2,y2), etc., that have different first arguments, w1, w2, etc. In such a case, one lexicon x should be connected to two or more distinct lexicons, w1, w2, etc. Furthermore, it can happen that two or more distinct left-corner keys, a1(x1,z1), a2(x2,z2), etc., incidentally introduce the successive subgoals, b1(w,y1), b2(w,y2), etc., with the same first argument w. In such a case, two or more distinct lexicons, x1, x2, etc., should be connected to one lexicon w. Therefore, the connections among lexicons should be many-to-many. Figure 2 shows an example of charts with many-to-many connections, where the connections are represented by pointers A, B, etc.

The Algorithm

We have, so far, considered deduction but not abduction. Here, we extend our idea to apply to abduction, and present the definition of the algorithm.

The extension for abduction is very simple. First, we add a new procedure, which introduces an assumption G for a given goal G. An assumption is treated as if it were a fact. This means that an assumption, as well as a fact, is represented as a passive edge in terms of the chart algorithm. Second, we associate a set S of assumptions with each edge e in the chart; S consists of all the assumptions that are contained in the completed part of the (partial) proof represented by the edge e. More formally, the assumption set S associated with an edge e is determined as follows:

1. If e is a passive edge representing an assumption A, then S = {A}.

2. If e is a passive/active edge introduced from a non-chain rule, including a fact, then S is empty.

3. If e is a passive/active edge predicted from a chain rule with a passive edge e' being the left-corner key, then S is equal to the assumption set S' of e'.

4.
If e is a passive/active edge created by combining an active edge e1 and a passive edge e2, then S = S1 ∪ S2, where S1 and S2 are the assumption sets of e1 and e2, respectively.

Taking these into account, the definition of our algorithm is as follows. f is a function that assigns a unique vertex to each chart lexicon. The notation A:S stands for the label of an edge e, where A is the label of e in an ordinary sense and S is the assumption set associated with e.

Initialization Add an active edge [[?]G]⊤:∅ to the chart, looping at vertex 0, where G is the initial goal. Apply the following procedures repeatedly until no procedures are applicable.

Introduction Let e be an active edge labeled [...[?]Bj...]A:S incident from vertex s to t, where Bj = bj(xj,yj) is the first open box in e.

1. If the lexicon xj has never been introduced in the chart, then introduce it and run a pointer from t to f(xj). Then, do the following:

(a) For every non-chain rule a(w,z) ⊂ B1 ∧ ... ∧ Bn, including facts, such that w unifies with xj, create an active edge labeled [[?]B1...[?]Bn]a(xj,z):∅ between vertex f(xj) and f(xj) + 1. (Create, instead, a passive edge labeled a(xj,z):∅ when the rule is a fact, i.e., n = 0.)

(b) Create a passive edge labeled Bj:{Bj} between vertex f(xj) and f(xj) + 1.

2. If the lexicon xj was previously introduced in the chart, then run a pointer from t to f(xj). In addition, if the passive edge Bj:{Bj} does not yet exist in the chart, create it between vertex f(xj) and f(xj) + 1.

Prediction Let e be a passive edge labeled C:S incident from vertex s to t. For every chain rule A' ⊂ A ∧ B1 ∧ ... ∧ Bn such that A unifies with C, create an active edge labeled [A[?]B1...[?]Bn]A':S between vertex s and t. (Create, instead, a passive edge labeled A':S when A is the single antecedent, i.e., n = 0.)
Combination Let e1 be an active edge labeled [...[?]Bj[?]Bj+1...[?]Bn]A:S1 incident from vertex s to t, where Bj is the first open box in e1, and let e2 be a passive edge labeled C:S2 incident from vertex u to v. If Bj and C unify and there is a pointer from t to u, then create an active edge labeled [...Bj[?]Bj+1...[?]Bn]A:S1 ∪ S2 between vertex s and v. (Create, instead, a passive edge labeled A:S1 ∪ S2 when Bj is the last element, i.e., j = n.)

Each passive edge ⊤:S represents an answer.

Examples

Here, we present a simple example of the application of our algorithm to spoken language understanding. Figure 3 provides the rules for spoken Japanese understanding, with which sentence (1) is parsed and interpreted. They include the pragmatic, semantic and knowledge rules as well as the syntactic and lexical rules.

The syntactic rules allow the connection between a verb and a noun phrase with or without a particle, which permit structures like [VP [NP Sōseki] [VP katta]]. Such a structure is evaluated by the pragmatic and semantic criteria.

Syntactic Rules
s(i,k,e) ⊂ vp(i,k,e)
vp(i,k,e) ⊂ np(i,j,c,x) ∧ vp(j,k,e) ∧ depend((c,e,x)d)
vp(i,k,e) ⊂ np(i,j,x) ∧ vp(j,k,e) ∧ depend((c,e,x)d)
np(i,k,c,x) ⊂ np(i,j,x) ∧ p(j,k,c)
depend((c,e,x)d) ⊂ prag((x)p,y) ∧ sem((c,e,y)s)

Lexical Rules
np([Sōseki|k],k,x) ⊂ soseki(x)$1
vp([katta|k],k,e) ⊂ buy(e)$1
p([ga|k],k,c) ⊂ ga(c)
p([wo|k],k,c) ⊂ wo(c)

Pragmatic Rules
prag((x)p,x)
prag((x)p,y) ⊂ writer(x) ∧ write((x,y)p)$10 ∧ novel(y)$1

Semantic Rules
sem(s) ⊂ ga(s,c) ∧ ga(c)$3
sem(s) ⊂ wo(s,c) ∧ wo(c)$3
ga((c,e,x)s,c) ⊂ intend(e) ∧ person(x) ∧ agt((e,x)s)$20
wo((c,e,x)s,c) ⊂ trade(e) ∧ commodity(x) ∧ obj((e,x)s)$2

Knowledge Rules
person(x) ⊂ soseki(x)
writer(x) ⊂ soseki(x)
book(x) ⊂ novel(x)
commodity(x) ⊂ book(x)
trade(e) ⊂ buy(e)
intend(e) ⊂ trade(e)

Figure 3: Example of Rules
That is, the dependency between a verbal concept e and a nominal concept x is supported if there is an entity y such that x and y have a pragmatic relation, i.e., a metonymy relation, and e and y have a semantic relation, i.e., a thematic relation. The metonymy relation is defined by the pragmatic rules, based on certain knowledge, such as that the name of a writer is sometimes used to refer to his novel. Also, the thematic relation is defined by the semantic rules, based on certain knowledge, such as that the object case relation between a trading action and a commodity can be linguistically expressed as a thematic relation.

The subscript $c of a formula A represents the cost of assuming formula A. A is easy to assume when c is small, while A is difficult to assume when c is large. For instance, the cost of interpreting the thematic relation between a trading action and a commodity as the object case relation is low, say $2, while the cost of interpreting the thematic relation between an intentional action and a third person as the agent case relation is high, say $20. This assignment of costs is suitable for a situation in which the speaker reports his experience. In spite of the difficulty of assigning suitable costs in general, the cost-based interpretation is valuable, because it provides a uniform criterion for syntax, semantics and pragmatics. Hopefully, several techniques independently developed in these areas, e.g., stochastic parsing and example-based/corpus-based techniques for word-sense and structural disambiguation, will be usable for better cost assignment. Probability will also be a key technique for cost assignment (Charniak and Shimony, 1990).

Figure 4 and Table 1 show the chart that is created when sentence (1) is parsed and interpreted using our algorithm. Although the diagram seems complicated, it is easy to understand if we break it down.
Included are the syntactic parsing of the sentence (indicated by edges 2, 6, 7, 14, 52 and 53), the pragmatic interpretation of the metonymy by Sōseki (S) (indicated by edges 17, 18, 20 and 24), the semantic interpretation of the thematic relation between a buying event B and a novel N written by Sōseki (indicated by edges 42, 44, 45, 47, 48 and 50), and so on. In the pragmatic interpretation, the assumption novel(N) (edge 21) is introduced, which is reused in the semantic interpretation. In other words, a single assumption is used more than once. Such a tricky job is naturally realized by the nature of the chart algorithm.

Agenda Control

Since the aim of cost-based abduction is to find the best solution, not all solutions, it is reasonable to consider combining heuristic search strategies with our algorithm to find the best solution efficiently. Our algorithm facilitates such an extension by using the agenda control mechanism, which is broadly used in advanced chart parsing systems. The agenda is a storage for the edges created by any of the three procedures of the chart algorithm, out of which edges to be added to the chart are selected, one by one, by a certain criterion. The simplest strategy is to select the edge which has the minimal cost at that time, i.e., ordered search. Although ordered search guarantees that the first solution is the best one, it is not always very efficient. We can think of other search strategies, like best-first search, beam search, etc., which are more practical than ordered search. To date, we have not investigated any of these practical search strategies. However, it is apparent that our chart algorithm, together with the agenda control mechanism, will provide a good way to realize these practical search strategies.
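A minimal sketch of such an agenda, using a binary heap so that the minimal-cost edge is always released first (the class interface is ours, not the paper's):

```python
import heapq

# Agenda control for ordered search: newly created edges go into the
# agenda, and the edge with the minimal cost (the sum of its assumption
# costs) is moved to the chart first.
class Agenda:
    def __init__(self):
        self._heap = []
        self._counter = 0                     # breaks ties, keeps FIFO order

    def add(self, cost, edge):
        heapq.heappush(self._heap, (cost, self._counter, edge))
        self._counter += 1

    def pop_cheapest(self):
        return heapq.heappop(self._heap)[2] if self._heap else None
```

With the two competing readings of the example sentence, the cheaper metonymic reading would be released before the more expensive agent reading; other strategies (best-first, beam) would only change the selection criterion, not the interface.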
[Figure 4: Chart Diagram for "Sōseki katta" (passive edges, active edges, and pointers; the diagram itself is not legible in this copy)]

Preliminary Experiments

We conducted preliminary experiments to compare four methods of cost-based abduction: the top-down algorithm (TD), the head-driven algorithm (HD), the generalized chart algorithm with full search (GCF), and the generalized chart algorithm with ordered search (GCO). The rules used for the experiments are in the spoken language understanding task, and they are rather small (51 chain rules + 35 non-chain rules). The test sentences include one verb and 1-4 noun phrases, e.g., sentence (1).

Table 2 shows the results. The performance of each method is measured by the number of computation steps, i.e., the number of derivation steps in TD and HD, and the number of passive and active edges in GCF and GCO. The decimals in parentheses show the ratio of the performance of each method to the performance of TD.

Table 2: Comparison among TD, HD, GCF, and GCO

Ns | TD  | HD         | GCF        | GCO
 1 | 215 | 112 (0.52) |  83 (0.39) |  75 (0.35)
 2 | 432 | 218 (0.50) | 148 (0.34) | 113 (0.26)
 3 | 654 | 330 (0.50) | 193 (0.30) | 160 (0.24)
 4 | 876 | 442 (0.50) | 238 (0.27) | 203 (0.23)

The table clearly shows how the three features of our algorithm improve the computational efficiency. The improvement from TD to HD is due to the first feature, i.e., goal-driven bottom-up derivation, which eliminates about 50% of the computation steps; the improvement from HD to GCF is due to the second feature, i.e., tabulation of the partial results, which decreases the number of steps another 13%-23%; the improvement from GCF to GCO is due to the last feature, i.e., the agenda control mechanism, which decreases the number of steps another 4%-8%. In short, the efficiency is improved, maximally, about four times.
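The quantity that ordered search minimizes in the earlier example can be sketched as follows; the assumption names and costs mirror the rules of Figure 3, while the dictionary encoding is ours.

```python
# The cost of a (partial) proof is the sum of the "$c" costs of the
# assumptions it contains.  The two competing readings of "Soseki katta"
# use the costs given in the Figure 3 rules.
COST = {
    "soseki(S)": 1, "buy(B)": 1,          # lexical assumptions
    "write((S,N)p)": 10, "novel(N)": 1,   # metonymy: Soseki -> his novel
    "agt((B,S)s)": 20,                    # Soseki as agent of the buying
    "obj((B,N)s)": 2,                     # the novel as object of the buying
    "ga(P)": 3, "wo(P)": 3,               # assumed case particles
}

def proof_cost(assumptions):
    return sum(COST[a] for a in assumptions)

metonymy_reading = {"soseki(S)", "buy(B)", "write((S,N)p)",
                    "novel(N)", "obj((B,N)s)", "wo(P)"}
agent_reading = {"soseki(S)", "buy(B)", "agt((B,S)s)", "ga(P)"}
```

Under these costs the metonymic reading totals $18 against $25 for the agent reading, so ordered search delivers the "bought Sōseki('s novel)" interpretation first.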
Comparison with Earley Deduction

We describe here some differences between our algorithm and Earley deduction as presented by Pereira and Warren. First, as we mentioned before, our algorithm is mainly based on bottom-up (left-corner) derivation rather than the top-down derivation that Earley deduction is based on. Our experiments showed the superiority of this approach in our particular, though not farfetched, example.

Second, our algorithm does not use subsumption-checking of edges, which causes a serious computation problem in Earley deduction. Our algorithm needs subsumption-checking only when a new edge is introduced by the combination procedure. In the parsing of augmented grammars, even when two edges have the same nonterminal symbol, they are different in the annotated structures associated with those edges, e.g., feature structures; in such a case, we cannot use one edge in place of another. Likewise, in our algorithm, edges are always annotated by the assumption sets, which, in most cases, prevent those edges from being reused. Therefore, in this case, subsumption-checking is not effective. In our algorithm, reuse of edges only becomes possible when a new edge is introduced by the introduction procedure. However, this is done only by adding a pointer to the edge to be reused, and, to invoke this operation, equality-checking of lexicons, not edges, is sufficient.

Finally, our algorithm has a stronger connection with chart parsing than Earley deduction does. Pereira and Warren noted that the indexing of formulas is just an implementation technique to increase efficiency. However, indexing plays a considerable role in chart parsing, and how to index formulas in the case of proof procedures is not so obvious. In our algorithm, from the consideration of head-driven derivation, the index of a formula is determined to be the first argument of that formula.
All formulas with the same index are derived the first time that index is introduced in the chart. Pointers among lexicons are also helpful in avoiding nonproductive attempts at applying the combination procedure. All the devices that were originally used in chart parsers in a restricted way are included in the formalism, not in the implementation, of our algorithm.

Concluding Remarks

In this paper, we provided a basic and practical solution to the computation problem of cost-based abduction. We explained the basic concept of our algorithm and presented the details of the algorithm along with simple examples. We also showed how our algorithm improves computational efficiency on the basis of the results of the preliminary experiments.

We are now developing an abduction-based spoken language understanding system using our algorithm. The main problem is how to find a good search strategy that can be implemented with the agenda control mechanism. We are investigating this issue using both theoretical and empirical approaches. We hope to report good results along these lines in the future.

Acknowledgments

The author would like to thank Prof. Yuji Matsumoto of Nara Institute of Science and Technology and Masahiko Haruno of NTT Communication Science Laboratories for their helpful discussions.

References

[Charniak and Husain, 1991] Eugene Charniak and Saadia Husain. A new admissible heuristic for minimal-cost proofs. Proceedings of the 12th IJCAI, pages 446-451, 1991.

[Charniak and Santos Jr., 1992] Eugene Charniak and Eugene Santos Jr. Dynamic MAP calculations for abduction. Proceedings of the 10th AAAI, pages 552-557, 1992.

[Charniak and Shimony, 1990] Eugene Charniak and Solomon E. Shimony. Probabilistic semantics for cost based abduction. Proceedings of the 8th AAAI, pages 106-111, 1990.

[Haruno et al., 1993] Masahiko Haruno, Yasuharu Den, Yuji Matsumoto, and Makoto Nagao. Bidirectional chart generation of natural language texts.
Proceedings of the 11th AAAI, pages 350-356, 1993.

[Hobbs et al., 1988] Jerry R. Hobbs, Mark Stickel, Paul Martin, and Douglas Edwards. Interpretation as abduction. Proceedings of the 26th Annual Meeting of ACL, pages 95-103, 1988.

[Kay, 1980] Martin Kay. Algorithm schemata and data structures in syntactic processing. Technical Report CSL-80-12, XEROX Palo Alto Research Center, 1980.

[Pereira and Warren, 1983] Fernando C.N. Pereira and David H.D. Warren. Parsing as deduction. Proceedings of the 21st Annual Meeting of ACL, pages 137-144, 1983.

[Shieber et al., 1989] Stuart M. Shieber, Gertjan van Noord, Robert C. Moore, and Fernando C.N. Pereira. A semantic-head-driven generation algorithm for unification-based formalisms. Proceedings of the 27th Annual Meeting of ACL, pages 7-17, 1989.

[Sikkel and op den Akker, 1993] Klaas Sikkel and Rieks op den Akker. Predictive head-corner chart parsing. The 3rd International Workshop on Parsing Technologies, pages 267-276, 1993.

[van Noord, 1990] Gertjan van Noord. An overview of head-driven bottom-up generation. Current Research in Natural Language Generation, chapter 6, pages 141-165. Academic Press, 1990.

[van Noord, 1991] Gertjan van Noord. Head corner parsing for discontinuous constituency. Proceedings of the 29th Annual Meeting of ACL, pages 114-121, 1991.

Table 1: Table Representation of the Chart
[Table 1 lists every edge of the chart (edges 1-54) with its label, its assumption set, and the edges it was built from; the listing is too garbled in this copy to reproduce. Its symbol legend survives:

Φ = [Sōseki, katta], Ψ = [katta],
α = soseki(S)$1, β = buy(B)$1, γ = write((S,N)p)$10, δ = novel(N)$1,
ε = agt((B,S)s)$20, ζ = ga(P)$3, η = obj((B,N)s)$2, θ = wo(P)$3]
Tricolor DAGs for Machine Translation

Koichi Takeda
Tokyo Research Laboratory, IBM Research
1623-14 Shimotsuruma, Yamato, Kanagawa 242, Japan
Phone: 81-462-73-4569, 81-462-73-7413 (FAX)
[email protected]

Abstract

Machine translation (MT) has recently been formulated in terms of constraint-based knowledge representation and unification theories, but it is becoming more and more evident that it is not possible to design a practical MT system without an adequate method of handling mismatches between semantic representations in the source and target languages. In this paper, we introduce the idea of "information-based" MT, which is considerably more flexible than interlingual MT or the conventional transfer-based MT.

Introduction

With the intensive exploration of contemporary theories on unification grammars [6, 15, 13] and feature structures [7, 19] in the last decade, the old image of machine translation (MT) as a brutal form of natural language processing has given way to that of a process based on a uniform and reversible architecture [16, 1, 27].

The developers of MT systems based on the constraint-based formalism found a serious problem in "language mismatching," namely, the difference between semantic representations in the source and target languages.¹ Attempts to design a pure interlingual MT system were therefore abandoned,² and the notion of "semantic transfer" [24, 22] came into focus as a practical solution to the problem of handling the language mismatching. The constraint-based formalism [2] seemed promising as a formal definition of transfer, but pure constraints are too rigid to be precisely imposed on target-language sentences.

Some researchers (e.g., Russell [14]) introduced

¹For example, Yasuhara [26] reported there was an overlap of only 10% between his group's English and Japanese concept dictionaries, which covered 0.2 million concepts.
²Even an MT system with a controlled input language [12] does not claim to be a pure interlingual system.

the concept of defeasible reasoning in order to formalize what is missing from a pure constraint-based approach, and control mechanisms for such reasoning have also been proposed [5, 3]. With this additional mechanism, we can formulate the "transfer" process as a mapping from a set of constraints into another set of mandatory and defeasible constraints. This idea leads us further to the concept of "information-based" MT, which means that, with an appropriate representation scheme, a source sentence can be represented by a set Cs of constraints that it implies and that, given a target sentence, Cs can be divided into three disjoint subsets:

• The subset C0 of constraints that is also implied by the target sentence
• The subset C+ of constraints that is not implied by, but is consistent with, the translated sentence
• The subset C- of constraints that is violated by the target sentence

The target sentence may also imply another set Cnew of constraints, none of which is in Cs. That is, the set Ct of constraints implied by the target sentence is a union of C0 and Cnew, while Cs = C0 ∪ C+ ∪ C-. When Cs = C0 = Ct, we have a fully interlingual translation of the source sentence. If C+ ≠ ∅, C- = ∅, and Cnew = ∅, the target sentence is said to be under-generated, while it is said to be over-generated when C+ = ∅, C- = ∅, and Cnew ≠ ∅.³ In either case, C- must be empty if a consistent translation is required. Thus, the goal of machine translation is to find an optimal pair of source and target sentences that minimizes C+, C-, and Cnew. Intuitively, C0 corresponds to essential information, and C+ and Cnew can be viewed as language-dependent supportive information. C- might be the inconsistency between the assumptions of the source- and target-language speakers.

³The notions of completeness and coherence in LFG [6] have been employed by Wedekind [25] to avoid over- and under-generation.

In this paper, we introduce tricolor DAGs to represent the above constraints, and discuss how tricolor DAGs are used for practical MT systems. In particular, we give a generation algorithm that incorporates the notion of semantic transfer by gradually approaching the optimal target sentence through the use of tricolor DAGs, when a fully interlingual translation fails. Tricolor DAGs give a graph-algorithmic interpretation of the constraints, and the distinctions between the types of constraint mentioned above allow us to adjust the margin between the current and optimal solution effectively.

Tricolor DAGs

A tricolor DAG (TDAG, for short) is a rooted, directed, acyclic⁴ graph with a set of three colors (red, yellow, and green) for nodes and directed arcs. It is used to represent a feature structure of a source or target sentence. Each node represents either an atomic value or a root of a DAG, and each arc is labeled with a feature name. The only difference between the familiar usage of DAGs in unification grammars and that of TDAGs is that the color of a node or arc represents its degree of importance:

1. Red shows that a node (arc) is essential.
2. Yellow shows that a node (arc) may be ignored, but must not be violated.
3. Green shows that a node (arc) may be violated.

For practical reasons, the above distinctions are interpreted as follows:

1. Red shows that a node (arc) is derived from lexicons and grammatical constraints.
2. Yellow shows that a node (arc) may be inferred from a source or a target sentence by using domain knowledge, common sense, and so on.
3. Green shows that a node (arc) is defeasibly inferred, specified as a default, or heuristically specified.

When all the nodes and arcs of TDAGs are red, TDAGs are basically the same as the feature structures⁵ of grammar-based translation [25, 17].
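The bookkeeping behind C0, C+, C-, and Cnew introduced above is plain set arithmetic; a Python sketch (the function name and the violation test are illustrative assumptions, not the paper's notation):

```python
# Partition the source constraints Cs against the target constraints Ct.
# `violated` is a predicate testing inconsistency with the target.
def partition(Cs, Ct, violated):
    C0     = {c for c in Cs if c in Ct}    # shared (interlingual) part
    Cminus = {c for c in Cs if violated(c)}  # violated by the target
    Cplus  = Cs - C0 - Cminus              # consistent but unexpressed
    Cnew   = Ct - Cs                       # target-only additions
    return C0, Cplus, Cminus, Cnew
```

A translation is fully interlingual exactly when Cplus, Cminus, and Cnew all come back empty; a nonempty Cminus rules the candidate out.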
A TDAG is well-formed iff the following conditions are satisfied:

1. The root is a red node.
2. Each red arc connects two red nodes.
3. Each red node is reachable from the root through the red arcs and red nodes.
4. Each yellow node is reachable from the root through the arcs and nodes that are red and/or yellow.
5. Each yellow arc connects red and/or yellow nodes.
6. No two arcs start from the same node and have the same feature name.

Conditions 1 to 3 require that all the red nodes and the red arcs between them make a single, connected DAG. Conditions 4 and 5 state that a defeasible constraint must not be used to derive an imposed constraint. In the rest of this paper, we will consider only well-formed TDAGs. Furthermore, since only the semantic portions of TDAGs are used for machine translation, we will not discuss syntactic features.

The subsumption relationship among TDAGs is defined as the usual subsumption over DAGs, with the following extensions:

• A red node (arc) subsumes only a red node (arc).
• A yellow node (arc) subsumes a red node (arc) and a yellow node (arc).
• A green node (arc) subsumes a node (arc) with any color.

The unification of TDAGs is similarly defined. The colors of unified nodes and arcs are specified as follows:

• Unification of a red node (arc) with another node (arc) makes a red node (arc).
• Unification of a yellow node (arc) with a yellow or green node (arc) makes a yellow node (arc).
• Unification of two green nodes (arcs) makes a green node (arc).

⁴Acyclicity is not crucial to the results in this paper, but it significantly simplifies the definition of the tricolor DAGs and semantic transfer.
⁵We will only consider the semantic portion of the feature structure, although the theory of tricolor DAGs for representing entire feature structures is an interesting topic.
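The color half of these subsumption and unification rules reduces to the ordering red > yellow > green; a small sketch (the numeric encoding is ours):

```python
# Color arithmetic behind TDAG subsumption and unification.
STRENGTH = {"green": 0, "yellow": 1, "red": 2}

def subsumes(c1, c2):
    """A color subsumes itself and any stronger color."""
    return STRENGTH[c1] <= STRENGTH[c2]

def unify_color(c1, c2):
    """Unification keeps the stronger of the two colors."""
    return c1 if STRENGTH[c1] >= STRENGTH[c2] else c2
```

This reproduces the bullets above: red subsumes only red, green subsumes anything, and unifying with red always yields red.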
Since the green nodes and arcs represent defeasible constraints, unification of a green node (either a root of a TDAG or an atomic node) with a red or yellow node always succeeds, and results in a red or yellow node. When two conflicting green nodes are to be unified, the result is indefinite, or a single non-atomic green node.⁶

⁶An alternative definition is that one green node has precedence over the other [14]. Practically, such a conflicting unification should be postponed until no other possibility is found.

Now, the problem is that a red node/arc in a source TDAG (the TDAG for a source sentence)
SWhether or not the subject of the verb is human is often crucial information for making an appropriate choice between the verb's two Japanese counterparts "~ <" and "~?~7o". Another example is the problem of iden- tifying number and determiner in Japanese-to- English translation. This type of information is rarely available from a syntactic representation of a Japanese noun phrase, and a set of heuris- tic rules[ll] is the only known basis for making a reasonable guess. Even if such contextual pro- cessing could be integrated into a logical inference system, the obtained information should be defea- sible, and hence should be represented by green nodes and arcs in the TDAGs. Pronoun resolu- tion can be similarly represented by using green nodes and arcs. It is worth looking at the source and tar- get TDAGs in the opposite direction. From the Japanese sentence, John +subj walk +nom +obj wished we get the source TDAG3 in Figure I, where func- tional control and number information are miss- ing. With the help of contextual processing, we get the target TDAG4, which can be used to gen- erate the English sentence "John wished to walk.;" Semantic Transfer As illustrated in the previous section, it is often the case that we have to solve mismatches between source and target TDAGs in order to obtain suc- cessful translations. Syntactic/semantic transfer has been formulated by several researchers[18, 27] as a means of handling situations in which fully interlingual translation does not work. It is not enough, however, to capture only the equivalent relationship between source and target semantic representations: this is merely a mapping among red nodes and arcs in TDAGs. What is missing in the existing formulation is the provision of some margin between what is said and what is trans- lated. 
Semantic transfer in our framework is defined as a set of successive operations on TDAGs for creating a sequence of TDAGs t0, t1, ..., tk such that t0 is a source TDAG and tk is a target TDAG that is a successful input to the sentence generator.

A powerful contextual processing and a domain knowledge base can be used to infer additional facts and constraints, which correspond to the addition of yellow nodes and arcs. Default inheritance, proposed by Russell et al. [14], provides an efficient way of obtaining further information necessary for translation, which corresponds to the addition of green nodes and arcs. A set of well-known heuristic rules, which we will describe later in the "Implementation" section, can also be used to add green nodes and arcs. To complete the model of semantic transfer, we have to introduce a "painter." A painter maps a red node to either a yellow or a green node, a yellow node to a green node, and so on. It is used to loosen the constraints imposed by the TDAGs. Every application of the painter monotonically loses some information in a TDAG, and only a finite number of applications of the painter are possible before the TDAG consists entirely of green nodes and arcs except for a red root node. Note that the painter never removes a node or an arc from a TDAG; it simply weakens the constraints imposed by the nodes and arcs.

Formally, semantic transfer is defined as a sequence of the following operations on TDAGs:

• Addition of a yellow node (and a yellow arc) to a given TDAG. The node must be connected to a node in the TDAG by a yellow arc.
• Addition of a yellow arc to a given TDAG. The arc must connect two red or yellow nodes in the TDAG.
• Addition of a green node (and a green arc) to a given TDAG. The node must be connected to a node in the TDAG by the green arc.
• Addition of a green arc to a given TDAG. The arc can connect two nodes of any color in the TDAG.
• Replacement of a red node (arc) with a yellow one, as long as well-formedness is preserved.
• Replacement of a yellow node (arc) with a green one, as long as well-formedness is preserved.

The first two operations define the logical implications (possibly with common sense or domain knowledge) of a given TDAG. The next two operations define the defeasible (or heuristic) inference from a given TDAG. The last two operations define the painter. The definition of the painter specifies that it can only gradually relax the constraints. That is, when a red or yellow node (or arc) X has other red or yellow nodes that are only connected through X, X cannot be "painted" until each of the connected red and yellow nodes is painted yellow or green, so as to maintain reachability through X.

In the sentence analysis phase, the first four operations can be applied to obtain a source TDAG as a reasonable semantic interpretation of a sentence. The application of these operations can be controlled by "weighted abduction" [5], default inheritance, and so on. These operations can also be applied during semantic transfer to augment the TDAG with common sense knowledge of the target language. On the other hand, these operations are not applied to a TDAG in the generation phase, as we will explain in the next section. This is because the lexicon and grammatical constraints are only applied to determine whether the red nodes and arcs are exactly derived. If they are not exactly derived, we will end up with either over- or under-generation beyond the permissible margin.

Semantic transfer is applied to a source TDAG as many times9 as necessary until a successful generation is made. Recall the sample sentence in Figure 1, where two painter calls were made to change two red arcs in TDAG1 into yellow ones in TDAG2. These are examples of the first substitution operation shown above.
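The painter's step-wise relaxation and its repeated application until generation succeeds can be sketched as follows. This is a toy illustration, not the paper's implementation; `transfer_until_ok`, the flat color map, and the arbitrary choice of which constraint to relax next are all assumptions of the sketch (the real painter also respects connectivity through red/yellow nodes and never paints the root).

```python
# The painter weakens a single constraint by one step: red -> yellow -> green.
NEXT_WEAKER = {"red": "yellow", "yellow": "green"}

def paint(color):
    """Weaken one node/arc color by one step; green stays green."""
    return NEXT_WEAKER.get(color, "green")

def transfer_until_ok(colors, generate):
    """Apply the painter to one constraint at a time until `generate`
    (a stand-in for the sentence generator's success test) accepts the
    TDAG, or until nothing is left to relax."""
    while not generate(colors):
        candidates = [k for k, c in colors.items() if c != "green"]
        if not candidates:
            return None  # everything is green already; give up
        k = candidates[0]  # a real system would choose heuristically
        colors[k] = paint(colors[k])
    return colors
```

In the Figure 1 example, two such calls would weaken the two red control arcs of TDAG1 to yellow, yielding TDAG2.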
An addition of a green node and a green arc, followed by an addition of a green arc, was applied to TDAG3 to obtain TDAG4. These additions are examples of the third and fourth addition operations.

Sentence Generation Algorithm

Before describing the generation algorithm, let us look at the representation of lexicons and grammars for machine translation. A lexical rule is represented by a set of equations, which introduce red nodes and arcs into a source TDAG.10 A phrasal rule is similarly defined by a set of equations, which also introduce red nodes and arcs for describing a syntactic head and its complements.

For example, if we use Shieber's PATR-II [15] notation, the lexical rule for "wished" can be represented as follows:

V → wished
(V cat) = v
(V form) = past
(V subj cat) = np
(V obj cat) = v
(V obj form) = infinitival
(V pred) = *WISH
(V pred agent) = (V subj pred)
(V pred theme) = (V obj pred)
(V pred theme agent) = (V subj pred)

The last four equations are semantic equations. Its TDAG representation is shown in Figure 2. It would be more practical to further assume that such a lexical rule is obtained from a type inference system,11 which makes use of a syntactic class hierarchy so that each lexical class can inherit general properties of its superclasses.

9 The iteration is bounded by the number of nodes and arcs in the TDAG, although the number of possible sequences of operations could be exponential.
10 For simplicity, we will only consider semantic equations to form the TDAGs.
11 As in Shieber [15], Pollard and Sag [13], and Russell et al. [14].

[Figure 2: TDAG representation of the verb "wished" (embedded in the entire feature structure).]

Similarly, semantic concepts such as *WISH and *WALK should be separately defined in an ontological hierarchy together with necessary domain knowledge (e.g., selectional constraints on case
fillers and part-of relationships; see KBMT-89 [8]).

A unification grammar is used for both analysis and generation. Let us assume that we have two unification grammars, one for English and one for Japanese. Analyzing a sentence yields a source TDAG with red nodes and arcs. Semantic interpretation resolves possible ambiguity, and the resulting TDAG may include all kinds of nodes and arcs. For example, the sentence12 "The Boston office called" would give the source TDAG in Figure 3. By utilizing the domain knowledge, the node labeled *PERSON is introduced into the TDAG as the real caller of the action *CALL, and two arcs representing "*PERSON work-for *OFFICE" and "*OFFICE in *BOSTON" are abductively inferred.

[Figure 3: Source TDAG for the sentence "The Boston Office called.", relating *CALL, *OFFICE, *BOSTON, and the inferred *PERSON via caller, work-for, and in arcs, with definite and singular features.]

Our generation algorithm is based on Wedekind's DAG traversal algorithm [25] for LFG.13 The algorithm runs with an input TDAG by traversing the nodes and arcs that were derived from the lexicon and grammar rules. The termination conditions are as follows:

12 In Hobbs et al. [5].
13 It would be identical to Wedekind's algorithm if an input TDAG consisted of only red nodes and arcs.

[Figure 4: Target TDAG for the sentence "The Boston Office called.", like Figure 3 but with the definiteness and number constraints weakened to green.]

• Every red node and arc in the TDAG was derived.
• No new red node (arc) is to be introduced into the TDAG if there is no corresponding node (arc) of any color in the TDAG. That is, the generator can change the color of a node (arc) to red, but cannot add a new node (arc).
• For each set of red paths (i.e., sequences of red arcs) that connect the same pair of nodes, the reentrancy was also derived.

These conditions are identical to those of Wedekind except that yellow (or green) nodes and arcs may or may not be derived.
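The first two termination conditions can be illustrated over a deliberately simplified encoding in which a TDAG is just a map from element names to colors and the derivation is a set of element names; `generation_ok` and this encoding are assumptions of the sketch, not the system's data structures.

```python
def generation_ok(input_tdag, derived):
    """Check the first two termination conditions:
    (1) every red (obligatory) element of the input TDAG was derived;
    (2) nothing was derived that has no counterpart (of any color)
        in the input TDAG."""
    red_required = {e for e, c in input_tdag.items() if c == "red"}
    if not red_required <= derived:
        return False            # some red element was not derived
    return derived <= set(input_tdag)  # no brand-new element introduced
```

Yellow and green elements may or may not appear in `derived`; only the red set is checked exactly, which is what distinguishes these conditions from Wedekind's.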
For example, the sentence "The Boston Office called" in Figure 3 can be translated into Japanese by the following sequence of semantic transfer and sentence generation steps:

1. Apply the painter to change the yellow of the definite node and the def arc to green.
2. Apply the painter to change the yellow of the singular node and the num arc to green. The resulting TDAG is shown in Figure 4.
3. Run the sentence generator with an input feature structure, which has a root and an arc pred connecting to the given TDAG. (See the node marked "1" in Figure 4.)
4. The generator applies a phrasal rule, say S → NP VP, which derives the subj arc connecting to the subject NP (marked "2"), and the agent arc.
5. The generator applies a phrasal rule, say NP → MOD NP,14 which derives the npmod arc to the modifier of the NP (marked "3") and the mod arc.
6. Lexical rules are applied, and all the semantic nodes *CALL, *OFFICE, and *BOSTON are derived.

14 There are several phrasal rules for deriving this LHS NP in Japanese: (1) a noun-noun compound; (2) a noun, copula, and a noun; and (3) a noun, postpositional particle, and a noun. These three rules roughly correspond to the forms (1) "Boston Office", (2) "office of Boston", and (3) "office in Boston". Inference of the "*OFFICE in *BOSTON" relation is easiest if rule (3) is used, but the noun-noun compound is probably the best translation.

The annotated sample run of the sentence generator is shown in Figure 5. The input TDAG in the sample run is embedded in the input feature structure as a set of PRED values, but the semantic arcs are not shown in the figure. The input feature structure has syntactic features that were specified in the lexical rules. The feature value *UNDEFINED* is used to show that the node has been traversed by the generator.
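The reentrancy introduced by the semantic equations for "wished" given earlier can be illustrated with plain Python dicts, where aliasing the same object models token identity between paths; the dict encoding is an assumption of this sketch, not the system's feature-structure machinery.

```python
# Shared substructure: (V pred agent), (V subj pred), and
# (V pred theme agent) are one and the same node.
subj_pred = {}                       # filled in when the subject unifies in
obj_pred = {"agent": subj_pred}      # (V pred theme agent) = (V subj pred)

wished = {
    "cat": "v",
    "form": "past",
    "subj": {"cat": "np", "pred": subj_pred},
    "obj": {"cat": "v", "form": "infinitival", "pred": obj_pred},
    "pred": {"pred": "*WISH",
             "agent": subj_pred,     # (V pred agent) = (V subj pred)
             "theme": obj_pred},     # (V pred theme) = (V obj pred)
}
```

Filling in the subject's predicate then propagates along every reentrant path, which is exactly why the red agent arc of *WALK carries the information "it is John who walks" in TDAG1.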
The basic property of the generation algorithm is as follows. Let t be a given TDAG, tmin be the connected subgraph including all the red nodes and arcs in t, and tmax be the connected subgraph of t obtained by changing all the colors of the nodes and arcs to red. Then any successful generation with the derived TDAG tg satisfies the condition that tmin subsumes tg, and tg subsumes tmax. The proof follows immediately from the definition of successful generation and the fact that the generator never introduces a new node or a new arc into an input TDAG. The TDAGs can also be employed by the semantic head-driven generation algorithm [17] while retaining the above property. Semantic monotonicity always holds for a TDAG, since red nodes must be connected. It has been shown by Takeda [21] that semantically non-monotonic representations can also be handled by introducing a functional semantic class.

Implementation

We have been developing a prototype English-to-Japanese MT system, called Shalt2 [22], with a lexicon for a computer-manual domain including about 24,000 lexemes each for English and Japanese, and a general lexicon including about 50,000 English words and their translations. A sample set of 736 sentences was collected from the "IBM AS/400 Getting Started" manual, and was tested with the above semantic transfer and generation algorithm.15 The result of the syntactic analysis by the English parser is mapped to a TDAG using a set of semantic equations16 obtained from the lexicons. We have a very shallow knowledge base for the computer domain, and no logical inference system was used to derive further constraints from the given source sentences. The Japanese grammar is similar to the one used in KBMT-89, which is written in pseudo-unification [23] equations, but we have added several new types of equation for handling coordinated structures.

15 We used McCord's English parser based on his English Slot Grammar [10], which covered more than 93% of the sentences.
16 We call such a set of semantic equations mapping rules (see Shalt2 [20] or KBMT-89 [8]).

[Figure 5: Sentence generation from the TDAG for "The Boston Office called." - an annotated trace of the generator working top-down through *J-GG-START, *J-GG-S, *J-GG-XP, *J-GG-NP, *J-GG-VP, and *J-GG-V, deriving the case-marked subject NP and the past-tense verb; the Japanese output strings are not reproduced here.]
The Japanese grammar can generate sentences from all the successful TDAGs for the sample English sentences. It turned out that a few collections of semantic transfer sequences contributed very strongly to successful generation. These sequences include:

• Painting the functional control arcs yellow.
• Painting the gaps of relative clauses yellow.
• Painting the number and definiteness features yellow.
• Painting the passivization feature green.17

Other kinds of semantic transfer are rather idiosyncratic, and are usually triggered by a particular lexical rule. Some of the sample sentences used for the translations are as follows:18

Make sure you are using the proper edition for the level of the product.
(gloss of the Japanese output:) user +subj product +pos level +for proper edition +obj use +prog +nom +obj confirm +imp

Publications are not stocked at the address
(gloss:) publication +subj following +loc provide address +loc stock +passive +neg

This publication could contain technical inaccuracies or typographical errors.
(gloss:) this publication +subj technical inaccuracy or typographical error +obj contain +ability +past

17 We decided to include the passivization feature in the semantic representation in order to determine the proper word ordering in Japanese.
18 The Japanese translation reflects the errors made in English analysis. For example, the auxiliary verb "could" is misinterpreted in the last sample sentence.

The overall accuracy of the translated sentences was about 63%. The main reason for translation errors was the occurrence of errors in lexical and structural disambiguation by the syntactic/semantic analyzer. We found that the accuracy of semantic transfer and sentence generation was practically acceptable. Though there were few serious errors, some occurred when a source TDAG had to be completely "paraphrased" into a different TDAG. For example, the sentence "Let's get started."
was very hard to translate into a natural Japanese sentence. Here the TDAG had to be paraphrased into a totally different TDAG, which is another important role of semantic transfer. Other serious errors were related to the ordering of constituents in the TDAG. It might be generally acceptable to assume that the ordering of nodes in a DAG is immaterial. However, the different ordering of adjuncts sometimes resulted in a misleading translation, as did the ordering of members in a coordinated structure. These subtle issues have to be taken into account in the framework of semantic transfer and sentence generation.

Conclusions

In this paper, we have introduced tricolor DAGs to represent various degrees of constraint, and defined the notions of semantic transfer and sentence generation as operations on TDAGs. This approach proved to be so practical that nearly all of the source sentences that were correctly parsed were translated into readily acceptable sentences. Without semantic transfer, the translated sentences would include greater numbers of incorrectly selected words, or in some cases the generator would simply fail.19

Extension of TDAGs to disjunctive information and sets of feature structures must be fully incorporated into the framework; currently only a limited range of cases is implemented. Optimal control of semantic transfer is still unknown. Integration of the constraint-based formalism, defeasible reasoning, and practical heuristic rules is also important for achieving high-quality translation. The ability to process and represent various levels of knowledge in TDAGs using a uniform architecture is desirable, but there appears to be some efficient procedural knowledge that is very hard to represent declaratively. For example, the negative determiner "no" modifying a noun phrase in English has to be procedurally transferred into
For example, the negative determiner "no" modifying a noun phrase in English has to be procedurally transferred into ~gThe Essential Arguments Algorithm[9] might be an alternative method for finding a successful genera- tion path. 232 the negation of the verb governing the noun phrase in 3 apanese. Translation of "any", "yet", "only", and so on involves similar problems. While TDAGs reflect three discrete types of constraints, it is possible to generalize the types into continuous, numeric values such as potential energy[4]. This approach will provide a consider- ably more flexible margin that defines a set of per- missible translations, but it is not clear whether we can successfully define a numeric value for each lexical rule in order to obtain acceptable transla- tions. Acknowledgments The idea of the tricolor DAGs grew from discus- sions with Shiho 0gino on the design and im- plementation of the sentence generator. I would also like to thank the members of the NL group - Naohiko Uramoto, Tetsuya Nasukawa, Hiroshi Maruyama, Hiroshi Nomiyama, Hideo Watanabe, Masayuki Morohashi, and Taijiro Tsutsumi -for stimulating comments and discussions that di- rectly and indirectly contributed to shaping the paper. Michael McDonald, who has always been the person I turn to for proofreading, helped me write the final version. References [1] M. Dymetman. "Inherently Reversible Grammars, Logic Programming and Computability". In Proc. of ACL Workshop on Reversible Grammar in Natural Lan- guage Processing, pages 20-30, Berkeley, California, June 1991. [2] M. Emele, U. Held, S. Momma, and R. Zajac. "Inter- actions between Linguistic Constraints: Procedural vs. Declarative Approaches". Machine Translation, 7(1- 2):61-98, 1992. [3] K. Hasida. "Common Heuristics for Parsing, Genera- tion, and Whatever, ...". In Proc. of a Workshop on Reversible Grammar in Natural Language Processing, pages 81-90, June 1991. [4] K. Haslda. 
"Dynamics of Symbol Systems - An Integrated Architecture of Cognition -". In Proc. of International Conference on Fifth Generation Computer Systems 1992, pages 1141-1148, June 1992.
[5] J. R. Hobbs, M. E. Stickel, D. E. Appelt, and P. Martin. "Interpretation as abduction". Artificial Intelligence, 63:69-142, 1993.
[6] R. Kaplan and J. Bresnan. "Lexical-Functional Grammar: A Formal System for Generalized Grammatical Representation". In J. Bresnan, editor, "Mental Representation of Grammatical Relations", pages 173-281. MIT Press, Cambridge, Mass., 1982.
[7] R. Kasper and W. C. Rounds. "A Logical Semantics for Feature Structures". In Proc. of the 24th Annual Meeting of the Association for Computational Linguistics, Columbia University, New York, NY, June 1986.
[8] KBMT89. "Special Issue on Knowledge-based Machine Translation I and II". Machine Translation, 4(2-3), March-June 1989.
[9] M. Martinovic and T. Strzalkowski. "Comparing Two Grammar-Based Generation Algorithms: A Case Study". In Proc. of the 30th Annual Meeting of ACL, pages 81-88, June 1992.
[10] M. McCord. "Slot Grammar: A System for Simpler Construction of Practical Natural Language Grammars (Ed: Studer, R.)", pages 118-145. Springer-Verlag, 1990.
[11] M. Murata and M. Nagao. "Determination of Referential Property and Number of Nouns in Japanese Sentences for Machine Translation into English". In Proc. of the 5th International Conference on Theoretical and Methodological Issues in Machine Translation, pages 218-225, Kyoto, Japan, July 1993.
[12] E. H. Nyberg, 3rd and T. Mitamura. "The KANT System: Fast, Accurate, High-Quality Translation in Practical Domains". In Proc. of the 14th International Conference on Computational Linguistics, pages 1069-1073, July 1992.
[13] C. Pollard and I. A. Sag. "An Information-Based Syntax and Semantics, Vol. 1: Fundamentals". CSLI Lecture Notes, Number 13, 1987.
[14] G. Russell, A. Ballim, J. Carroll, and S. Warwick-Armstrong.
"A Practical Approach to Multiple Default Inheritance for Unification-Based Lexicons". Computational Linguistics, 18(3):311-337, Sept. 1992.
[15] S. M. Shieber. "An Introduction to Unification-Based Approaches to Grammar". CSLI Lecture Notes, Number 4, Stanford, CA, 1986.
[16] S. M. Shieber. "A Uniform Architecture for Parsing and Generation". In Proc. of the 12th International Conference on Computational Linguistics, pages 614-619, August 1988.
[17] S. M. Shieber, F. C. N. Pereira, G. van Noord, and R. C. Moore. "Semantic-Head-Driven Generation". Computational Linguistics, 16(1):30-42, March 1990.
[18] S. M. Shieber and Y. Schabes. "Synchronous Tree-Adjoining Grammars". In Proc. of the 13th International Conference on Computational Linguistics, pages 253-258, August 1990.
[19] G. Smolka. "A Feature Logic with Subsorts". Technical Report LILOG-REPORT 33, IBM Deutschland GmbH, Stuttgart, West Germany, May 1988.
[20] K. Takeda. "An Object-Oriented Implementation of Machine Translation Systems". In Proc. of the 5th International Conference on Theoretical and Methodological Issues in Machine Translation, pages 154-167, July 1993.
[21] K. Takeda. "Sentence Generation from Partially Constrained Feature Structures". In Proc. of the Natural Language Processing Pacific Rim Symposium, pages 7-16, Dec. 1993.
[22] K. Takeda, N. Uramoto, T. Nasukawa, and T. Tsutsumi. "Shalt2 - A Symmetric Machine Translation System with Conceptual Transfer". In Proc. of the 14th International Conference on Computational Linguistics, pages 1034-1038, July 1992.
[23] M. Tomita and K. Knight. "Pseudo Unification and Full Unification". Technical Report CMU-CMT-88-MEMO, Center for Machine Translation, Carnegie Mellon University, November 1987.
[24] H. Uchida. "ATLAS II: A Machine Translation System Using Conceptual Structure as an Interlingua". In Proc. of 2nd Intl. Conf.
on Theoretical and Methodological Issues in Machine Translation of Natural Languages, pages 150-160, June 1988.
[25] J. Wedekind. "Generation as Structure Driven Derivation". In Proc. of the 12th International Conference on Computational Linguistics, pages 732-737, August 1988.
[26] H. Yasuhara. "Conceptual Transfer in an Interlingua Method and Example Based MT". In Proc. of the Natural Language Processing Pacific Rim Symposium '93, pages 376-379, Fukuoka, Japan, Dec. 1993.
[27] R. Zajac. "A Uniform Architecture for Parsing, Generation and Transfer". In Proc. of a Workshop on Reversible Grammar in Natural Language Processing, pages 71-80, June 1991.
1994
Extracting Noun Phrases from Large-Scale Texts: A Hybrid Approach and Its Automatic Evaluation

Kuang-hua Chen and Hsin-Hsi Chen
Department of Computer Science and Information Engineering
National Taiwan University
Taipei, Taiwan, R.O.C.
Internet: hh_chen@csie.ntu.edu.tw

Abstract

Acquiring noun phrases from running texts is useful for many applications, such as word grouping, terminology indexing, etc. The reported literature adopts either a purely probabilistic approach or a purely rule-based noun phrase grammar to tackle this problem. In this paper, we apply a probabilistic chunker to deciding the implicit boundaries of constituents and utilize linguistic knowledge to extract the noun phrases by a finite state mechanism. The test texts are from the SUSANNE Corpus, and the results are evaluated automatically by comparison with the parse field of the SUSANNE Corpus. The results of this preliminary experiment are encouraging.

1. Introduction

From the cognitive point of view, a human being must recognize, learn and understand the entities or concepts (concrete or abstract) in a text for natural language comprehension. These entities or concepts are usually described by noun phrases. The evidence from children's language learning also supports this belief (Snow and Ferguson, 1977). Therefore, if we can grasp the noun phrases of a text, we will understand the text to some extent. This consideration is also captured by theories of discourse analysis, such as Discourse Representation Theory (Kamp, 1981).

Traditionally, to make out the noun phrases in a text means to parse the text and to resolve the attachment relations among the constituents. However, parsing a text completely is very difficult, since various ambiguities cannot be resolved solely by syntactic or semantic information. Do we really need to fully parse the texts in every application? Some researchers apply shallow or partial parsers (Smadja, 1991; Hindle, 1990) to acquiring specific patterns from texts.
These tell us that it is not necessary to completely parse the texts for some applications. This paper proposes a probabilistic partial parser and incorporates linguistic knowledge to extract noun phrases. The partial parser is motivated by an intuition (Abney, 1991):

(1) When we read a sentence, we read it chunk by chunk.

Abney uses two levels of grammar rules to implement the parser through pure LR parsing technique. The first-level grammar rules take care of the chunking process; the second-level grammar rules tackle the attachment problems among chunks. Historically, our statistics-based partial parser is called a chunker. The chunker receives tagged texts and outputs linear chunk sequences. We assign a syntactic head and a semantic head to each chunk. Then, we extract the plausible maximal noun phrases according to the information of the syntactic head and the semantic head, and a finite state mechanism with only 8 states.

Section 2 gives a brief review of work on the acquisition of noun phrases. Section 3 describes the language model for the chunker. Section 4 specifies how to apply linguistic knowledge to assigning heads to each chunk. Section 5 lists the experimental results of the chunker. Following Section 5, Section 6 gives the performance of our work on the retrieval of noun phrases. Possible extensions of the proposed work are discussed in Section 7. Section 8 concludes with final remarks.

2. Previous Works

Church (1988) proposes a part-of-speech tagger and a simple noun phrase extractor. His noun phrase extractor brackets the noun phrases of input tagged texts according to two probability matrices: one is the starting-noun-phrase matrix; the other is the ending-noun-phrase matrix. The methodology is a simple version of Garside and Leech's probabilistic parser (1985). Church lists a sample text in the appendix of his paper to show the performance of his work. It demonstrates that only 5 out of 248 noun phrases are omitted.
Because the tested text is too small to assess the results, an experiment on a large volume of texts is needed.

Bourigault (1992) reports a tool, LEXTER, for extracting terminology from texts. LEXTER triggers two-stage processing: 1) analysis (by identification of frontiers), which extracts the maximal-length noun phrases; 2) parsing (of the maximal-length noun phrases), which, furthermore, acquires the terminology embedded in the noun phrases. Bourigault declares that LEXTER extracts 95% of maximal-length noun phrases, that is, 43,500 out of 46,000 from the test corpus. The result is validated by an expert. However, the precision is not reported in Bourigault's paper.

Voutilainen (1993) announces NPtool for the acquisition of maximal-length noun phrases. NPtool applies two finite state mechanisms (one NP-hostile, the other NP-friendly) to the task. The two mechanisms produce two NP sets, and any NP candidate with at least one occurrence in both sets is labeled an "ok" NP. The reported recall is 98.5-100% and the precision is 95-98%, validated manually on some 20,000 words. But from the sample text listed in the appendix of his paper, the recall is about 85%, and we can find some inconsistencies among the extracted noun phrases.

3. Language Model

Parsing can be viewed as optimizing. Suppose an n-word sentence w_1, w_2, ..., w_n (including punctuation marks); the parsing task is to find a parsing tree T such that P(T | w_1, w_2, ..., w_n) is maximal. We define T here to be a sequence of chunks c_1, c_2, ..., c_m, and each c_i (0 < i <= m) contains one or more words w_j (0 < j <= n). For example, the sentence "parsing can be viewed as optimization." consists of 7 words. One possible parsing result under our scheme is:

(2) [parsing] [can be viewed] [as optimization] [.]
      c_1          c_2              c_3         c_4

Now the parsing task is to find the best chunk sequence C* such that

(3) C* = argmax_i P(C_i | w_1^n)

where C_i is one possible chunk sequence c_1, c_2, ..., c_{m_i}, and m_i is the number of chunks in that sequence.
Cmi , where m i is the number of chunks of the possible chunk sequence. To chunk raw text without other information is ve.ry difficult, since the word patterns are many millions. Therefore, we apply a tagger to preprocessing the raw texts and give each word a unique part of speech. That is. for an n-word sentence, w 1, w 2 ..... w n (including punctuation marks), we assign part of speeches t l, t 2 ..... t n to the respective words. Now the real working model is: (4) C* = argmaxP(C~lt,") Using bi-gram language model, we then reduce P(Cilt 1, t 2 ..... tn) as (5), (5) n ~ n P(C, It, ) = P,(c, It, ) r~ C n _~ l-I P,(c, lc,_,,t~)× t],( ,it, ) k=l -~ l-I P,(c.ic._,) × P,(c.) k=l where Pi( " ) denotes the probability for the i'th chunk sequence and c o denotes the beginning mark of a sentence. Following (5), formula (4) becomes (6) argmaxP(C~lt~') = argmaxl- I P (c, Ic,_, ) x P (c,) k=l = argmax~llog(P ~ (c, Ic,_, )) + log(P~ (c,))l k=l In order to make the expression (6) match the intuition of human being, namely, 1) the scoring metrics are all positive, 2) large value means high score, and 3) the scores are between 0 and 1, we define a score function S(P( • )) shown as (7). (7) S(P( • )) = 0 when P( • ) = 0; S(P(. ))= 1.0/(1.0+ABS(Iog(P(. )))) o/w. We then rewrite (6) as (8). (8) C* = argmaxP(C, It,") n~ -= argmaxI- I P,(q [c._,) x P, (c.) f=l = argmax Z [log(P~ (c, Ic,_, )) + log(P~ (c,))l k=l r~ = argmaxE 18(P ~ (c. Ic._, )) + S(P, (c.))l k=l The final language model is to find a chunk sequence C*, which satisfies the expression (8). Dynamic programming shown in (9) is used to find the best chunk sequence. The score[i] denotes the score of position i. The words between position pre[i] and position i form the best chunk from the viewpoint of position i. The dscore(cO is the score of the probability 235 P(ci) and the cscore(ci[ci-l) is the score of the probability P(cilci-l). 
These scores are collected from the training corpus, the SUSANNE corpus (Sampson, 1993; Sampson, 1994). The details are touched on in Section 5.

(9) Algorithm
input: word sequence w_1, w_2, ..., w_n, and the corresponding POS sequence t_1, t_2, ..., t_n
output: a sequence of chunks c_1, c_2, ..., c_m
1. score[0] = 0; pre[0] = 0;
2. for (i = 1; i < n+1; i++) do 3 and 4;
3. j* = maxarg_{0<=j<i} (score[pre[j]] + dscore(c_j) + cscore(c_j | c_{j-1})),
   where c_j = t_{j+1}, ..., t_i and c_{j-1} = t_{pre[j]+1}, ..., t_j;
4. score[i] = score[pre[j*]] + dscore(c_j*) + cscore(c_j* | c_{j*-1}); pre[i] = j*;
5. for (i = n; i > 0; i = pre[i]) do output the words w_{pre[i]+1}, ..., w_i to form a chunk;

4. Linguistic Knowledge

In order to assign a head to each chunk, we first define priorities of POSes. X'-theory (Sells, 1985) has defined the X'-equivalences shown in Table 1.

Table 1. X'-Equivalences
X      X'      X''
N      N'      NP
V      V'      VP
A      A'      AP
P      P'      PP
INFL   S (I')  S' (IP)

Table 1 defines five different phrasal structures and their hierarchical structure. The heads of these phrasal structures are at the first level of the X'-equivalences, that is, X. The other grammatical constituents function as specifiers or modifiers; that is, they are accompanying words, not core words. Following this line, we define the primary priority of the POSes listed in Table 1:

(10) Primary POS priority1: V > N > A > P

1 We do not consider INFL, since our model does not touch on this structure.

In order to extract the exact head, we further define a secondary POS priority among the 134 POSes defined in the LOB corpus (Johansson, 1986):

(11) Secondary POS priority is a linear precedence relationship within the primary priorities for coarse POSes.

For example, the LOB corpus defines four kinds of verbal words under the coarse POS V: VB*, DO*, BE* and HV*.2 The secondary priority within the coarse POS V is:

(12) VB* > HV* > DO* > BE*

Furthermore, we define the semantic head and the syntactic head (Abney, 1991).
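Algorithm (9) can be rendered as a short dynamic program. The sketch below is ours, written in the conventional form of the recurrence (best score up to the chunk boundary j, plus the new chunk's scores); `dscore` and `cscore` stand in for the corpus-trained scores S(P(c)) and S(P(c | c')) and are supplied by the caller.

```python
def chunk(tags, dscore, cscore):
    """Return the best chunk sequence (as tag tuples) for `tags`
    under the additive scores of expression (8)."""
    n = len(tags)
    score = [0.0] * (n + 1)  # score[i]: best total score of tags[:i]
    pre = [0] * (n + 1)      # pre[i]: start of the last chunk ending at i
    for i in range(1, n + 1):
        best, best_j = None, 0
        for j in range(i):
            c = tuple(tags[j:i])           # candidate last chunk t_{j+1}..t_i
            prev = tuple(tags[pre[j]:j])   # the chunk preceding it
            s = score[j] + dscore(c) + cscore(c, prev)
            if best is None or s > best:
                best, best_j = s, j
        score[i], pre[i] = best, best_j
    chunks, i = [], n
    while i > 0:                           # follow the back-pointers
        chunks.append(tuple(tags[pre[i]:i]))
        i = pre[i]
    return list(reversed(chunks))
```

With toy scores that reward one-tag chunks, the sentence splits into singletons; with scores that reward a two-tag chunk, it stays whole, showing how the trained scores control segmentation.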
(13) Semantic head is the head of a phrase according to the semantic usage; syntactic head is the head based on the grammatical relations.

Both the syntactic head and the semantic head are useful in extracting noun phrases. For example, if the semantic head of a chunk is a noun and the syntactic one is a preposition, it would be a prepositional phrase. Therefore, it can be connected to the previous noun chunk to form a new noun phrase. In some cases, we will find chunks that contain only one word, called one-word chunks. They may contain a conjunction, e.g., that. Therefore, the syntactic head and the semantic head of a one-word chunk are the word itself. Following these definitions, we extract the noun phrases by procedure (14):

(14) (a) Tag the input sentences.
     (b) Partition the tagged sentences into chunks by using a probabilistic partial parser.
     (c) Decide the syntactic head and the semantic head of each chunk.
     (d) According to the syntactic and the semantic heads, extract noun phrases from these chunks and connect as many noun phrases as possible by a finite state mechanism.

[Figure 1, a pipeline diagram (raw text, tagger, TAG-MAPPER, chunker, NP-TRACTOR, NPs), is not reproducible here.]

Figure 1. The Noun Phrases Extraction Procedure

Figure 1 shows the procedure. The input raw texts will be assigned POSes for each word and then pipelined into a chunker. The tag sets of LOB and SUSANNE are different. Since the tag set of the SUSANNE corpus is subsumed by the tag set of the LOB corpus, a TAG-MAPPER is used to map tags of the SUSANNE corpus to those of the LOB corpus. The chunker will output a sequence of chunks. Finally, a finite state NP-TRACTOR will extract NPs. Figure 2 shows the finite state mechanism used in our work.

[2] Asterisk * denotes a wildcard. Therefore, VB* represents VB (verb, base form), VBD (verb, preterite), VBG (present participle), VBN (past participle) and VBZ (3rd singular form of verb).

[Figure 2, the finite state machine over LOB tags, is not reproducible here.]

Figure 2.
The Finite State Machine for Noun Phrases

The symbols in Figure 2 are tags of the LOB corpus. N* denotes nouns; P* denotes pronouns; J* denotes adjectives; A* denotes quantifiers, qualifiers and determiners; IN denotes prepositions; CD* denotes cardinals; OD* denotes ordinals; and NR* denotes adverbial nouns. Asterisk * denotes a wildcard. For convenience, some constraints, such as syntactic and semantic head checking, are not shown in Figure 2.

5. First Stage of Experiments

Following the procedure depicted in Figure 1, we first train a chunker. This is done by using the SUSANNE Corpus (Sampson, 1993; Sampson, 1994) as the training texts. The SUSANNE Corpus is a modified and condensed version of the Brown Corpus (Francis and Kucera, 1979). It contains only 1/10 of the Brown Corpus, but involves more information than the Brown Corpus. The Corpus consists of four kinds of texts: 1) A: press reportage; 2) G: belles lettres, biography, memoirs; 3) J: learned writing; and 4) N: adventure and Western fiction. The categories A, G, J and N are named after the respective categories of the Brown Corpus. Each category consists of 16 files and each file contains about 2000 words. The following shows a snapshot of the SUSANNE Corpus.

G01:0010a - YB <minbrk> - [Oh.Oh]
G01:0010b - JJ NORTHERN northern [O[S[Np:s.
G01:0010c - NN2 liberals liberal .Np:s]
G01:0010d - VBR are be [Vab.Vab]
G01:0010e - AT the the [Np:e.
G01:0010f - JB chief chief
G01:0010g - NN2 supporters supporter
G01:0010h - IO of of [Po.
G01:0010i - JJ civil civil [Np.
G01:0010j - NN2 rights right .Np]
G01:0020a - CC and and [Po+.
G01:0020b - IO of of
G01:0020c - NN1u integration integration .Po+]Po]Np:e]S]
G01:0020d - YF +.

Table 2 lists basic statistics of the SUSANNE Corpus.

Table 2.
The Overview of SUSANNE Corpus

    Categories   Files   Paragraphs   Sentences   Words
    A            16      767          1445        37180
    G            16      280          1554        37583
    J            16      197          1353        36554
    N            16      723          2568        38736
    Total        64      1967         6920        150053

In order to avoid the errors introduced by the tagger, the SUSANNE corpus is used as the training and testing texts. Note that the tags of the SUSANNE corpus are mapped to those of the LOB corpus. Three quarters of the texts of each category of the SUSANNE Corpus are used both for training the chunker and for testing the chunker (inside test). The remaining texts are only for testing (outside test). Every tree structure contained in the parse field is extracted to form a potential chunk grammar, and the adjacent tree structures are also extracted to form a potential context chunk grammar. After the training process, in total 10937 chunk grammar rules associated with different scores and 37198 context chunk grammar rules are extracted. These chunk grammar rules are used in the chunking process. Table 3 lists the time taken for processing the SUSANNE corpus. This experiment was executed on a Sun Sparc 10, model 30 workstation; T denotes time, W word, C chunk, and S sentence. Therefore, T/W means the time taken to process a word on average.

Table 3. The Processing Time
          T/W       T/C      T/S
    A     0.00295   0.0071   0.0758
    G     0.00283   0.0069   0.0685
    J     0.00275   0.0073   0.0743
    N     0.00309   0.0066   0.0467
    Av.   0.00291   0.0070   0.0663

According to Table 3, processing a word needs 0.00291 seconds on average. Processing the whole SUSANNE corpus needs about 436 seconds, or 7.27 minutes. In order to evaluate the performance of our chunker, we compare the results of our chunker with the annotation of the SUSANNE Corpus. This comparison is based on the following criterion:

(15) The content of each chunk should be dominated by one non-terminal node in the SUSANNE parse field.

This criterion is based on the observation that each non-terminal node has a chance to dominate a chunk.
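The chunk-connecting pattern of Figure 2, roughly an optional determiner or quantifier, then cardinals/ordinals and adjectives, ending in one or more nouns, or a bare pronoun, can be sketched as a small state machine over tag sequences. This is an illustrative simplification only (the real machine has more arcs plus the head checks), and `tag_class` is a hypothetical helper, not part of the system described above:

```python
def tag_class(tag):
    """Map a LOB-style tag to a coarse class (simplified from the
    Figure 2 legend; a hypothetical helper for this sketch)."""
    for prefix, cls in (("AT", "DET"), ("A", "DET"), ("CD", "NUM"),
                        ("OD", "NUM"), ("J", "ADJ"), ("N", "NOUN"),
                        ("P", "PRON")):
        if tag.startswith(prefix):
            return cls
    return "OTHER"

def extract_nps(tagged):
    """Greedy pass over (word, tag) pairs matching
    (DET)? (NUM)? (ADJ)* (NOUN)+, or a single pronoun."""
    nps, i, n = [], 0, len(tagged)
    while i < n:
        if tag_class(tagged[i][1]) == "PRON":
            nps.append(tagged[i][0])
            i += 1
            continue
        j = i
        if j < n and tag_class(tagged[j][1]) == "DET":
            j += 1
        if j < n and tag_class(tagged[j][1]) == "NUM":
            j += 1
        while j < n and tag_class(tagged[j][1]) == "ADJ":
            j += 1
        k = j
        while k < n and tag_class(tagged[k][1]) == "NOUN":
            k += 1
        if k > j:                      # at least one noun head
            nps.append(" ".join(w for w, _ in tagged[i:k]))
            i = k
        else:
            i += 1
    return nps
```

On the Table 2 snapshot sentence, this sketch yields "the chief supporters" and "civil rights"; the connection of an "of" prepositional chunk to the preceding noun chunk, described later, is a separate step not shown here.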
Table 4 shows the experimental results of testing the SUSANNE Corpus according to the specified criterion. As usual, the symbol C denotes chunk and S denotes sentence.

Table 4. Experimental Results
                           Outside test        Inside test
    Cat.                   C        S          C        S
    A    # of correct      4866     380        10480    1022
         # of incorrect    40       14         84       29
         total #           4906     394        10564    1051
         correct rate      0.99     0.96       0.99     0.97
    G    # of correct      4748     355        10293    1130
         # of incorrect    153      32         133      37
         total #           4901     387        10426    1167
         correct rate      0.97     0.92       0.99     0.97
    J    # of correct      4335     283        9193     1032
         # of incorrect    170      15         88       23
         total #           4505     298        9281     1055
         correct rate      0.96     0.95       0.99     0.98
    N    # of correct      5163     536        12717    1906
         # of incorrect    79       42         172      84
         total #           5242     578        12889    1990
         correct rate      0.98     0.93       0.99     0.96
    Av.  # of correct      19112    1554       42683    5090
         # of incorrect    442      103        477      173
         total #           19554    1657       43160    5263
         correct rate      0.98     0.94       0.99     0.97

Table 4 shows the chunker has a more than 98% chunk correct rate and a 94% sentence correct rate in the outside test, and a 99% chunk correct rate and a 97% sentence correct rate in the inside test. Note that once a chunk is mischopped, the sentence is also mischopped. Therefore, the sentence correct rate is always less than the chunk correct rate. Figure 3 gives a direct view of the correct rate of this chunker.

[Figure 3, a bar chart of the chunk and sentence correct rates for the outside and inside tests, is not reproducible here.]

Figure 3. The Correct Rate of Experiments

6. Acquisition of Noun Phrases

We employ the SUSANNE Corpus as the test corpus. Since the SUSANNE Corpus is a parsed corpus, we may use it as the criterion for evaluation. The volume of test texts is around 150,000 words including punctuation marks. The time needed from inputting the texts of the SUSANNE Corpus to outputting the extracted noun phrases is listed in Table 5. Compared with Table 3, the time for combining chunks to form the candidate noun phrases is not significant.

Table 5. Time for Acquisition of Noun Phrases
            Words     Time (sec.)   Time/Word
    A       37180     112.32        0.00302
    G       37583     108.80        0.00289
    J       36554     103.04        0.00282
    N       38736     122.72        0.00317
    Total   150053    446.88        0.00298

The evaluation is based on two metrics: precision and recall. Precision means the correct rate of what the system gets. Recall indicates the extent to which the real noun phrases retrieved from the texts cover the real noun phrases contained in the texts. Table 6 describes how to calculate these metrics.

Table 6. Contingency Table for Evaluation
                         SUSANNE
                         NP      non-NP
    System   NP          a       b
             non-NP      c       d

The rows of "System" indicate whether our NP-TRACTOR takes the candidate as an NP or not; the columns of "SUSANNE" indicate whether the SUSANNE Corpus takes the candidate as an NP or not. Following Table 6, we calculate precision and recall as shown in (16).

(16) Precision = a/(a+b) * 100%
     Recall = a/(a+c) * 100%

Calculating the precision and the recall based on the parse field of the SUSANNE Corpus is not so straightforward at first glance. For example, (17) [3] itself is a noun phrase but it contains four noun phrases. A tool for extracting noun phrases should decide what kind of and how many noun phrases to output when it processes texts like (17). Three kinds of noun phrases (maximal noun phrases, minimal noun phrases and ordinary noun phrases) are defined first. Maximal noun phrases are those noun phrases which are not contained in other noun phrases. In contrast, minimal noun phrases do not contain any other noun phrases.

[3] This example is taken from N06:0280d-N06:0290d, SUSANNE Corpus (N06 means file N06; 0280 and 0290 are the original line numbers in the Brown Corpus. Recall that the SUSANNE Corpus is a modified and reduced version of the Brown Corpus).

Apparently, a noun phrase may be both a maximal noun phrase and a minimal noun phrase. Ordinary noun phrases are noun phrases with no restrictions. Take (17) as an example. It has three minimal noun phrases, one maximal noun phrase and five ordinary noun phrases.
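The contingency-table evaluation of Table 6 and (16) reduces to set comparisons; a small sketch (the NP lists in the test are illustrative, not corpus counts):

```python
def precision_recall(system_nps, corpus_nps):
    """Precision and recall per Table 6 / formula (16).

    a = NPs proposed by the system and confirmed by the corpus,
    b = NPs proposed by the system but absent from the corpus,
    c = corpus NPs the system missed.
    Duplicates are ignored: phrases are compared as sets.
    """
    system = set(system_nps)
    corpus = set(corpus_nps)
    a = len(system & corpus)
    b = len(system - corpus)
    c = len(corpus - system)
    precision = a / (a + b) * 100 if a + b else 0.0
    recall = a / (a + c) * 100 if a + c else 0.0
    return precision, recall
```

For instance, if the system proposes three phrases of which two appear among three corpus phrases, both precision and recall are 2/3, i.e., about 66.7%.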
In general, a noun-phrase extractor forms the front end of other applications, e.g., acquisition of verb subcategorization frames. Under this consideration, it is not appropriate to take (17) as a whole to form a noun phrase. Our system will extract two noun phrases from (17), "a black badge of frayed respectability" and "his neck".

(17) [[[a black badge] of [frayed respectability]] that ought never to have left [his neck]]

We calculate the numbers of maximal noun phrases, minimal noun phrases and ordinary noun phrases denoted in the SUSANNE Corpus, respectively, and compare these numbers with the number of noun phrases extracted by our system. Table 7 lists the numbers of ordinary noun phrases (NP), maximal noun phrases (MNP) and minimal noun phrases (mNP) in the SUSANNE Corpus. MmNP denotes the maximal noun phrases which are also minimal noun phrases. On average, a maximal noun phrase subsumes 1.61 ordinary noun phrases and 1.09 minimal noun phrases.

Table 7. The Number of Noun Phrases in Corpus
            NP      MNP     mNP     MmNP    NP/MNP   mNP/MNP
    A       10063   5614    6503    3207    1.79     1.16
    G       9221    5451    6143    3226    1.69     1.13
    J       8696    4568    5200    2241    1.90     1.14
    N       9851    7895    7908    5993    1.25     1.00
    Total   37831   23528   25754   14667   1.61     1.09

To calculate the precision, we examine the extracted noun phrases (ENP) and judge their correctness by the SUSANNE Corpus. CNP denotes the correct ordinary noun phrases, CMNP the correct maximal noun phrases, CmNP the correct minimal noun phrases and CMmNP the correct maximal noun phrases which are also minimal noun phrases. The results are itemized in Table 8. The average precision is 95%.

Table 8. Precision of Our System
            ENP     CNP     CMNP    CmNP    CMmNP   Precision
    A       8011    7660    3709    4348    3047    0.96
    G       7431    6943    3626    4366    3028    0.93
    J       6457    5958    2701    3134    2005    0.92
    N       8861    8559    6319    6637    5808    0.97
    Total   30760   29120   16355   18485   13888   0.95

Here, the computation of recall is ambiguous to some extent.
Comparing columns CMNP and CmNP in Table 8 with columns MNP and mNP in Table 7, 70% of the MNPs and 72% of the mNPs in the SUSANNE Corpus are extracted. In addition, 95% of the MmNPs are extracted by our system. This means the recall for extracting noun phrases that exist independently in the SUSANNE Corpus is 95%. What types of noun phrases are extracted depends heavily on what applications will follow. We will discuss this point in Section 7. Therefore, the real number of applicable noun phrases in the Corpus is not known. The number should be between the number of NPs and that of MNPs. In the original design of the NP-TRACTOR, a maximal noun phrase which contains clauses or prepositional phrases with prepositions other than "of" is not considered as an extracted unit. As a result, the number of such applicable noun phrases (ANPs) forms the basis for calculating recall. These numbers are listed in Table 9 and the corresponding recalls are also shown.

Table 9. The Limitation of Values for Recall
            ANP     CNP     Recall
    A       7873    7660    0.97
    G       7199    6943    0.96
    J       6278    5958    0.95
    N       8793    8559    0.97
    Av.     30143   29120   0.96

The automatic validation of the experimental results gives us an estimated recall. The Appendix provides a sample text and the extracted noun phrases. Interested readers could examine the sample text and calculate recall and precision for a comparison.

7. Applications

Identification of noun phrases in texts is useful for many applications. Anaphora resolution (Hirst, 1981) is to resolve the relationship of the noun phrases, namely, what the antecedent of a noun phrase is. The extracted noun phrases can form the set of possible candidates (or the universe, in the terminology of discourse representation theory). For acquisition of verb subcategorization frames, bracketing the noun phrases in the texts is indispensable. It can help us to find the boundaries of the subject, the object and the prepositional phrase.
We would use the acquired noun phrases for an application of adjective grouping. The extracted noun phrases may contain adjectives which pre-modify the head noun. We then utilize the similarity of head nouns to group the adjectives. In addition, we may give the head noun a semantic tag, such as Roget's Thesaurus provides, and then analyze the adjectives. To automatically produce the index of a book, we would extract the noun phrases contained in the book, calculate the inverse document frequency (IDF) and the term frequency (TF) (Salton, 1991), and screen out the implausible terms. These applications also have impacts on identifying noun phrases. For applications like anaphora resolution and acquisition of verb subcategorization frames, the maximal noun phrases are not suitable. For applications like grouping adjectives and automatic book indexing, some kinds of maximal noun phrases, such as noun phrases postmodified by "of" prepositional phrases, are suitable, but some are not, e.g., noun phrases modified by relative clauses.

8. Concluding Remarks

The difficulty of this work is how to extract the real maximal noun phrases. If we cannot decide whether the prepositional phrase "over a husband eyes" is licensed by the verb "pull", we will not know whether "the wool" and "a husband eyes" are two noun phrases or form one noun phrase combined by the preposition "over".

(18) to pull the wool over a husband eyes
     to sell the books of my uncle

In contrast, the noun phrase "the books of my uncle" is a so-called maximal noun phrase in the current context. As a result, we conclude that if we do not resolve the PP-attachment problem (Hindle and Rooth, 1993) to the expected extent, we will not extract the maximal noun phrases. In our work, the probabilistic chunker decides the implicit boundaries between words and the NP-TRACTOR connects the adjacent noun chunks.
When a noun chunk is followed by a preposition chunk, we do not connect the two chunks unless the preposition chunk is headed by the preposition "of". Compared with other works, our results are evaluated against a parsed corpus automatically and show high precision. Although we do not point out the exact recall, we provide estimated values. The testing scale is large enough (about 150,000 words). In contrast, Church (1988) tests a text and extracts the simple noun phrases only. Bourigault's work (1992) is evaluated manually, and does not report the precision. Hence, the real performance is not known. The work executed by Voutilainen (1993) is more complex than our work. The input text is first morphologically analyzed, then parsed by a constraint grammar, analyzed by two different noun phrase grammars, and the noun phrases are finally extracted by the occurrences. Like other works, Voutilainen's work is also evaluated manually. In this paper, we propose a language model to chunk texts. The simple but effective chunker could be seen as a linear structure parser, and could be applied to many applications. A method is presented to extract the noun phrases. Most importantly, the relations of maximal noun phrases, minimal noun phrases, ordinary noun phrases and applicable noun phrases are distinguished in this work. Their impacts on the subsequent applications are also addressed. In addition, automatic evaluation provides a fair basis and does not involve human costs. The experimental results show that this parser is a useful tool for further research on large volumes of real texts.

Acknowledgements

We are grateful to Dr. Geoffrey Sampson for kindly providing the SUSANNE Corpus and the details of its tag set to us.

References

Abney, Steven (1991), "Parsing by Chunks," in Principle-Based Parsing, Berwick, Abney and Tenny (Eds.), Kluwer Academic Publishers, pp. 257-278.
Bourigault, Didier (1992), "Surface Grammatical Analysis for the Extraction of Terminological Noun Phrases," Proceedings of the 15th International Conference on Computational Linguistics, COLING-92, Vol. III, Nantes, France, pp. 977-981.
Church, Kenneth (1988), "A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text," Proceedings of Second Conference on Applied Natural Language Processing, pp. 136-143.
Francis, N. and Kucera, H. (1979), Manual of Information to Accompany a Standard Sample of Present-day Edited American English, for Use with Digital Computers, Department of Linguistics, Brown University, Providence, R.I., U.S.A., original ed. 1964, revised 1971, revised and augmented 1979.
Garside, Roger and Leech, Geoffrey (1985), "A Probabilistic Parser," Proceedings of Second Conference of the European Chapter of the ACL, pp. 166-170.
Hindle, Donald (1990), "Noun Classification from Predicate-Argument Structures," Proceedings of 28th Annual Meeting of ACL, pp. 268-275.
Hindle, Donald and Rooth, Mats (1993), "Structural Ambiguity and Lexical Relations," Computational Linguistics, 19(1), pp. 103-120.
Hirst, G. (1981), Anaphora in Natural Language Understanding: a Survey, Lecture Notes 119, Springer-Verlag.
Johansson, Stig (1986), The Tagged LOB Corpus: Users' Manual, Bergen: Norwegian Computing Centre for the Humanities.
Kamp, H. (1981), "A Theory of Truth and Semantic Representation," Formal Methods in the Study of Language, Vol. 1, (J. Groenendijk, T. Janssen, and M. Stokhof, Eds.), Mathematische Centrum.
Salton, G. (1991), "Developments in Automatic Text Retrieval," Science, Vol. 253, pp. 974-979.
Sampson, Geoffrey (1993), "The SUSANNE Corpus," ICAME Journal, No. 17, pp. 125-127.
Sampson, Geoffrey (1994), English for the Computer, Oxford University Press.
Sells, Peter (1985), Lectures on Contemporary Syntactic Theories, Lecture Notes, No. 3, CSLI.
Smadja, Frank (1991), Extracting Collocations from Text.
An Application: Language Generation, Ph.D. Dissertation, Columbia University, 1991.
Snow, C.E. and Ferguson, C.A. (Eds.) (1977), Talking to Children: Language Input and Acquisition, Cambridge, Cambridge University Press.
Voutilainen, Atro (1993), "NPtool, a Detector of English Noun Phrases," Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives, Ohio State University, Columbus, Ohio, USA, pp. 48-57.

Appendix

For demonstration, we list a sample text quoted from N18:0010a-N18:0250e, SUSANNE Corpus. The extracted noun phrases are bracketed. We could compute the precision and the recall from the text as a reference and compare them with the experimental results itemized in Section 6. In fact, the result shows that the system has high precision and recall for the text.

[ Too_QL many_AP people_NNS ] think_VB that_CS [ the_ATI primary_JJ purpose_NN of_IN a_AT higher_JJR education_NN ] is_BEZ to_TO help_VB [ you_PP2 ] make_VB [ a_AT living_NN ] +;_; this_DT is_BEZ not_XNOT so_RB +,_, for_CS [ education_NN ] offers_VBZ [ all_ABN kinds_NNS of_IN dividends_NNS ] +,_, including_IN how_WRB to_TO pull_VB [ the_ATI wool_NN ] over_IN [ a_AT husband_NN eyes_NNS ] while_CS [ you_PP2 ] are_BER having_HVG [ an_AT affair_NN ] with_IN [ his_PP$ wife_NN ] ._. If_CS [ it_PP3 ] were_BED not_XNOT for_IN [ an_AT old_JJ professor_NPT ] who_WPR made_VBD [ me_PP1O ] read_VB [ the_ATI classics_NN ] [ I_PP1A ] would_MD have_HV been_BEN stymied_VBN on_IN what_WDT to_TO do_DO +,_, and_CC now_RN [ I_PP1A ] understand_VB why_WRB [ they_PP3AS ] are_BER [ classics_NN ] +;_; those_DTS who_WPR wrote_VBD [ them_PP3OS ] knew_VBD [ people_NNS ] and_CC what_WDT made_VBD [ people_NNS ] tick_VB ._.
[ I_PP1A ] worked_VBD for_IN [ my_PP$ Uncle_NPT ] (_( [ +an_AT Uncle_NPT by_IN marriage_NN ] so_RB [ you_PP2 ] will_MD not_XNOT think_VB this_DT has_HVZ [ a_AT mild_JJ undercurrent_NN of_IN incest_NN ] +)_) who_WPR ran_VBD [ one_CD1 of_IN those_DTS antique_JJ shops_NNS ] in_IN [ New_JJ Orleans_NP ] Vieux_&FW Carre_&FW +,_, [ the_ATI old_JJ French_JJ Quarter_NPL ] ._. [ The_ATI arrangement_NN ] [ I_PP1A ] had_HVD with_IN [ him_PP3O ] was_BEDZ to_TO work_VB [ four_CD hours_NRS ] [ a_AT day_NR ] ._. [ The_ATI rest_NN of_IN the_ATI time_NR ] [ I_PP1A ] devoted_VBD to_IN painting_VBG or_CC to_IN those_DTS [ other_JJB activities_NNS ] [ a_AT young_JJ and_CC healthy_JJ man_NN ] just_RB out_IN of_IN [ college_NN ] finds_VBZ interesting_JJ ._. [ I_PP1A ] had_HVD [ a_AT one-room_JJ studio_NN ] which_WDTR overlooked_VBD [ an_AT ancient_JJ courtyard_NN ] filled_VBN with_IN [ flowers_NNS and_CC plants_NNS ] blooming_VBG everlastingly_RB in_IN [ the_ATI southern_JJ sun_NN ] ._. [ I_PP1A ] had_HVD come_VBN to_IN [ New_JJ Orleans_NP ] [ two_CD years_NRS ] earlier_RBR after_IN [ graduating_VBG college_NN ] +,_, partly_RB because_CS [ I_PP1A ] loved_VBD [ the_ATI city_NPL ] and_CC partly_RB because_CS there_EX was_BEDZ quite_QL [ a_AT noted_JJ art_NN colony_NN ] there_RN ._. When_CS [ my_PP$ Uncle_NPT ] offered_VBD [ me_PP1O ] [ a_AT part-time_JJ job_NN ] which_WDTR would_MD take_VB [ care_NN ] of_IN [ my_PP$ normal_JJ expenses_NNS ] and_CC give_VB [ me_PP1O ] [ time_NR ] to_TO paint_VB [ I_PP1A ] accepted_VBD ._. [ The_ATI arrangement_NN ] turned_VBD out_RP to_TO be_BE excellent_JJ ._.
[ I_PP1A ] loved_VBD [ the_ATI city_NPL ] and_CC [ I_PP1A ] particularly_RB loved_VBD [ the_ATI gaiety_NN and_CC spirit_NN ] of_IN [ Mardi_NR Gras_NR ] ._. [ I_PP1A ] had_HVD seen_VBN [ two_CD of_IN them_PP3OS ] and_CC [ we_PP1AS ] would_MD soon_RB be_BE in_IN another_DT city-wide_JJ +,_, [ joyous_JJ celebration_NN with_IN romance_NN ] in_IN [ the_ATI air_NN ] +;_; and_CC +,_, when_CS [ you_PP2 ] took_VBD [ a_AT walk_NPL ] [ you_PP2 ] never_RB knew_VBD what_WDT [ adventure_NN or_CC pair_NN of_IN sparkling_JJ eyes_NNS ] were_BED waiting_VBG around_IN [ the_ATI next_OD corner_NPL ] ._.
1994
A Corpus-based Approach to Automatic Compound Extraction

Keh-Yih Su
Dept. of Electrical Engineering
National Tsing-Hua University
Hsinchu, Taiwan 30043, R.O.C.
kysu@bdc.com.tw

Ming-Wen Wu
Behavior Design Corporation
No. 28, 2F, R&D Road II
Science-Based Industrial Park
Hsinchu, Taiwan 30077, R.O.C.
mingwen@bdc.com.tw

Jing-Shin Chang
Dept. of Electrical Engineering
National Tsing-Hua University
Hsinchu, Taiwan 30043, R.O.C.
shin@hera.ee.nthu.edu.tw

Abstract

An automatic compound retrieval method is proposed to extract compounds within a text message. It uses n-gram mutual information, relative frequency count and parts of speech as the features for compound extraction. The problem is modeled as a two-class classification problem based on the distributional characteristics of n-gram tokens in the compound and the non-compound clusters. The recall and precision using the proposed approach are 96.2% and 48.2% for bigram compounds and 96.6% and 39.6% for trigram compounds for a testing corpus of 49,314 words. A significant cutdown in processing time has been observed.

Introduction

In technical manuals, technical compounds [Levi 1978] are very common. Therefore, the quality of their translations greatly affects the performance of a machine translation system. If a compound is not in the dictionary, it will be translated incorrectly in many cases; the reason is that many compounds are not compositional, which means that the translation of a compound is not the composite of the respective translations of the individual words [Chen and Su 1988]. For example, the translation of 'green house' into Chinese is not the composite of the Chinese translations of 'green' and 'house'. Under such circumstances, the number of parsing ambiguities will also increase due to the large number of possible parts of speech combinations for the individual words. It will then reduce the accuracy rate in disambiguation and also increase translation time.
In practical operations, a computer-translated manual is usually concurrently processed by several posteditors; thus, maintaining the consistency of translated terminologies among different posteditors is very important, because terminological consistency is a major advantage of machine translation over human translation. If all the terminologies can be entered into the dictionary before translation, the consistency can be automatically maintained, the translation quality can be greatly improved, and a lot of postediting time and consistency maintenance cost can be saved. Since compounds are rather productive and new compounds are created from day to day, it is impossible to exhaustively store all compounds in a dictionary. Also, it is too costly and time-consuming to inspect the manual by hand for the compound candidates and update the dictionary beforehand. Therefore, it is important that the compounds be found and entered into the dictionary before translation without much human effort; an automatic and quantitative tool for extracting compounds from the text is thus seriously required. Several compound extracting approaches have been proposed in the literature [Bourigault 1992, Calzolari and Bindi 1990]. Traditional rule-based systems encode sets of rules to extract likely compounds from the text. However, a lot of compounds obtained with such approaches may not be desirable, since they are not assigned objective preferences. Thus, it is not clear how likely a candidate is to be a compound. In LEXTER, for example, a text corpus is analyzed and parsed to produce a list of likely terminological units to be validated by an expert [Bourigault 1992].
While it allows the test to be done very quickly, due to the use of simple analysis and parsing rules instead of complete syntactic analysis, it does not suggest quantitatively to what extent a unit is considered a terminology and how often such a unit is used in real text. It might therefore extract many inappropriate terminologies with a high false alarm rate. In another statistical approach, by [Calzolari and Bindi 1990], the association ratio of a word pair and the dispersion of the second word are used to decide if it is a fixed phrase (a compound). The drawback is that it does not take the number of occurrences of the word pair into account; therefore, it is not known if the word pair is commonly or rarely used. Since there is no performance evaluation reported in either framework, it is not clear how well they work. A previous framework by [Wu and Su 1993] shows that the mutual information measure and the relative frequency information are discriminative for extracting highly associated and frequently encountered n-grams as compounds. However, many non-compound n-grams, like 'is a', which have high mutual information and high relative frequency of occurrence are also recognized as compounds. Such n-grams can be rejected if syntactic constraints are applied. In this paper, we thus incorporate parts of speech of the words as a third feature for compound extraction. An automatic compound retrieval method combining the joint features of n-gram mutual information, relative frequency count and parts of speech is proposed. A likelihood ratio test method, designed for a two-class classification task, is used to check whether an n-gram is a compound. Those n-grams that pass the test are then listed in order of significance for the lexicographers to build these entries into the dictionary. It is found that, by incorporating parts of speech information, both the recall and precision for compound extraction are improved.
The simulation result shows that the proposed approach works well. A significant cutdown of the postediting time has been observed when using this tool in an MT system, and the translation quality is greatly improved.

A Two Cluster Classification Model for Compound Extraction

The first step in extracting compounds is to find the candidate list for compounds. According to our experience in machine translation, most compounds are of length 2 or 3. Hence, only bigram and trigram compounds are of interest to us. The corpus is first processed by a morphological analyzer to normalize every word into its stem form, instead of surface form, to reduce the number of possible alternatives. Then, the corpus is scanned from left to right with window sizes 2 and 3. The lists of bigrams and trigrams thus acquired then form the lists of compound candidates of interest. Since the part of speech pattern for the n-grams (n = 2 or 3) is used as a compound extraction feature, the text is tagged by a discrimination-oriented probabilistic lexical tagger [Lin et al. 1992]. The n-gram candidates are associated with a number of features so that they can be judged as being compound or non-compound. In particular, we use the mutual information among the words in an n-gram, the relative frequency count of the n-gram, and the part of speech patterns associated
Given the input features, we can model the compound extraction problem as a two-class classification problem, in which an n-gram is ei- ther classified as a compound or a non-compound, using a likelihood ratio )t for decision making: ,x = P(,~IM¢) x P(M¢) P(~IMn¢) x P(M,~) where Mc stands for the event that 'the n-gram is produced by a compound model', Mnc stands for the alternative event that 'the n-gram is pro- duced by a non-compound model', and £ is the observation associated with the n-gram consisting of the joint features of mutual information, rela- tive frequency and part of speech patterns. The test is a kind of likelihood ratio test commonly used in statistics [Papoulis 1990]. If A > 1, it is more likely that the n-gram belongs to the com- pound cluster. Otherwise, it is assigned to the non-compound cluster. Alternatively, we could use the logarithmic likelihood ratio In A for testing: if In A > O, the n-gram is considered a compound; it is, otherwise, considered a non-compound. Features for Compound Retrieval The statistics of mutual information among the words in the n-grams, the relative frequency count for each n-gram and the transition probabilities of the parts of speech of the words are adopted as the discriminative features for classification as described in the following subsections. Mutual Information Mutual information is a measure of word association. It compares the probability of a group of words to occur together (joint probability) to their probabilities of occur- ring independently. The bigram mutual informa- tion is known as [Church and Hanks 1990]: P(x, y) I(x; y) = log2 P(x) x P(y) where x and y are two words in the corpus, and I(x;y) is the mutual information of these two words (in this order). The mutual information of a trigram is defined as [Su et al. 
1991]:

    I(x; y; z) = log2 [ PD(x, y, z) / PI(x, y, z) ]

where PD(x, y, z) = P(x, y, z) is the probability for x, y and z to occur jointly (Dependently), and PI(x, y, z) is the probability for x, y and z to occur by chance (Independently), i.e., PI(x, y, z) = P(x) × P(y) × P(z) + P(x) × P(y, z) + P(x, y) × P(z). In general, I(·) >> 0 implies that the words in the n-gram are strongly associated. Otherwise, their appearance together may be simply by chance.

Relative Frequency Count

The relative frequency count for the i-th n-gram is defined as:

    fi / K

where fi is the total number of occurrences of the i-th n-gram in the corpus, and K is the average number of occurrences of all the entries. In other words, fi is normalized with respect to K to get the relative frequency. Intuitively, a frequently encountered word n-gram is more likely to be a compound than a rarely used n-gram. Furthermore, it may not be worth the cost of entering the compound into the dictionary if it occurs very few times. The relative frequency count is therefore used as a feature for compound extraction.

Using both the mutual information and relative frequency count as the extraction features is desirable, since using either of these two features alone cannot provide enough information for compound finding. By using relative frequency count alone, it is likely to choose an n-gram with a high relative frequency count but low association (mutual information) among the words comprising the n-gram. For example, if P(x) and P(y) are very large, they may cause a large P(x, y) even though the words are not related. However, P(x, y)/(P(x) × P(y)) would be small in this case. On the other hand, mutual information alone may be highly unreliable if P(x) and P(y) are too small. An n-gram may have high mutual information not because the words within it are highly correlated but due to a large estimation error. Actually, the relative frequency count and mutual information supplement each other.
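The two association features can be computed directly from corpus probabilities; a short sketch (the probability values in the usage example are illustrative, not corpus estimates):

```python
import math

def bigram_mi(p_xy, p_x, p_y):
    """I(x;y) = log2( P(x,y) / (P(x) P(y)) )  [Church and Hanks 1990]."""
    return math.log2(p_xy / (p_x * p_y))

def trigram_mi(p_xyz, p_x, p_y, p_z, p_xy, p_yz):
    """I(x;y;z) = log2( P_D / P_I ), with
    P_I = P(x)P(y)P(z) + P(x)P(y,z) + P(x,y)P(z)."""
    p_i = p_x * p_y * p_z + p_x * p_yz + p_xy * p_z
    return math.log2(p_xyz / p_i)

def relative_frequency(counts):
    """f_i / K for each n-gram, where K is the average occurrence
    count of all entries (counts: n-gram -> frequency)."""
    k = sum(counts.values()) / len(counts)
    return {ngram: f / k for ngram, f in counts.items()}
```

For instance, if P(x, y) is ten times the independence baseline P(x)P(y), the bigram mutual information is log2(10), about 3.32 bits.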
A group of words with both high relative frequency and high mutual information is most likely to be composed of words which are highly correlated and very commonly used. Hence, such an n-gram is a preferred compound candidate.

The distribution statistics of the training corpus, excluding those n-grams that appear only once or twice, are shown in Tables 1 and 2 (MI: mutual information, RFC: relative frequency count, cc: correlation coefficient, sd: standard deviation).

           no. of    mean     sd       mean      sd        covariance   cc
           tokens    of MI    of MI    of RFC    of RFC
bigram     862       7.49     3.08     2.43      3.18      -0.71        -0.072
trigram    245       7.88     2.51     2.92      2.18      -0.41        -0.074

Table 1: Distribution statistics of compounds

           no. of    mean     sd       mean      sd        covariance   cc
           tokens    of MI    of MI    of RFC    of RFC
bigram     -         -        -        2.28      3.50      -0.45        -0.051
trigram    8057      3.55     2.24     3.14      2.99      -0.33        -0.049

Table 2: Distribution statistics of non-compounds

Note that the means of mutual information and relative frequency count in the compound cluster are, in general, larger than those in the non-compound cluster. The only exception is the mean of the relative frequencies for trigrams. Since almost 86.5% of the non-compound trigrams occur only once or twice and are thus not considered in estimation, the average number of occurrences of such trigrams is smaller, and hence the relative frequency is larger than in the compound cluster, in which only about 30.6% are excluded from consideration.

Note also that mutual information and relative frequency count are almost uncorrelated in both clusters, since the correlation coefficients are close to 0. Therefore, it is appropriate to take the mutual information measure and relative frequency count as two supplementary features for compound extraction.

Parts of Speech

Part of speech is a very important feature for extracting compounds.
In most cases, the part of speech of compounds has the forms [noun, noun] or [adjective, noun] (for bigrams) and [noun, noun, noun], [noun, preposition, noun] or [adjective, noun, noun] (for trigrams). Therefore, n-gram entries which violate such syntactic constraints should be filtered out, even if they have high mutual information and relative frequency count. The precision rate of compound extraction is then greatly improved.

Parameter Estimation and Smoothing

The parameters for the compound model M_c and the non-compound model M_nc can be evaluated from a training corpus that is tagged with parts of speech and normalized into stem forms. The corpus is divided into two parts, one as the training corpus, and the other as the testing set. The n-grams in the training corpus are further divided into two clusters. The compound cluster comprises the n-grams already in a compound dictionary, and the non-compound cluster consists of the n-grams which are not in the dictionary. However, n-grams that occur only once or twice are excluded from consideration, because such n-grams rarely introduce inconsistency and the estimation of their mutual information and relative frequency is highly unreliable.

Since each n-gram may have different part of speech (POS) patterns L_i in a corpus (e.g., L_i = [n n] for a bigram), the mutual information and relative frequency counts are estimated for each of these POS patterns. Furthermore, a particular POS pattern for an n-gram may have several types of contextual POS's surrounding it. For example, a left context of 'adj' category and a right context of 'n', together with the above example POS pattern, can form an extended POS pattern, such as L_ij = [adj (n n) n], for the n-gram.
By considering all these features, the numerator factor for the log-likelihood ratio test is simplified in the following way to make parameter estimation feasible:

P(x̄ | M_c) × P(M_c) ≈ Π_{i=1..n} [ P(M_{L_i}, R_{L_i} | M_c) × Π_{j=1..n_i} P(L_ij | M_c) ] × P(M_c)

where n is the number of POS patterns occurring in the text for the n-gram, n_i is the number of extended POS patterns corresponding to the i-th POS pattern L_i, L_ij is the j-th extended POS pattern for L_i, and M_{L_i} and R_{L_i} represent the mutual information and relative frequency count, respectively, for n-grams with POS pattern L_i. The denominator factor for the non-compound cluster can be evaluated in the same way. For simplicity, a subscript c (/nc) is used for the parameters of the compound (/non-compound) model, e.g., P(x̄ | M_c) ≡ P_c(x̄). Assume that M_{L_i} and R_{L_i} are of Gaussian distribution; then the bivariate probability density function P_c(M_{L_i}, R_{L_i}) for M_{L_i} and R_{L_i} can be evaluated from their estimated means and standard deviations [Papoulis 1990].

Further simplification of the factor P_c(L_ij) is also possible. Take a bigram for example, and assume that the probability density function depends only on the part of speech pattern of the bigram (C1, C2) (in this order), one left context POS C0 and one right lookahead POS C3; the above factor can then be decomposed as:

P(L_ij | M_c) = P_c(C0, C1, C2, C3) ≈ P_c(C3 | C2) × P_c(C2 | C1) × P_c(C1 | C0) × P_c(C0)

A similar formulation for trigrams with one left context POS and one right context POS, i.e., P_c(C0, C1, C2, C3, C4), can be derived in a similar way.

The n-gram entries with frequency count ≤ 2 are excluded from consideration before estimating parameters, because they introduce little inconsistency and may introduce large estimation error. After the distribution statistics of the two clusters are first estimated, we calculate the means and standard deviations of the mutual information and relative frequency counts.
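The bivariate Gaussian part of this test can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the POS factors are omitted, the priors are hypothetical, and the cluster statistics are taken loosely from the bigram rows of Tables 1 and 2 (the non-compound MI figures here are placeholders, since those cells are not fully recoverable).

```python
import math

def bivariate_normal_pdf(x, y, mx, my, sx, sy, rho):
    # Density of a bivariate Gaussian with means (mx, my),
    # standard deviations (sx, sy) and correlation rho.
    zx, zy = (x - mx) / sx, (y - my) / sy
    z = zx * zx - 2.0 * rho * zx * zy + zy * zy
    norm = 2.0 * math.pi * sx * sy * math.sqrt(1.0 - rho * rho)
    return math.exp(-z / (2.0 * (1.0 - rho * rho))) / norm

def log_likelihood_ratio(mi, rfc, comp, noncomp, prior_c=0.5, prior_nc=0.5):
    # ln lambda for the (MI, RFC) part of the observation; an n-gram
    # with ln lambda > 0 would be placed in the compound cluster.
    p_c = bivariate_normal_pdf(mi, rfc, **comp) * prior_c
    p_nc = bivariate_normal_pdf(mi, rfc, **noncomp) * prior_nc
    return math.log(p_c) - math.log(p_nc)

# Illustrative cluster statistics in the style of Tables 1 and 2
# (rho plays the role of the correlation coefficient 'cc').
COMPOUND = dict(mx=7.49, my=2.43, sx=3.08, sy=3.18, rho=-0.072)
NON_COMPOUND = dict(mx=3.55, my=2.28, sx=2.24, sy=3.50, rho=-0.051)
```

An observation near the compound means then yields a positive ln λ, while one with low MI and RFC yields a negative value.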
The entries with outlier values (outside the range of 3 standard deviations from the mean) are discarded in order to estimate a robust set of parameters. The factors, like P_c(C2 | C1), are smoothed by adding a flattening constant 1/2 [Fienberg and Holland 1972] to the frequency counts before the probability is estimated.

Simulation Results

After all the required parameters are estimated, both for the compound and non-compound clusters, each input text is tagged with appropriate parts of speech, and the log-likelihood function ln λ for each word n-gram is evaluated. If it turns out that ln λ is greater than zero, then the n-gram is included in the compound list. The entries in the compound list are later sorted in descending order of λ for use by the lexicographers.

The training set consists of 12,971 sentences (192,440 words), and the testing set has 3,243 sentences (49,314 words) from computer manuals. There are in total 2,517 distinct bigrams and 1,774 trigrams in the testing set, excluding n-grams which occur no more than twice. The performance of the extraction approach for bigrams and trigrams is shown in Tables 3 and 4. The recall and precision for the bigrams are 96.2% and 48.2%, respectively, and they become 96.6% and 39.6% for the trigrams. The high recall rates show that most compounds can be captured into the candidate list with the proposed approach. The precision rates, on the other hand, indicate that a real compound can be found approximately every 2 or 3 entries in the candidate list. The method therefore provides substantial help for updating the dictionary with little human effort.

Note that the testing set precision of bigrams is a little higher than that of the training set. This situation is unusual; it is due to the deletion of the low frequency n-grams from consideration.
For instance, the number of compounds in the testing set occupies only a very small portion (about 2.8%) after low frequency bigrams are deleted from consideration. The recall for the testing set is therefore higher than for the training set.

To make a better trade-off between the precision rate and recall, we could adjust the threshold for ln λ. For instance, when ln λ = -4 is used for separating the two clusters, the recall will be raised at the cost of a lower precision. On the contrary, by raising the threshold for ln λ to positive numbers, the precision will be raised at the cost of a smaller recall.

                      training set   testing set
recall rate (%)       97.7           96.2
precision rate (%)    44.5           48.2

Table 3: Performance for bigrams

                      training set   testing set
recall rate (%)       97.6           96.6
precision rate (%)    40.2           39.6

Table 4: Performance for trigrams

Table 5 shows the first five bigrams and trigrams with the largest λ for the testing set. Among them, all five bigrams and four out of five trigrams are plausible compounds.

bigram           trigram
dialog box       Word User's guide
mail label       Microsoft Word User's
main document    File menu Template
data file        new document base
option button    File Name box

Table 5: The first five bigrams and trigrams with the largest λ for the testing set.

Concluding Remarks

In machine translation systems, information about the source compounds should be available before any translation process can begin. However, since compounds are very productive, new compounds are created from day to day. It is obviously impossible to build a dictionary to contain all compounds. To guarantee correct parsing and translation, new compounds must be extracted from the input text and entered into the dictionary. However, it is too costly and time-consuming for a human to inspect the entire text to find the compounds. Therefore, an automatic method to extract compounds from the input text is required.
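The precision/recall trade-off obtained by moving the ln λ threshold, as discussed in the results above, can be simulated with a small helper (an illustrative sketch with made-up scores, not the paper's data):

```python
def precision_recall(scored, threshold=0.0):
    # scored: list of (ln_lambda, is_compound) pairs; an n-gram is
    # predicted to be a compound when ln_lambda >= threshold.
    tp = sum(1 for s, gold in scored if s >= threshold and gold)
    fp = sum(1 for s, gold in scored if s >= threshold and not gold)
    fn = sum(1 for s, gold in scored if s < threshold and gold)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical scored n-grams: lowering the threshold raises recall
# at the cost of precision.
SCORED = [(2.0, True), (1.0, True), (-0.5, True),
          (0.5, False), (-0.8, False), (-3.0, False)]
```

With these made-up scores, the default threshold 0 gives precision 2/3 and recall 2/3, while lowering the threshold to -1 gives recall 1.0 with precision 0.6.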
The method proposed in this paper uses mutual information, relative frequency count and part of speech as the features for discriminating compounds and non-compounds. The compound extraction problem is formulated as a two-cluster classification problem in which an n-gram is assigned to one of the two clusters using the likelihood ratio test method. With this method, the time for updating missing compounds can be greatly reduced, and the consistency between different posteditors can be maintained automatically. The testing set performance for the bigram compounds is a 96.2% recall rate and a 48.2% precision rate. For trigrams, the recall and precision are 96.6% and 39.6%, respectively.

References

[Bourigault 1992] D. Bourigault, 1992. "Surface Grammar Analysis for the Extraction of Terminological Noun Phrases," In Proceedings of COLING-92, vol. 4, pp. 977-981, 14th International Conference on Computational Linguistics, Nantes, France, Aug. 23-28, 1992.

[Calzolari and Bindi 1990] N. Calzolari and R. Bindi, 1990. "Acquisition of Lexical Information from a Large Textual Italian Corpus," In Proceedings of COLING-90, vol. 3, pp. 54-59, 13th International Conference on Computational Linguistics, Helsinki, Finland, Aug. 20-25, 1990.

[Chen and Su 1988] S.-C. Chen and K.-Y. Su, 1988. "The Processing of English Compound and Complex Words in an English-Chinese Machine Translation System," In Proceedings of ROCLING I, Nantou, Taiwan, pp. 87-98, Oct. 21-23, 1988.

[Church and Hanks 1990] K. W. Church and P. Hanks, 1990. "Word Association Norms, Mutual Information, and Lexicography," Computational Linguistics, vol. 16, pp. 22-29, Mar. 1990.

[Fienberg and Holland 1972] S. E. Fienberg and P. W. Holland, 1972. "On the Choice of Flattening Constants for Estimating Multinomial Probabilities," Journal of Multivariate Analysis, vol. 2, pp. 127-134, 1972.

[Levi 1978] J.-N.
Levi, 1978. The Syntax and Semantics of Complex Nominals, Academic Press, Inc., New York, NY, USA, 1978.

[Lin et al. 1992] Y.-C. Lin, T.-H. Chiang and K.-Y. Su, 1992. "Discrimination Oriented Probabilistic Tagging," In Proceedings of ROCLING V, Taipei, Taiwan, pp. 85-96, Sep. 18-20, 1992.

[Papoulis 1990] A. Papoulis, 1990. Probability & Statistics, Prentice Hall, Inc., Englewood Cliffs, NJ, USA, 1990.

[Su et al. 1991] K.-Y. Su, Y.-L. Hsu and C. Saillard, 1991. "Constructing a Phrase Structure Grammar by Incorporating Linguistic Knowledge and Statistical Log-Likelihood Ratio," In Proceedings of ROCLING IV, Kenting, Taiwan, pp. 257-275, Aug. 18-20, 1991.

[Wu and Su 1993] Ming-Wen Wu and Keh-Yih Su, 1993. "Corpus-based Automatic Compound Extraction with Mutual Information and Relative Frequency Count," In Proceedings of ROCLING VI, Nantou, Taiwan, pp. 207-216, Sep. 2-4, 1993.
An Automatic Treebank Conversion Algorithm for Corpus Sharing

Jong-Nae Wang
Behavior Design Corporation
No. 28, 2F, R&D Road II, Science-Based Industrial Park, Hsinchu, Taiwan 30077, R.O.C.
wjn@bdc.com.tw

Jing-Shin Chang and Keh-Yih Su
Dept. of Electrical Engineering, National Tsing-Hua University, Hsinchu, Taiwan 30043, R.O.C.
shin@hera.ee.nthu.edu.tw, kysu@bdc.com.tw

Abstract

An automatic treebank conversion method is proposed in this paper to convert a treebank into another treebank. A new treebank associated with a different grammar can be generated automatically from the old one, such that the information in the original treebank can be transformed to the new one and be shared among different research communities. The simple algorithm achieves a conversion accuracy of 96.4% when tested on 8,867 sentences between two major grammar revisions of a large MT system.

Motivation

Corpus-based research is now a major branch of language processing. One major resource for corpus-based research is the treebanks available in many research organizations [Marcus et al. 1993], which carry skeletal syntactic structures or 'brackets' that have been manually verified. Unfortunately, such resources may be based on different tag sets and grammar systems of the respective research organizations. As a result, reusability of such resources across research laboratories is poor, and cross-checking among different grammar systems and algorithms based on the same corpora cannot be conducted effectively. In fact, even for the same research organization, a major revision of the original grammar system may result in a re-construction of the system corpora due to the variations between the revisions. As a side effect, the evolution of a system is often blocked or discouraged by the unavailability of the corresponding corpora that were previously constructed.
Under such circumstances, much energy and cost may have to be devoted to the re-tagging or re-construction of those previously available corpora. It is therefore highly desirable to automatically convert an existing treebank, either from a previous revision of the current system or from another research organization, into another that is compatible with the current grammar system.

Several problems may prevent a treebank conversion algorithm from effective conversion of the treebanks. Firstly, the tag sets, including terminal symbols (parts of speech) and nonterminal symbols (syntactic categories), may not be identical in the two systems; the number of such symbols may be drastically different, and the mapping may not be one-to-one. Furthermore, the hierarchical structures, i.e., the underlying phrase structure grammars, of two grammar systems may not be easily and uniquely mapped. In fact, the number of mapping units and mapping rules between two systems may become intolerably large if no systematic approach is available to extract the atomic mapping units and the mapping operations [Chang and Su 1993]. In addition, some constructs in one system may not be representable in terms of the grammar of another system; compatibility of two grammar systems thus further complicates the conversion problem.

In many cases, a publicly available corpus may contain only the simplest annotations, like brackets (skeletal structure representations) for some major syntactic categories [Marcus et al. 1993]. In particular, a research organization may not want to contribute its corpora in full detail for free to the public, since this may reveal the underlying knowledge, such as the grammar rules, used in the proprietary system. Therefore, primitive annotations, like brackets, are very likely to be the sole information available to the public in the near future, and corpus exchange is very likely to be limited to such primitive annotations.
Such resources may not be directly usable by a system which needs much more information than is annotated. In such cases, it is, however, desirable to be able to use the large amount of simply tagged corpus to help construct or bootstrap a large corpus which contains more detailed annotation. We thus try to address such problems with a simple and automatic approach to treebank conversion. Since the bracket information from a large treebank is the major external information required, the proposed algorithm is expected to be very useful and cost-effective for bootstrapping the corpus, in terms of corpus size and annotated information, of a system by using publicly available treebanks or home-made treebanks, which are less costly than fully annotated corpora.

In the following sections, the treebank conversion task is modeled as a transfer problem, commonly encountered in an MT system, between two representations of the same language. A matching metric for selecting the best conversion among all candidates is then proposed, followed by the treebank conversion algorithm. Finally, experiment results are reported, which show a very promising conversion accuracy with the proposed approach.

In the current task, we will assume that the new treebank will be compatible with an underlying target grammar of any appropriate form and a target tag set (including terminal and nonterminal symbols) associated with that grammar, since otherwise we could simply use the original treebank directly without doing any conversion. This assumption is reasonable, since most natural language research laboratories who deal with syntactic level processing and who need a treebank are supposed to have underlying phrase structure grammars or rules for identifying appropriate constituents in the input text.
Task Definition for Treebank Conversion

Formally, the task for a treebank conversion algorithm is to map a source tree (generated from a source grammar or bracketed by hand) into its corresponding target tree that would be generated from a second grammar (hereinafter, the target grammar) without changing, vaguely speaking, its structure or semantics. The conversion must therefore satisfy several criteria so that the target tree can be reused in the target system. First of all, the target tree must be compatible with the second grammar. This means that the target tree must also be generatable from the second grammar. Secondly, the source tree and target tree must be 'similar' in the sense that their corresponding terminal symbols (parts of speech), nonterminal symbols (syntactic categories) and structures (production rules) preserve essentially similar categorial or structural information.

A simple model for such a conversion problem is shown in Figure 1, where S is a sentence in the treebank, G1 and G2 are the grammars for the original treebank and the target system, respectively, T^s is the manually proved tree for S in the treebank, T_i^t are all the possible ambiguous syntax trees for S as generated by the target grammar G2, and T^t is the best target tree selected from T_i^t based on a mapping score Score(T_i^t | T^s) defined on the treebank tree and the ambiguous constructions.

Figure 1: A Simple Model for Treebank Conversion. (S is parsed under G2 by Parser II into ambiguous trees T_i^t, i = 1, ..., N; the mapping algorithm compares them with the human-disambiguated treebank tree T^s using Score(T_i^t | T^s) and outputs T^t.)

The "conversion" from T^s to T^t is actually done by a matching algorithm. To ensure compatibility of the target trees with the target grammar, the sentences from which the source treebank was constructed are parsed by a parser (Parser II) using the target grammar. (It is also possible to enumerate all possible constructs via other apparatus.
The parser here is just a characterization of such an apparatus.) All the possible target constructs for a sentence are then matched against the source tree, and the one that best matches the source tree is selected as the preferred conversion. In the above model, it is, of course, possible to incorporate any kind of preference mechanism into the parsing mechanism of Parser II to prevent the converter from enumerating all possible syntactic structures allowed by the target grammar. In fact, the original design of the conversion model is to hook a matching module to the end of any existing parsing mechanism, so that the ambiguous structures are matched against the manually verified structure information in the source treebank to pick up the correct parse without human inspection.

To use the proposed model, a mapping metric is required for measuring the mapping preference between the source tree and the candidate target trees. Several frameworks for finding translation equivalents or translation units in machine translation, such as [Chang and Su 1993, Isabelle et al. 1993] and other example-based MT approaches, might be used to select the preferred mapping. A general corpus-based statistics-oriented model for statistical transfer in machine translation [Chang and Su 1993] is especially suitable for such a task. One can, in fact, model the treebank conversion problem as a (statistical) transfer problem in machine translation, because both problems deal with the mapping between two structure representations of the same sentence. The difference is that the transfer problem deals with sentences in two different languages, while the treebank conversion problem deals with only one language. The mechanism used to find the transfer units and transfer rules, together with the transfer score used in the above frameworks, can thus be used for treebank conversion with little modification.
Matching Metric for Treebank Conversion

The matching metric, or matching score, for treebank conversion is much simpler than the transfer score for the transfer task between syntax trees of two languages. The intuition is that the tree representation for a sentence in a particular language is very likely to have essentially the same bracket representation, possibly associated with different (terminal or nonterminal) symbols, when expressed in another grammar. We therefore use the number of matching constituents in the source and target trees as the matching score for converting a source tree into a target tree.

Figure 2: An Example for the Tree Matching Metric. (A source tree, left, and a target tree, right, with index lists such as (1, 2) and (3, 4, 5) annotated on their nodes.)

Take Figure 2 as an example. Node '9' in the source (left) tree contains Nodes '3', '4', '5' as its children; Node 'h' in the target (right) tree also has Nodes '3', '4', '5' as its children. We therefore add a constant score of 1 to the matching score for this tree pair. The same is true for Node '10' and Node 'i'. Since Node '7' in the source tree and Node 'f' in the target tree do not have any corresponding node as their counterparts, they contribute nothing to the matching preference. When there are single productions, like the construct for Node '8' and its sole child Node '6', such constituents are regarded as the same entity. Therefore, the match between Node '8' (or Node '6') and Node 'g' is assigned only one constant score of 1. This step corresponds to reducing such 'single production' rules into only one bracket. (For instance, a chain X → Y deriving a b c will have the bracket representation [a b c], instead of [[a b c]].) As a result, the matching score for the example tree pair is 3.
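The constituent-matching score just described can be sketched as follows (a minimal illustration of our own, with trees encoded as nested lists of word indices; recording spans in a set collapses a chain of single productions into a single bracket, as in the example above):

```python
def collect_spans(tree, spans=None):
    # Returns ((lo, hi), spans): the word-index span covered by `tree`
    # and the set of spans of all its constituents. Because `spans` is
    # a set, a chain of single productions contributes one span only.
    if spans is None:
        spans = set()
    if isinstance(tree, int):          # a terminal word index
        return (tree, tree), spans
    lo = hi = None
    for child in tree:
        (c_lo, c_hi), _ = collect_spans(child, spans)
        lo = c_lo if lo is None else min(lo, c_lo)
        hi = c_hi if hi is None else max(hi, c_hi)
    spans.add((lo, hi))
    return (lo, hi), spans

def matching_score(source_tree, target_tree):
    # The number of index ranges bracketed in both trees, i.e. the
    # number of aligned constituent pairs.
    _, src = collect_spans(source_tree)
    _, tgt = collect_spans(target_tree)
    return len(src & tgt)
```

For example, matching_score([[1, 2], [3, [4, 5]]], [[1, 2], [[3, 4], 5]]) is 3: the spans (1, 2), (3, 5) and (1, 5) match, while (4, 5) and (3, 4) do not.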
To facilitate such matching operations and matching score evaluation, the word indices of the sentence for the source/target tree pair are percolated upward (and recursively) to the tree nodes by associating each nonterminal node with the list of word indices, called an index list, acquired by concatenating the word indices of its children. (The index lists are shown near the nodes in Figure 2.) Two nonterminal nodes which have the same index list form an aligned node pair; the subtrees rooted at such aligned nonterminal nodes and terminated with aligned nodes then constitute the mapping units between the two trees. The number of such matches thus represents a simple matching score for the tree pair. The index lists can be easily established by a depth-first traversal of the tree. Furthermore, the existence of a constituent which consists of terminal nodes (l, l+1, ..., m) can be saved in a chart (a lower triangular matrix), where chart(l, m) records the number of nodes whose terminal children are numbered from l to m. By using a chart for a tree, all nodes in a chain of single productions will correspond to the same count for a particular chart entry. A match in a source/target node pair will correspond to a pair of nonzero cells in the charts; the matching score then reduces to the number of such pairs. We therefore have the following treebank conversion algorithm based on the simple matching metric described here.

The Baseline Treebank Conversion Algorithm

With the highly simplified mapping model, we can convert a tree in a treebank into another which is compatible with the target grammar with the following steps:

• 1. Parse the sentence of the source tree with a parser of the target system based on the target grammar.

• 2.
For each ambiguous target tree produced in step 1 and the source tree in the original treebank, associate each terminal word with its word index and associate each nonterminal node with the concatenation of the word indices of its children nodes. This can be done with a depth-first traversal of the tree nodes.

• 3. For the trees of step 2, associate each tree with a chart (a lower triangular matrix), which is initially set to zero in each matrix cell. Make a traversal of all the tree nodes, say in depth-first order, and increment the number in chart(l, m) by one each time a node with the indices (l, ..., m) is encountered.

• 4.
Further generalization of the proposed conversion model, which uses more information such as the mapping preference for a source/target tag pair or mapping unit pair, can be formulated by following the general corpus-based statistics- oriented transfer model for machine translation in [Chang and Su 1993]. In [Chang and Su 1993], the transfer preference between two trees is mea- sured in terms of a transfer score: p(Tt[T~) = ~'=1 P(t~,j[t~j) where T~ and T/t are the source tree and the i th possible target tree, which can be decomposed into pairs of transfer (i.e., mapping) units (t~ j, t~ j ) (local subtrees). The transfer pairs can be f()un~ by aligning the terminal and nonter- minal nodes with the assistance of the index lists as described previously [Chang and Su 1993]. In fact, the current algorithm can be regarded as a highly simplified model of the above cited framework, in which the terminal words for the source tree and the target tree are identical and are implicitly aligned exactly 1-to-l; the mapping units are modeled by the pairs of aligned nodes; and the probabilistic mapping information is re- placed with binary constant scores. Such assign- ment of constant scores eliminate the requirement for estimating the probabilities and the require- ment of treebank corpora for training the mapping scores. The following examples show a correctly matched instance and an erroneouly matched one. INPUT: Depending on the type of control used , it may or may not respond quickly enough to protect against spikes and faults • (Correct answer and selected output are #3.) 1. [[[Depending-on [[the type] [of [control used]]]] ,] it [may-or-may-not respond [quickly [enough to [protect [against [spikes and faults]]]]]]] . 2. [[[Depending-on [[the type] [of [control used]]]] ,] it [may-or-may-not respond [quickly [enough to [protect [against [spikes and faults]]]]]]] . 3. 
[[[Depending-on [[the type] [of [control used]]]] ,] it [may-or-may-not respond [[quickly enough] [to [protect [against [spikes and faults]]]]]]] .

4. [[[Depending-on [[the type] [of [control used]]]] ,] it [may-or-may-not respond [[quickly enough] [to [protect [against [spikes and faults]]]]]]] .

INPUT: The PC's power supply is capable of absorbing most noise, spikes, and faults. (The correct answer is #3 while the selected output is #2.)

1. [[[The PC's] power-supply] [is [capable [of [absorbing [[[[most noise] ,] spikes ,] and faults]]]]]] .

2. [[The PC's] power-supply] [is [capable [of [absorbing [[[most noise] , spikes ,] and faults]]]]]] .

3.
251 I G1 I a~ I #:P )rbduction) 1,088 1,101 #E terminal) 37 30 #Af J (nonterminal) 107 141 #A (constraints) 144 138 Table 1: Basic Parameters of the Two Gram- mars under Testing The target grammar shown here is an improved version of the source grammar. It has a wider coverage, a little more ambiguous structures, and shorter processing time than the old one. The ma- jor changes are the representations of some con- structs in addition to the changes in the parts of speech and nonterminal syntactic categories. For instance, the hierarchy is revised in the new revi- sion to better handle the 'gaps' in relative clauses, and the tag set is modified to better characterize the classification of the various words. Such modi- fications are likely to occur between any two gram- mar systems, which adopt different tag sets, syn- tactic structures and semantic constraints. There- fore, it, in some sense, characterizes the typical op- erations which may be applied across two different systems. Each sentence produces about 16.9 ambiguous trees on the average under the new grammar G~. The source trees contain brackets corresponding to the fully parsed structures of the input sen- tences; however, multiple brackets which corre- spond to "single productions" are eliminated to only one bracket. For instance, a structure like X ---* Y ~ Z --~ ab will reduces to the equiv- alent bracket structure of [ a b]. This reduction process is implied in the proposed algorithm since we increment the matching score by one whenever the two charts have the same word index range which contains non-zero counts; we do not care how large the counts are. This also implies that the target tree brackets are also reduced by the same process. The reduced brackets, on which the matching is based, in the source and target trees are thus less detailed than their fully parsed trees structures. 
After feeding the 8,867 sentences into the parser and selecting the closest match among the target trees against the source trees in the treebank, it is found that a total of 115 sentences do not produce any legal syntactic structure under the new grammar, 158 sentences produce no correct structure in terms of the new grammar (including 12 sentences which produce unique yet erroneous parses), and 1,546 sentences produce, unambiguously, one correct analysis. The former two cases, which are mostly attributed to the coverage of the target grammar, indicate the degree of incompatibility between the two grammars. The latter case does not discriminate between tree conversion algorithms. Therefore, these sentences are not considered in evaluating the performance of the conversion procedure.

For the remaining 7,048 sentences, 6,799 source trees are correctly mapped to their counterparts in the new grammar; only 249 trees are incorrectly mapped. Therefore, excluding unambiguously parsed sentences, a conversion accuracy of 96.46% (6,799/7,048) is obtained. The results appear very promising for such a simple algorithm. They also show that the bracket information and the mapping metric provide very useful information for treebank conversion.

ERROR TYPE                  Percentage (%)
Tag Error                         19.6
Conjunction Error                 51.4
Attachment Error                  23.6
Drastic Structural Error           5.4

Table 2: Error Type Analysis

A sampling of 146 trees from the 249 incorrectly mapped trees reveals the error types of mismatch tabulated in Table 2. The error introduced by inappropriate tags is about 19.6%. Structural error, on the other hand, is about 80.4%, which can be further divided into errors due to: incorrect mapping of conjunct elements and/or appositions (51.4%), incorrect attachment patterns between heads and modifiers (23.6%) and drastic structure variation (5.4%).
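The matching score that drives the selection can be sketched as counting the word-index ranges shared by the source bracketing and a candidate bracketing. This is a simplified reading of the chart-based count in the text; the nested-list tree encoding is our own.

```python
def spans(tree, start=0):
    """Return (list of (begin, end) word-index ranges, next free index)."""
    if isinstance(tree, str):               # a leaf covers exactly one word
        return [], start + 1
    out, i = [], start
    for sub in tree:
        sub_spans, i = spans(sub, i)
        out.extend(sub_spans)
    out.append((start, i))                  # the bracket around this subtree
    return out, i

def match_score(source_tree, candidate_tree):
    """Number of distinct bracket ranges shared by the two trees."""
    src, _ = spans(source_tree)
    cand, _ = spans(candidate_tree)
    return len(set(src) & set(cand))
```

Among the ambiguous candidate trees of a sentence, the one maximizing this score against the source tree would be selected; ties fall back to candidate order, mirroring the behaviour analysed below.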
Note that tagging error is far less frequent than structural error; furthermore, two trees with drastically different structures are rarely matched. A closer look shows that 2.72% (185/6,799) of the correctly mapped trees and 31.73% (79/249) of the incorrectly mapped trees have the same scores as other competing trees; they are selected because they are the first candidate. The current solution to ties therefore tends to introduce incorrectly mapped trees. A better way may be required to avoid ties; for instance, we may use different score increments for different types of matches or different syntactic categories.

The above experiment results confirm our previous assumption that even the simplest skeletal structure information, like brackets, provides significant information for selecting the most likely structure in another grammar system. This fact partially explains why the simple conversion algorithm achieves a satisfactory conversion accuracy.

Note that a mapping metric against the source tree may introduce a systematic bias that prefers the source structures over the target grammar. This phenomenon could prevent improvements in the new grammar from being reflected in the converted corpus when the new grammar is a revision of the old one. Attachment and conjunction scopes, which may vary from system to system, are the most likely to suffer from such a bias, as shown in the above experiment results. A way to incorporate preferences from the target grammar may be necessary if such bias introduces a significant fraction of the errors. Such preference information may include mapping preferences acquired from other extra information or from other more complicated models.

Given the low error rate of the overall performance, however, it seems that we need not be too pessimistic about such a bias, since most major constituents, like noun phrases and verb phrases, recognized by different persons are in agreement to a large extent.
This is probably true even for persons across different laboratories.

Since the conversion rate is quite high, it is possible simply to regard errors in the converted treebank as noise in probabilistic frameworks which use the converted treebank for parameter training. In these cases, further manual inspection is not essential and the conversion is basically automatic. This is particularly true if the original source treebank has been manually verified, since we can at least make sure that the target trees are legal, even though not necessarily preferred. If serious work is necessary to avoid error accumulation in the treebank, say in the grammar revision process, it is suggested to check only a few high-score candidates to save checking time. If, in addition, the major differences between the two grammars are known, the checking time can be further reduced by applying detailed checking only to the trees that have relevant structure changes.

Of course, there are many factors which may affect the performance of the proposed approach across different grammar systems. In particular, we did not use the mapping between the parts of speech (terminal symbols) and the syntactic categories (nonterminal symbols), which may be useful in cases where the mapping is applied to two trees with the same bracket representation. In our future research, we will try to convert large treebanks available in the community, such as the Penn Treebank, into our grammar system, and make use of more information on the parts of speech and syntactic categories so that a robust conversion algorithm can be developed.

Concluding Remarks

It is important to be able to share treebanks among different research organizations.
The significance of developing a treebank conversion technique includes at least: (1) corpus sharing among different grammar systems and research organizations; (2) automatic system corpus updating between two major revisions; (3) corpus bootstrapping with a large and cheaply tagged treebank; (4) avoidance of duplicated investment in the construction and maintenance of proprietary corpora; (5) promotion of continuous evolution of an old grammar system for a corpus-based system.

In this paper, we therefore proposed a simple approach for converting one treebank into another across two different grammar systems, using a simple conversion metric based on the bracket information of the original treebank. The simple metric, which counts the number of bracket matchings, turns out to be effective in preserving the structures across two different grammars. The experiment results show that, excluding unambiguous sentences, the conversion accuracy, in terms of the number of correctly converted trees, is as high as 96.4%.

References

[Chang and Su 1993] Jing-Shin Chang and Keh-Yih Su, 1993. "A Corpus-Based Statistics-Oriented Transfer and Generation Model for Machine Translation," In Proceedings of TMI-93, pp. 3-14, 5th Int. Conf. on Theoretical and Methodological Issues in Machine Translation, Kyoto, Japan, July 14-16, 1993.

[Chen et al. 1991] Shu-Chuan Chen, Jing-Shin Chang, Jong-Nae Wang and Keh-Yih Su, 1991. "ArchTran: A Corpus-based Statistics-oriented English-Chinese Machine Translation System," In Proceedings of Machine Translation Summit III, pp. 33-40, Washington, D.C., USA, July 1-4, 1991.

[Isabelle et al. 1993] Pierre Isabelle, Marc Dymetman, George Forster, Jean-Marc Jutras, Elliott Machkovitch, François Perrault, Xiaobo Ren and Michel Simard, 1993. "Translation Analysis and Translation Automation," Proceedings of TMI-93, pp. 201-217, 5th Int. Conf.
on Theoretical and Methodological Issues in Machine Translation, Kyoto, Japan, July 14-16, 1993.

[Marcus et al. 1993] Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz, 1993. "Building a Large Annotated Corpus of English: The Penn Treebank," Computational Linguistics, vol. 19, no. 2, pp. 313-330, June 1993.

[Su and Chang 1990] Keh-Yih Su and Jing-Shin Chang, 1990. "Some Key Issues in Designing MT Systems," Machine Translation, vol. 5, no. 4, pp. 265-300, 1990.
An Attributive Logic of Set Descriptions and Set Operations

Suresh Manandhar
HCRC Language Technology Group
The University of Edinburgh
2 Buccleuch Place
Edinburgh EH8 9LW, UK
Internet: Suresh.Manandhar@ed.ac.uk

Abstract

This paper provides a model theoretic semantics to feature terms augmented with set descriptions. We provide constraints to specify HPSG style set descriptions, fixed cardinality set descriptions, set-membership constraints, restricted universal role quantifications, set union, intersection, subset and disjointness. A sound, complete and terminating consistency checking procedure is provided to determine the consistency of any given term in the logic. It is shown that determining consistency of terms is an NP-complete problem.

Subject Areas: feature logic, constraint-based grammars, HPSG

1 Introduction

Grammatical formalisms such as HPSG [Pollard and Sag, 1987] [Pollard and Sag, 1992] and LFG [Kaplan and Bresnan, 1982] employ feature descriptions [Kasper and Rounds, 1986] [Smolka, 1992] as the primary means for stating linguistic theories. However the descriptive machinery employed by these formalisms easily exceeds the descriptive machinery available in feature logic [Smolka, 1992]. Furthermore the descriptive machinery employed by both HPSG and LFG is difficult (if not impossible) to state in feature based formalisms such as ALE [Carpenter, 1993], TFS [Zajac, 1992] and CUF [Dörre and Dorna, 1993] which augment feature logic with a type system. One such expressive device employed both within LFG [Kaplan and Bresnan, 1982] and HPSG but unavailable in feature logic is that of set descriptions. Although various researchers have studied set descriptions (with different semantics) [Rounds, 1988] [Pollard and Moshier, 1990], two issues remain unaddressed. Firstly, there has not been any work on consistency checking techniques for feature terms augmented with set descriptions.
Secondly, for applications within grammatical theories such as the HPSG formalism, set descriptions alone are not enough, since descriptions involving set union are also needed. Thus, to adequately address the knowledge representation needs of current linguistic theories, one needs to provide set descriptions as well as mechanisms to manipulate these.

In the HPSG grammar formalism [Pollard and Sag, 1987], set descriptions are employed for the modelling of so called semantic indices ([Pollard and Sag, 1987] pp. 104). The attribute INDS in the example in (1) is a multi-valued attribute whose value models a set consisting of (at most) 2 objects. However, multi-valued attributes cannot be described within feature logic [Kasper and Rounds, 1986] [Smolka, 1992].

(1) [AVM display garbled in extraction; it shows a feature structure whose INDS attribute contains two indices, one with RESTR|NAME sandy and one with RESTR|NAME kim]

A further complication arises since, to be able to deal with anaphoric dependencies, we think that set memberships will be needed to resolve pronoun dependencies. Equally, set unions may be called for to incrementally construct discourse referents. Thus a set-valued extension to feature logic is insufficient on its own.

Similarly, set valued subcategorisation frames (see (2)) have been considered as a possibility within the HPSG formalism.

(2) believes = [SYN|LOC|SUBCAT {[SYN|LOC|HEAD|CAT v], ...}]

But once set valued subcategorisation frames are employed, a set valued analog of the HPSG subcategorisation principle too is needed. In section 2 we show that the set valued analog of the subcategorisation principle can be adequately described by employing a disjoint union operation over set descriptions, as available within the logic described in this paper.

2 The logic of set descriptions

In this section we provide the semantics of feature terms augmented with set descriptions and various constraints over set descriptions. We assume an alphabet consisting of x, y, z, ...
∈ V, the set of variables; f, g, ... ∈ F, the set of relation symbols; c1, c2, ... ∈ C, the set of constant symbols; A, B, C, ... ∈ P, the set of primitive concept symbols; and a, b, ... ∈ A, the set of atomic symbols. Furthermore, we require that ⊥, ⊤ ∈ P.

The syntax of our term language is defined by the following BNF definition:

P ::= x | a | c | C | ¬x | ¬a | ¬c | ¬C
S, T ::= P
      | f : T                  feature term
      | ∃f : T                 existential role quantification
      | ∀f : P                 universal role quantification
      | f : {T1, ..., Tn}      set description
      | f : {T1, ..., Tn}=     fixed cardinality set description
      | f : g(x) ∪ h(y)        union
      | f : g(x) ∩ h(y)        intersection
      | f :⊇ g(x)              subset
      | f(x) ≠ g(y)            disjointness
      | S ⊓ T                  conjunction

where S, T, T1, ..., Tn are terms; a is an atom; c is a constant; C is a primitive concept and f is a relation symbol.

The interpretation of relation symbols and atoms is provided by an interpretation I = ⟨U^I, ·^I⟩ where U^I is an arbitrary non-empty set and ·^I is an interpretation function that maps:

1. every relation symbol f ∈ F to a binary relation f^I ⊆ U^I × U^I
2. every atom a ∈ A to an element a^I ∈ U^I

Notation:
- Let f^I(e) denote the set {e' | (e, e') ∈ f^I}.
- Let f^I(e)↑ mean f^I(e) = ∅.

I is required to satisfy the following properties:

1. if a1 ≠ a2 then a1^I ≠ a2^I (distinctness)
2. for any atom a ∈ A and for any relation f ∈ F there exists no e ∈ U^I such that (a, e) ∈ f^I (atomicity)

For a given interpretation I, an I-assignment α is a function that maps:

1. every variable x ∈ V to an element α(x) ∈ U^I
2. every constant c ∈ C to an element α(c) ∈ U^I such that for distinct constants c1, c2: α(c1) ≠ α(c2)
3. every primitive concept C ∈ P to a subset α(C) ⊆ U^I such that:
   - α(⊥) = ∅
   - α(⊤) = U^I

The interpretation of terms is provided by a denotation function [[·]]^{I,α} that, given an interpretation I and an I-assignment α, maps terms to subsets of U^I.
The function [[·]]^{I,α} is defined as follows:

[[x]]^{I,α} = {α(x)}
[[a]]^{I,α} = {a^I}
[[c]]^{I,α} = {α(c)}
[[C]]^{I,α} = α(C)
[[f : T]]^{I,α} = {e ∈ U^I | ∃e' ∈ U^I : f^I(e) = {e'} ∧ e' ∈ [[T]]^{I,α}}
[[∃f : T]]^{I,α} = {e ∈ U^I | ∃e' ∈ U^I : (e, e') ∈ f^I ∧ e' ∈ [[T]]^{I,α}}
[[∀f : T]]^{I,α} = {e ∈ U^I | ∀e' ∈ U^I : (e, e') ∈ f^I ⇒ e' ∈ [[T]]^{I,α}}
[[f : {T1, ..., Tn}]]^{I,α} = {e ∈ U^I | ∃e1, ..., ∃en ∈ U^I : f^I(e) = {e1, ..., en} ∧ e1 ∈ [[T1]]^{I,α} ∧ ... ∧ en ∈ [[Tn]]^{I,α}}
[[f : {T1, ..., Tn}=]]^{I,α} = {e ∈ U^I | ∃e1, ..., ∃en ∈ U^I : |f^I(e)| = n ∧ f^I(e) = {e1, ..., en} ∧ e1 ∈ [[T1]]^{I,α} ∧ ... ∧ en ∈ [[Tn]]^{I,α}}
[[f : g(x) ∪ h(y)]]^{I,α} = {e ∈ U^I | f^I(e) = g^I(α(x)) ∪ h^I(α(y))}
[[f : g(x) ∩ h(y)]]^{I,α} = {e ∈ U^I | f^I(e) = g^I(α(x)) ∩ h^I(α(y))}
[[f :⊇ g(x)]]^{I,α} = {e ∈ U^I | f^I(e) ⊇ g^I(α(x))}
[[f(x) ≠ g(y)]]^{I,α} = ∅ if f^I(α(x)) ∩ g^I(α(y)) ≠ ∅, and U^I if f^I(α(x)) ∩ g^I(α(y)) = ∅
[[S ⊓ T]]^{I,α} = [[S]]^{I,α} ∩ [[T]]^{I,α}
[[¬T]]^{I,α} = U^I − [[T]]^{I,α}

The above definitions fix the syntax and semantics of every term. It follows from the above definitions that:

f : T ≡ f : {T} ≡ f : {T}=

Figure 1

Although disjoint union is not a primitive in the logic, it can easily be defined by employing the set disjointness and set union operations:

f : g(x) ⊎ h(y) =def g(x) ≠ h(y) ⊓ f : g(x) ∪ h(y)

Thus disjoint set union is exactly like set union except that it additionally requires the sets denoted by g(x) and h(y) to be disjoint. The set-valued description of the subcategorisation principle can now be stated as given in example (3).

(3) Subcategorisation Principle
[AVM display garbled in extraction; it constrains the mother's SYN|LOC|SUBCAT value Y and DTRS value X so that the H-DTR's SYN|LOC|SUBCAT value is c-dtrs(X) ⊎ subcat(Y)]

The description in (3) simply states that the subcat value of the H-DTR is the disjoint union of the subcat value of the mother and the values of C-DTRS. Note that the disjoint union operation is the right operation to be specified to split the set into two disjoint subsets.
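As an illustration of the denotation [[f : {T1, ..., Tn}]] defined above, the following sketch evaluates it over a small finite interpretation. The encoding is our own: the member denotations [[Ti]] are passed in already evaluated, and f^I is given as a successor map.

```python
from itertools import product

def denote_set_description(universe, f_rel, member_denotations):
    """[[f : {T1,...,Tn}]]: the elements e whose f-successor set f(e)
    equals some {e1,...,en} with each ei drawn from [[Ti]]."""
    result = set()
    for e in universe:
        successors = f_rel.get(e, set())
        for choice in product(*member_denotations):
            if set(choice) == successors:   # f(e) = {e1,...,en}, ei in [[Ti]]
                result.add(e)
                break
    return result
```

For instance, with f_rel = {1: {2, 3}}, the description f : {T1, T2} with [[T1]] = {2} and [[T2]] = {3} denotes {1}, while [[T2]] = {2} denotes nothing, since {2} ≠ {2, 3}. With a single member the computation coincides with [[f : T]], matching the equivalence f : T ≡ f : {T} noted above.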
Employing just the union operation would not work, since it would permit repetition between members of the SUBCAT attribute and the C-DTRS attribute.

Decomposition rules

(DFeat) x = F : T ∧ Cs
        → x = F : y ∧ y = T ∧ Cs
        if y is new and T is not a variable, and F ranges over ∃f, f

(DForall) x = ∀f : ξ ∧ Cs
        → x = ∀f : y ∧ y = ξ ∧ Cs
        if y is new and ξ ranges over a, c

(DSet) x = f : {T1, ..., Tn} ∧ Cs
        → x = f : {x1, ..., xn} ∧ x1 = T1 ∧ ... ∧ xn = Tn ∧ Cs
        if x1, ..., xn are new and at least one of the Ti : 1 ≤ i ≤ n is not a variable

(DSetF) x = f : {T1, ..., Tn}= ∧ Cs
        → x = f : {x1, ..., xn} ∧ x = f : {x1, ..., xn}= ∧ x1 = T1 ∧ ... ∧ xn = Tn ∧ Cs
        if x1, ..., xn are new and at least one of the Ti : 1 ≤ i ≤ n is not a variable

(DConj) x = S ⊓ T ∧ Cs
        → x = S ∧ x = T ∧ Cs

Figure 2: Decomposition rules

Alternatively, we can assume that N is the only multi-valued relation symbol, while both SUBCAT and C-DTRS are single-valued, and then employ the intuitively appealing subcategorisation principle given in (4).

(4) Subcategorisation Principle
[AVM display garbled in extraction; it requires the N value of H-DTR|SYN|LOC|SUBCAT to be N(X) ⊎ N(Y), where X is the C-DTRS value and Y the mother's SUBCAT value]

With the availability of set operations, multi-valued structures can be incrementally built. For instance, by employing union operations, semantic indices can be incrementally constructed, and by employing membership constraints on the set of semantic indices, pronoun resolution may be carried out.

The set difference operation f : g(y) − h(z) is not available from the constructs described so far. However, assume that we are given the term x ⊓ f : g(y) − h(z) and it is known that h^I(α(z)) ⊆ g^I(α(y)) for every interpretation I and assignment α such that [[x ⊓ f : g(y) − h(z)]]^{I,α} ≠ ∅. Then the term x ⊓ f : g(y) − h(z) (assuming the obvious interpretation for the set difference operation) is consistent iff the term y ⊓ g : f(x) ⊎ h(z) is consistent. This is so since for sets G, F, H: G − F = H ∧ F ⊆ G iff G = F ⊎ H. See Figure 1 for verification.
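The decomposition phase of Figure 2 can be sketched as a recursive pass that introduces fresh variables for non-variable subterms. The tagged-tuple term encoding is our own, and only (DConj), (DFeat) and (DSet) are shown.

```python
from itertools import count

def decompose(x, term, fresh=None):
    """Rewrite x = term into a list of basic constraints (x, basic_term)."""
    fresh = fresh if fresh is not None else count(1)
    tag = term[0]
    if tag in ('var', 'atom'):                       # already basic
        return [(x, term)]
    if tag == 'conj':                                # (DConj)
        _, s, t = term
        return decompose(x, s, fresh) + decompose(x, t, fresh)
    if tag == 'feat':                                # (DFeat)
        _, f, t = term
        if t[0] == 'var':
            return [(x, term)]
        y = f'y{next(fresh)}'                        # new variable
        return [(x, ('feat', f, ('var', y)))] + decompose(y, t, fresh)
    if tag == 'set':                                 # (DSet); members recorded by name
        _, f, members = term
        out, names = [], []
        for t in members:
            if t[0] == 'var':
                names.append(t[1])
            else:
                y = f'y{next(fresh)}'                # new variable per non-variable member
                names.append(y)
                out += decompose(y, t, fresh)
        return [(x, ('set', f, names))] + out
    raise ValueError(f'unknown term tag: {tag}')
```

Exhaustively applying these rules to the initial constraint x = T yields the basic form on which the simplification rules of the next section operate.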
3 Consistency checking

To employ a term language for knowledge representation tasks or in constraint programming languages, the minimal operation that needs to be supported is that of consistency checking of terms.

A term T is consistent if there exists an interpretation I and an I-assignment α such that [[T]]^{I,α} ≠ ∅. In order to develop constraint solving algorithms for consistency testing of terms, we follow the approaches in [Smolka, 1992] [Hollunder and Nutt, 1990].

A containment constraint is a constraint of the form x = T where x is a variable and T is a term.

Constraint simplification rules - I

(SEquals) x = y ∧ Cs
        → x = y ∧ [x/y]Cs
        if x ≠ y and x occurs in Cs

(SConst) x = ξ ∧ y = ξ ∧ Cs
        → x = y ∧ x = ξ ∧ Cs
        where ξ ranges over a, c

(SFeat) x = f : y ∧ x = F : z ∧ Cs
        → x = f : y ∧ y = z ∧ Cs
        where F ranges over f, ∃f, ∀f

(SExists) x = ∃f : y ∧ x = ∀f : z ∧ Cs
        → x = f : y ∧ y = z ∧ Cs

(SForallE) x = ∀f : C ∧ x = ∃f : y ∧ Cs
        → x = ∀f : C ∧ x = ∃f : y ∧ y = C ∧ Cs
        if C ranges over C, ¬C, ¬a, ¬c, ¬z and Cs ⊬ y = C

Figure 3: Constraint simplification rules - I

In addition, for the purposes of consistency checking, we need to introduce disjunctive constraints, which are of the form x = x1 ⊔ ... ⊔ xn.

We say that an interpretation I and an I-assignment α satisfy a constraint K, written I, α ⊨ K, if:

- I, α ⊨ x = T ⟺ α(x) ∈ [[T]]^{I,α}
- I, α ⊨ x = x1 ⊔ ... ⊔ xn ⟺ α(x) = α(xi) for some xi : 1 ≤ i ≤ n

A constraint system Cs is a conjunction of constraints. We say that an interpretation I and an I-assignment α satisfy a constraint system Cs iff I, α satisfy every constraint in Cs.

The following lemma demonstrates the usefulness of constraint systems for the purposes of consistency checking.

Lemma 1 A term T is consistent iff there exists a variable x, an interpretation I and an I-assignment α such that I, α satisfy the constraint system x = T.

Now we are ready to turn our attention to constraint solving rules that will allow us to determine the consistency of a given constraint system.
Constraint simplification rules - II

(SSetF) x = F : y ∧ x = f : {x1, ..., xn} ∧ Cs
        → x = f : y ∧ y = x1 ∧ ... ∧ y = xn ∧ Cs
        where F ranges over f, ∀f

(SSet) x = f : {y} ∧ Cs
        → x = f : y ∧ Cs

(SDup) x = f : {x1, ..., xi, ..., xj, ..., xn} ∧ Cs
        → x = f : {x1, ..., xi, ..., xn} ∧ Cs
        if xi = xj

(SForall) x = ∀f : C ∧ x = f : {x1, ..., xn} ∧ Cs
        → x = f : {x1, ..., xn} ∧ x1 = C ∧ ... ∧ xn = C ∧ Cs
        if C ranges over C, ¬C, ¬a, ¬c, ¬z and there exists xi : 1 ≤ i ≤ n such that Cs ⊬ xi = C

(SSetE) x = ∃f : y ∧ x = f : {x1, ..., xn} ∧ Cs
        → x = f : {x1, ..., xn} ∧ y = x1 ⊔ ... ⊔ xn ∧ Cs

(SSetSet) x = f : {x1, ..., xn} ∧ x = f : {y1, ..., ym} ∧ Cs
        → x = f : {x1, ..., xn} ∧
          x1 = y1 ⊔ ... ⊔ ym ∧ ... ∧ xn = y1 ⊔ ... ⊔ ym ∧
          y1 = x1 ⊔ ... ⊔ xn ∧ ... ∧ ym = x1 ⊔ ... ⊔ xn ∧ Cs
        where n ≤ m

(SDis) x = x1 ⊔ ... ⊔ xn ∧ Cs
        → x = x1 ⊔ ... ⊔ xn ∧ x = xi ∧ Cs
        if 1 ≤ i ≤ n and there is no xj, 1 ≤ j ≤ n, such that Cs ⊢ x = xj

Figure 4: Constraint simplification rules - II

We say that a constraint system Cs is basic if none of the decomposition rules (see Figure 2) is applicable to Cs.

The purpose of the decomposition rules is to break down a complex constraint into possibly a number of simpler constraints, upon which the constraint simplification rules (see Figures 3, 4 and 5) can apply, possibly introducing new variables.

The first phase of consistency checking of a term T consists of exhaustively applying the decomposition rules to an initial constraint of the form x = T (where x does not occur in T) until no rules are applicable. This transforms any given constraint system into basic form.

The constraint simplification rules (see Figures 3, 4 and 5) either eliminate variable equalities of the form x = y or generate them from existing constraints. However, they do not introduce new variables.

The constraint simplification rules given in Figure 3 are the analog of the feature simplification rules provided in [Smolka, 1991]. The main difference is that our simplification rules have been modified to deal with relation symbols as opposed to just feature symbols.
The constraint simplification rules given in Figure 4 simplify constraints involving set descriptions when they interact with other constraints such as feature constraints (rule (SSetF)), singleton sets (rule (SSet)), duplicate elements in a set (rule (SDup)), a universally quantified constraint (rule (SForall)) or another set description (rule (SSetSet)). Rule (SDis), on the other hand, simplifies disjunctive constraints.

Amongst the constraint simplification rules in Figures 3 and 4, only rule (SDis) is non-deterministic; it creates an n-ary choice point. Rules (SSet) and (SDup) are redundant, as completeness (see the section below) is not affected by these rules. However, these rules result in a simpler normal form.

The following syntactic notion of entailment is employed to render a slightly more compact presentation of the constraint solving rules for dealing with set operations given in Figure 5. A constraint system Cs syntactically entails the (conjunction of) constraint(s) φ if Cs ⊢ φ is derivable from the following deduction rules:

1. φ ∧ Cs ⊢ φ
2. Cs ⊢ x = x
3. Cs ⊢ x = y ⟹ Cs ⊢ y = x
4. Cs ⊢ x = y ∧ Cs ⊢ y = z ⟹ Cs ⊢ x = z
5. Cs ⊢ x = ¬y ⟹ Cs ⊢ y = ¬x
6. Cs ⊢ x = f : y ⟹ Cs ⊢ x = ∃f : y
7. Cs ⊢ x = f : y ⟹ Cs ⊢ x = ∀f : y
8. Cs ⊢ x = f : {..., xi, ...} ⟹ Cs ⊢ x = ∃f : xi

Note that the above definitions are an incomplete list of deduction rules. However, Cs ⊢ φ implies Cs ⊨ φ, where ⊨ is the semantic entailment relation defined as for predicate logic. We write Cs ⊬ φ if it is not the case that Cs ⊢ φ.

The constraint simplification rules given in Figure 5 deal with constraints involving set operations. Rule (⊇) propagates g-values of y into f-values of x in the presence of the constraint x = f :⊇ g(y). Rule
Rule 258 Extended (c_) x = if: (ULeft) x= if Cs Constraint simplification rules x = f :D g(y) A C~ f :D g(y) A z = 3f : Yi A Cs F/x = 3f : yi and F y = 3g : yi x = I: g(y) u h(z) A f: g(y) W h(z) A x = f :D g(y) A Cs ~/ x = f :D g(y) (URight) x = f: g(y) U h(z) A Cs x = f: g(y) U h(z) A x = f :D h(z) A Cs if Cs V z = f :__D h(z) (UDown) x = f: g(y) U h(z) A Cs x = f : g(y) U h(z) A y = 3g : xi I z = 3h : xi A Cs if: • C~/y=3g:xiand • Cst/z=3h:xiand • C~l-x=3f:xi ( nDown ) = f: g(y) n h(z) A x = f : g(y) n h(z) A y = 3g : xi A z = 3h : xi A C if: • (Cs[/y=3g:xiorCsVz=3h:xi) and • C~Fx=3f:x~ x = f: g(y) n h(z) A Cs (nUp) x = f : g(y) n h(z) A x = 3f : xi A Cs if: • Cs ~x=3f:xi and • CsFy=3g:xiand • C~Fz=3h:xi Figure 5: Constraint solving with set operations (ULeft) (correspondingly Rule (URight)) adds the constraint x = f :_D g(y) (correspondingly x = f :D h(z)) in the presence of the constraint x = f : g(y) U h(z). Also in the presence of x = f : g(y) U h(z) rule (UDown) non-deterministically propagates an I-value of x to either an g-value of y or an h-value of z (if neither already holds). The notation y = 3g : xi ] z = 3h : xi denotes a non-deterministic choice between y = 3g : x~ and z = 3h : xi. Rule (nDown) propaga- tes an f-value of x both as a g-value of y and h-value of z in the presence of the constraint x = f : g(y) n h(z). Finally, rule (nUp) propagates a common g-value of y and h-value of z as an f-value of x in the presence of the constraint x = f : g(y) n h(z). 4 Invariance, Completeness and Termination In this section we establish the main results of this paper - namely that our consistency checking proce- dure for set descriptions and set operations is invari- ant, complete and terminating. In other words, we have a decision procedure for determining the consi- stency of terms in our extended feature logic. For the purpose of showing invariance of our ru- les we distinguish between deterministic and non- deterministic rules. 
Amongst all our rules, only rule (SDis) given in Figure 4 and rule (∪Down) are non-deterministic, while all the other rules are deterministic.

Theorem 2 (Invariance)
1. If a decomposition rule transforms Cs to Cs' then Cs is consistent iff Cs' is consistent.
2. Let I, α be any interpretation, assignment pair and let Cs be any constraint system.
   - If a deterministic simplification rule transforms Cs to Cs' then: I, α ⊨ Cs iff I, α ⊨ Cs'.
   - If a non-deterministic simplification rule applies to Cs then there is at least one non-deterministic choice which transforms Cs to Cs' such that: I, α ⊨ Cs iff I, α ⊨ Cs'.

A constraint system Cs is in normal form if no rules are applicable to Cs. Let succ(x, f) denote the set:

succ(x, f) = {y | Cs ⊢ x = ∃f : y}

A constraint system Cs in normal form contains a clash if there exists a variable x in Cs such that any of the following conditions is satisfied:

1. Cs ⊢ x = a1 and Cs ⊢ x = a2 such that a1 ≠ a2
2. Cs ⊢ x = c1 and Cs ⊢ x = c2 such that c1 ≠ c2
3. Cs ⊢ x = S and Cs ⊢ x = ¬S, where S ranges over x, a, c, C
4. Cs ⊢ x = ∃f : y and Cs ⊢ x = a
5. Cs ⊢ f(x) ≠ g(y) and succ(x, f) ∩ succ(y, g) ≠ ∅
6. Cs ⊢ x = f : {x1, ..., xn}= and |succ(x, f)| < n

If Cs does not contain a clash then Cs is called clash-free.

The constraint solving process can terminate as soon as a clash-free constraint system in normal form is found, or, alternatively, when all the choice points are exhausted. The purpose of the clash definition is highlighted in the completeness theorem given below.

For a constraint system Cs in normal form, an equivalence relation ≈ on the variables occurring in Cs is defined as follows:

x ≈ y iff Cs ⊢ x = y

For a variable x we represent its equivalence class by [x].

Theorem 3 (Completeness) A constraint system Cs in normal form is consistent iff Cs is clash-free.

Proof sketch: For the first part, let Cs be a constraint system containing a clash; then it is clear from the definition of clash that there is no interpretation I and I-assignment α which satisfies Cs.
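Clash conditions 5 and 6 above depend only on the succ sets and can be sketched directly; the remaining conditions are straightforward lookups. The constraint encoding is our own.

```python
def has_clash(succ, fixed_cardinality, disjointness):
    """succ[(x, f)]: the set succ(x, f) of known f-successors of x.
    fixed_cardinality: tuples (x, f, n) from constraints x = f : {x1,...,xn}=.
    disjointness: tuples (x, f, y, g) from constraints f(x) ≠ g(y)."""
    for x, f, y, g in disjointness:                      # clash condition 5
        if succ.get((x, f), set()) & succ.get((y, g), set()):
            return True
    for x, f, n in fixed_cardinality:                    # clash condition 6
        if len(succ.get((x, f), set())) < n:
            return True
    return False
```

In a normal-form system, a fixed-cardinality description with too few known successors or a violated disjointness constraint each suffice to reject the whole constraint system.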
Let Cs be a clash-free constraint system in normal form. We shall construct an interpretation R = ⟨U^R, ·^R⟩ and a variable assignment α such that R, α ⊨ Cs.

Let U^R = V ∪ A ∪ C. The assignment function α is defined as follows:

1. For every variable x in V:
   (a) if Cs ⊢ x = a then α(x) = a
   (b) if the previous condition does not apply then α(x) = choose([x]), where choose([x]) denotes a unique representative (chosen arbitrarily) from the equivalence class [x]
2. For every constant c in C:
   (a) if Cs ⊢ x = c then α(c) = α(x)
   (b) if c is a constant such that the previous condition does not apply then α(c) = c
3. For every primitive concept C in P:
   α(C) = {α(x) | Cs ⊢ x = C}

The interpretation function ·^R is defined as follows:

- f^R(x) = succ(x, f)
- a^R = a

It can be shown by a case by case analysis that for every constraint K in Cs: R, α ⊨ K. Hence we have the theorem.

Theorem 4 (Termination) The consistency checking procedure terminates in a finite number of steps.

Proof sketch: Termination is obvious if we observe the following properties:

1. Since the decomposition rules break terms down into smaller ones, these rules must terminate.
2. None of the simplification rules introduce new variables, hence there is an upper bound on the number of variables.
3. Every simplification rule does one of the following:
   (a) reduces the 'effective' number of variables. A variable x is considered ineffective if it occurs only once in Cs, within a constraint x = y such that rule (SEquals) does not apply. A variable that is not ineffective is considered effective.
   (b) adds a constraint of the form x = C, where C ranges over y, a, c, C, ¬y, ¬a, ¬c, ¬C, which means there is an upper bound on the number of constraints of the form x = C that the simplification rules can add. This is so since the number of variables, atoms, constants and primitive concepts is bounded for every constraint system in basic form.
   (c) increases the size of succ(x, f).
But the size of succ(x, f) is bounded by the number of variables in Cs, which remains constant during the application of the simplification rules. Hence our constraint solving rules cannot indefinitely increase the size of succ(x, f).

5 NP-completeness

In this section we show that consistency checking of terms within the logic described in this paper is NP-complete. This result holds even if the terms involving set operations are excluded. We prove this result by providing a polynomial time translation of the well-known NP-complete problem of determining the satisfiability of propositional formulas [Garey and Johnson, 1979].

Theorem 5 (NP-Completeness) Determining consistency of terms is NP-complete.

Proof: Let φ be any given propositional formula whose satisfiability is to be determined. We split our translation into two intuitive parts: truth assignment, denoted by Δ(φ), and evaluation, denoted by τ(φ).

Let a, b, ... be the set of propositional variables occurring in φ. We translate every propositional variable a by a variable xa in our logic. Let f be some relation symbol, and let true, false be two atoms. Furthermore, let x1, x2, ... be a finite set of variables distinct from the ones introduced above. We define the translation function Δ(φ) by:

Δ(φ) = f : {true, false} ⊓ ∃f : xa ⊓ ∃f : xb ⊓ ... ⊓ ∃f : x1 ⊓ ∃f : x2 ⊓ ...

The above description forces each of the variables xa, xb, ... and each of the variables x1, x2, ... to be equivalent to either true or false.

We define the evaluation function τ(φ) by:

τ(a) = xa
τ(S & T) = τ(S) ⊓ τ(T)
τ(S ∨ T) = xi ⊓ ∃f : (f : {τ(S), τ(T)} ⊓ ∃f : xi)   where xi ∈ {x1, x2, ...} is a new variable
τ(¬S) = xi ⊓ ∃f : (τ(S) ⊓ ¬xi)   where xi ∈ {x1, x2, ...} is a new variable

Intuitively speaking, τ can be understood as follows.
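The evaluation translation τ is purely syntactic and can be sketched as a term builder. Propositional formulas are encoded as tagged tuples and terms are rendered as strings; both encodings are our own illustration.

```python
def translate(phi):
    """Build the term τ(phi); fresh variables are named x1, x2, ..."""
    counter = iter(range(1, 1_000_000))
    def tau(p):
        tag = p[0]
        if tag == 'var':                 # τ(a) = x_a
            return f'x_{p[1]}'
        if tag == 'and':                 # τ(S & T) = τ(S) ⊓ τ(T)
            return f'({tau(p[1])} ⊓ {tau(p[2])})'
        if tag == 'or':                  # the f-successor picks one disjunct
            xi = f'x{next(counter)}'
            return f'({xi} ⊓ ∃f:(f:{{{tau(p[1])}, {tau(p[2])}}} ⊓ ∃f:{xi}))'
        if tag == 'not':                 # pick something other than τ(S)
            xi = f'x{next(counter)}'
            return f'({xi} ⊓ ∃f:({tau(p[1])} ⊓ ¬{xi}))'
        raise ValueError(tag)
    return tau(phi)
```

The output term is linear in the size of φ, which is what makes the reduction polynomial.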
The evaluation of a propositional variable is just its value; evaluating a conjunction amounts to evaluating each of the conjuncts; evaluating a disjunction amounts to evaluating either of the disjuncts; and finally, evaluating a negation involves choosing something other than the value of the negated term.

Determining the satisfiability of φ then amounts to determining the consistency of the following term:

∃f : Δ(φ) ⊓ ∃f : (true ⊓ τ(φ))

Note that the term true ⊓ τ(φ) forces the value of τ(φ) to be true. This translation demonstrates that determining consistency of terms is NP-hard. On the other hand, every deterministic completion of our constraint solving rules terminates in polynomial time, since they do not generate new variables and the number of new constraints is polynomially bounded. This means determining consistency of terms is NP-easy. Hence, we conclude that determining consistency of terms is NP-complete.

6 Translation to the Schönfinkel-Bernays class

The Schönfinkel-Bernays class (see [Lewis, 1980]) consists of function-free first-order formulae which have the form:

∃x1 ... xn ∀y1 ... ym δ

In this section we show that the attributive logic developed in this paper can be encoded within the Schönfinkel-Bernays subclass of first-order formulae by extending the approach developed in [Johnson, 1991]. However, formulae such as ∀f : (∃f : (∀f : T)) which involve an embedded existential quantification cannot be translated into the Schönfinkel-Bernays class. This means that an unrestricted variant of our logic which does not restrict universal role quantification cannot be expressed within the Schönfinkel-Bernays class.

In order to put things more concretely, we provide a translation of every construct in our logic into the Schönfinkel-Bernays class. Let T be any extended feature term. Let x be a variable free in T.
Then T is consistent iff the formula (x = T)^δ is consistent, where δ is a translation function from our extended feature logic into the Schönfinkel-Bernays class. Here we provide only the essential definitions of δ:

  • (x = a)^δ = (x = a)
  • (x = ∼a)^δ = x ≠ a
  • (x = f : T)^δ = f(x, y) & (y = T)^δ & ∀y′(f(x, y′) → y = y′)   where y is a new variable
  • (x = ∃f : T)^δ = f(x, y) & (y = T)^δ   where y is a new variable
  • (x = ∀f : a)^δ = ∀y(f(x, y) → y = a)
  • (x = ∀f : ∼a)^δ = ∀y(f(x, y) → y ≠ a)
  • (x = f : {T_1, ..., T_n})^δ = f(x, x_1) & ... & f(x, x_n) & ∀y(f(x, y) → y = x_1 ∨ ... ∨ y = x_n) & (x_1 = T_1)^δ & ... & (x_n = T_n)^δ   where x_1, ..., x_n are new variables
  • (x = f : g(y) ⊔ h(z))^δ = ∀x_i(f(x, x_i) → g(y, x_i) ∨ h(z, x_i)) & ∀y_i(g(y, y_i) → f(x, y_i)) & ∀z_i(h(z, z_i) → f(x, z_i))
  • (x = f(y) # g(z))^δ = ∀y_i z_j(f(y, y_i) & g(z, z_j) → y_i ≠ z_j)
  • (x = S ⊓ T)^δ = (x = S)^δ & (x = T)^δ

These translation rules essentially mimic the decomposition rules given in Figure 2. Furthermore, for every atom a and every feature f in T we need the following axiom:

  • ∀x(¬f(a, x))

For every two distinct atoms a, b in T we need the axiom:

  • a ≠ b

Taking into account the NP-completeness result established earlier, this translation identifies an NP-complete subclass of formulae within the Schönfinkel-Bernays class which is suited for NL applications.

7 Related Work

Feature logics and concept languages such as KL-ONE are closely related families of languages [Nebel and Smolka, 1991]. The principal difference is that feature logics interpret attributive labels as functional binary relations, while concept languages interpret them as just binary relations. However, the integration of concept languages with feature logics has been problematic, due to the fact that while path equations do not lead to increased computational complexity in feature logic, the addition of role-value-maps (which are the relational analog of path equations) in concept languages causes undecidability [Schmidt-Schauß, 1989].
This blocks a straightforward integration of a variable-free concept language such as ALC [Schmidt-Schauß and Smolka, 1991] with a variable-free feature logic [Smolka, 1991]. In [Manandhar, 1993] the addition of variables, feature symbols and set descriptions to ALC is investigated, providing an alternative method for integrating concept languages and feature logics. It is shown that set descriptions can be translated into the so-called "number restrictions" available within concept languages such as BACK [von Luck et al., 1987]. However, the propositionally complete languages ALV and ALS investigated in [Manandhar, 1993] are PSPACE-hard languages which do not support set operations. The work described in this paper describes yet another unexplored dimension for concept languages: that of a restricted concept language with variables, feature symbols, set descriptions and set operations for which the consistency checking problem is within the complexity class NP.

8 Summary and Conclusions

In this paper we have provided an extended feature logic (excluding disjunctions and negations) with a range of constraints involving set descriptions. These constraints are set descriptions, fixed cardinality set descriptions, set-membership constraints, restricted universal role quantifications, set union, set intersection, subset and disjointness. We have given a model-theoretic semantics to our extended logic, which shows that a simple and elegant formalisation of set descriptions is possible if we add relational attributes to our logic, as opposed to just the functional attributes available in feature logic. For a realistic implementation of the logic described in this paper, further investigation is needed to develop concrete algorithms that are reasonably efficient in the average case.
The consistency checking procedure described in this paper abstracts away from algorithmic considerations, and clearly modest improvements to the basic algorithm suggested in this paper are feasible. However, a report on such improvements is beyond the scope of this paper. For applications within constraint-based grammar formalisms such as HPSG, minimally a type system [Carpenter, 1992] and/or a Horn-like extension [Höhfeld and Smolka, 1988] will be necessary. We believe that the logic described in this paper provides both a better picture of the formal aspects of current constraint-based grammar formalisms which employ set descriptions and, at the same time, a basis for building knowledge representation tools in order to support grammar development within these formalisms.

9 Acknowledgments

The work described here has been carried out as part of the EC-funded project LRE-61-061 RGR (Reusability of Grammatical Resources). A longer version of the paper is available in [Erbach et al., 1993]. The work described is a further development of the author's PhD thesis carried out at the Department of Artificial Intelligence, University of Edinburgh. I thank my supervisors Chris Mellish and Alan Smaill for their guidance. I have also benefited from comments by an anonymous reviewer and discussions with Chris Brew, Bob Carpenter, Jochen Dörre and Herbert Ruessink. The Human Communication Research Centre (HCRC) is supported by the Economic and Social Research Council (UK).

References

[Carpenter, 1992] Bob Carpenter. The Logic of Typed Feature Structures. Cambridge University Press, 1992.

[Carpenter, 1993] Bob Carpenter. ALE: The Attribute Logic Engine User's Guide, Version β. Technical report, Carnegie Mellon University, Pittsburgh, PA 15213, 1993.

[Dörre and Dorna, 1993] Jochen Dörre and Michael Dorna. CUF: A Formalism for Linguistic Knowledge Representation. Dyana-2 deliverable, IMS, Stuttgart, Germany, August 1993.
[Erbach et al., 1993] Gregor Erbach, Mark van der Kraan, Suresh Manandhar, M. Andrew Moshier, Herbert Ruessink, and Craig Thiersch. Specification of Datatypes. In Deliverable D.B of LRE-61-061 "The Reusability of Grammatical Resources". 1993.

[Garey and Johnson, 1979] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco, 1979.

[Höhfeld and Smolka, 1988] Markus Höhfeld and Gert Smolka. Definite relations over constraint languages. LILOG Report 53, IBM Deutschland, Stuttgart, Germany, October 1988.

[Hollunder and Nutt, 1990] B. Hollunder and W. Nutt. Subsumption Algorithms for Concept Languages. Research Report RR-90-04, German Research Center for Artificial Intelligence (DFKI), Stuhlsatzenhausweg 3, 6600 Saarbrücken 11, Germany, 1990.

[Johnson, 1991] Mark Johnson. Features and Formulae. Computational Linguistics, 17(2):131-151, June 1991.

[Kaplan and Bresnan, 1982] Ronald M. Kaplan and Joan Bresnan. Lexical-Functional Grammar: A formal system for grammatical representation. In Joan Bresnan, editor, The Mental Representation of Grammatical Relations, pages 173-281. MIT Press, Cambridge, Massachusetts, 1982.

[Kasper and Rounds, 1986] Robert Kasper and William Rounds. A logical semantics for feature structures. In 24th Annual Meeting of the Association for Computational Linguistics, Columbia University, New York, pages 257-265, 1986.

[Lewis, 1980] Harry R. Lewis. Complexity Results for Classes of Quantificational Formulae. Journal of Computer and System Sciences, 21:317-353, 1980.

[Manandhar, 1993] Suresh Manandhar. Relational Extensions to Feature Logic: Applications to Constraint Based Grammars. PhD thesis, Department of Artificial Intelligence, University of Edinburgh, 1993.

[Nebel and Smolka, 1991] Bernhard Nebel and Gert Smolka. Attributive description formalisms and the rest of the world.
Research Report RR-91-15, German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany, May 1991.

[Pollard and Moshier, 1990] Carl J. Pollard and M. Drew Moshier. Unifying partial descriptions of sets. In Philip P. Hanson, editor, Information, Language and Cognition. University of British Columbia Press, Vancouver, Canada, 1990. Vancouver Studies in Cognitive Science, no. 1.

[Pollard and Sag, 1987] Carl Pollard and Ivan Andrew Sag. Information-Based Syntax and Semantics: Volume 1 Fundamentals, volume 13 of Lecture Notes. Center for the Study of Language and Information, Stanford, CA, 1987.

[Pollard and Sag, 1992] Carl Pollard and Ivan Andrew Sag. Head-driven Phrase Structure Grammar: Volume 2. MIT Press, 1992. Forthcoming.

[Rounds, 1988] William C. Rounds. Set values for unification-based grammar formalisms and logic programming. Technical report, Center for the Study of Language and Information, Stanford, CA, 1988.

[Schmidt-Schauß and Smolka, 1991] Manfred Schmidt-Schauß and Gert Smolka. Attributive Concept Descriptions with Unions and Complements. Artificial Intelligence, 48:1-26, 1991. Also available as IWBS Report 68, IBM Germany, Scientific Center, IWBS, Stuttgart, Germany, June 1989.

[Schmidt-Schauß, 1989] Manfred Schmidt-Schauß. Subsumption in KL-ONE is undecidable. In First International Conference on Principles of Knowledge Representation and Reasoning, KR'89, Toronto, Canada, pages 421-431, May 1989.

[Smolka, 1991] Gert Smolka. A feature logic with subsorts. In Jürgen Wedekind and C. Rohrer, editors, Unification in Grammar. MIT Press, 1991. Also appeared as LILOG Report no. 33, IWBS, IBM Deutschland.

[Smolka, 1992] Gert Smolka. Feature constraint logics for unification grammars. Journal of Logic Programming, 12:51-87, 1992.

[von Luck et al., 1987] K. von Luck, B. Nebel, C. Peltason, and A. Schmiedel. The Anatomy of the BACK System.
KIT Report 41, Department of Computer Science, Technische Universität Berlin, Berlin, Germany, 1987.

[Zajac, 1992] Rémi Zajac. Inheritance and Constraint-Based Grammar Formalisms. Computational Linguistics, 18(2):159-182, 1992.
Multiset-Valued Linear Index Grammars: Imposing Dominance Constraints on Derivations

Abstract

This paper defines multiset-valued linear index grammar and unordered vector grammar with dominance links. The former models certain uses of multiset-valued feature structures in unification-based formalisms, while the latter is motivated by word order variation and by "quasi-trees", a generalization of trees. The two formalisms are weakly equivalent, and an important subset is at most context-sensitive and polynomially parsable.

Introduction

Early attempts to use context-free grammars (CFGs) as a mathematical model for natural language syntax have largely been abandoned; it has been shown that (under standard assumptions concerning the recursive nature of clausal embedding) the cross-serial dependencies found in Swiss German cannot be generated by a CFG (Shieber, 1985). Several mathematical models have been proposed which extend the formal power of CFGs, while still maintaining the formal properties that make CFGs attractive formalisms for formal and computational linguists, in particular, polynomial parsability and restricted weak generative capacity. These mathematical models include tree adjoining grammar (TAG) (Joshi et al., 1975; Joshi, 1985), head grammar (Pollard, 1984), combinatory categorial grammar (CCG) (Steedman, 1985), and linear index grammar (LIG) (Gazdar, 1988). These formalisms have been shown to be weakly equivalent to each other (Vijay-Shanker et al., 1987; Vijay-Shanker and Weir, 1994); we will refer to them as "LIG-equivalent formalisms". LIG is a variant of index grammar (IG) (Aho, 1968). Like CFG, IG is a context-free string rewriting system, except that the nonterminal symbols in a CFG are augmented with stacks of index symbols. The rewrite rules push or pop indices from the index stack. In an IG, the index stack is copied to all nonterminal symbols on the right-hand side of a rule.
In a LIG, the stack is copied to exactly one right-hand side nonterminal.¹

Owen Rambow, Université Paris 7, UFR Linguistique, TALANA, Case 7003, 2, Place Jussieu, F-75251 Paris Cedex 05, France (rambow@linguist.jussieu.fr)

¹Note that a LIG is not an IG that is linear (i.e., whose productions have at most one nonterminal on the right-hand side), but rather, it is a context-free grammar with linear indices (i.e., the indices are never copied).

While LIG-equivalent formalisms have been shown to provide adequate formal power for a wide range of linguistic phenomena (including the aforementioned Swiss German construction), the need for other mathematical formalisms has arisen in several unrelated areas. In this paper, we discuss three such cases. First, capturing several semantic and syntactic issues in unification-based formalisms leads to the use of multiset-valued feature structures. Second, word order facts from languages such as German, Russian, or Turkish cannot be derived by LIG-equivalent formalisms. Third, a generalization of trees to "quasi-trees" (Vijay-Shanker, 1992) in the spirit of D-Theory (Marcus et al., 1983) leads to the definition of a new formal system. In this paper, we introduce two new equivalent mathematical formalisms which provide adequate descriptions for these three phenomena.

The paper is structured as follows. First, we present the three phenomena in more detail. We then introduce multiset-valued LIG and present some formal properties. Thereafter, we introduce a second rewriting system and show that it is weakly equivalent to the LIG variant. We then briefly mention some related formalisms. We conclude with a brief summary.

Three Problems for LIG-Equivalent Formalisms

The three problems we present are of a rather different nature. The first arises from the way a linguistic problem is treated in a specific type of framework (unification-based formalisms). The second problem derives directly from linguistic data. The third problem is a formalism which has been motivated on independent, methodological grounds, but whose formal properties are unknown.

Multiset-Valued Feature Structures

HPSG (Pollard and Sag, 1987; Pollard and Sag, 1994) uses typed feature structures as its formal basis, which are Turing-equivalent. However, it is not necessarily the case that the full power of the system is used in the linguistic analyses that are expressed in it. HPSG analyses include information about constituent structure which can be represented as a context-free phrase-structure tree. In addition, various mechanisms have been proposed to handle certain linguistic phenomena that relate two nodes within this tree. One of these is a multiset-valued feature that is passed along the phrase-structure tree from daughter node to mother node. Multiset-valued features have been proposed for the SLASH feature which handles wh-dependencies (Pollard and Sag, 1994, Chapter 4), and for certain semantic purposes, including the representation of stored quantifiers in a mechanism similar to Cooper-storage. Another use may be the representation of anti-coreference constraints arising from Principle C of Binding Theory (be it that of (Chomsky, 1981) or of Pollard and Sag (1992)).

It is desirable to be able to assess the formal power of such a system, for both theoretical and practical reasons. Theoretically, it would be interesting if it turned out that the linguistic principles formulated in HPSG naturally lead to certain restricted uses of the unification-based formalism. Clearly this would represent an important insight into the nature of grammatical competence. On the practical side, formal equivalences can guide the building of applications such as parsers for existing HPSG grammars.
For example, it has been proposed that HPSG grammars can be "compiled" into TAGs in order to obtain a computationally more tractable system (Kasper, 1992), thus sidestepping the issue of building parsers for HPSG directly. However, LIG-equivalent formalisms cannot serve as targets for compilation in cases in which HPSG uses multiset-valued feature structures.

Word Order Variation

Becker et al. (1991) discuss scrambling, which is the permutation of verbal arguments in languages such as German, Korean, Japanese, Hindi, Russian, and Turkish. If there are embedded clauses, scrambling in many languages can affect arguments of more than one verb ("long-distance" scrambling).

(1) ... daß [den Kühlschrank]_i bisher noch niemand [t_i zu reparieren] versprochen hat
    ... that the refrigerator-ACC so far yet no-one-NOM to repair promised has
    ... that so far, no-one has promised to repair the refrigerator

Scrambling in German is "doubly unbounded" in the sense that there is no bound on the number of clause boundaries over which an element can scramble, and an element scrambled (long-distance or not) from one clause does not preclude the scrambling of an element from another clause:

(2) ... daß [dem Kunden]_i [den Kühlschrank]_j bisher noch niemand t_i [[t_j zu reparieren] zu versuchen] versprochen hat
    ... that the client-DAT the refrigerator-ACC so far yet no-one-NOM to repair to try promised has
    ... that so far, no-one yet has promised the client to try to repair the refrigerator

Similar data has been observed in the literature for other languages, for example for Finnish by Karttunen (1989). Becker et al. (1991) argue that a simple TAG (and the other LIG-equivalent formalisms) cannot derive the full range of scrambled sentences. Rambow and Satta (1994) propose the use of unordered vector grammar (UVG) to model the data.
In UVG (Cremers and Mayer, 1973), several context-free string rewriting rules are grouped into vectors, as for verspricht 'promises':

(3) ((S → NP_nom VP), (VP → NP_dat VP), (VP → S_inf V), (V → verspricht))

During a derivation, rules from a vector can be applied in any order, and rules from different vectors can be interleaved, but at the end, all rules from an instance of a vector must have been used in the derivation. By varying the order in which rules from different vectors are applied, we can derive different word orders. Observe that the vector in (3) contains exactly one terminal symbol (the verb); grammars in which every elementary structure (vector in UVG, tree in TAG, rule in CFG) contains at least one terminal symbol we will call lexicalized.

Languages generated by UVG are known to be context-sensitive and semilinear (Cremers and Mayer, 1974) and polynomially parsable (Satta, 1993). However, they are not adequate for modeling natural language syntax. In the following example, (4a) is out since there is no analysis in which the moved NP c-commands its governing verb, as is the case in (4b).

(4) a. * ... daß niemand [dem Kunden] [t_i zu versuchen] [den Kühlschrank]_j versprochen hat [t_j zu reparieren]_i
        ... that no-one-NOM the client-DAT to try the refrigerator-ACC promised has to repair
    b. ? ... daß niemand [dem Kunden] [den Kühlschrank]_j [t_i zu versuchen] versprochen hat [t_j zu reparieren]_i

What is needed is an additional mechanism that enforces a dominance relation between the sister node of an argument and its governing verb.

Quasi-Trees

Vijay-Shanker (1992) introduces "quasi-trees" as a generalization of trees.
He starts from the observation that the traditional definition of tree adjoining grammar (TAG) is incompatible with a unification-based approach because the trees of a TAG start out as fully specified objects, which are later modified; in particular, immediate dominance relations in a tree need not hold after another tree is adjoined into it. In order to arrive at a definition that is compatible with a unification-based approach, he makes three minimal assumptions about the nature of the objects used for the representation of natural language syntax. The first assumption (left implicit) is that these objects represent phrase structure. The second assumption is that they "give a sufficiently enlarged domain of locality that allows localization of dependencies such as subcategorization, and filler-gap" (Vijay-Shanker, 1992, p. 486). The third assumption is that dominance relations can be stated between different parts of the representation. These assumptions lead Vijay-Shanker to define quasi-trees, which are partial descriptions of trees in which "quasi-nodes" (partial descriptions of nodes) are related by dominance constraints. Each node in a traditional tree (as used in TAG) corresponds to two quasi-nodes, a top and a bottom version, such that the top dominates the bottom.

There are two ways of interpreting quasi-trees: either quasi-trees can be seen as data structures in their own right, or quasi-trees can be seen as descriptions of trees whose denotations are sets of (regular) trees. If quasi-trees are defined as data structures, we can define operations such as adjunction and substitution and notions such as "derived structure". More precisely, we define quasi-trees to be structures consisting of pairs of nodes, called quasi-nodes, such that one is the "top" quasi-node and the other is the "bottom" quasi-node.
Bottom quasi-nodes immediately dominate top quasi-nodes of other quasi-node pairs, and each top quasi-node is immediately dominated by exactly one bottom quasi-node. For simplicity, we will assume that there is only a bottom root quasi-node (i.e., no top root quasi-node), and that bottom frontier quasi-nodes are omitted (i.e., frontier nodes just consist of top quasi-nodes). Furthermore, we will assume that each quasi-node has a label, and is equipped with a finite feature structure. A sample quasi-tree is shown in Figure 1 (quasi-tree α5 of Vijay-Shanker (1992, p. 488)).

[Figure 1: Sample quasi-tree]

We follow Vijay-Shanker (1992, Section 2.5) in defining substitution as the operation of forming a quasi-node pair from a frontier node of one tree (which becomes the top node) and the root node of another tree (which becomes the bottom node). As always, a dominance link relates the two quasi-nodes of the newly formed pair. Adjunction is not defined separately: it suffices to say that a pair of quasi-nodes is "broken up", thus forming two quasi-trees. We then perform two substitutions. Observe that nothing keeps us from breaking up more than one pair of quasi-nodes in either of two quasi-trees, and then performing more than two substitutions (as long as dominance constraints are respected); there are no operations in regular TAG that correspond to such operations. We will say that a quasi-tree is derived if in all quasi-node pairs the two quasi-nodes are equated, meaning that they have the same label and the two feature structures are unified, and furthermore, if all frontier quasi-nodes have terminal labels. The string associated with this quasi-tree is defined in the usual way.

We have now fully defined a formalism (if informally): its data structures (quasi-trees), its combination operation (substitution), and the notion of derived structure. We will call this formalism Quasi-Tree Substitution Grammar (QTSG).
It can easily be seen that all examples discussed by Vijay-Shanker (1992) are derivations in QTSG. The question arises as to the formal and computational properties of QTSG.

Multiset-Valued LIG

In order to find a mathematical model for certain uses of multiset-valued feature structures, discussed above, we now introduce a multiset-valued variant of LIG. We denote by M(A) the set of multisets over the elements of A, and we use the standard set notation to refer to the corresponding multiset operations.

Definition 1 A multiset-valued Linear Index Grammar ({}-LIG) is a 5-tuple (V_N, V_T, V_I, P, S), where V_N, V_T, and V_I are disjoint sets of nonterminals, terminals, and indices, respectively; S ∈ V_N is the start symbol; and P is a set of productions of the following form:

  p : As → v_0 B_1 s_1 v_1 ... v_{n-1} B_n s_n v_n

for some n ≥ 0, A, B_1, ..., B_n ∈ V_N, s, s_1, ..., s_n multisets of members of V_I, and v_0, ..., v_n ∈ V_T*.

The derivation relation ⟹ for a {}-LIG is defined as follows. Let ζ, γ ∈ (V_N M(V_I) ∪ V_T)*, t, t_1, ..., t_n multisets of members of V_I, and p ∈ P of the form given above. Then we have

  ζ A t γ ⟹ ζ v_0 B_1 t_1 v_1 ... v_{n-1} B_n t_n v_n γ

such that t = ⋃_{i=1}^{n} (t_i \ s_i) ∪ s. If G is a {}-LIG, L(G) = {w | S ⟹*_G w, w ∈ V_T*}.
However, the definition pro- vided here (using an explicit representation of multi- sets) has the advantage of corresponding more directly to the intuition underlying {}-LIG and is much easier to understand and use in proofs. The issue is purely notational. We now introduce a restriction on derivations, which will be useful later. Definition 2 A linearly-restrlcted derivation in a {}-LIG is a derivation 0 : S ~ w with w E V~. such that: I. The number of index symbols added (and hence re- moved) during the derivation is linearly bounded by Iwl. 2. The number ore-productions used during the deriva- tion is linearly bounded by Iwl. We let LR(G) = {w I there is aderivation e : S ~ w such that 0 is linearly-restricted}, and we let £R({}-LIG) = {LMG ) [ G a {}-LIG}. If G is a {}-LIG such that LR(G) = L(G), we say that G is linearly restricted. Many of the results that we will show ap- ply only to linearly restricted {}-LIGs. However, as we will see, all linguistic applications will make use of this restricted version. EXAMPLE 1 The following grammar derives the language COUNT-5, where COUNT-5 = {anbncndne n In > 0}. Let G1 = (VN, VT, VI, P, S) with: VN = {S,A,B,C,D,E} V T = {a,b,c,d,e} ¼ = {s~,Sb, S~,Sd,S~} P = {PI :S > S{Sa,Sb, Sc,Sd, S~} P2 : S ~ ABCDE P3 : A{s~} ~ Aa, P4 : A--~ E p5 : B{Sb } > Bb, P6 : B -----+ E pT : C{s~} ----r Cc, ps : C ~ e p9 : D{sd} > Dd, Plo : D > Pll : E{se} ) Ee, Pl~ : E > e } A sample derivation is shown in Figure 2. This example shows that Z:({}-LIG) is not contained in/:(LIG), since the latter cannot derive COUNT-5. We now define two normal forms which will be used later. We omit the proofs and refer to (Rambow, 1994) for details. Definition 3 A {}-LIG G = (VN, VT, VI, P, S) is in restricted index normal form or RINF if all pro- ductions in P are of one of the following forms 'where A, B E VN, f E VI and a E (VTU VN)*): 1. A )a g. A )Bf 3. AI ~B Theorem 1 For any {}-LIG, there is an equivalent {}-LIG in RINF. 
Definition 4 A {}-LIG G = (V_N, V_T, V_I, P, S) is in Extended Two Form (ETF) if every production in P has the form As → B_1 s_1 B_2 s_2, As → B s′, or A → a, where A, B, B_1, B_2 ∈ V_N, s, s_1, s_2, s′ are multisets of members of V_I, and a ∈ V_T ∪ {ε}.

Theorem 2 For any {}-LIG, there is an equivalent {}-LIG in ETF.

We now discuss some formal properties of {}-LIG. For reasons of space limitation, we only sketch the proofs; full versions can be found in (Rambow, 1994). We start with the weak generative power. We have already seen that {}-LIG can generate languages not in L(LIG) (and hence not in L(TAG)). We will now show that linearly restricted {}-LIGs are at most context-sensitive.

Theorem 3 L_R({}-LIG) ⊆ L(CSG).

Outline of the proof. We simulate a derivation in a linear bounded automaton. The space needed for this is bounded linearly in the length of the input word, since the number of the symbols that are erased, the index symbols and nonterminals that rewrite to ε, is linearly bounded. •

What sort of languages could a {}-LIG possibly not generate? Consider the copy language L = {ww | w ∈ {a, b}*}, and let us suppose that it is generated by G, a {}-LIG. This language cannot be generated by a CFG. We therefore know that for any integer M, there are infinitely many strings in L whose derivation in G is such that at some point, an index multiset in the sentential form contains more than M index symbols (since any finite use of index symbols can be simulated by a pure CFG). It must be the case that this unbounded multiset is crucial in restricting the second half of the generated string in such a way that it copies the first half (again, since a pure CFG cannot derive such strings). However, it is impossible for a data structure like a (multi)set (over a finite index alphabet) to record the required sequential information. Therefore, the second half of the string cannot be adequately constrained, and G cannot exist. This argument motivates the following conjecture.
Conjecture 4 {ww | w ∈ {a, b}*} is not in L({}-LIG).

Figure 2: Sample derivation in {}-LIG G1:

  S ⟹ S{s_a, s_b, s_c, s_d, s_e}
    ⟹* S{s_a, s_b, s_c, s_d, s_e, s_a, s_b, s_c, s_d, s_e, s_a, s_b, s_c, s_d, s_e}
    ⟹ A{s_a, s_a, s_a} B{s_b, s_b, s_b} C{s_c, s_c, s_c} D{s_d, s_d, s_d} E{s_e, s_e, s_e}
    ⟹* aaabbbcccdddeee

We now turn to closure properties.

Theorem 5 L({}-LIG) is a substitution-closed full abstract family of languages (AFL).

Outline of the proof. Since L({}-LIG) contains all context-free languages, it contains all regular languages, and therefore it is sufficient to show that L({}-LIG) is closed under intersection with regular languages and substitution. These results are shown by adapting the techniques used to show the corresponding results for CFGs. •

Finally, we turn to the recognition and parsing problem. Again, we will restrict our attention to the linearly restricted version of {}-LIG.

Theorem 6 Each language in L_R({}-LIG) can be recognized in polynomial deterministic time.

Outline of the proof. We extend the CKY parser for CFG. Let G be a {}-LIG in ETF. Since G may contain ε-productions, the algorithm is adapted by letting the indices of the matrix refer to positions between symbols in the input string, not the symbols themselves. In order to account for the index multiset, we let the entries in the recognition matrix be pairs consisting of a nonterminal symbol and a |V_I|-tuple of integers:

  (A, (n_1, ..., n_|V_I|))

The |V_I|-tuple of integers represents a multiset, with each integer designating the number of copies of a given index symbol that the set contains. In an entry of the matrix, each pair represents a partial derivation of a substring of the input string. More precisely, if the input word is a_1 ... a_n, and if V_I = {i_1, ..., i_|V_I|}, then we have (A, (n_1, ..., n_|V_I|)) in entry t_{i,j} of the recognition matrix if and only if there is a derivation As ⟹* a_{i+1} ... a_j, where multiset s contains n_k copies of index symbol i_k, 1 ≤ k ≤ |V_I|. Clearly, there is a derivation in the grammar if and only if entry t_{0,n} contains the pair (S, (0, ..., 0)). Now since the grammar is linearly restricted, each n_k is bounded by n, and hence the number of different pairs is bounded by |V_N| n^|V_I|. Thus each entry in the matrix can be computed in O(n^{1+2|V_I|}) steps, and since there are O(n^2) entries, we get an overall time complexity of O(n^{3+2|V_I|}). •

UVG with Dominance Links

We now formally define UVG with dominance links (UVG-DL), which serves as a formal model for the second and third phenomena introduced above, word order variation and quasi-trees. The definition differs from that of UVG only in that vectors are equipped with dominance links, which impose an additional condition on derivations. Note that the definition refers to the notion of derivation tree of a UVG, which is defined as for CFG.

Definition 5 An Unordered Vector Grammar with Dominance Links (UVG-DL) is a 4-tuple (V_N, V_T, V, S), where V_N and V_T are sets of nonterminals and terminals, respectively, S is the start symbol, and V is a set of vectors of context-free productions equipped with dominance links. For a given vector v ∈ V, the dominance links form a binary relation dom_v over the set of occurrences of nonterminals in the productions of v such that if dom_v(A, B), then A (an instance of a symbol) occurs in the right-hand side of some production in v, and B is the left-hand symbol (instance) of some production in v.

If G is a UVG-DL, L(G) consists of all words w ∈ V_T* which have a derivation μ of the form

  S ⟹_{p_1} ... ⟹_{p_r} w

such that μ meets the following two conditions:

1. p_1 p_2 ... p_r is a permutation of a member of V*.
2. The dominance relations of V, when interpreted as the standard dominance relation defined on trees, hold in the derivation tree of μ.
The second condition can be formulated as follows: if v in V contributes instances of productions p_1 and p_2 (and perhaps others), and the kth daughter in the right-hand side of p_1 dominates the left-hand nonterminal of p_2, then in the context-free derivation tree associated with ρ, (the unique node associated with) the kth daughter node of p_1 dominates (the unique node associated with) p_2. We now give an example. (The superscripts distinguish instances of symbols and are not part of the nonterminal alphabet.)

EXAMPLE 2 Let G2 = (V_N, V_T, V, S') with:

V_N = {S', VP, NPnom, NPdat, NPacc}
V_T = {daß, verspricht, zu versuchen, zu reparieren, der Meister, niemandem, den Kühlschrank}(2)
V = {v1, v2, v3, v4, v5, v6, v7}, where the vi are as defined in Figure 3.

Figure 3: Definition of V for UVG-DL G2

v1: {(S' → daß VP)} with dom_v1 = ∅
v2: {(VP(1) → NPnom VP(2)), (VP(3) → NPdat VP(4)), (VP(5) → VP(6) VP(7)), (VP(8) → verspricht)} with dom_v2 = {(VP(2), VP(3)), (VP(4), VP(5)), (VP(7), VP(8))}
v3: {(VP(1) → VP(2) VP(3)), (VP(4) → zu versuchen)} with dom_v3 = {(VP(3), VP(4))}
v4: {(VP(1) → NPacc VP(2)), (VP(3) → zu reparieren)} with dom_v4 = {(VP(2), VP(3))}
v5: {(NPnom → der Meister)} with dom_v5 = ∅
v6: {(NPdat → niemandem)} with dom_v6 = ∅
v7: {(NPacc → den Kühlschrank)} with dom_v7 = ∅

A sample derivation is shown in Figure 4, where the dominance relations are shown by dotted lines.

Figure 4: Sample UVG-DL derivation (dominance links shown as dotted lines)

Observe that the example grammar is lexicalized. We will denote the class of lexicalized UVG-DL by UVG-DL_Lex.
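To make Definition 5 more concrete, a vector of productions can be represented directly as data. The sketch below is a hypothetical encoding, not notation from the paper: it models a two-production vector in the style of v4 and checks condition 1 on derivations, namely that the multiset of productions used decomposes into whole vectors.

```python
from collections import Counter

# A production is an (lhs, rhs) pair; rhs is a tuple of symbols.
# A vector is a tuple of productions. Dominance links are omitted here,
# since condition 1 of Definition 5 looks only at the productions used.
P_NP   = ("VP", ("NPacc", "VP"))      # in the style of VP -> NPacc VP
P_VERB = ("VP", ("zu reparieren",))   # in the style of VP -> zu reparieren
v4 = (P_NP, P_VERB)

def decomposes(used, vectors):
    """True iff the multiset `used` of production applications can be
    written as a union of whole vectors (condition 1 on derivations)."""
    if not used:
        return True
    for v in vectors:
        need = Counter(v)
        if all(used[p] >= k for p, k in need.items()):
            if decomposes(used - need, vectors):
                return True
    return False

assert decomposes(Counter([P_NP, P_VERB]), [v4])   # all of v4 is used
assert not decomposes(Counter([P_NP]), [v4])       # half a vector: rejected
```

Condition 2 would additionally require checking the dominance links against the derivation tree, which this sketch does not attempt.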
It is clear that the dominance links of UVG-DL are the additional constraints that we argued above are necessary to adequately restrict the structural relation between arguments and their verbs. Furthermore, UVG-DL is a notational variant of QTSG: every vector represents a quasi-tree, and identifying quasi-nodes corresponds to rewriting. The condition on a successful derivation in QTSG, that all nonterminal nodes be identified, corresponds to the definition of a derivation in UVG-DL. We have therefore found a mathematical model for the second and third phenomenon mentioned in Section 2.

(2) Gloss (in order): that, promises, to try, to repair, the master, no-one, the refrigerator.

We now turn to the formal properties of UVG-DL. Our main result is that UVG-DL is weakly equivalent to {}-LIG. The sets of a {}-LIG implement the dominance links and make sure that all members from one set of rules are used during a derivation. We first introduce some more terminology with which to describe the derivations of UVG-DLs. If two productions p_{v,1} and p_{v,2} from vector v are linked by a dominance link from a right-hand side nonterminal of p_{v,1} to the left-hand nonterminal of p_{v,2}, then we will denote this link by l_{v,1,2}. We will say that p_{v,1} (or the right-hand side nonterminal in question) has a passive dominance requirement of l_{v,1,2}, and that p_{v,2} has an active dominance requirement of l_{v,1,2}. If p_{v,1} or p_{v,2} is used in a partial derivation such that the other production is not used in the derivation, the dominance requirement (passive or active) will be called unfulfilled. Let ρ be a (partial) derivation. We associate with ρ a multiset which represents all the unfulfilled active dominance requirements of ρ, written T(ρ).

Theorem 7 L(UVG-DL) = L({}-LIG)

Outline of the proof. The theorem is proved in two parts (one for each inclusion). We first show the inclusion L(UVG-DL) ⊆ L({}-LIG).
Let G = (V_N, V_T, V, S) be a UVG-DL, where V = {v_1, ..., v_K} with v_i = (p_{i,1}, ..., p_{i,k_i}), k_i = |v_i|, 1 ≤ i ≤ K. We construct a {}-LIG G' = (V_N, V_T, V_I, P, S). Let V_I = {l_{i,j,k} | 1 ≤ i ≤ K, 1 ≤ j, k ≤ k_i}. Define P as follows. Let v be in V, and let p in v be the production A → w_0 B_1 w_1 ... B_n w_n. In the following, we will denote by T(p) the multiset of active dominance requirements of p, and by ⊥_i(p) the multiset of passive dominance requirements of B_i, 1 ≤ i ≤ n. Add to P the following production:

A T(p) → w_0 B_1 ⊥_1(p) w_1 ... B_n ⊥_n(p) w_n

P contains no other productions. We show by induction that for A in V_N, and w in V_T*, we have A ⇒*_G w iff A ⇒*_{G'} w. Specifically, we show that for all integers k, ρ : A ⇒^k_G w, w ∈ V_T*, with unfulfilled active dominance requirements T(ρ), implies that there is a derivation A T(ρ) ⇒*_{G'} w, and, conversely, we show that for all integers k, A t ⇒^k_{G'} w, A ∈ V_N, t a multiset of elements of V_I, and w ∈ V_T*, implies that there is a derivation ρ : A ⇒*_G w such that T(ρ) = t.

For the inclusion L({}-LIG) ⊆ L(UVG-DL), we take a slightly different approach to avoid notational complexity. Let G = (V_N, V_T, V_I, P, S) be a {}-LIG in RINF. We construct a UVG-DL G' = (V_N, V_T, V, S), where V is defined as follows:

1. If p ∈ P is a {}-LIG production of RINF type 1, then ((p), ∅) ∈ V.
2. If p ∈ P is a {}-LIG production of RINF type 2, with p = A → Bf for A, B ∈ V_N, f ∈ V_I, then for all q ∈ P such that q = Cf → D, v = ((A → B, C → D), dom_v(B, C)) is in V.

Let A be in V_N, and w in V_T*. We show by induction that S ⇒*_G w iff S ⇒*_{G'} w. Specifically, we first show that for all integers k, for all {}-LIGs G and the corresponding UVG-DL G' as constructed above, if there is a derivation θ : S{} ⇒*_G w with k instances of applications of rules of type 2, then there is a derivation θ' : S ⇒*_{G'} w such that θ and θ' are identical except for the index symbols in the sentential forms of θ.
For the converse inclusion, we show that for all integers k, for all {}-LIGs G and the corresponding UVG-DL G' as constructed above, if there is a derivation θ' : S ⇒*_{G'} w with k instances of applications of rules from vectors with two elements, then there is a derivation θ : S{} ⇒*_G w such that θ and θ' are identical except for the index symbols in the sentential forms of θ. •

This equivalence lets us transfer results from {}-LIG to UVG-DL. It can easily be seen from the construction employed in the proof of Theorem 7 that a lexicalized UVG-DL maps to a linearly restricted {}-LIG. For linguistic purposes we are only interested in lexicalized grammars, and therefore the linear restriction is quite natural. We obtain the following corollaries thanks to Theorem 7.

Corollary 8 L(UVG-DL_Lex) ⊆ L(CSG).

Corollary 9 L(UVG-DL) is a substitution-closed full AFL.

Corollary 10 Each language in L(UVG-DL_Lex) can be recognized in polynomial deterministic time.

Related Formalisms

Based on word-order facts from Turkish, Hoffman (1992) proposes an extension to CCG called {}-CCG, in which arguments of functors form sets, rather than being represented in a curried notation. Under function composition, these sets are unioned. Thus the move from CCG to {}-CCG corresponds very much to the move from LIG to {}-LIG. We conjecture that (a version of) {}-CCG is weakly equivalent to {}-LIG.

Staudacher (1993) defines a related system called distributed index grammar or DIG. DIG is like LIG, except that the stack of index symbols can be split into chunks and distributed among the daughter nodes. However, the formalism is not convincingly motivated by the linguistic data given (which can also be handled by a simple LIG) or by other considerations.

Several extensions to {}-LIG and UVG-DL are defined in (Rambow, 1994). First, we can introduce the "integrity" constraint suggested by Becker et al. (1991), which restricts long-distance relations through nodes.
This is necessary to implement the linguistic notion of "barrier" or "island". Second, we can define the tree-rewriting version of UVG-DL, called V-TAG. This is motivated by Conjecture 4, which (if true) means that UVG-DL cannot derive Swiss German. Under either extension, the weak generative power is extended, but the formal and computational results obtained for {}-LIG and UVG-DL still hold.

Conclusion

This paper has presented two equivalent formalisms, {}-LIG and UVG-DL, which provide formal models for the three different phenomena that we identified in the beginning of the paper. We have shown that both formalisms, under certain restrictions that are compatible with the motivating phenomena, are restricted in their generative capacity and polynomially parsable, thus making them attractive candidates for modeling natural language. Furthermore, the formalisms are substitution-closed AFLs, suggesting that the definitions we have given are "natural" from the point of view of formal language theory.

Acknowledgments

I would like to thank Bob Kasper, Gaëlle Recourcé, Giorgio Satta, Ed Stabler, two anonymous reviewers, and especially K. Vijay-Shanker for useful comments and discussions. The research reported in this paper was conducted while the author was with the Computer and Information Science Department of the University of Pennsylvania. The research was sponsored by the following grants: ARO DAAL 03-89-C-0031; DARPA N00014-90-J-1863; NSF IRI 90-16592; and Ben Franklin 91S.3078C-1.

Bibliography

Aho, A. V. (1968). Indexed grammars - an extension to context free grammars. J. ACM, 15:647-671.

Becker, Tilman; Joshi, Aravind; and Rambow, Owen (1991). Long distance scrambling and tree adjoining grammars. In Fifth Conference of the European Chapter of the Association for Computational Linguistics (EACL'91), pages 21-26. ACL.

Chomsky, Noam (1981). Lectures in Government and Binding. Studies in generative grammar 9. Foris, Dordrecht.

Cremers, A. B.
and Mayer, O. (1973). On matrix languages. Information and Control, 23:86-96.

Cremers, A. B. and Mayer, O. (1974). On vector languages. J. Comput. Syst. Sci., 8:158-166.

Gazdar, G. (1988). Applicability of indexed grammars to natural languages. In Reyle, U. and Rohrer, C., editors, Natural Language Parsing and Linguistic Theories. D. Reidel, Dordrecht.

Hoffman, Beryl (1992). A CCG approach to free word order languages. In 30th Meeting of the Association for Computational Linguistics (ACL'92).

Joshi, Aravind; Levy, Leon; and Takahashi, M. (1975). Tree adjunct grammars. J. Comput. Syst. Sci., 10:136-163.

Joshi, Aravind K. (1985). How much context-sensitivity is necessary for characterizing structural descriptions -- Tree Adjoining Grammars. In Dowty, D.; Karttunen, L.; and Zwicky, A., editors, Natural Language Processing -- Theoretical, Computational and Psychological Perspective, pages 206-250. Cambridge University Press, New York, NY. Originally presented in 1983.

Karttunen, Lauri (1989). Radical lexicalism. In Baltin, Mark and Kroch, Anthony S., editors, Alternative conceptions of phrase structure, pages 43-65. University of Chicago Press, Chicago.

Kasper, Robert (1992). Compiling head-driven phrase structure grammar into lexicalized tree adjoining grammar. Presented at the TAG+ Workshop, University of Pennsylvania.

Marcus, Mitchell; Hindle, Donald; and Fleck, Margaret (1983). D-theory: Talking about talking about trees. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, Cambridge, MA.

Pollard, Carl (1984). Generalized phrase structure grammars, head grammars and natural language. PhD thesis, Stanford University, Stanford, CA.

Pollard, Carl and Sag, Ivan (1987). Information-Based Syntax and Semantics. Vol 1: Fundamentals. CSLI.

Pollard, Carl and Sag, Ivan (1992). Anaphors in English and the scope of binding theory. Linguistic Inquiry, 23(2):261-303.

Pollard, Carl and Sag, Ivan (1994).
Head-Driven Phrase Structure Grammar. University of Chicago Press, Chicago. Draft distributed at the Third European Summer School in Language, Logic and Information, Saarbrücken, 1991.

Rambow, Owen (1994). Formal and Computational Models for Natural Language Syntax. PhD thesis, Department of Computer and Information Science, University of Pennsylvania, Philadelphia.

Rambow, Owen and Satta, Giorgio (1994). A rewriting system for free word order syntax that is non-local and mildly context sensitive. In Martín-Vide, Carlos, editor, Current Issues in Mathematical Linguistics, North-Holland Linguistic series, Volume 56. Elsevier-North Holland, Amsterdam.

Satta, Giorgio (1993). Recognition of vector languages. Unpublished manuscript, Università di Venezia.

Shieber, Stuart B. (1985). Evidence against the context-freeness of natural language. Linguistics and Philosophy, 8:333-343.

Staudacher, Peter (1993). New frontiers beyond context-freeness: DI-grammars and DI-automata. In Sixth Conference of the European Chapter of the Association for Computational Linguistics (EACL'93).

Steedman, Mark (1985). Dependency and coordination in the grammar of Dutch and English. Language, 61.

Vijay-Shanker, K. (1992). Using descriptions of trees in a Tree Adjoining Grammar. Computational Linguistics, 18(4):481-518.

Vijay-Shanker, K. and Weir, David (1994). The equivalence of four extensions of context-free grammars. Math. Syst. Theory. Also available as Technical Report CSRP 236 from the University of Sussex, School of Cognitive and Computing Sciences.

Vijay-Shanker, K.; Weir, D.J.; and Joshi, A.K. (1987). Characterizing structural descriptions produced by various grammatical formalisms. In 25th Meeting of the Association for Computational Linguistics (ACL'87), Stanford, CA.
1994
36
Optimality Theory: Universal Grammar, Learning and Parsing Algorithms, and Connectionist Foundations (Abstract)

Paul Smolensky and Bruce Tesar
Department of Computer Science and Institute of Cognitive Science
University of Colorado, Boulder USA

We present a recently proposed theory of grammar, Optimality Theory (OT; Prince & Smolensky 1991, 1993). The principles of OT derive in large part from the high-level principles governing computation in connectionist networks. The talk proceeds as follows: (1) we summarize OT and its applications to UG. Then we present (2) learning and (3) parsing algorithms for OT. Finally, (4) we show how crucial elements of OT emerge from connectionism, and discuss the one central feature of OT which so far eludes connectionist explanation.

(1) In OT, UG provides a set of highly general universal constraints which apply in parallel to assess the well-formedness of possible structural descriptions of linguistic inputs. The constraints may conflict, and for most inputs no structural description meets them all. The grammatical structure is the one that optimally meets the conflicting constraint sets. Optimality is defined on a language-particular basis: each language's grammar ranks the universal constraints in a dominance hierarchy such that each constraint has absolute priority over all lower-ranked constraints. Given knowledge of UG, the job of the learner is to determine the constraint ranking which is particular to his or her language. [The explanatory power of OT as a theory of UG has now been attested for phonology in over two dozen papers and books (e.g., McCarthy & Prince 1993; Rutgers Optimality Workshop, 1993); applications of OT to syntax are now being explored (e.g., Legendre, Raymond, & Smolensky 1993; Grimshaw 1993).]

(2) Learnability of OT (Tesar & Smolensky, 1993). Theories of UG can be used to address questions of learnability via the formal universal principles they provide, or via their substantive universals.
We will show that OT endows UG with sufficiently tight formal structure to yield a number of strong learnability results at the formal level. We will present a family of closely related algorithms for learning, from positive examples only, language-particular grammars on the basis of prior knowledge of the universal principles. We will sketch our proof of the correctness of these algorithms and demonstrate their low computational complexity. (More precisely, the learning time in the worst case, measured in terms of 'informative examples', grows only as n^2, where n is the number of constraints in UG, even though the number of possible grammars grows as n!, i.e., faster than exponentially.) Because these results depend only on the formal universals of OT, and not on the content of the universal constraints which provide the substantive universals of the theory, the conclusion that OT grammars are highly learnable applies equally to OT grammars in phonology, syntax, or any other grammar component.

(3) Parsing in OT is assumed by many to be problematic. For OT is often described as follows: take an input form, generate all possible parses of it (generally, infinite in number), evaluate all the constraints against all the parses, filter the parses by descending the constraints in the dominance hierarchy. While this correctly characterizes the input/output function which is an OT grammar, it hardly provides an efficient parsing procedure. We will show, however, that efficient, provably correct parsing by dynamic programming is possible, at least when the set of candidate parses is sufficiently simple (Tesar, 1994).

(4) OT is built from a set of principles, most of which derive from high-level principles of connectionist computation.
The most central of these assert that, given an input representation, connectionist networks tend to compute an output representation which best satisfies a set of conflicting soft constraints, with constraint conflicts handled via a notion of differential strength. Formalized through Harmony Theory (Smolensky, 1986) and Harmonic Grammar (Legendre, Miyata, & Smolensky 1990), this conception of computation yields a theory of grammar based on optimization. Optimality Theory introduces a non-numerical form of optimization, made possible by a property as yet unexplained from the connectionist perspective: in grammars, constraints fall into strict domination hierarchies.
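Strict domination, as described in (1), amounts to comparing candidates' constraint-violation profiles lexicographically, highest-ranked constraint first. The sketch below illustrates this with hypothetical constraint names and toy violation counts; it is not taken from the talk.

```python
# Under strict domination, one violation of a higher-ranked constraint
# outweighs any number of violations of lower-ranked ones; ordering the
# violation counts by rank and comparing tuples lexicographically gives
# exactly this behavior.
RANKING = ["Onset", "NoCoda", "Faith"]   # hypothetical language-particular ranking

def profile(candidate):
    # Violation counts ordered by the constraint ranking.
    return tuple(candidate["violations"][c] for c in RANKING)

def optimal(candidates):
    # The grammatical structure is the candidate with the best profile.
    return min(candidates, key=profile)

candidates = [
    {"form": "ta",  "violations": {"Onset": 0, "NoCoda": 0, "Faith": 1}},
    {"form": "tak", "violations": {"Onset": 0, "NoCoda": 1, "Faith": 0}},
]
assert optimal(candidates)["form"] == "ta"   # NoCoda dominates Faith here
```

Reranking the constraints (the learner's job in (2)) changes which candidate wins, without changing the constraints themselves.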
1994
37
Similarity-Based Estimation of Word Cooccurrence Probabilities

Ido Dagan, Fernando Pereira
AT&T Bell Laboratories
600 Mountain Ave.
Murray Hill, NJ 07974, USA
dagan@research.att.com, pereira@research.att.com

Lillian Lee
Division of Applied Sciences
Harvard University
33 Oxford St.
Cambridge, MA 02138, USA
llee@das.harvard.edu

Abstract

In many applications of natural language processing it is necessary to determine the likelihood of a given word combination. For example, a speech recognizer may need to determine which of the two word combinations "eat a peach" and "eat a beach" is more likely. Statistical NLP methods determine the likelihood of a word combination according to its frequency in a training corpus. However, the nature of language is such that many word combinations are infrequent and do not occur in a given corpus. In this work we propose a method for estimating the probability of such previously unseen word combinations using available information on "most similar" words.

We describe a probabilistic word association model based on distributional word similarity, and apply it to improving probability estimates for unseen word bigrams in a variant of Katz's back-off model. The similarity-based method yields a 20% perplexity improvement in the prediction of unseen bigrams and statistically significant reductions in speech-recognition error.

Introduction

Data sparseness is an inherent problem in statistical methods for natural language processing. Such methods use statistics on the relative frequencies of configurations of elements in a training corpus to evaluate alternative analyses or interpretations of new samples of text or speech. The most likely analysis will be taken to be the one that contains the most frequent configurations. The problem of data sparseness arises when analyses contain configurations that never occurred in the training corpus. Then it is not possible to estimate probabilities from observed frequencies, and some other estimation scheme has to be used.

We focus here on a particular kind of configuration, word cooccurrence. Examples of such cooccurrences include relationships between head words in syntactic constructions (verb-object or adjective-noun, for example) and word sequences (n-grams). In commonly used models, the probability estimate for a previously unseen cooccurrence is a function of the probability estimates for the words in the cooccurrence. For example, in the bigram models that we study here, the probability P(w2|w1) of a conditioned word w2 that has never occurred in training following the conditioning word w1 is calculated from the probability of w2, as estimated by w2's frequency in the corpus (Jelinek, Mercer, and Roukos, 1992; Katz, 1987). This method depends on an independence assumption on the cooccurrence of w1 and w2: the more frequent w2 is, the higher will be the estimate of P(w2|w1), regardless of w1.

Class-based and similarity-based models provide an alternative to the independence assumption. In those models, the relationship between given words is modeled by analogy with other words that are in some sense similar to the given ones.

Brown et al. (1992) suggest a class-based n-gram model in which words with similar cooccurrence distributions are clustered in word classes. The cooccurrence probability of a given pair of words then is estimated according to an averaged cooccurrence probability of the two corresponding classes. Pereira, Tishby, and Lee (1993) propose a "soft" clustering scheme for certain grammatical cooccurrences in which membership of a word in a class is probabilistic. Cooccurrence probabilities of words are then modeled by averaged cooccurrence probabilities of word clusters.

Dagan, Markus, and Markovitch (1993) argue that reduction to a relatively small number of predetermined word classes or clusters may cause a substantial loss of information.
Their similarity-based model avoids clustering altogether. Instead, each word is modeled by its own specific class, a set of words which are most similar to it (as in k-nearest neighbor approaches in pattern recognition). Using this scheme, they predict which unobserved cooccurrences are more likely than others. Their model, however, is not probabilistic, that is, it does not provide a probability estimate for unobserved cooccurrences. It cannot therefore be used in a complete probabilistic framework, such as n-gram language models or probabilistic lexicalized grammars (Schabes, 1992; Lafferty, Sleator, and Temperley, 1992).

We now give a similarity-based method for estimating the probabilities of cooccurrences unseen in training. Similarity-based estimation was first used for language modeling in the cooccurrence smoothing method of Essen and Steinbiss (1992), derived from work on acoustic model smoothing by Sugawara et al. (1985). We present a different method that takes as starting point the back-off scheme of Katz (1987). We first allocate an appropriate probability mass for unseen cooccurrences following the back-off method. Then we redistribute that mass to unseen cooccurrences according to an averaged cooccurrence distribution of a set of most similar conditioning words, using relative entropy as our similarity measure. This second step replaces the use of the independence assumption in the original back-off model.

We applied our method to estimate unseen bigram probabilities for Wall Street Journal text and compared it to the standard back-off model. Testing on a held-out sample, the similarity model achieved a 20% reduction in perplexity for unseen bigrams. These constituted just 10.6% of the test sample, leading to an overall reduction in test-set perplexity of 2.4%. We also experimented with an application to language modeling for speech recognition, which yielded a statistically significant reduction in recognition error.
The remainder of the discussion is presented in terms of bigrams, but it is valid for other types of word cooccurrence as well.

Discounting and Redistribution

Many low-probability bigrams will be missing from any finite sample. Yet, the aggregate probability of all these unseen bigrams is fairly high; any new sample is very likely to contain some.

Because of data sparseness, we cannot reliably use a maximum likelihood estimator (MLE) for bigram probabilities. The MLE for the probability of a bigram (w1, w2) is simply

P_ML(w1, w2) = c(w1, w2) / N ,    (1)

where c(w1, w2) is the frequency of (w1, w2) in the training corpus and N is the total number of bigrams. However, this estimates the probability of any unseen bigram to be zero, which is clearly undesirable.

Previous proposals to circumvent the above problem (Good, 1953; Jelinek, Mercer, and Roukos, 1992; Katz, 1987; Church and Gale, 1991) take the MLE as an initial estimate and adjust it so that the total probability of seen bigrams is less than one, leaving some probability mass for unseen bigrams. Typically, the adjustment involves either interpolation, in which the new estimator is a weighted combination of the MLE and an estimator that is guaranteed to be nonzero for unseen bigrams, or discounting, in which the MLE is decreased according to a model of the unreliability of small frequency counts, leaving some probability mass for unseen bigrams.

The back-off model of Katz (1987) provides a clear separation between frequent events, for which observed frequencies are reliable probability estimators, and low-frequency events, whose prediction must involve additional information sources. In addition, the back-off model does not require complex estimations for interpolation parameters.
A back-off model requires methods for (a) discounting the estimates of previously observed events to leave out some positive probability mass for unseen events, and (b) redistributing among the unseen events the probability mass freed by discounting. For bigrams the resulting estimator has the general form

P̂(w2|w1) = Pd(w2|w1)           if c(w1, w2) > 0
            α(w1) Pr(w2|w1)     otherwise ,    (2)

where Pd represents the discounted estimate for seen bigrams, Pr the model for probability redistribution among the unseen bigrams, and α(w1) is a normalization factor. Since the overall mass left for unseen bigrams starting with w1 is given by 1 - Σ_{w2 : c(w1,w2) > 0} Pd(w2|w1), the normalization factor required to ensure Σ_{w2} P̂(w2|w1) = 1 is

α(w1) = (1 - Σ_{w2 : c(w1,w2) > 0} Pd(w2|w1)) / Σ_{w2 : c(w1,w2) = 0} Pr(w2|w1)
      = (1 - Σ_{w2 : c(w1,w2) > 0} Pd(w2|w1)) / (1 - Σ_{w2 : c(w1,w2) > 0} Pr(w2|w1)) .

The second formulation of the normalization is computationally preferable because the total number of possible bigram types far exceeds the number of observed types. Equation (2) modifies slightly Katz's presentation to include the placeholder Pr for alternative models of the distribution of unseen bigrams.

Katz uses the Good-Turing formula to replace the actual frequency c(w1, w2) of a bigram (or an event, in general) with a discounted frequency, c*(w1, w2), defined by

c*(w1, w2) = (c(w1, w2) + 1) n_{c(w1,w2)+1} / n_{c(w1,w2)} ,    (3)

where n_c is the number of different bigrams in the corpus that have frequency c. He then uses the discounted frequency in the conditional probability calculation for a bigram:

Pd(w2|w1) = c*(w1, w2) / c(w1)    (4)

In the original Good-Turing method (Good, 1953) the free probability mass is redistributed uniformly among all unseen events. Instead, Katz's back-off scheme redistributes the free probability mass non-uniformly in proportion to the frequency of w2, by setting

Pr(w2|w1) = P(w2)    (5)

Katz thus assumes that for a given conditioning word w1 the probability of an unseen following word w2 is proportional to its unconditional probability.
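Equations (2)-(5) can be read off as a short program. The sketch below is a toy illustration only: it applies whole-corpus Good-Turing with no count threshold (so the highest count gets discounted to zero when n_{c+1} = 0), whereas Katz's actual scheme discounts only small counts.

```python
from collections import Counter

def good_turing(c, n):
    # Equation (3): c* = (c + 1) * n_{c+1} / n_c, where n_c is the number
    # of distinct bigrams that occur exactly c times.
    return (c + 1) * n.get(c + 1, 0) / n[c]

def katz_backoff(bigrams, unigrams):
    n = Counter(bigrams.values())           # frequency-of-frequency counts
    total = sum(unigrams.values())
    p_uni = {w: c / total for w, c in unigrams.items()}

    def p(w2, w1):
        c = bigrams.get((w1, w2), 0)
        if c > 0:                           # seen bigram: equation (4)
            return good_turing(c, n) / unigrams[w1]
        seen = [w for (u, w) in bigrams if u == w1]
        free = 1 - sum(good_turing(bigrams[(w1, w)], n)
                       for w in seen) / unigrams[w1]
        norm = 1 - sum(p_uni[w] for w in seen)   # sum over seen, not unseen
        return free / norm * p_uni[w2]      # equations (2) and (5)
    return p

bigrams  = {("a","b"): 3, ("a","c"): 1, ("b","a"): 1,
            ("b","c"): 1, ("c","a"): 2, ("c","b"): 2}
unigrams = {"a": 4, "b": 4, "c": 4}
p = katz_backoff(bigrams, unigrams)
# The conditional distribution for each history still sums to one:
assert abs(p("a","a") + p("b","a") + p("c","a") - 1.0) < 1e-9
```

Here the unseen bigram ("a", "a") receives exactly the mass freed by discounting the seen bigrams with history "a", redistributed in proportion to P(w2).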
However, the overall form of the model (2) does not depend on this assumption, and we will next investigate an estimate for Pr(w2|w1) derived by averaging estimates for the conditional probabilities that w2 follows words that are distributionally similar to w1.

The Similarity Model

Our scheme is based on the assumption that words that are "similar" to w1 can provide good predictions for the distribution of w1 in unseen bigrams. Let S(w1) denote a set of words which are most similar to w1, as determined by some similarity metric. We define P_SIM(w2|w1), the similarity-based model for the conditional distribution of w1, as a weighted average of the conditional distributions of the words in S(w1):

P_SIM(w2|w1) = Σ_{w1' ∈ S(w1)} P(w2|w1') W(w1', w1) / Σ_{w1'' ∈ S(w1)} W(w1'', w1) ,    (6)

where W(w1', w1) is the (unnormalized) weight given to w1', determined by its degree of similarity to w1. According to this scheme, w2 is more likely to follow w1 if it tends to follow words that are most similar to w1. To complete the scheme, it is necessary to define the similarity metric and, accordingly, S(w1) and W(w1', w1).

Following Pereira, Tishby, and Lee (1993), we measure word similarity by the relative entropy, or Kullback-Leibler (KL) distance, between the corresponding conditional distributions:

D(w1 || w1') = Σ_{w2} P(w2|w1) log( P(w2|w1) / P(w2|w1') ) .    (7)

The KL distance is 0 when w1 = w1', and it increases as the two distributions are less similar.

To compute (6) and (7) we must have nonzero estimates of P(w2|w1') whenever necessary for (7) to be defined. We use the estimates given by the standard back-off model, which satisfy that requirement. Thus our application of the similarity model averages together standard back-off estimates for a set of similar conditioning words.

We define S(w1) as the set of at most k nearest words to w1 (excluding w1 itself) that also satisfy D(w1 || w1') < t.
k and t are parameters that control the contents of S(w1) and are tuned experimentally, as we will see below.

W(w1', w1) is defined as

W(w1', w1) = exp( -β D(w1 || w1') ) .

The weight is larger for words that are more similar (closer) to w1. The parameter β controls the relative contribution of words at different distances from w1: as the value of β increases, the nearest words to w1 get relatively more weight. As β decreases, remote words get a larger effect. Like k and t, β is tuned experimentally.

Having a definition for P_SIM(w2|w1), we could use it directly as Pr(w2|w1) in the back-off scheme (2). We found that it is better to smooth P_SIM(w2|w1) by interpolating it with the unigram probability P(w2) (recall that Katz used P(w2) as Pr(w2|w1)). Using linear interpolation we get

Pr(w2|w1) = γ P(w2) + (1 - γ) P_SIM(w2|w1) ,    (8)

where γ is an experimentally-determined interpolation parameter. This smoothing appears to compensate for inaccuracies in P_SIM(w2|w1), mainly for infrequent conditioning words. However, as the evaluation below shows, good values for γ are small, that is, the similarity-based model plays a stronger role than the independence assumption.

To summarize, we construct a similarity-based model for P(w2|w1) and then interpolate it with P(w2). The interpolated model (8) is used in the back-off scheme as Pr(w2|w1), to obtain better estimates for unseen bigrams. Four parameters, to be tuned experimentally, are relevant for this process: k and t, which determine the set of similar words to be considered; β, which determines the relative effect of these words; and γ, which determines the overall importance of the similarity-based model.

Evaluation

We evaluated our method by comparing its perplexity(1) and effect on speech-recognition accuracy with the baseline bigram back-off model developed by MIT Lincoln Laboratories for the Wall Street Journal (WSJ) text and dictation corpora provided by ARPA's HLT program (Paul, 1991).(2)
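Pulling together equations (6) and (7), the exponential weights, and the interpolation in (8), the similarity model can be sketched as below. This is a simplified sketch: the conditional distributions are plain dictionaries rather than back-off estimates, and natural logarithms are used, whereas the paper's β and t assume base-10; the parameter defaults follow the paper's best tuned values.

```python
import math

def kl_distance(p, q):
    # Equation (7): D(p || q); assumes q[w] > 0 wherever p[w] > 0, which
    # the paper guarantees by using back-off estimates for P(w2|w1').
    return sum(pw * math.log(pw / q[w]) for w, pw in p.items() if pw > 0)

def neighbors(w1, cond, k=60, t=2.5):
    # S(w1): the at most k nearest words under KL distance, excluding
    # w1 itself, with distance below the threshold t.
    dists = sorted((kl_distance(cond[w1], cond[w]), w)
                   for w in cond if w != w1)
    return [(d, w) for d, w in dists[:k] if d < t]

def p_sim(w2, w1, cond, beta=4.0):
    # Equation (6), with weights W(w1', w1) = exp(-beta * D(w1 || w1')).
    ws = {w: math.exp(-beta * d) for d, w in neighbors(w1, cond)}
    total = sum(ws.values())
    return sum(wt * cond[w].get(w2, 0.0) for w, wt in ws.items()) / total

def p_r(w2, w1, cond, p_uni, gamma=0.15):
    # Equation (8): interpolate with the unigram probability P(w2).
    return gamma * p_uni[w2] + (1 - gamma) * p_sim(w2, w1, cond)

cond = {"x": {"a": 0.5, "b": 0.5},
        "y": {"a": 0.6, "b": 0.4},
        "z": {"a": 0.5, "b": 0.5}}
p_uni = {"a": 0.55, "b": 0.45}
# The similarity estimate is an average over the neighbors' distributions:
assert 0.5 <= p_sim("a", "x", cond) <= 0.6
```

In the full scheme, p_r would be plugged into equation (2) as Pr(w2|w1) for unseen bigrams, with the α(w1) normalization computed as before.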
The baseline back-off model follows closely the Katz design, except that for compactness all frequency-one bigrams are ignored. The counts used in this model and in ours were obtained from 40.5 million words of WSJ text from the years 1987-89.

For perplexity evaluation, we tuned the similarity model parameters by minimizing perplexity on an additional sample of 57.5 thousand words of WSJ text, drawn from the ARPA HLT development test set. The best parameter values found were k = 60, t = 2.5, β = 4 and γ = 0.15. For these values, the improvement in perplexity for unseen bigrams in a held-out 18 thousand word sample, in which 10.6% of the bigrams are unseen, is just over 20%. This improvement on unseen

(1) The perplexity of a conditional bigram probability model P̂ with respect to the true bigram distribution is an information-theoretic measure of model quality (Jelinek, Mercer, and Roukos, 1992) that can be empirically estimated by exp( -(1/N) Σ_i log P̂(w_i|w_{i-1}) ) for a test set of length N. Intuitively, the lower the perplexity of a model the more likely the model is to assign high probability to bigrams that actually occur. In our task, lower perplexity will indicate better prediction of unseen bigrams.

(2) The ARPA WSJ development corpora come in two versions, one with verbalized punctuation and the other without. We used the latter in all our experiments.
Table 1 shows reductions in training and test perplexity, sorted by training reduction, for different choices in the number k of closest neighbors used. The values of β, γ and t are the best ones found for each k.³

From equation (6), it is clear that the computational cost of applying the similarity model to an unseen bigram is O(k). Therefore, lower values for k (and also for t) are computationally preferable. From the table, we can see that reducing k to 30 incurs a penalty of less than 1% in the perplexity improvement, so relatively low values of k appear to be sufficient to achieve most of the benefit of the similarity model. As the table also shows, the best value of γ increases as k decreases; that is, for lower k a greater weight is given to the conditioned word's frequency. This suggests that the predictive power of neighbors beyond the closest 30 or so can be modeled fairly well by the overall frequency of the conditioned word.

The bigram similarity model was also tested as a language model in speech recognition. The test data for this experiment were pruned word lattices for 403 WSJ closed-vocabulary test sentences. Arc scores in those lattices are sums of an acoustic score (negative log likelihood) and a language-model score, in this case the negative log probability provided by the baseline bigram model.

From the given lattices, we constructed new lattices in which the arc scores were modified to use the similarity model instead of the baseline model. We compared the best sentence hypothesis in each original lattice and in the modified one, and counted the word disagreements in which one of the hypotheses is correct. There were a total of 96 such disagreements. The similarity model was correct in 64 cases, and the back-off model in 32. This advantage for the similarity model is statistically significant at the 0.01 level.
The overall reduction in error rate is small, from 21.4% to 20.9%, because the number of disagreements is small compared with the overall number of errors in our current recognition setup.

³Values of β and t refer to base-10 logarithms and exponentials in all calculations.

Table 2 shows some examples of speech recognition disagreements between the two models. The hypotheses are labeled 'B' for back-off and 'S' for similarity, and the bold-face words are errors. The similarity model seems to be able to model better regularities such as semantic parallelism in lists and avoiding a past tense form after "to." On the other hand, the similarity model makes several mistakes in which a function word is inserted in a place where punctuation would be found in written text.

Related Work

The cooccurrence smoothing technique (Essen and Steinbiss, 1992), based on earlier stochastic speech modeling work by Sugawara et al. (1985), is the main previous attempt to use similarity to estimate the probability of unseen events in language modeling. In addition to its original use in language modeling for speech recognition, Grishman and Sterling (1993) applied the cooccurrence smoothing technique to estimate the likelihood of selectional patterns. We will outline here the main parallels and differences between our method and cooccurrence smoothing. A more detailed analysis would require an empirical comparison of the two methods on the same corpus and task.

In cooccurrence smoothing, as in our method, a baseline model is combined with a similarity-based model that refines some of its probability estimates. The similarity model in cooccurrence smoothing is based on the intuition that the similarity between two words w and w' can be measured by the confusion probability Pc(w'|w) that w' can be substituted for w in an arbitrary context in the training corpus.
  B | commitments ... from leaders felt the three point six billion dollars
  S | commitments ... from leaders fell to three point six billion dollars
  B | followed by France the US agreed in Italy
  S | followed by France the US Greece ... Italy
  B | he whispers to made a
  S | he whispers to an aide
  B | the necessity for change exist
  S | the necessity for change exists
  B | without ... additional reserves Centrust would have reported
  S | without ... additional reserves of Centrust would have reported
  B | in the darkness past the church
  S | in the darkness passed the church

Table 2: Speech Recognition Disagreements between Models

Given a baseline probability model P, which is taken to be the MLE, the confusion probability Pc(w1'|w1) between conditioning words w1' and w1 is defined as

Pc(w1'|w1) = (1 / P(w1)) Σ_{w2} P(w1|w2) P(w1'|w2) P(w2) ,   (9)

the probability that w1 is followed by the same context words as w1'. Then the bigram estimate derived by cooccurrence smoothing is given by

Ps(w2|w1) = Σ_{w1'} P(w2|w1') Pc(w1'|w1)

Notice that this formula has the same form as our similarity model (6), except that it uses confusion probabilities where we use normalized weights.⁴ In addition, we restrict the summation to sufficiently similar words, whereas the cooccurrence smoothing method sums over all words in the lexicon.

The similarity measure (9) is symmetric in the sense that Pc(w'|w) and Pc(w|w') are identical up to frequency normalization, that is, Pc(w'|w) / Pc(w|w') = P(w') / P(w). In contrast, D(w || w') (7) is asymmetric in that it weighs each context in proportion to its probability of occurrence with w, but not with w'. In this way, if w and w' have comparable frequencies but w' has a sharper context distribution than w, then D(w' || w) is greater than D(w || w'). Therefore, in our similarity model w' will play a stronger role in estimating w than vice versa.
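The asymmetry argument can be checked numerically. In this sketch (invented distributions, not corpus data), w' has a sharper context distribution than w over the same four contexts, and the relative entropy from the sharper distribution to the flat one is the larger of the two:

```python
import math

def kl(p, q):
    # Relative entropy D(p || q); assumes q is nonzero wherever p is.
    return sum(px * math.log(px / q[x]) for x, px in p.items() if px > 0)

# w has a flat context distribution; w' has a sharper one over the same contexts.
flat  = {"c1": 0.25, "c2": 0.25, "c3": 0.25, "c4": 0.25}   # contexts of w
sharp = {"c1": 0.70, "c2": 0.10, "c3": 0.10, "c4": 0.10}   # contexts of w'

# The measure is asymmetric: D(w' || w) != D(w || w') in general,
# so w' can influence the estimate for w more than vice versa.
print(kl(sharp, flat) > kl(flat, sharp))  # True for this pair
```

The confusion probability of equation (9), by contrast, is symmetric up to the frequency normalization noted in the text.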
These properties motivated our choice of relative entropy for similarity measure, because of the intuition that words with sharper distributions are more informative about other words than words with flat distributions.

⁴This presentation corresponds to model 2-B in Essen and Steinbiss (1992). Their presentation follows the equivalent model 1-A, which averages over similar conditioned words, with the similarity defined with the preceding word as context. In fact, these equivalent models are symmetric in their treatment of conditioning and conditioned word, as they can both be rewritten as

Ps(w2|w1) ∝ Σ_{w1',w2'} P(w2'|w1') P(w1|w1') P(w2|w2')

They also consider other definitions of confusion probability and smoothed probability estimate, but the one above yielded the best experimental results.

Finally, while we have used our similarity model only for missing bigrams in a back-off scheme, Essen and Steinbiss (1992) used linear interpolation for all bigrams to combine the cooccurrence smoothing model with MLE models of bigrams and unigrams. Notice, however, that the choice of back-off or interpolation is independent from the similarity model used.

Further Research

Our model provides a basic scheme for probabilistic similarity-based estimation that can be developed in several directions. First, variations of (6) may be tried, such as different similarity metrics and different weighting schemes. Also, some simplification of the current model parameters may be possible, especially with respect to the parameters t and k used to select the nearest neighbors of a word. A more substantial variation would be to base the model on similarity between conditioned words rather than on similarity between conditioning words.

Other evidence may be combined with the similarity-based estimate. For instance, it may be advantageous to weigh those estimates by some measure of the reliability of the similarity metric and of the neighbor distributions.
A second possibility is to take into account negative evidence: if w1 is frequent, but w2 never followed it, there may be enough statistical evidence to put an upper bound on the estimate of P(w2|w1). This may require an adjustment of the similarity-based estimate, possibly along the lines of (Rosenfeld and Huang, 1992). Third, the similarity-based estimate can be used to smooth the maximum likelihood estimate for small nonzero frequencies. If the similarity-based estimate is relatively high, a bigram would receive a higher estimate than predicted by the uniform discounting method.

Finally, the similarity-based model may be applied to configurations other than bigrams. For trigrams, it is necessary to measure similarity between different conditioning bigrams. This can be done directly, by measuring the distance between distributions of the form P(w3|w1, w2), corresponding to different bigrams (w1, w2). Alternatively, and more practically, it would be possible to define a similarity measure between bigrams as a function of similarities between corresponding words in them. Other types of conditional cooccurrence probabilities have been used in probabilistic parsing (Black et al., 1993). If the configuration in question includes only two words, such as P(object|verb), then it is possible to use the model we have used for bigrams. If the configuration includes more elements, it is necessary to adjust the method, along the lines discussed above for trigrams.

Conclusions

Similarity-based models suggest an appealing approach for dealing with data sparseness. Based on corpus statistics, they provide analogies between words that often agree with our linguistic and domain intuitions. In this paper we presented a new model that implements the similarity-based approach to provide estimates for the conditional probabilities of unseen word cooccurrences.
Our method combines similarity-based estimates with Katz's back-off scheme, which is widely used for language modeling in speech recognition. Although the scheme was originally proposed as a preferred way of implementing the independence assumption, we suggest that it is also appropriate for implementing similarity-based models, as well as class-based models. It enables us to rely on direct maximum likelihood estimates when reliable statistics are available, and only otherwise resort to the estimates of an "indirect" model.

The improvement we achieved for a bigram model is statistically significant, though modest in its overall effect because of the small proportion of unseen events. While we have used bigrams as an easily-accessible platform to develop and test the model, more substantial improvements might be obtainable for more informative configurations. An obvious case is that of trigrams, for which the sparse data problem is much more severe.⁵ Our longer-term goal, however, is to apply similarity techniques to linguistically motivated word cooccurrence configurations, as suggested by lexicalized approaches to parsing (Schabes, 1992; Lafferty, Sleator, and Temperley, 1992). In configurations like verb-object and adjective-noun, there is some evidence (Pereira, Tishby, and Lee, 1993) that sharper word cooccurrence distributions are obtainable, leading to improved predictions by similarity techniques.

⁵For WSJ trigrams, only 58.6% of test set trigrams occur in 40M words of training (Doug Paul, personal communication).

Acknowledgments

We thank Slava Katz for discussions on the topic of this paper, Doug McIlroy for detailed comments, Doug Paul for help with his baseline back-off model, and Andre Ljolje and Michael Riley for providing the word lattices for our experiments.

References

Black, Ezra, Fred Jelinek, John Lafferty, David M. Magerman, David Mercer, and Salim Roukos. 1993.
Towards history-based grammars: Using richer models for probabilistic parsing. In 30th Annual Meeting of the Association for Computational Linguistics, pages 31-37, Columbus, Ohio. Ohio State University, Association for Computational Linguistics, Morristown, New Jersey.

Brown, Peter F., Vincent J. Della Pietra, Peter V. deSouza, Jenifer C. Lai, and Robert L. Mercer. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467-479.

Church, Kenneth W. and William A. Gale. 1991. A comparison of the enhanced Good-Turing and deleted estimation methods for estimating probabilities of English bigrams. Computer Speech and Language, 5:19-54.

Dagan, Ido, Shaul Markus, and Shaul Markovitch. 1993. Contextual word similarity and estimation from sparse data. In 30th Annual Meeting of the Association for Computational Linguistics, pages 164-171, Columbus, Ohio. Ohio State University, Association for Computational Linguistics, Morristown, New Jersey.

Essen, Ute and Volker Steinbiss. 1992. Cooccurrence smoothing for stochastic language modeling. In Proceedings of ICASSP, volume I, pages 161-164. IEEE.

Good, I. J. 1953. The population frequencies of species and the estimation of population parameters. Biometrika, 40(3):237-264.

Grishman, Ralph and John Sterling. 1993. Smoothing of automatically generated selectional constraints. In Human Language Technology, pages 254-259, San Francisco, California. Advanced Research Projects Agency, Software and Intelligent Systems Technology Office, Morgan Kaufmann.

Jelinek, Frederick, Robert L. Mercer, and Salim Roukos. 1992. Principles of lexical language modeling for speech recognition. In Sadaoki Furui and M. Mohan Sondhi, editors, Advances in Speech Signal Processing. Marcel Dekker, Inc., pages 651-699.

Katz, Slava M. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer.
IEEE Transactions on Acoustics, Speech and Signal Processing, 35(3):400-401.

Lafferty, John, Daniel Sleator, and Davey Temperley. 1992. Grammatical trigrams: a probabilistic model of link grammar. In Robert Goldman, editor, AAAI Fall Symposium on Probabilistic Approaches to Natural Language Processing, Cambridge, Massachusetts. American Association for Artificial Intelligence.

Paul, Douglas B. 1991. Experience with a stack decoder-based HMM CSR and back-off n-gram language models. In Proceedings of the Speech and Natural Language Workshop, pages 284-288, Palo Alto, California, February. Defense Advanced Research Projects Agency, Information Science and Technology Office, Morgan Kaufmann.

Pereira, Fernando C. N., Naftali Z. Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In 30th Annual Meeting of the Association for Computational Linguistics, pages 183-190, Columbus, Ohio. Ohio State University, Association for Computational Linguistics, Morristown, New Jersey.

Rosenfeld, Ronald and Xuedong Huang. 1992. Improvements in stochastic language modeling. In DARPA Speech and Natural Language Workshop, pages 107-111, Harriman, New York, February. Morgan Kaufmann, San Mateo, California.

Schabes, Yves. 1992. Stochastic lexicalized tree-adjoining grammars. In Proceedings of the 14th International Conference on Computational Linguistics, Nantes, France.

Sugawara, K., M. Nishimura, K. Toshioka, M. Okochi, and T. Kaneko. 1985. Isolated word recognition using hidden Markov models. In Proceedings of ICASSP, pages 1-4, Tampa, Florida. IEEE.
Acquiring Receptive Morphology: A Connectionist Model

Michael Gasser
Computer Science and Linguistics Departments
Indiana University

Abstract

This paper describes a modular connectionist model of the acquisition of receptive inflectional morphology. The model takes inputs in the form of phones one at a time and outputs the associated roots and inflections. Simulations using artificial language stimuli demonstrate the capacity of the model to learn suffixation, prefixation, infixation, circumfixation, mutation, template, and deletion rules. Separate network modules responsible for syllables enable the network to learn simple reduplication rules as well. The model also embodies constraints against association-line crossing.

Introduction

For many natural languages, a major problem for a language learner, whether human or machine, is the system of bound morphology of the language, which may carry much of the functional load of the grammar. While the acquisition of morphology has sometimes been seen as the problem of learning how to transform one linguistic form into another form, e.g., by [Plunkett and Marchman, 1991] and [Rumelhart and McClelland, 1986], from the learner's perspective, the problem is one of learning how forms map onto meanings. Most work which has viewed the acquisition of morphology in this way, e.g., [Cottrell and Plunkett, 1991], has taken the perspective of production. But a human language learner almost certainly learns to understand polymorphemic words before learning to produce them, and production may need to build on perception [Gasser, 1993]. Thus it seems reasonable to begin with a model of the acquisition of receptive morphology.

In this paper, I will deal with that component of receptive morphology which takes sequences of phones, each expressed as a vector of phonetic features, and identifies them as particular morphemes.
This process ignores the segmentation of words into phone sequences, the morphological structure of words, and the semantics of morphemes. I will refer to this task as root and inflection identification. It is assumed that children learn to identify roots and inflections through the presentation of paired forms and sets of morpheme meanings. They show evidence of generalization when they are able to identify the root and inflection of a novel combination of familiar morphemes.

At a minimum, a model of the acquisition of this capacity should succeed on the full range of morphological rule types attested in the world's languages, it should embody known constraints on what sorts of rules are possible in human language, and it should bear a relationship to the production of morphologically complex words. This paper describes a psychologically motivated connectionist model (Modular Connectionist Network for the Acquisition of Morphology, MCNAM) which shows evidence of acquiring all of the basic rule types and which also experiences relative difficulty learning rules which seem not to be possible. In another paper [Gasser, 1992], I show how the representations that develop during the learning of root and inflection identification can support word production. Although still tentative in several respects, MCNAM appears to be the first computational model of the acquisition of receptive morphology to apply to this diversity of morphological rules. In contrast to symbolic models of language acquisition, it succeeds without built-in symbolic distinctions, for example, the distinction between stem and affix.

The paper is organized as follows. I first provide a brief overview of the categories of morphological rules found in the world's languages. I then present the model and discuss simulations which demonstrate that it generalizes for most kinds of morphological rules.
Next, focusing on template morphology, I show how the network implements the analogue of autosegments and how the model embodies one constraint on the sorts of rules that can be learned. Finally, I discuss augmentation of the model with a hierarchical structure reflecting the hierarchy of metrical phonology; this addition is necessary for the acquisition of the most challenging type of morphological rule, reduplication.

Categories of Morphological Processes

For the sake of convenience, I will be discussing morphology in terms of the conventional notions of roots, inflections, and rules. However, a human language learner does not have direct access to the root for a given form, so the problem of learning morphology cannot be one of discovering how to add to or modify a root. And it is not clear whether there is anything like a symbolic morphological rule in the brain of a language learner.

The following kinds of inflectional or derivational morphological rules are attested in the world's languages: affixation, by which a grammatical morpheme is added to a root (or stem), either before (prefixation), after (suffixation), both before and after (circumfixation), or within (infixation); mutation, by which one or more root segments themselves are modified; template rules, by which a word can be described as a combination of a root and a template specifying how segments are to be intercalated between the root segments; deletion, by which one or more segments are deleted; reduplication, by which a copy, or a systematically altered copy, of some portion of the root is added to it. Examples of each rule type are included in the description of the stimuli used in the simulations.
The Model

The approach to language acquisition exemplified in this paper differs from traditional symbolic approaches in that the focus is on specifying the sort of mechanism which has the capacity to learn some aspect of language, rather than the knowledge which this seems to require. Given the basic problem of what it means to learn receptive morphology, the goal is to begin with a very simple architecture and augment it as necessary. In this paper, I first describe a version of the model which is modular with respect to the identification of roots and inflections. The advantages of this version over the simpler model, in which these tasks are shared by the same hidden layer, are described in a separate paper [Gasser, 1994]. Later I discuss a version of the model which incorporates modularity at the level of the syllable and metrical foot; this is required to learn reduplication.

The model described here is connectionist. There are several reasons why one might want to investigate language acquisition from the perspective of connectionism. For the purposes of this paper, the most important is the hope that a connectionist network, or a device making use of a related statistical approach to learning, may have the capacity to learn a task such as word recognition without pre-wired symbolic knowledge. That is, such a model would make do without pre-existing concepts such as root and affix or distinctions such as regular vs. irregular morphology. If successful, this model would provide a simpler account of the acquisition of morphology than one which begins with symbolic knowledge and constraints.

Word recognition takes place in time, and a psychologically plausible account of it must take this fact into account. Words are often recognized long before they finish; hearers seem to be continuously comparing the contents of a linguistic short-term memory with the phonological representations in their mental lexicons [Marslen-Wilson and Tyler, 1980].
Thus the task at hand requires a short-term memory of some sort. Of the various ways of representing short-term memory in connectionist networks [Port, 1990], the most flexible approach makes use of recurrent connections on hidden units. This has the effect of turning the hidden layer into a short-term memory which is not bounded by a fixed limit on the length of the period it can store. The model to be described here is one of the simpler possible networks of this type, a version of the simple recurrent network due to [Elman, 1990].

The Version 1 network is shown in Figure 1. Each box represents a layer of connectionist processing units and each arrow a complete set of weighted connections between two layers. The network operates as follows. A sequence of phones is presented to the input layer one at a time. That is, each tick of the network's clock represents the presentation of a single phone. Each phone unit represents a phonetic feature, and each word consists of a sequence of phones preceded by a boundary "phone" consisting of 0.0 activations.

Figure 1: MCNAM: Version 1

An input phone pattern sends activation to the network's hidden layers. Each hidden layer also receives activation from the pattern that appeared there on the previous time step. Thus each hidden unit is joined by a time-delay connection to each other hidden unit within its layer. It is the two previous hidden-layer patterns which represent the system's short-term memory of the phonological context. At the beginning of each word sequence, the hidden layers are reinitialized to a pattern consisting of 0.0 activations. Finally the output units are activated by the hidden layers. There are at least three output layers. One represents simply a copy of the current input phone.
Training the network to auto-associate its current input aids in learning the root and inflection identification task because it forces the network to learn to distinguish the individual phones at the hidden layers, a prerequisite to using the short-term memory effectively. The second layer of output units represents the root "meaning". For each root there is a single output unit. Thus while there is no real semantics, the association between the input phone sequence and the "meaning" is an arbitrary one. The remaining groups of output units represent the inflection "meaning"; one group is shown in the figure. There is a layer of units for each separate inflectional category (e.g., tense and aspect) and a unit for each separate inflection within its layer. One of the hidden layers connects to the root output layer, the other to the inflection output layers.

For each input phone, the network receives a target consisting of the correct phone, root, and inflection outputs for the current word. The phone target is identical to the input phone. The root and inflection targets, which are constant throughout the presentation of a word, are the patterns associated with the root and inflection for the input word.

The network is trained using the backpropagation learning algorithm [Rumelhart et al., 1986], which adjusts the weights on the network's connections in such a way as to minimize the error, that is, the difference between the network's outputs and the targets. For each morphological rule, a separate network is trained on a subset of the possible combinations of root and inflection. At various points during training, the network is tested on unfamiliar words, that is, novel combinations of roots and inflections. The performance of the network is the percentage of the test roots and inflections for which its output is correct at the end of each word sequence.
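One tick of a simple recurrent network of this general kind can be sketched as follows. This is a schematic illustration, not Gasser's implementation: the layer sizes are invented, a single hidden layer stands in for the two modular ones, and the weights are random rather than trained.

```python
import math
import random

random.seed(0)

def layer(n_in, n_out):
    # A weight matrix with small random values, plus zero biases.
    return ([[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

def forward(weights, biases, x):
    # Logistic units, as in standard backpropagation networks.
    return [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
            for row, b in zip(weights, biases)]

N_FEAT, N_HID, N_ROOT = 6, 8, 5               # illustrative sizes
w_in,  b_in  = layer(N_FEAT + N_HID, N_HID)   # phone features + previous hidden -> hidden
w_out, b_out = layer(N_HID, N_ROOT)           # hidden -> root output units

def process_word(phones):
    # Each tick presents one phone concatenated with the previous hidden
    # pattern; the hidden layer is reinitialized to 0.0 at the word boundary.
    hidden = [0.0] * N_HID
    for phone in phones:
        hidden = forward(w_in, b_in, phone + hidden)
    return forward(w_out, b_out, hidden)      # root outputs at end of word

word = [[random.choice([0.0, 1.0]) for _ in range(N_FEAT)] for _ in range(3)]
outputs = process_word(word)
print(len(outputs))  # one output value per root unit
```

The recurrent feedback of the previous hidden pattern is what gives the network its unbounded short-term memory of the phonological context.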
An output is considered "correct" if it is closer to the correct root (or inflection) than to any other. The network is evaluated at the end of the word because in general it may need to wait that long to have enough information to identify both root and inflection.

Experiments

General Performance of the Model

In all of the experiments reported on here, the stimuli presented to the network consisted of words in an artificial language. The phoneme inventory of the language was made up of 19 phones (24 for the mutation rule, which nasalizes vowels). For each morphological rule, there were 30 roots, 15 each of CVC and CVCVC patterns of phones. Each word consisted of either two or three morphemes, a root and one or two inflections (referred to as "tense" and "aspect" for convenience). Examples of each rule, using the root vibun: (1) suffix: present-vibuni, past-vibuna; (2) prefix: present-ivibun, past-avibun; (3) infix: present-vikbun, past-vinbun; (4) circumfix: present-ivibuni, past-avibuna; (5) mutation: present-vibun, past-viban; (6) deletion: present-vibun, past-vibu; (7) template: present-vaban, past-vbaan; (8) two-suffix: present perfect-vibunak, present progressive-vibunas, past perfect-vibunik, past progressive-vibunis; (9) two-prefix: present perfect-kavibun, present progressive-kivibun, past perfect-savibun, past progressive-sivibun; (10) prefix-suffix: present perfect-avibune, present progressive-avibunu, past perfect-ovibune, past progressive-ovibunu. No irregular forms were included.

For each morphological rule there were either 60 (30 roots x 2 tense inflections) or 120 (30 roots x 2 tense inflections x 2 aspect inflections) different words. From these, 2/3 were selected randomly as training words, and the remaining 1/3 were set aside as test words. For each rule, ten separate networks with different random initial weights were trained and tested.
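The affixation-style rules above are easy to state procedurally. In this sketch (the function and its rule names are illustrative, not from the original stimulus generator), the forms produced for the root vibun match the examples given in the text:

```python
# Hypothetical generator for a few of the rule types described in the text.
def inflect(root, rule, tense):
    if rule == "suffix":
        return root + {"present": "i", "past": "a"}[tense]
    if rule == "prefix":
        return {"present": "i", "past": "a"}[tense] + root
    if rule == "circumfix":
        affix = {"present": "i", "past": "a"}[tense]
        return affix + root + affix
    if rule == "deletion":
        # Present form is bare; past drops the final segment.
        return root if tense == "present" else root[:-1]
    raise ValueError(rule)

assert inflect("vibun", "suffix", "present") == "vibuni"
assert inflect("vibun", "suffix", "past") == "vibuna"
assert inflect("vibun", "prefix", "past") == "avibun"
assert inflect("vibun", "circumfix", "present") == "ivibuni"
assert inflect("vibun", "deletion", "past") == "vibu"
print("ok")
```

The learner, of course, never sees such rules; it sees only the surface forms paired with root and inflection "meanings".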
Training for the tense-only rules proceeded for 150 epochs (repetitions of all training patterns); training for the tense-aspect rules lasted 100 epochs. Following training, the performance of the network on the test patterns was assessed.

Figure 2 shows the mean performance of the network on the test patterns for each rule following training. Note that chance performance for the roots was .033 and for the inflections .5, since there were 30 roots and 2 inflections in each category. For all tasks, including both root and inflection identification, the network performs well above chance. Performance is far from perfect for some of the rule types, but further improvement is possible with optimization of the learning parameters.

Interestingly, template rules, which are problematic for some symbolic approaches to morphology processing and acquisition, are among the easiest for the network. Thus it is informative to investigate further how the network solved this task. For the particular template rule, the two forms of each root shared the same initial and final consonant. This tended to make root identification relatively easy. With respect to inflections, the pattern is more like infixation than prefixation or suffixation because all of the segments relevant to the tense, that is, the /a/s, are between the first and last segment. But inflection identification for the template is considerably higher than for infixation, probably because of the redundancy: the present tense is characterized by an /a/ in second position and a consonant in third position, the past tense by a consonant in second position and an /a/ in third position.

To gain a better understanding of the way in which the network solves a template morphology task, a further experiment was conducted. In this experiment, each root consisted of a sequence of three consonants from the set /p, b, m, t, d, s, n, k, g/. There were three tense morphemes, each characterized by a particular template.
The present template was C1aC2aC3a, the past template aC1C2aaC3, and the future template aC1aC2C3a. Thus the three forms for the root pmn were pamana, apmaan, and apamna. The network learns to recognize the tense templates very quickly; generalization is over 90% following only 25 epochs of training. This task is relatively easy since the vowels appear in the same sequential positions for each tense. More interesting is the performance of the root identification part of the network, which must learn to recognize the commonality among sequences of the same consonants even though, for any pair of forms for a given root, only one of the three consonants appears in the same position. Performance reaches 72% on the test words following 150 epochs.

[Figure 2: Performance on Test Words Following Training. Proportion correct (0-1) for root identification and for inflection 1 and inflection 2 identification, by inflection type (suffix, prefix, infix, circumfix, deletion, mutation, template, two-suffix, two-prefix, prefix+suffix), with chance levels for roots and inflections indicated.]

To better visualize the problem, it helps to examine what happens in hidden-layer space for the root layer as a word is processed. This 15-dimensional space is impossible to observe directly, but we can get an idea of the most significant movements through this space through the use of principal component analysis, a technique which is by now a familiar way of analyzing the behavior of recurrent networks [Elman, 1991; Port, 1990]. Given a set of data vectors, principal component analysis yields a set of orthogonal vectors, or components, which are ranked in terms of how much of the variance in the data they account for.

Principal components for the root identification hidden layer vectors were extracted for a single network following 150 repetitions of the template training patterns.
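The intercalation of root consonants into the tense templates can be sketched directly (an illustration, not the original stimulus code). The generated forms match pamana, apmaan, and apamna for the root pmn, as well as the pds forms cited in the Figure 3 caption:

```python
# The three tense templates from the text; C1, C2, C3 mark the root consonant slots.
TEMPLATES = {
    "present": "C1aC2aC3a",
    "past":    "aC1C2aaC3",
    "future":  "aC1aC2C3a",
}

def intercalate(root, tense):
    # Fill the template's consonant slots with the root's three consonants.
    form = TEMPLATES[tense]
    for i, c in enumerate(root, start=1):
        form = form.replace(f"C{i}", c)
    return form

assert intercalate("pmn", "present") == "pamana"
assert intercalate("pmn", "past") == "apmaan"
assert intercalate("pmn", "future") == "apamna"
# The forms for pds in the Figure 3 caption follow from the same templates:
assert intercalate("pds", "present") == "padasa"
assert intercalate("pds", "past") == "apdaas"
assert intercalate("pds", "future") == "apadsa"
print("ok")
```

Note that no consonant keeps a fixed surface position across all three forms, which is exactly what makes root identification the interesting part of this task.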
The paths through the space defined by the first two components of the root identification hidden layer as the three forms of the root pds are presented to the network are shown in Figure 3. Points marked in the same way represent the same root consonant.¹ What we see is that, as the root hidden layer processes the word, it passes through roughly similar regions in hidden-layer space as it encounters the consonants of the root, independent of their sequential position. In a sense these regions correspond to the autosegments of autosegmental phonological and morphological analyses.

¹Only two points appear for the first root consonant because the first two segments of the past and future forms of a given root are the same.

Constraints on Morphological Processes

In the previous sections, I have described how modular simple recurrent networks have the capacity to learn to recognize morphologically complex words resulting from a variety of morphological processes. But is this approach too powerful? Can these networks learn rules of types that people cannot? While it is not completely clear what rules people can and cannot learn, some evidence in this direction comes from examining large numbers of languages. One possible constraint on morphological rules comes from autosegmental analyses: the association lines that join one tier to another should not cross. Another way of stating the constraint is to say that the relative position of two segments within a morpheme remains the same in the different forms of the word.

Can a recognition network learn a rule which violates this constraint as readily as a comparable one which does not? To test this, separate networks were trained to learn the following two template morphology rules, involving three forms: (1) present: C1aC2aC3a, past: aC1C2aaC3, future: aC1aC2C3a; (2) present: C1aC2C3aa, past: aC1C2aC3a, future: aC1aC3aC2.
Figure 3: Template Rule, Root Hidden Layer, Principal Components 1 and 2, padasa, apdaas, apadsa

Both rules produce the three forms of each root using the three root consonants and sequences of three a's. In each case each of the three consonants appears in the same position in two of the three forms. The second rule differs from the first in that the order of the three consonants is not constant; the second and third consonants of the present and past forms reverse their relative positions in the future form. In the terms of a linguistic analysis, the root consonants would appear in one order in the underlying representation of the root (preserved in the present and past forms) but in the reverse order in the future form. The underlying order is preserved in all three forms for the first rule. I will refer to the first rule as the "favored" one, the second as the "disfavored" one. In the experiments testing the ease with which these two rules were learned, a set of thirty roots was again generated randomly. Each root consisted of three consonants limited to the set: {p, b, m, t, d, n, k, g}. As before, the networks were trained on 2/3 of the possible combinations of root and inflection (60 words in all) and tested on the remaining third (30 words). Separate networks were trained on the two rules. Mean results for 10 different networks for each rule are shown in Figure 4. While the disfavored rule is learned to some extent, there is a clear advantage for the favored over the disfavored rule with respect to generalization for root identification. Since the inflection is easily recognized by the pattern of consonants and vowels, the order of the second and third root consonants is irrelevant to inflection identification. Root identification, on the other hand, depends crucially on the sequence of consonants.
With the first rule, in fact, it is possible to completely ignore the CV templates and pay attention only to the root consonants in identifying the root. With the second rule, however, the only way to be sure which root is intended is to keep track of which sequences occur with which templates. With the two possible roots ftn and fnt, for example, there would be no way of knowing which root appeared in a form not encountered during training unless the combination of sequence and tense had somehow been attended to during training. In this case, the future of one root has the same sequence of consonants as the present and past of the other. Thus, to the extent that roots overlap with one another, root identification with the disfavored rule presents a harder task to a network. Given the relatively small set of consonants in these experiments, there is considerable overlap among the roots, and this is reflected in the poor generalization for the disfavored rule. Thus for this word recognition network, a rule which apparently could not occur in human language is somewhat more difficult than a comparable one which could.

Figure 4: Template Rules, Favored and Disfavored, Root Identification

Reduplication

We have yet to deal with reduplication. The parsing of an unfamiliar word involving reduplication apparently requires the ability to notice the similarity between the relevant portions of the word. For the networks we have considered so far, recognition of reduplication would seem to be a difficult, if not an impossible, task. Consider the case in which a network has just heard the sequence tamkam. At this point we would expect a human listener to be aware that the two syllables rhymed, that is, that they had the same vowel and final consonant (rime).
But at the point following the second m, the network does not have direct access to representations for the two subsequences to be compared. If it has been trained to identify sequences like tamkam, it will at this point have a representation of the entire sequence in its contextual short-term memory. However, this representation will not distinguish the two syllables, so it is hard to see how they might be compared. To test whether Version 1 of the model could handle reduplication, networks were trained to perform inflection identification only. The stimuli consisted of two-syllable words, where the initial consonant (the onset) of each syllable came from the set /p, b, f, v, m, t, d, s, z, n, k, g, x, 7, xj/, the vowel from the set /i, e, u, o, a/, and the final consonant, when there was one, from the set /n, s/. Separate networks were trained to turn on their single output unit, one when the onsets of the two syllables were the same, the other when the rimes were the same. The training set consisted of 200 words. In each case, half of the sequences satisfied the reduplication criterion. Results of the two experiments are shown in Figure 5 by the lines marked "Seq". Clearly these networks failed to learn this relatively simple reduplication task. While these experiments do not prove conclusively that a recurrent network, presented with words one segment at a time, cannot learn reduplication, it is obvious that this is a difficult task for these networks. In a sequential network, input sequences are realized as movements through state space. It appears, however, that recognition of reduplication requires the explicit comparison of static representations of the subsequences in question, e.g., for syllables in the case of syllable reduplication.
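The target regularity itself is simple to state symbolically. The sketch below is my own illustration (the helper names are hypothetical, and the syllable format is simplified to CV(C)); it spells out the onset/rime comparisons the networks must induce from distributed input:

```python
# Symbolic statement of the target regularity: split a CV(C) syllable into
# onset (initial consonant, if any) and rime (vowel plus any final consonant).
VOWELS = set("ieuoa")

def onset_rime(syllable):
    onset = "" if syllable[0] in VOWELS else syllable[0]
    return onset, syllable[len(onset):]

def same_onset(s1, s2):
    return onset_rime(s1)[0] == onset_rime(s2)[0]

def same_rime(s1, s2):
    return onset_rime(s1)[1] == onset_rime(s2)[1]

# tamkam: the two syllables rhyme (same rime, am) but differ in onset.
```

The sequential network has no such explicit access to the two subsequences, which is exactly what makes the task hard for it.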
If a simple recurrent network is trained to identify, that is, to distinguish, the syllables in a language, then the pattern appearing on the hidden layer following the presentation of a syllable must encode all of the segments in the syllable. It is, in effect, a summary of the sequence that is the syllable. It is a simple matter to train a network to distinguish all possible syllables in a language. We treat the syllables as separate words in a network like the ones we have been dealing with, but with no inflection module. A network of this type was trained to recognize all 165 possible syllables in the same artificial language used in the experiment with the sequential network. When presented to the network, each syllable sequence was followed by a boundary segment.

Figure 5: Reduplication Rules, Sequential and Feedforward Networks Trained with Distributed Syllables

The hidden-layer pattern appearing at the end of each syllable-plus-boundary sequence was then treated as a static representation of the syllable sequence for a second task. Previous work [Gasser, 1992] has shown that these representations embody the structure of the input sequences in ways which permit generalizations. In this case, the sort of generalization which interests us concerns the recognition of similarities between syllables with the same onsets or rimes. Pairs of these syllable representations, encoding the same syllables as those used to train the sequential network in the previous experiment, were used as inputs to two simple feedforward networks, one trained to respond if its two input syllables had the same onset, the other trained to respond if the two inputs had the same rime, that is, the same rules trained in the previous experiment.
Again the training set consisted of 200 pairs of syllables, the test set of 50 pairs in each case. Results of these experiments are shown in Figure 5 by the lines labeled "FF". Although performance is far from perfect, it is clear that these networks have made the appropriate generalization. This means that the syllable representations encode the structure of the syllables in a form which enables the relevant comparisons to be made. What I have said so far about reduplication, however, falls far short of an adequate account. First, there is the problem of how the network is to make use of static syllable representations in recognizing reduplication. That is, how is access to be maintained to the representation for the syllable which occurred two or more time steps back? For syllable representations to be compared directly, a portion of the network needs to run, in a sense, in syllable time. That is, rather than individual segments, the inputs to the relevant portion of the network need to be entire syllable representations. Combining this with the segment-level inputs that we have made use of in previous experiments gives a hierarchical architecture like that shown in Figure 6. In this network, word recognition, which takes place at the output level, can take as its input both segment and syllable sequences. The segment portion of the network, appearing on the left in the figure, is identical to what we have seen thus far. (Hidden-layer modularity is omitted from the figure to simplify it.) The syllable portion, on the right, runs on a different "clock" from the segment portion. In the segment portion activation is passed forward and error backward each time a new segment is presented to the network. In the syllable portion this happens each time a new syllable appears. (The different update clock is indicated by the dashed arrows in the figure.)
Just as the segment subnetwork begins with context-free segment representations, the syllable subnetwork takes as inputs context-free syllables. This is achieved by replacing the context (that is, the recurrent input to the SYLLABLE layer) by a boundary pattern at the beginning of each new syllable. There remains the question of how the network is to know when one syllable ends and another begins. Unfortunately this interesting topic is beyond the scope of this project.

Figure 6: MCNAM: Version 2

Conclusions

Can connectionist networks which are more than uninteresting implementations of symbolic models learn to generalize about morphological rules of different types? Much remains to be done before this question can be answered, but, for receptive morphology at least, the tentative answer is yes. In place of built-in knowledge, e.g., linguistic notions such as affix and tier and constraints such as the prohibition against association line crossing, we have processing and learning algorithms and particular architectural features, e.g., recurrent connections on the hidden layer and modular hidden layers. Some of the linguistic notions may prove unnecessary altogether. For example, there is no place or state in the current model which corresponds to the notion affix. Others may be realized very differently from the way in which they are envisioned in conventional models. An autosegment, for example, corresponds roughly to a region in hidden-layer space in MCNAM. But this is a region which took on this significance only in response to the set of phone sequences and morphological targets which the network was trained on. Language is a complex phenomenon. Connectionists have sometimes been guilty of imagining naively that simple, uniform networks would handle the whole spectrum of linguistic phenomena.
The tack adopted in this project has been to start simple and augment the model when this is called for. MCNAM in its present form is almost certain to fail as a general model of morphology acquisition and processing, but these early results indicate that it is on the right track. In any case, the model yields many detailed predictions concerning the difficulty of particular morphological rules for particular phonological systems, so an obvious next step is psycholinguistic experiments to test the model.

References

[Cottrell and Plunkett, 1991] Garrison W. Cottrell and Kim Plunkett. Learning the past tense in a recurrent network: Acquiring the mapping from meaning to sounds. Annual Conference of the Cognitive Science Society, 13:328-333, 1991.

[Elman, 1990] Jeffrey Elman. Finding structure in time. Cognitive Science, 14:179-211, 1990.

[Elman, 1991] Jeffrey L. Elman. Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning, 7:195-225, 1991.

[Gasser, 1992] Michael Gasser. Learning distributed syllable representations. Annual Conference of the Cognitive Science Society, 14:396-401, 1992.

[Gasser, 1993] Michael Gasser. Learning words in time: Towards a modular connectionist account of the acquisition of receptive morphology. Technical Report 384, Indiana University, Computer Science Department, Bloomington, 1993.

[Gasser, 1994] Michael Gasser. Modularity in a connectionist model of morphology acquisition. Proceedings of the International Conference on Computational Linguistics, 15, 1994.

[Marslen-Wilson and Tyler, 1980] William D. Marslen-Wilson and Lorraine K. Tyler. The temporal structure of spoken language understanding. Cognition, 8:1-71, 1980.

[Plunkett and Marchman, 1991] Kim Plunkett and Virginia Marchman. U-shaped learning and frequency effects in a multi-layered perceptron: Implications for child language acquisition. Cognition, 38:1-60, 1991.

[Port, 1990] Robert Port.
Representation and recognition of temporal patterns. Connection Science, 2:151-176, 1990.

[Rumelhart and McClelland, 1986] David E. Rumelhart and James L. McClelland. On learning the past tense of English verbs. In James L. McClelland and David E. Rumelhart, editors, Parallel Distributed Processing, Volume 2, pages 216-271. MIT Press, Cambridge, MA, 1986.

[Rumelhart et al., 1986] David E. Rumelhart, Geoffrey Hinton, and Ronald Williams. Learning internal representations by error propagation. In David E. Rumelhart and Jay L. McClelland, editors, Parallel Distributed Processing, Volume 1, pages 318-364. MIT Press, Cambridge, MA, 1986.
HIDDEN UNDERSTANDING MODELS OF NATURAL LANGUAGE

Scott Miller
College of Computer Science
Northeastern University
Boston, MA 02115
[email protected]

Robert Bobrow, Robert Ingria, Richard Schwartz
BBN Systems and Technologies
70 Fawcett St., Cambridge, MA 02138
rusty, ingria, [email protected]

Abstract

We describe and evaluate hidden understanding models, a statistical learning approach to natural language understanding. Given a string of words, hidden understanding models determine the most likely meaning for the string. We discuss 1) the problem of representing meaning in this framework, 2) the structure of the statistical model, 3) the process of training the model, and 4) the process of understanding using the model. Finally, we give experimental results, including results on an ARPA evaluation.

1 Introduction

Hidden understanding models are an innovative class of statistical mechanisms that, given a string of words, determine the most likely meaning for the string. The overall approach represents a substantial departure from traditional techniques by replacing hand-crafted grammars and rules with statistical models that are automatically learned from examples. Hidden understanding models were primarily motivated by techniques that have been extremely successful in speech recognition, especially hidden Markov models [Baum, 72]. Related techniques have previously been applied to the problem of identifying concept sequences within a sentence [Pieraccini et al., 91]. In addition, the approach contains elements of other natural language processing techniques including semantic grammars [Waltz, 78; Hendrix, 78], augmented transition networks (ATNs) [Woods, 70], probabilistic parsing [Fujisaki et al., 89; Chitrao and Grishman, 90; Seneff, 92], and automatic grammar induction [Pereira and Schabes, 92].
Hidden understanding models are capable of learning a variety of meaning representations, ranging from simple domain-specific representations to ones at a level of detail and sophistication comparable to current natural language systems. In fact, a hidden understanding model can be used to produce a representation with essentially the same information content as the semantic graph used by the Delphi system [Bobrow et al., 90], a general purpose NLP system which utilizes a modified Definite Clause Grammar formalism. This fact made it possible to interface a hidden understanding system to the discourse processing and database retrieval components of Delphi to produce a complete "end to end" system. This hybrid system participated in the 1993 ATIS natural language evaluation. Although the system was only four months old, the scores it achieved were quite respectable.

Because of differences between language understanding and speech recognition, significant changes are required in the hidden Markov model methodology. Unlike speech, where each phoneme results in a local sequence of spectra, the relation between the meaning of a sentence and the sequence of words is not a simple linear sequential model. Language is inherently nested, with subgroups of concepts within other concepts. A statistical system for understanding language must take this and other differences into account in its overall design. In principle, we have the following requirements for a hidden understanding system:

• A notational system for expressing meanings.
• A statistical model that is capable of representing meanings and the association between meanings and words.
• An automatic training program which, given pairs of meanings and word sequences, can estimate the parameters of a statistical model.
• An understanding program that can search the statistical model to find the most likely meaning given a word sequence.

Figure 1.
The Main Components of a Hidden Understanding System.

Below, we describe solutions for each of these requirements, and describe the relationship of these solutions to other work in stochastic grammars and probabilistic parsing. Finally, we will report on initial experiments with hidden understanding models.

2 Expressing Meanings

One of the key requirements for a hidden understanding model is that the meaning representation must be both precise and appropriate for automatic learning techniques. Specifically, we require a meaning representation that is:

• Expressive. It must be able to express meanings over the entire range of utterances that are likely to occur in an application.
• Annotatable. It must be possible to produce accurate annotations for a sufficiently large corpus with an acceptable level of human effort.
• Trainable. It must be possible to estimate the model parameters from a reasonable number of training examples.
• Tractable. There must be a computationally tractable algorithm capable of searching the meaning space.

In order to facilitate annotation of a training corpus, meaning expressions should be as simple as possible. Frame based representations, such as the example shown in figure 2, have the advantage that they are relatively simple to understand. A difficulty with this style of representation is that the frames do not align directly to the words of the sentences. In particular, a meaning frame contains few explicit clues as to how the words of a sentence imply the structural characteristics of the frame. Tree structured meaning representations, discussed in the next section, have the advantage that they can be fully aligned to the words of a sentence. The cost is that these tree structured representations are more detailed than their frame based counterparts, thereby requiring greater annotation effort. Fortunately, the techniques developed for tree structured representations can be extended to simpler frame representations as well.
SHOW:
  FLIGHTS:
    TIME:
      PART-OF-DAY: morning
    ORIGIN:
      CITY: Boston
    DEST:
      CITY: San Francisco
    DATE:
      DAY-OF-WEEK: Tuesday

Please show me morning flights from Boston to San Francisco on Tuesday.

Figure 2. A Frame Based Meaning Representation.

2.1 Tree Structured Meaning Representations

The central characteristic of a tree structured representation is that individual concepts appear as nodes in a tree, with component concepts appearing as nodes attached directly below them. For example, the concept of a flight in the ATIS domain has component concepts including airline, flight number, origin, and destination. These could then form part of the representation for the phrase: United flight 203 from Dallas to Atlanta. The use of a hierarchical representation is one characteristic that distinguishes hidden understanding models from earlier work in which meaning is represented by a linear sequence of concepts [Pieraccini et al., 91]. A requirement for tree structured representations is that the order of the component concepts must match the order of the words they correspond to. Thus, the representation of the phrase flight 203 to Atlanta from Dallas on United includes the same nodes as the earlier example, but in a different order. For both examples, however, the interpretation is identical. At the leaves of a meaning tree are the words of the sentence.

Figure 3. A Tree Structured Meaning Representation.

We distinguish between nodes that appear above other nodes, and those that appear directly above the words. These will be referred to as nonterminal nodes and terminal nodes respectively, forming two disjoint sets. No node has both words and other nodes appearing directly below it. Figure 3 shows an example of a typical meaning tree. In this example, the flight node represents the abstract concept of a flight, which is a structured entity that may contain an origin, a destination, and other component concepts.
Appearing directly above the word "flight" is a terminal node, which we call a flight indicator. This name is chosen to distinguish it from the flight node, and also because the word flight, in some sense, indicates the presence of a flight concept. Similarly, there are airline indicators, origin indicators, and destination indicators. One view of these tree structured representations is that they are parse trees produced according to a semantic grammar. In this view, the dominance relations of the grammar are predetermined by the annotation schema, while the precedence relations are learned from the training examples.

2.2 Alternative Tree Representations

Tree structured meaning expressions can range in complexity from simple special purpose sublanguage representations to the structural equivalent of detailed syntactic parse trees. The possibilities are limited only by two fundamental requirements: (1) semantic concepts must be hierarchically nested within a tree structure, and (2) the sets of terminal and nonterminal nodes must remain disjoint. Both of these requirements can be satisfied by trees possessing most of the structural characteristics of conventional syntactic parse trees. Since our objective is to model meaning, the nodes must still be labeled to reflect semantic categories. However, additional and augmented labels may be introduced to reflect syntactic categories as well. Representations of this form contain significantly more internal structure than specialized sublanguage models. This can be seen in the example in figure 4. The specialized sublanguage representation requires only seven nodes, while a full syntactically motivated analysis requires fifteen. The additional nodes are used to distinguish what is being shown to whom, to reflect the fact that the stopover phrase is part of a relative clause, and to determine the internal structure of the relative clause.
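To make the tree representations concrete, one can encode a meaning tree as nested tuples whose left-to-right leaves spell out the sentence. The encoding below is my own sketch, not the paper's data structure, and the node labels are only loosely based on the ATIS examples:

```python
# A nested-tuple encoding of a tree structured meaning: nonterminal nodes
# dominate other nodes; terminal nodes dominate only words.
meaning = ("flight",
           ("airline", ("airline-indicator", "United")),
           ("flight-indicator", "flight"),
           ("flight-number", "203"),
           ("origin", ("origin-indicator", "from"), ("city", "Dallas")),
           ("dest", ("dest-indicator", "to"), ("city", "Atlanta")))

def fringe(node):
    """Reading the leaves left to right recovers the original word string."""
    label, *children = node
    if all(isinstance(c, str) for c in children):   # terminal node: words below
        return list(children)
    return [word for child in children for word in fringe(child)]
```

The alignment requirement then amounts to saying that the fringe of the tree must equal the input word sequence.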
One interesting characteristic of these more elaborate trees is their similarity to those produced by classical, linguistically motivated, natural language systems. Thus, a hidden understanding model can serve to replace the part-of-speech tagger, parser, and semantic interpreter of a classical system. Instead of writing grammar and semantic interpretation rules by hand, the training program automatically constructs a statistical model from examples of meaning trees. Regardless of the details of the tree structure and labels, the components comprising a hidden understanding system remain unchanged. The only difference is in how the system is trained.

Figure 4. A Specialized Sublanguage Analysis and a Full Syntactic Analysis.

2.3 Frame Based Representations

One way to think of a frame based meaning is as a partially specified tree in which some words are not accounted for. Nevertheless, a frame representation is a complete meaning representation in the sense that it fully specifies the concepts and structure comprising the meaning. In terms of a tree structured representation, the set of nonterminal nodes is fully specified, while some of the terminal nodes may be omitted. The missing terminal nodes are said to be hidden, in the sense that every word is required to align to some terminal node, but the alignment is not necessarily given by the meaning frame. These hidden nodes must later be aligned as part of the training process. The general idea is to assign a small number of free terminal nodes (typically one or two) beneath every nonterminal node. These are then free to align to any unassigned words, provided that the overall tree structure is not violated. An EM algorithm (Estimate-Maximize) is used to organize the unassigned terminal nodes into classes that correspond to individual words and phrases, and that bind to particular abstract concepts. Figure 5 shows the complete meaning tree with hidden nodes corresponding to the frame in figure 2.
If we consider tree structured meaning expressions as parse trees which are generated according to some incompletely specified grammar, then the problem of aligning the hidden nodes can be considered as a grammar induction problem. In this way, the problem of aligning the hidden nodes given only a partially specified set of trees is analogous to the problem of fully parsing a training corpus given only a partial bracketing. The difference is that while a partial bracketing determines constituent boundaries that cannot be crossed, a partially specified tree determines structure that must be preserved.

3 The Statistical Model

One central characteristic of hidden understanding models is that they are generative. From this viewpoint, language is produced by a two component statistical process. The first component chooses the meaning to be expressed, effectively deciding "what to say". The second component selects word sequences to express that meaning, effectively deciding "how to say it". The first phase is referred to as the semantic language model, and can be thought of as a stochastic process that produces meaning expressions selected from a universe of meanings. The second phase is referred to as the lexical realization model, and can be thought of as a stochastic process that generates words once a meaning is given. By analogy with hidden Markov models, we refer to the combination of these two models as a hidden understanding model. The word "hidden" refers to the fact that only words can be observed. The internal states of each of the two models are unseen and must be inferred from the words. The problem of language understanding, then, is to recover the most likely meaning structure given a sequence of words. More formally, understanding a word sequence W is accomplished by searching among all possible meanings for some meaning M such that P(M | W) is maximized.
By Bayes Rule, P(M | W) can be rewritten as:

P(M | W) = P(W | M) P(M) / P(W)

Now, since P(W) does not depend on M, maximizing P(M | W) is equivalent to maximizing the product P(W | M) P(M). However, P(W | M) is simply our lexical realization model, and P(M) is simply our semantic language model. Thus, by searching a combination of these models it is possible to find the maximum likelihood meaning M given word sequence W. Considering the statistical model as a stochastic grammar, the problem of determining M given W is analogous to the problem of finding the most likely derivation for W according to that grammar.

Figure 5. A Tree Structure Corresponding to a Frame Representation.

3.1 Semantic Language Model

For tree structured meaning representations, individual nonterminal nodes determine particular abstract semantic concepts. In the semantic language model, each abstract concept corresponds to a probabilistic state transition network. All such networks are then combined into a single probabilistic recursive transition network, forming the entire semantic language model. The network corresponding to a particular abstract concept consists of states for each of its component concepts, together with two extra states that define the entry and exit points. Every component concept is fully connected to every other component concept, with additional paths leading from the entry state to each component concept, and from each component concept to the exit state. Figure 6 shows a sample network corresponding to the flight concept. Of course, there are many more flight component concepts in the ATIS domain than actually appear in this example. Associated with each arc is a probability value, in a similar fashion to the TINA system [Seneff, 92]. These probabilities have the form P(state_n | state_(n-1), context), which is the probability of taking a transition from one state to another within a particular context.
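As a toy illustration of such a concept network, here is a fragment of a flight network with invented transition probabilities (the numbers and the ENTRY/EXIT state names are mine, purely for illustration):

```python
# Invented transition probabilities P(state_n | state_(n-1), context=flight);
# every row of probabilities out of a state sums to one.
flight_net = {
    ("ENTRY", "origin"): 0.5, ("ENTRY", "airline"): 0.3, ("ENTRY", "dest"): 0.2,
    ("origin", "dest"): 0.7, ("origin", "airline"): 0.2, ("origin", "EXIT"): 0.1,
    ("dest", "airline"): 0.2, ("dest", "origin"): 0.2, ("dest", "EXIT"): 0.6,
    ("airline", "origin"): 0.5, ("airline", "dest"): 0.3, ("airline", "EXIT"): 0.2,
}

def path_prob(states):
    """Probability of one traversal of the flight network, ENTRY to EXIT."""
    path = ["ENTRY"] + states + ["EXIT"]
    p = 1.0
    for prev, nxt in zip(path, path[1:]):
        p *= flight_net.get((prev, nxt), 0.0)
    return p

# With these numbers, mentioning the origin before the destination is the
# more probable ordering, though both orders are possible.
```

In the full model these networks nest recursively: the component-concept states themselves expand into further networks.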
Thus, the arc from origin to dest has probability P(dest | origin, flight), meaning the probability of entering dest from origin within the context of the flight network. Presumably, this probability is relatively high, since people usually mention the destination of a flight directly after mentioning its origin. Conversely, P(origin | dest, flight) is probably low because people don't usually express concepts in that order. Thus, while all paths through the state space are possible, some have much higher probabilities than others. Within a concept network, component concept states exist for both nonterminal concepts, such as origin, as well as terminal concepts, such as flight indicator. Arrows pointing into nonterminal states indicate entries into other networks, while arrows pointing away indicate exits out of those networks. Terminal states correspond to networks as well, although these are determined by the lexical realization model and have a different internal structure. Thus, every meaning tree corresponds directly to some particular path through the state space. Figure 7 shows a meaning tree and its corresponding path through state space.

Viewed as a grammar, the semantic language model is expressed directly as a collection of networks rather than as a collection of production rules. These networks represent grammatical constraints in a somewhat different fashion than do grammars based on production rules. In this model, constituents may appear beneath nonterminal nodes in any arbitrary order, while preferences for some orderings are determined through the use of probabilities. By contrast, most grammars limit the ordering of constituents to an explicit set which is specified by the grammar rules. The approach taken in the TINA system eliminates many ordering constraints while retaining the local state transition constraints determined by its grammar.
We believe that an unconstrained ordering of constituents increases parsing robustness, while the preferences determined by the arc probabilities help minimize overgeneration.

3.2 Lexical Realization Model

Just as nonterminal tree nodes correspond to networks in the semantic language model, terminal nodes correspond to networks in the lexical realization model. The difference is that semantic language networks specify transition probabilities between states, while lexical realization networks specify transition probabilities between words.

Figure 6. A Partial Network Corresponding to the ATIS Flight Concept.

Lexical realization probabilities have the form P(word_n | word_(n-1), context), which is the probability of taking a transition from one word to another given a particular context. Thus, P(show | please, show-indicator) is the probability that the word show follows the word please within the context of a show indicator phrase. In addition, there are two pseudo-words, *begin* and *end*, which indicate the beginning and ending of phrases. Thus, we have probabilities such as P(please | *begin*, show-indicator), which is the probability that please is the first word of a show indicator phrase, and P(*end* | me, show-indicator), which is the probability of exiting a show indicator phrase given that the previous word was me.

4 The Understanding Component

As we have seen, understanding a word string W requires finding a meaning M such that the probability P(W | M) P(M) is maximized. Since the semantic language model and the lexical realization model are both probabilistic networks, P(W | M) P(M) is the probability of a particular path through the combined network. Thus, the problem of understanding is to find the highest probability path among all possible paths, where the probability of a path is the product of all the transition probabilities along that path.
The probability of a path is thus:

P(Path) = ∏_n { P(state_n | state_{n-1}, context)   if transition n is in the semantic language model
                P(word_n | word_{n-1}, context)     if transition n is in the lexical realization model }

Thus far, we have discussed the need to search among all meanings for one with a maximal probability. In fact, if it were necessary to search every path through the combined network individually, the algorithm would require exponential time with respect to sentence length. Fortunately, this can be drastically reduced by combining the probability computation of common subpaths through dynamic programming. In particular, because our meaning representation aligns to the words, the search can be efficiently performed using the well-known Viterbi [Viterbi, 67] algorithm. Since our underlying model is a recursive transition network, the states for the Viterbi search must be allocated dynamically as the search proceeds. In addition, it is necessary to prune very low probability paths in order to keep the computation tractable. We have developed an elegant algorithm that integrates state allocation, Viterbi search, and pruning all within a single traversal of a tree-like data structure. In this algorithm, each of the currently active states is represented as a node in a tree. New nodes are added to the tree as the computation pushes into new subnetworks that are not currently active. Stored at each node is the probability of the most likely path reaching that state, together with a backpointer sufficient to recreate the path later if needed. Whenever the probability of all states in a subtree falls below the threshold specified by the beam width, the entire subtree is pruned away.

5 The Training Component

In order to train the statistical model, we must estimate transition probabilities for the semantic language model and lexical realization model. In the case of fully specified meaning trees, each meaning tree can be straightforwardly converted into a path through state space.
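A minimal sketch of Viterbi search with beam pruning follows. It is flattened to a plain hidden Markov model rather than the recursive transition network with dynamically allocated states that the paper describes, and the states, words, and probabilities are invented for illustration.

```python
import math

def viterbi(observations, states, start, trans, emit, beam=1e-6):
    """Return the highest-probability state path for the observations.
    start[s], trans[s][t], and emit[s][w] are probabilities.  Paths whose
    probability falls below beam * best are pruned, analogous to the
    beam-width pruning described in the text."""
    # active: state -> (log probability of best path, that path)
    active = {}
    for s in states:
        p = start.get(s, 0.0) * emit[s].get(observations[0], 0.0)
        if p > 0.0:
            active[s] = (math.log(p), [s])
    for obs in observations[1:]:
        nxt = {}
        for s, (lp, path) in active.items():
            for t in states:
                p = trans[s].get(t, 0.0) * emit[t].get(obs, 0.0)
                if p > 0.0:
                    cand = lp + math.log(p)
                    if t not in nxt or cand > nxt[t][0]:
                        nxt[t] = (cand, path + [t])
        best = max(lp for lp, _ in nxt.values())
        # Beam pruning: drop states far below the best active path.
        active = {t: v for t, v in nxt.items() if v[0] >= best + math.log(beam)}
    return max(active.values(), key=lambda v: v[0])[1]

# Toy model: two semantic states emitting city names.
STATES = ["origin", "dest"]
START = {"origin": 0.9, "dest": 0.1}
TRANS = {"origin": {"dest": 0.9, "origin": 0.1},
         "dest": {"origin": 0.2, "dest": 0.8}}
EMIT = {"origin": {"boston": 0.5, "atlanta": 0.5},
        "dest": {"boston": 0.5, "atlanta": 0.5}}

path = viterbi(["boston", "atlanta"], STATES, START, TRANS, EMIT)
```

With these invented parameters the search prefers interpreting the first city as the origin and the second as the destination, since that transition carries most of the probability mass.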
Then, by counting occurrence and transition frequencies along those paths, it is possible to form simple estimates of the transition probabilities. Let C(state_m, context_t) denote the number of times state_m has occurred in context_t, and let C(state_n | state_m, context_t) denote the number of times that this condition has led to a transition to state state_n. Similarly, define counts C(word_m, context_t) and C(word_n | word_m, context_t). Then, a direct estimate of the probabilities is given by:

P(state_n | state_m, context) = C(state_n | state_m, context) / C(state_m, context)

and

P(word_n | word_m, context) = C(word_n | word_m, context) / C(word_m, context)

Figure 7. A Meaning Tree and its Corresponding Path Through State Space ("Show flights to Atlanta").

In order to obtain robust estimates, these simple estimates are smoothed with backed-off estimates [Good, 53], using techniques similar to those used in speech recognition [Katz, 87; Placeway et al., 93]. Thus, P(state_n | state_m, context) is smoothed with P(state_n | context), and P(word_n | word_m, context) is smoothed with P(word_n | context). Robustness is further increased through word classes. For example, Boston and San Francisco are both members of the class of cities. In the case of frame based representations, it is not always possible to construct an exact path through the state space corresponding to a meaning representation. Nevertheless, since frames are treated as partially specified trees, most of the path can be reconstructed, with some portions undetermined. Then, the partial path can be used to constrain a gradient descent search, called the forward-backward algorithm [Baum, 72], for estimating the model parameters. This algorithm is an iterative procedure for adjusting the model parameters so as to increase the likelihood of generating the training data, and is an instance of the well-known class called EM (Estimate-Maximize) algorithms.
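The count-based estimates and their backed-off smoothing can be sketched as follows. The interpolation weight lam and the toy training paths are assumptions for illustration; the paper's actual smoothing follows the cited speech-recognition techniques rather than this simple interpolation.

```python
from collections import defaultdict

def train(paths):
    """Count bigram transitions and source-state occurrences along a
    list of annotated state paths (all within one context)."""
    big = defaultdict(lambda: defaultdict(int))
    uni = defaultdict(int)
    for path in paths:
        for prev, cur in zip(path, path[1:]):
            big[prev][cur] += 1
            uni[prev] += 1
    return big, uni

def prob(nxt, prev, big, uni, lam=0.9):
    """P(nxt | prev): the direct relative-frequency estimate, smoothed
    by interpolation with a backed-off context-only estimate P(nxt).
    The weight lam is an illustrative assumption."""
    direct = big[prev][nxt] / uni[prev] if uni[prev] else 0.0
    total = sum(uni.values())
    backoff = sum(big[p][nxt] for p in list(big)) / total if total else 0.0
    return lam * direct + (1 - lam) * backoff

big, uni = train([["flight-indicator", "origin", "dest"],
                  ["flight-indicator", "origin", "dest"],
                  ["flight-indicator", "dest"]])
```

Here the direct estimate P(dest | origin) is 2/2 = 1.0, and the backed-off estimate P(dest) is 3/5 = 0.6, so the smoothed value is 0.9 * 1.0 + 0.1 * 0.6 = 0.96.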
6 Experimental Results

We have implemented a hidden understanding system and performed a variety of experiments. In addition, we participated in the 1993 ARPA ATIS NL evaluation. One experiment involved a 1000 sentence ATIS corpus, annotated according to a simple specialized sublanguage model. The annotation effort was split between two annotators, one of whom was a system developer, while the other was not. To annotate the training data, we used a bootstrapping process in which only the first 100 sentences were annotated strictly by hand. Thereafter, we worked in cycles of:
1. Running the training program using all available annotated data.
2. Running the understanding component to annotate new sentences.
3. Hand correcting the new annotations.
Annotating in this way, we found that a single annotator could produce 200 sentences per day. We then extracted the first 100 sentences as a test set, and trained the system on the remaining 900 sentences. The results were as follows:
• 61% matched exactly.
• 21% had correct meanings, but did not match exactly.
• 28% had the wrong meaning.
Another experiment involved a 6000 sentence ATIS corpus, annotated according to a more sophisticated meaning model. In this experiment, the Delphi system automatically produced the annotation by printing out its own internal representation for each sentence, converted into a more readable form. In order to maintain high quality annotations, we used only sentences for which Delphi produced a complete parse, and for which it also retrieved a correct answer from the database. We then removed 300 sentences as a test set, and trained the system on the remaining 5700. The results were as follows:
• 85% matched exactly.
• 8% had correct meanings, but did not match exactly.
• 7% had the wrong meaning.
For the ARPA evaluation, we coupled our hidden understanding system to the discourse and backend components of the Delphi system.
Using the entire 6000 sentence corpus described above as training data, the system produced a score of 26% simple error on the ATIS NL evaluation. By examining the errors, we have reached the conclusion that nearly half are due to simple programming issues, especially in the interface between Delphi and the hidden understanding system. In fact, the interface was still incomplete at the time of the evaluation. We have just begun a series of experiments using frame based annotations, and are continuing to refine our techniques. In a preliminary test involving a small corpus of 588 ATIS sentences, the system correctly aligned the hidden states for over 95% of the sentences in the corpus.

7 Limitations

Several limitations to our current approach are worth noting. In a small number of cases, linguistic movement phenomena make it difficult to align the words of a sentence to any tree structured meaning expression without introducing crossings. In most cases, we have been able to work around this problem by introducing minor changes in our annotation such that the tree structure is maintained. A second limitation, due to the local nature of the model, is an inability to handle nonlocal phenomena such as coreference. Finally, in some cases the meaning of a sentence depends strongly upon the discourse state, which is beyond the scope of the current model.

8 Conclusions

We have demonstrated the possibility of automatically learning semantic representations directly from a training corpus through the application of statistical techniques. Empirical results, including the results of an ARPA evaluation, indicate that these techniques are capable of relatively high levels of performance. While hidden understanding models are based primarily on the concepts of hidden Markov models, we have also shown their relationship to other work in stochastic grammars and probabilistic parsing. Finally, we have noted some limitations to our current approach.
We view each of these limitations as opportunities for further research and exploration.

Acknowledgments

The work reported here was supported in part by the Defense Advanced Research Projects Agency under ARPA Contract No. N00014-92-C-0035. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States Government.

References

1. L. E. Baum, "An Inequality and Associated Maximization Technique in Statistical Estimation of Probabilistic Functions of Markov Processes," Inequalities 3:1-8, 1972
2. R. Bobrow, R. Ingria, and D. Stallard, "Syntactic and Semantic Knowledge in the DELPHI Unification Grammar," Proceedings, Speech and Natural Language Workshop, pp. 230-236, June 1990
3. M. Chitrao and R. Grishman, "Statistical Parsing of Messages," Proceedings, Speech and Natural Language Workshop, pp. 263-276, Morgan Kaufmann Publishers, June 1990
4. T. Fujisaki, F. Jelinek, J. Cocke, E. Black, and T. Nishino, "A Probabilistic Parsing Method for Sentence Disambiguation," International Parsing Workshop, pp. 85-90, 1989
5. I. J. Good, "The Population Frequencies of Species and the Estimation of Population Parameters," Biometrika 40, pp. 237-264, 1953
6. G. G. Hendrix, "Semantic Aspects of Translation," Understanding Spoken Language, pp. 193-226, New York, Elsevier, 1978
7. S. Katz, "Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer," IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. ASSP-35, pp. 400-401, 1987
8. F. Pereira and Y. Schabes, "Inside-Outside Reestimation from Partially Bracketed Corpora," Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics, pp. 128-135, Newark, Delaware, 1992
9. R. Pieraccini, E. Levin, and C. Lee, "Stochastic Representation of Conceptual Structure in the ATIS Task," DARPA Speech and Natural Language Workshop, pp. 121-124, Feb. 1991
10. P. Placeway, R. Schwartz, P. Fung, and L. Nguyen, "The Estimation of Powerful Language Models from Small and Large Corpora," IEEE ICASSP, II:33-36
11. S. Seneff, "TINA: A Natural Language System for Spoken Language Applications," Computational Linguistics Vol. 18, Number 1, pp. 61-86, March 1992
12. A. J. Viterbi, "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm," IEEE Transactions on Information Theory IT-13(2):260-269, April 1967
13. D. L. Waltz, "An English Language Question Answering System for a Large Relational Database," Communications of the ACM 21(7):526-539, 1978
14. W. A. Woods, "Transition Network Grammars for Natural Language Analysis," Communications of the ACM 13(10):591-606, 1970
1994
RELATING COMPLEXITY TO PRACTICAL PERFORMANCE IN PARSING WITH WIDE-COVERAGE UNIFICATION GRAMMARS

John Carroll
University of Cambridge, Computer Laboratory
Pembroke Street, Cambridge CB2 3QG, UK
[email protected]

Abstract

The paper demonstrates that exponential complexities with respect to grammar size and input length have little impact on the performance of three unification-based parsing algorithms, using a wide-coverage grammar. The results imply that the study and optimisation of unification-based parsing must rely on empirical data until complexity theory can more accurately predict the practical behaviour of such parsers¹.

1. INTRODUCTION

General-purpose natural language (NL) analysis systems have recently started to use declarative unification-based sentence grammar formalisms; systems of this type include SRI's CLARE system (Alshawi et al., 1992) and the Alvey NL Tools (ANLT; Briscoe et al., 1987a). Using a declarative formalism helps ease the task of developing and maintaining the grammar (Kaplan, 1987). In addition to syntactic processing, the systems incorporate lexical, morphological, and semantic processing, and have been applied successfully to the analysis of naturally-occurring texts (e.g. Alshawi et al., 1992; Briscoe & Carroll, 1993).

Evaluations of the grammars in these particular systems have shown them to have wide coverage (Alshawi et al., 1992; Taylor, Grover & Briscoe, 1989)². However, although the practical throughput of parsers with such realistic grammars is important, for example when processing large amounts of text or in interactive applications, there is little published research that compares the performance of different parsing algorithms using wide-coverage unification-based grammars. Previous comparisons have either focussed on context-free (CF) or augmented CF parsing (Tomita, 1987; Billot & Lang, 1989), or have used relatively small, limited-coverage unification grammars and lexicons (Shann, 1989; Bouma & van Noord, 1993; Maxwell & Kaplan, 1993). It is not clear that these results scale up to reflect accurately the behaviour of parsers using realistic, complex unification-based grammars: in particular, with grammars admitting less ambiguity parse time will tend to increase more slowly with increasing input length, and also with smaller grammars rule application can be constrained tightly with relatively simple predictive techniques. Also, since none of these studies relate observed performance to that of other comparable parsing systems, implementational oversights may not be apparent and so be a confounding factor in any general conclusions made.

¹This research was supported by SERC/DTI project 4/1/1261 'Extensions to the Alvey Natural Language Tools' and by EC ESPRIT BRA-7315 'ACQUILEX-II'. I am grateful to Ted Briscoe for comments on an earlier version of this paper, to David Weir for valuable discussions, and to Hiyan Alshawi for assistance with the CLARE system.
²For example, Taylor et al. demonstrate that the ANLT grammar is in principle able to analyse 96.8% of a corpus of 10,000 noun phrases taken from a variety of corpora.

Other research directed towards improving the throughput of unification-based parsing systems has been concerned with the unification operation itself, which can consume up to 90% of parse time (e.g. Tomabechi, 1991) in systems using lexicalist grammar formalisms (e.g. HPSG; Pollard & Sag, 1987). However, parsing algorithms assume more importance for grammars having more substantial phrase structure components, such as CLARE (which although employing some HPSG-like analyses still contains several tens of rules) and the ANLT (which uses a formalism derived from GPSG; Gazdar et al., 1985), since the more specific rule set can be used to control which unifications are performed.
In NL analysis, the syntactic information associated with lexical items makes top-down parsing less attractive than bottom-up (e.g. CKY; Kasami, 1965; Younger, 1967), although the latter is often augmented with top-down prediction to improve performance (e.g. Earley, 1970; Lang, 1974; Pratt, 1975). Section 2 describes three unification-based parsers which are related to polynomial-complexity bottom-up CF parsing algorithms. Although incorporating unification increases their complexity to exponential on grammar size and input length (section 3), this appears to have little impact on practical performance (section 4). Sections 5 and 6 discuss these findings and present conclusions.

2. THE PARSERS

The three parsers in this study are: a bottom-up left-corner parser, a (non-deterministic) LR parser, and an LR-like parser based on an algorithm devised by Schabes (1991). All three parsers accept grammars written in the ANLT formalism (Briscoe et al., 1987a), and the first two are distributed as part of the ANLT package. The parsers create parse forests (Tomita, 1987) that incorporate subtree sharing (in which identical sub-analyses are shared between differing superordinate analyses) and node packing (where sub-analyses covering the same portion of input whose root categories are in a subsumption relationship are merged into a single node).

THE BOTTOM-UP LEFT-CORNER PARSER

The bottom-up left-corner (BU-LC) parser operates left-to-right and breadth-first, storing partial (active) constituents in a chart; Carroll (1993) gives a full description.
Although pure bottom-up parsing is not usually thought of as providing high performance, the actual implementation achieves very good throughput (see section 4) due to a number of significant optimisations, amongst which are:
• Efficient rule invocation from cheap (static) rule indexing, using discrimination trees keyed on the feature values in each rule's first daughter to interleave rule access with unification and also to share unification results across groups of rules.
• Dynamic indexing of partial and complete constituents on category types to avoid attempting unification or subsumption operations which static analysis shows will always fail.
• Dynamic storage minimisation, deferring structure copying (e.g. required by the unification operation or by constituent creation) until absolutely necessary (e.g. unification success or parse success, respectively).
The optimisations improve throughput by a factor of more than three.

THE NON-DETERMINISTIC LR PARSER

Briscoe & Carroll (1993) describe a methodology for constructing an LR parser for a unification-based grammar, in which a CF 'backbone' grammar is automatically constructed from the unification grammar, a parse table is constructed from the backbone grammar, and a parser is driven by the table and further controlled by unification of the 'residue' of features in the unification grammar that are not encoded in the backbone. In this parser, the LALR(1) technique (Aho, Sethi & Ullman, 1986) is used, in conjunction with a graph-structured stack (Tomita, 1987), adapting for unification-based parsing Kipps' (1989) Tomita-like recogniser that achieves polynomial complexity on input length through caching. On each reduction the parser performs the unifications specified by the unification grammar version of the CF backbone rule being applied. This constitutes an on-line parsing algorithm.
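One of the BU-LC optimisations listed above, static indexing of rules on their first daughter so that only plausibly matching rules are ever tried, can be sketched with atomic category symbols. The real parser keys discrimination trees on feature values; the toy grammar below is invented for illustration.

```python
from collections import defaultdict

# An invented toy grammar: (mother, daughters) pairs.
RULES = [
    ("S", ["NP", "VP"]),
    ("NP", ["Det", "N"]),
    ("VP", ["V", "NP"]),
    ("VP", ["V"]),
]

# Static index built once, before parsing: rules keyed on the category
# of their first daughter.
INDEX = defaultdict(list)
for mother, daughters in RULES:
    INDEX[daughters[0]].append((mother, daughters))

def rules_invoked_by(category):
    """When a constituent of this category is completed, only rules whose
    first daughter can match it are attempted; all others are skipped
    without any unification being tried."""
    return INDEX[category]

# Completing an NP triggers only the S rule, not the NP or VP rules.
assert [m for m, _ in rules_invoked_by("NP")] == ["S"]
```

The effect is the one described in the text: unification is never even attempted against rules that static analysis shows cannot apply.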
In the general case, the off-line variant (in which all unifications are deferred until the complete CF parse forest has been constructed) is not guaranteed to terminate; indeed, it usually does not do so with the ANLT grammar. However, a drawback to the on-line algorithm is that a variant of Kipps' caching cannot be used, since the cache must necessarily assume that all reductions at a given vertex with all rules with the same number of daughters build exactly the same constituent every time; in general this is not the case when the daughters are unification categories. A weaker kind of cache on partial analyses (and thus unification results) was found to be necessary in the implementation, though, to avoid duplication of unifications; this sped the parser up by a factor of about three, at little space cost.

THE COMPILED-EARLEY PARSER

The Compiled-Earley (CE) parser is based on a predictive chart-based CF parsing algorithm devised by Schabes (1991) which is driven by a table compiling out the predictive component of Earley's (1970) parser. The size of the table is related linearly to the size of the grammar (unlike the LR technique). Schabes demonstrates that this parser always takes fewer steps than Earley's, although its time complexity is the same: O(n³). The space complexity is also cubic, since the parser uses Earley's representation of parse forests. The incorporation of unification into the CE parser follows the methodology developed for unification-based LR parsing described in the previous section: a table is computed from a CF 'backbone', and a parser, augmented with on-line unification and feature-based subsumption operations, is driven by the table. To allow meaningful comparison with the LR parser, the CE parser uses a one-word lookahead version of the table, constructed using a modified LALR technique (Carroll, 1993)³.
To achieve the cubic time bound, the parser must be able to retrieve in unit time all items in the chart having a given state, and start and end position in the input string. However, the obvious array implementation, for say a ten word sentence with the ANLT grammar, would contain almost 500000 elements. For this reason, the implementation employs a sparse representation for the array, since only a small proportion of the elements are ever filled. In this parser, the same sort of duplication of unifications occurs as in the LR parser, so lists of partial analyses are cached in the same way.

3. COMPLEXITIES OF THE PARSERS

The two variables that determine a parser's computational complexity are the grammar and the input string (Barton, Berwick & Ristad, 1987). These are considered separately in the next two sections.

GRAMMAR-DEPENDENT COMPLEXITY

The term dependent on the grammar in the time complexity of the BU-LC unification-based parser described above is O(|C|²|R|³), where |C| is the number of categories implicit in the grammar, and |R| the number of rules. The space complexity is dominated by the size of the parse forest, O(|C|) (these results are proved by Carroll, 1993). For the ANLT grammar, in which features are nested to a maximum depth of two, |C| is finite but nevertheless extremely large (Briscoe et al., 1987b)⁴. The grammar-dependent complexity of the LR parser makes it also appear intractable: Johnson (1989) shows that the number of LR(0) states for certain (pathological) grammars is exponentially related to the size of the grammar, and that there are some inputs which force an LR parser to visit all of these states in the course of a parse.
³Schabes describes a table with no lookahead; the successful application of this technique supports Schabes' (1991:109) assertion that "several other methods (such as LR(k)-like and SLR(k)-like) can also be used for constructing the parsing tables [...]"
⁴Barton, Berwick & Ristad (1987:221) calculate that GPSG, also with a maximum nesting depth of two, licences more than 10⁷⁷⁵ distinct syntactic categories. The number of categories is actually infinite in grammars that use a fully recursive feature system.

Thus the total number of operations performed, and also space consumed (by the vertices in the graph-structured stack), is an exponential function of the size of the grammar. To avoid this complexity, the CE parser employs a table construction method which ensures that the number of states in the parse table is linearly related to the size of the grammar, resulting in the number of operations performed by the parser being at worst a polynomial function of grammar size.

INPUT-DEPENDENT COMPLEXITY

Although the complexity of returning all parses for a string is always related exponentially to its length (since the number of parses is exponential, and they must all at least be enumerated), the complexity of a parser is usually measured for the computation of a parse forest (unless extracting a single analysis from the forest is worse than linear)⁵.

If one of the features of the ANLT grammar formalism, the Kleene operator (allowing indefinite repetition of rule daughters), is disallowed, then the complexity of the BU-LC parser with respect to the length of the input string is O(n^(p+1)), where p is the maximum number of daughters in a rule (Carroll, 1993). The inclusion of the operator increases the complexity to exponential. To retain the polynomial time bound, new rules can be introduced to produce recursive tree structures instead of an iterated flat tree structure.
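The transformation just described, replacing a Kleene daughter with rules that build a recursive structure, might be sketched as follows. The rule notation and the fresh 'X-list' category name are assumptions for illustration; the ANLT formalism itself is richer than atomic symbols.

```python
def eliminate_kleene(mother, daughters):
    """daughters may contain entries of the form 'X*' (indefinitely
    repeated X).  Returns an equivalent set of Kleene-free rules that
    build the repetition as a recursive 'X-list' structure, so no rule
    has unbounded arity."""
    rules = []
    flat = []
    for d in daughters:
        if d.endswith("*"):
            base = d[:-1]
            lst = base + "-list"          # fresh recursive category
            rules.append((lst, []))       # X-list -> (empty)
            rules.append((lst, [base, lst]))  # X-list -> X X-list
            flat.append(lst)
        else:
            flat.append(d)
    rules.append((mother, flat))
    return rules

# An NP rule with indefinitely many adjectives between Det and N:
rules = eliminate_kleene("NP", ["Det", "Adj*", "N"])
```

The resulting grammar derives the same strings, but with a right-branching recursive tree in place of the iterated flat one, which is exactly the trade-off the text notes: the polynomial bound is restored at the cost of extra rule invocations and structure building.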
However, when this technique is applied to the ANLT grammar the increased overheads in rule invocation and structure building actually slow the parser down.

Although the time and space complexities of CF versions of the LR and CE parsers are O(n³), the unification versions of these parsers both turn out to have time bounds that are greater than cubic, in the general case. The CF versions implicitly pack identical sequences of sub-analyses, and in all reductions at a given point with rules with the same number of daughters, the packed sequences can be formed into higher-level constituents as they stand without further processing. However, in the unification versions, on each reduce action the daughters of the rule involved have to be unified with every possible alternative sequence of the sub-analyses that are being consumed by the rule (in effect expanding and flattening out the packed sequences), leading to a bound of n^(p+1) on the total number of unifications.

⁵This complexity measure does correspond to real world usage of a parser, since practical systems can usually afford to extract only a small number of parses from the frequently very large number encoded in a forest; this is often done on the basis of preference-based or probabilistic factors (e.g. Carroll & Briscoe, 1992).

4. PRACTICAL RESULTS

To assess the practical performance of the three unification-based parsers described above, a series of experiments were conducted using the ANLT grammar (Grover, Carroll & Briscoe, 1993), a wide-coverage grammar of English. The grammar is defined in a metagrammatical formalism which is compiled into a unification-based 'object grammar', a syntactic variant of the Definite Clause Grammar formalism (Pereira & Warren, 1980), containing 84 features and 782 phrase structure rules. Parsing uses fixed-arity term unification.
The grammar provides full coverage of the following constructions: declarative sentences, imperatives and questions (yes/no, tag and wh-questions); all unbounded dependency types (topicalisation, relativisation, wh-questions); a relatively exhaustive treatment of verb and adjective complement types; phrasal and prepositional verbs of many complement types; passivisation; verb phrase extraposition; sentence and verb phrase modification; noun phrase complements and pre- and post-modification; partitives; coordination of all major category types; and nominal and adjectival comparatives.

Although the grammar is linked to a lexicon containing definitions for 40000 base forms of words, the experiments draw on a much smaller lexicon of 600 words (consisting of closed class vocabulary and, for open-class vocabulary, definitions of just a sample of words which taken together exhibit the full range of possible complementation patterns), since issues of lexical coverage are of no concern here.

COMPARING THE PARSERS

In the first experiment, the ANLT grammar was loaded and a set of sentences was input to each of the three parsers. In order to provide an independent basis for comparison, the same sentences were also input to the SRI Core Language Engine (CLE) parser (Moore & Alshawi, 1992) with the CLARE2.5 grammar (Alshawi et al., 1992), a state-of-the-art system accessible to the author. The sentences were taken from an initial sample of 175 representative sentences extracted from a corpus of approximately 1500 that form part of the ANLT package. This corpus, implicitly defining the types of construction the grammar is intended to cover, was written by the linguist who developed the ANLT grammar and is used to check for any adverse effects on coverage when the grammar is modified during grammar development.
Parser   Grammar     CPU time   Storage allocated
BU-LC    ANLT        75.5       47.0
LR       ANLT        48.9       33.6
CE       ANLT        98.4       38.5
CLE      CLARE2.5    277.7

Table 1: Parse times (in CPU seconds on a Sun Sparc ELC workstation) and storage allocated (in megabytes) while parsing the 129 test sentences (1-12 words in length).

Of the initial 175 sentences, the CLARE2.5 grammar failed to parse 42 (in several cases because punctuation is strictly required but is missing from the corpus). The ANLT grammar also failed to parse three of these, plus an additional four. These sentences were removed from the sample, leaving 129 (mean length 6.7 words) of which 47 were declarative sentences, 38 wh-questions and other sentences with gaps, 20 passives, and 24 sentences containing co-ordination.

Table 1 shows the total parse times and storage allocated for the BU-LC parser, the LR parser, and the CE parser, all with the ANLT grammar and lexicon. All three parsers have been implemented by the author to a similar high standard: similar implementation techniques are used in all the parsers, the parsers share the same unification module, run in the same Lisp environment, have been compiled with the same optimisation settings, and have all been profiled with the same tools and hand-optimised to a similar extent. (Thus any difference in performance of more than around 15% is likely to stem from algorithmic rather than implementational reasons). Both of the predictive parsers employ one symbol of lookahead, incorporated into the parsing tables by the LALR technique. Table 1 also shows the results for the CLE parser with the CLARE2.5 grammar and lexicon. The figures include garbage collection time, and phrasal (where appropriate) processing, but not parse forest unpacking. Both grammars give a total of around 280 analyses at a similar level of detail.

The results show that the LR parser is approximately 35% faster than the BU-LC parser, and allocates about 30% less storage.
The magnitude of the speed-up is less than might be expected, given the enthusiastic advocation of non-deterministic CF LR parsing for NL by some researchers (e.g. Tomita, 1987; Wright, Wrigley & Sharman, 1991), and in the light of improvements observed for predictive over pure bottom-up parsing (e.g. Moore & Dowding, 1991). However, on the assumption that incorrect prediction of gaps is the main avoidable source of performance degradation (c.f. Moore & Dowding), further investigation shows that the speed-up is near the maximum that is possible with the ANLT grammar (around 50%).

The throughput of the CE parser is half that of the LR parser, and also less than that of the BU-LC parser. However, it is intermediate between the two in terms of storage allocated. Part of the difference in performance between it and the LR parser is due to the fact that it performs around 15% more unifications. This might be expected since the corresponding finite state automaton is not determinised (to avoid theoretical exponential time complexity on grammar size), thus paying a price at run time. Additional reasons for the relatively poor performance of the CE parser are the overheads involved in maintaining a sparse representation of the chart, and the fact that with the ANLT grammar it generates less "densely packed" parse forests, since its parse table, with 14% more states (though fewer actions) than the LALR(1) table, encodes more contextual distinctions (Billot & Lang, 1989:146).

Given that the ANLT and CLARE2.5 grammars have broadly similar (wide) coverage and return very similar numbers of syntactic analyses for the same inputs, the significantly better throughput of the three parsers described in this paper over the CLE parser⁶ indicates that they do not contain any significant implementational deficiencies which would bias the results⁷.
SWAPPING THE GRAMMARS OVER

A second experiment was carried out with the CLE parser, in which the built-in grammar and lexicon were replaced by versions of the ANLT object grammar and lexical entries translated (automatically) into the CLE formalism. (The reverse of this configuration, in which the CLARE2.5 grammar is translated into the ANLT formalism, is not possible since some central rules contain sequences of daughters specified by a single 'list' variable, which has no counterpart in the ANLT and cannot directly be simulated). The throughput of this configuration was only one fiftieth of that of the BU-LC parser. The ANLT grammar contains more than five times as many rules as does the sentence-level portion of the CLARE2.5 grammar, and Alshawi (personal communication) points out that the CLE parser had not previously been run with a grammar containing such a large number of rules, in contrast to the ANLT parsers.

⁶Although the ANLT parser is implemented in Common Lisp and the CLE parser in Prolog, comparing parse times is a valid exercise since current compiler and run-time support technologies for both languages are quite well-developed, and in fact the CLE parser takes advantage of Prolog's built-in unification operation which will have been very tightly coded.
⁷The ANLT's speed advantage over CLARE is less pronounced if the time for morphological analysis and creation of logical forms is taken into account, probably because the systems use different processing techniques in these modules.

THE EFFECT OF SENTENCE LENGTH

Although the mean sentence length in the first two experiments is much shorter than the 20-30 word length (depending on genre etc.) that is common in real texts, the test sentences cover a wide range of syntactic constructions and exhibit less constructional bias than would a set of sentences extracted at random from a single corpus.
However, to investigate performance on longer sentences and the relationship between sentence length and parse time, a further set of 100 sentences with lengths distributed uniformly between 13 and 30 words was created by hand by the author and added to the previous test data. Table 2 shows the relationship between sentence length and mean parse time with the BU-LC and LR parsers.

In contrast to the results from the first experiment, the throughput of the LR parser is only 4% better than that of the BU-LC parser for sentences of 13-27 words in length. The former parses many sentences up to twice as fast, but a small proportion of the others are parsed almost twice as slowly. As well as their wide variability with respect to the BU-LC parser, the absolute variability of the LR parse times is high (reflected in large standard deviations--σ--see Table 2). Most of the sentences for which LR performance is worse contain more than one occurrence of the passive construction: due to their length this is particularly the case for the group of sentences of 28-30 words with which the LR parser performed particularly badly. However, it is likely that if the constraining power of the parse table were improved in this area the difference in throughput between LR and BU-LC would revert to nearer the 35% figure seen in the first experiment.

The standard deviations for numbers of parses are also relatively large. The maximum number of parses was 2736 for one 29-word sentence, but on the other hand some of even the longest sentences had fewer than ten parses. (But note that since the time taken for parse forest unpacking is not included in parse times, the latter do not vary by such a large magnitude). The results of this experiment are displayed graphically in Figure 1, together with a quadratic function.
Comparison with the function suggests that, at least for the BU-LC parser, parse time is related roughly quadratically to input length.

    Sentence length   BU-LC parse time   LR parse time    Number of parses
    (words)           Mean     σ         Mean     σ       Mean      σ
    1-3               0.11     0.06      0.05     0.02    1.3       0.7
    4-6               0.23     0.18      0.15     0.11    1.4       0.8
    7-9               0.42     0.24      0.28     0.17    1.8       1.3
    10-12             1.17     0.92      0.76     0.52    3.8       2.4
    13-15             0.97     0.28      0.86     0.38    10.0      13.7
    16-18             1.92     0.75      1.89     1.00    14.3      17.5
    19-21             3.54     1.42      3.74     2.46    60.1      117.3
    22-24             3.87     1.62      3.61     3.07    143.8     200.1
    25-27             5.45     1.98      5.05     3.59    168.8     303.1
    28-30             7.86     2.37      12.89    5.65    343.5     693.7

    Table 2: Mean and standard deviation parse times (in CPU seconds on an HP9000/710 workstation), and numbers of parses for the 229 test sentences (1-30 words in length) with the BU-LC and LR parsers.

In previous work with the ANLT (Briscoe & Carroll, 1993), throughput with raw corpus data was worse than that observed in these experiments, though probably only by a constant factor. This could be due to the fact that the vocabulary of the corpus concerned exhibits significantly higher lexical ambiguity; however, for sentences taken from a specific corpus, constructional bias observed in a training phase could be exploited to improve performance (e.g. Samuelsson & Rayner, 1991).

5. DISCUSSION

All three of the parsers have theoretical worst-case complexities that are either exponential, or polynomial on grammar size but with an extremely large multiplier. Despite this, in the practical experiments reported in the previous section the parsers achieve relatively good throughput with a general-purpose wide-coverage grammar of a natural language. It therefore seems likely that grammars of the type considered in this paper (i.e. with relatively detailed phrase structure components, but comparatively simple from a unification perspective), although realistic, do not bring the parsing algorithms involved anywhere near the worst-case complexity.
In the experiments, the CE technique results in a parser with worse performance than the normal LR technique. Indeed, for the ANLT grammar, the number of states--the term that the CE technique reduces from exponential to linear on the grammar size--is actually smaller in the standard LALR(1) table. This suggests that, when considering the complexity of parsers, the issue of parse table size is of minor importance for realistic NL grammars (as long as an implementation represents the table compactly), and that improvements to complexity results with respect to grammar size, although interesting from a theoretical standpoint, may have little practical relevance for the processing of natural language.

Although Schabes (1991:107) claims that the problem of exponential grammar complexity "is particularly acute for natural language processing since in this context the input length is typically small (10-20 words) and the grammar size very large (hundreds or thousands of rules and symbols)", the experiments indicate that, with a wide-coverage NL grammar, inputs of this length can be parsed quite quickly; however, longer inputs (of more than about 30 words in length)--which occur relatively frequently in written text--are a problem. Unless grammar size takes on proportionately much more significance for such longer inputs, which seems implausible, it appears that in fact the major problems do not lie in the area of grammar size, but in input length.

All three parsers have worst-case complexities that are exponential on input length. This theoretical bound might suggest that parsing performance would be severely degraded on long sentences; however, the relationship between length of sentence and parse time with the ANLT grammar and the sentences tested appears to be approximately only quadratic.
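The roughly quadratic relationship can be sanity-checked against the Table 2 means. The sketch below is our own illustration: it takes the midpoint of each length band as a representative sentence length, fits a pure quadratic t = a·n² by least squares, and recovers a coefficient in the neighbourhood of the n²/100 curve plotted in Figure 1.

```python
# Mean BU-LC parse times from Table 2, one value per length band,
# using each band's midpoint as a representative sentence length.
lengths = [2, 5, 8, 11, 14, 17, 20, 23, 26, 29]
times = [0.11, 0.23, 0.42, 1.17, 0.97, 1.92, 3.54, 3.87, 5.45, 7.86]

# Least-squares fit of t = a * n^2 (no linear or constant term):
# minimising sum((t - a*n^2)^2) gives a = sum(t*n^2) / sum(n^4).
a = sum(t * n * n for n, t in zip(lengths, times)) / sum(n ** 4 for n in lengths)
print(round(a, 4))  # roughly of the order of 0.01, i.e. t ~ n^2 / 100
```

The fit is crude (band midpoints, no intercept), but it is enough to see that a quadratic with a small coefficient tracks the observed means.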
There are probably many reasons why performance is much better than the complexity results suggest, but the most important may be that:

• Kleene star is used only in a very limited context (for the analysis of coordination),
• more than 90% of the rules in the grammar have no more than two daughters, and
• very few rules license both left and right recursion (for instance of the sort that is typically used to analyse noun compounding, i.e. N --> N N).

    [Figure 1: Mean parse times (in CPU seconds on an HP9000/710 workstation) for the test sentences with the BU-LC and LR parsers; a quadratic function (n^2/100) is also displayed.]

Despite little apparent theoretical difference between the CLE and ANLT grammar formalisms, and the fact that no explicit or formal process of 'tuning' parsers and grammars to perform well with each other has been carried out in either of the ANLT or CLARE systems, the results of the experiment comparing the performance of the respective parsers using the ANLT grammar suggest that the parallel development of the software and grammars that has occurred nevertheless appears to have caused this to happen automatically. It therefore seems likely that implementational decisions and optimisations based on subtle properties of specific grammars can, and may very often be, more important than worst-case complexity when considering the practical performance of parsing algorithms.

6. CONCLUSIONS

The research reported is in a similar vein to that of, for example, Moore & Dowding (1991), Samuelsson & Rayner (1991), and Maxwell & Kaplan (1993), in that it relies on empirical results for the study and optimisation of parsing algorithms rather than on traditional techniques of complexity analysis.
The paper demonstrates that research in this area will have to rely on empirical data until complexity theory is developed to a point where it is sufficiently fine-grained and accurate to predict how the properties of individual unification-based grammars will interact with particular parsing algorithms to determine practical performance.

REFERENCES

Aho, A., R. Sethi & J. Ullman (1986) Compilers: principles, techniques and tools. Reading, MA: Addison-Wesley.
Alshawi, H., D. Carter, R. Crouch, S. Pulman, M. Rayner & A. Smith (1992) CLARE: a contextual reasoning and cooperative response framework for the Core Language Engine. SRI International, Cambridge, UK.
Barton, G., R. Berwick & E. Ristad (1987) Computational complexity and natural language. Cambridge, MA: MIT Press.
Billot, S. & B. Lang (1989) "The structure of shared forests in ambiguous parsing." In Proceedings of the 27th Meeting of the Association for Computational Linguistics. 143-151.
Bouma, G. & G. van Noord (1993) "Head-driven parsing for lexicalist grammars: experimental results." In Proceedings of the 6th Conference of the European Chapter of the Association for Computational Linguistics. 101-105.
Briscoe, E., C. Grover, B. Boguraev & J. Carroll (1987a) "A formalism and environment for the development of a large grammar of English." In Proceedings of the 10th International Joint Conference on Artificial Intelligence. 703-708.
Briscoe, E., C. Grover, B. Boguraev & J. Carroll (1987b) "Feature defaults, propagation and reentrancy." In Categories, Polymorphism and Unification, edited by E. Klein & J. van Benthem, Centre for Cognitive Science, Edinburgh University, UK. 19-34.
Briscoe, E. & J. Carroll (1993) "Generalised probabilistic LR parsing of natural language (corpora) with unification-based grammars." Computational Linguistics, 19(1): 25-59.
Carroll, J. (1993) Practical unification-based parsing of natural language.
Computer Laboratory, Cambridge University, UK, Technical Report 314.
Carroll, J. & E. Briscoe (1992) "Probabilistic normalisation and unpacking of packed parse forests for unification-based grammars." In Proceedings of the AAAI Fall Symposium on Probabilistic Approaches to Natural Language. 33-38.
Earley, J. (1970) "An efficient context-free parsing algorithm." Communications of the ACM, 13(2): 94-102.
Gazdar, G., E. Klein, G. Pullum & I. Sag (1985) Generalized phrase structure grammar. Oxford, UK: Blackwell.
Grover, C., J. Carroll & E. Briscoe (1993) The Alvey natural language tools grammar (4th release). Computer Laboratory, Cambridge University, UK, Technical Report 284.
Johnson, M. (1989) "The computational complexity of Tomita's algorithm." In Proceedings of the 1st International Workshop on Parsing Technologies. 203-208.
Kaplan, R. (1987) "Three seductions of computational psycholinguistics." In Linguistic Theory and Computer Applications, edited by P. Whitelock et al., New York: Academic Press. 149-188.
Kasami, J. (1965) An efficient recognition and syntax analysis algorithm for context-free languages. Air Force Cambridge Research Laboratory, Bedford, MA, Report AFCRL-65-758.
Kipps, J. (1989) "Analysis of Tomita's algorithm for general context-free parsing." In Proceedings of the 1st International Workshop on Parsing Technologies. 193-202.
Lang, B. (1974) "Deterministic techniques for efficient non-deterministic parsers." In Automata, Languages and Programming, Lecture Notes in Computer Science 14, edited by J. Loeckx, Berlin, Germany: Springer-Verlag. 255-269.
Maxwell, J. III & R. Kaplan (1993) "The interface between phrasal and functional constraints." Computational Linguistics, 19(4): 571-590.
Moore, R. & H. Alshawi (1992) "Syntactic and semantic processing." In The Core Language Engine, edited by H. Alshawi, Cambridge, MA: MIT Press. 129-148.
Moore, R. & J. Dowding (1991) "Efficient bottom-up parsing."
In Proceedings of the DARPA Speech and Natural Language Workshop. 200-203.
Pereira, F. & D. Warren (1980) "Definite clause grammars for language analysis--a survey of the formalism and a comparison with augmented transition networks." Artificial Intelligence, 13(3): 231-278.
Pollard, C. & I. Sag (1987) Information-based syntax and semantics: volume 1--fundamentals. Chicago, IL: University of Chicago Press.
Pratt, V. (1975) "LINGOL - a progress report." In Proceedings of the 5th International Joint Conference on Artificial Intelligence. 422-428.
Samuelsson, C. & M. Rayner (1991) "Quantitative evaluation of explanation-based learning as an optimization tool for a large-scale natural language system." In Proceedings of the 12th International Joint Conference on Artificial Intelligence. 609-615.
Schabes, Y. (1991) "Polynomial time and space shift-reduce parsing of arbitrary context-free grammars." In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics. 106-113.
Shann, P. (1989) "The selection of a parsing strategy for an on-line machine translation system in a sublanguage domain. A new practical comparison." In Proceedings of the 1st International Workshop on Parsing Technologies. 264-276.
Taylor, L., C. Grover & E. Briscoe (1989) "The syntactic regularity of English noun phrases." In Proceedings of the 4th European Meeting of the Association for Computational Linguistics. 256-263.
Tomabechi, H. (1991) "Quasi-destructive graph unification." In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics. 315-322.
Tomita, M. (1987) "An efficient augmented-context-free parsing algorithm." Computational Linguistics, 13(1): 31-46.
Wright, J., E. Wrigley & R. Sharman (1991) "Adaptive probabilistic generalized LR parsing." In Proceedings of the 2nd International Workshop on Parsing Technologies. 154-163.
Younger, D. (1967) "Recognition and parsing of context-free languages in time n³."
Information and Control, 10(2): 189-208.
Detecting and Correcting Speech Repairs

Peter Heeman and James Allen
Department of Computer Science
University of Rochester
Rochester, New York, 14627
{heeman, james}@cs.rochester.edu

Abstract

Interactive spoken dialog provides many new challenges for spoken language systems. One of the most critical is the prevalence of speech repairs. This paper presents an algorithm that detects and corrects speech repairs based on finding the repair pattern. The repair pattern is built by finding word matches and word replacements, and identifying fragments and editing terms. Rather than using a set of pre-built templates, we build the pattern on the fly. In a fair test, our method, when combined with a statistical model to filter possible repairs, was successful at detecting and correcting 80% of the repairs, without using prosodic information or a parser.

Introduction

Interactive spoken dialog provides many new challenges for spoken language systems. One of the most critical is the prevalence of speech repairs. Speech repairs are dysfluencies where some of the words that the speaker utters need to be removed in order to correctly understand the speaker's meaning. These repairs can be divided into three types: fresh starts, modifications, and abridged.

A fresh start is where the speaker abandons what she was saying and starts again.

    the current plan is we take - okay let's say we start with the bananas (d91-2.2 utt105)

A modification repair is where the repair modifies what was said before.

    after the orange juice is at - the oranges are at the OJ factory (d93-19.3 utt59)

An abridged repair is where the repair consists solely of a fragment and/or editing terms.

    we need to - um manage to get the bananas to Dansville more quickly (d93-14.3 utt50)

These examples also illustrate how speech repairs can be divided into three intervals: the removed text, the editing terms, and the resumed text (cf. Levelt, 1983; Nakatani and Hirschberg, 1993).
The removed text, which might end in a word fragment, is the text that the speaker intends to replace. The end of the removed text is called the interruption point, which is marked in the above examples as "-". This is then followed by editing terms, which can either be filled pauses, such as "um", "uh", and "er", or cue phrases, such as "I mean", "I guess", and "well". The last interval is the resumed text, the text that is intended to replace the removed text. (All three intervals need not be present in a given speech repair.) In order to correct a speech repair, the removed text and the editing terms need to be deleted in order to determine what the speaker intends to say.[1]

In our corpus of problem solving dialogs, 25% of turns contain at least one repair, 67% of repairs occur with at least one other repair in the turn, and repairs in the same turn occur on average within 6 words of each other. As a result, no spoken language system will perform well without an effective way to detect and correct speech repairs.

We propose that most speech repairs can be detected and corrected using only local clues--it should not be necessary to test the syntactic or semantic well-formedness of the entire utterance. People do not seem to have problems comprehending speech repairs as they occur, and seem to have no problem even when multiple repairs occur in the same utterance. So, it should be possible to construct an algorithm that runs on-line, processing the input a word at a time, and committing to whether a string of words is a repair by the end of the string. Such an algorithm could precede a parser, or even operate in lockstep with it.

An ulterior motive for not using higher level syntactic or semantic knowledge is that the coverage of parsers and semantic interpreters is not sufficient for unrestricted dialogs. Recently, Dowding et al. (1993) reported syntactic and semantic coverage of 86% for the DARPA Airline reservation corpus.
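Once the removed text and editing terms have been identified, applying the correction described above is a simple deletion. A minimal sketch (the half-open span encoding and the function name are our own, not the paper's):

```python
def correct(words, removed, editing):
    """Delete the removed text and the editing terms, leaving the text the
    speaker intends.  `removed` and `editing` are half-open (start, end)
    index pairs; the editing terms are assumed to directly follow the
    removed text (our own illustrative encoding)."""
    (r_start, _), (_, e_end) = removed, editing
    return words[:r_start] + words[e_end:]

# "I think we need to - uh I need" -> removed "we need to", editing "uh"
words = "I think we need to uh I need".split()
print(" ".join(correct(words, (2, 5), (5, 6))))  # I think I need
```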
Unrestricted dialogs will present even more difficulties; not only will the speech be less grammatical, but there is also the problem of segmenting the dialog into utterance units (cf. Wang and Hirschberg, 1992). If speech repairs can be detected and corrected before parsing and semantic interpretation, this should simplify those modules as well as make them more robust.

In this paper, we present an algorithm that detects and corrects modification and abridged speech repairs without doing syntactic and semantic processing. The algorithm determines the text that needs to be removed by building a repair pattern, based on identification of word fragments, editing terms, and word correspondences between the removed and the resumed text (cf. Bear, Dowding and Shriberg, 1992). The resulting potential repairs are then passed to a statistical model that judges the proposal as either fluent speech or an actual repair.

[1] The removed text and editing terms might still contain pragmatic information, as the following example displays: "Peter was ... well ... he was fired."

Previous Work

Several different strategies have been discussed in the literature for detecting and correcting speech repairs. A way to compare the effectiveness of these approaches is to look at their recall and precision rates. For detecting repairs, the recall rate is the number of correctly detected repairs compared to the number of repairs, and the precision rate is the number of detected repairs compared to the number of detections (including false positives). But the true measures of success are the correction rates. Correction recall is the number of repairs that were properly corrected compared to the number of repairs. Correction precision is the number of repairs that were properly corrected compared to the total number of corrections.
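The four rates above all reduce to the same two ratios; a trivial sketch, with made-up counts purely for illustration:

```python
def recall_precision(correct_hits, n_actual, n_proposed):
    """Recall = correct hits over actual instances (repairs, or repairs
    properly corrected); precision = correct hits over proposals made."""
    return correct_hits / n_actual, correct_hits / n_proposed

# Illustrative numbers only: 80 repairs correctly detected out of 100
# actual repairs, with 120 detections proposed in total.
recall, precision = recall_precision(80, 100, 120)
print(recall, round(precision, 2))  # 0.8 0.67
```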
Levelt (1983) hypothesized that listeners can use the fol- lowing rules for determining the extent of the removed text (he did not address how a repair could be detected). If the last word before the interruption is of the same category as the word before, then delete the last word before the interruption. Otherwise, find the closest word prior to the interruption that is the same as the first word after the interruption. That word is the start of the removed text. Levelt found that this strategy would work for 50% of all repairs (including fresh starts), get 2% wrong, and have no comment for the remaining 48%. 2 In addition, Levelt showed that different editing terms make different predictions about whether a repair is a fresh start or not. For instance, "uh" strongly signals an abridged or modification repair, whereas a word like "sorry" signals a fresh start. Hindle (1983) addressed the problem of correcting self- repairs by adding rules to a deterministic parser that would remove the necessary text. Hindle assumed the presence of an edit signal that would mark the interruption point, and was able to achieve a recall rate of 97% in finding the correct repair. For modification repairs, Hindle used three rules for "expuncting" text. The first rule "is essentially a non- syntactic rule" that matches repetitions (of any length); the second matches repeated constituents, both complete; and the third, matches repeated constituents, in which the first is not complete, but the second is. However, Hindle's results are difficult to translate into actual performance. First, his parsing strategy depends upon the "successful disambiguation of the syntactic categories." Although syntactic categories can be determined quite well by their local context (as is needed by a deterministic parser), Hindle admits that "[self-repair], by its nature, disrupts the local context." 
Second, Hindle's algorithm depends on the presence of an edit signal; so far, however, the abrupt cut-off that some have suggested signals the repair (cf. Labov, 1966) has been difficult to find, and it is unlikely to be represented as a binary feature (cf. Nakatani and Hirschberg, 1993).

[2] Levelt claims (pg. 92) that the hearer can apply his strategy safely for 52% of all repairs, but this figure includes the 2% that the hearer would get wrong.

The SRI group (Bear et al., 1992) employed simple pattern matching techniques for detecting and correcting modification repairs.[3] For detection, they were able to achieve a recall rate of 76% and a precision of 62%, and they were able to find the correct repair 57% of the time, leading to an overall correction recall of 43% and correction precision of 50%. They also tried combining syntactic and semantic knowledge in a "parser-first" approach--first try to parse the input and if that fails, invoke repair strategies based on word patterns in the input. In a test set containing 26 repairs (Dowding et al., 1993), they obtained a detection recall rate of 42% and a precision of 84.6%; for correction, they obtained a recall rate of 30% and a precision of 62%.

Nakatani and Hirschberg (1993) investigated using acoustic information to detect the interruption point of speech repairs. In their corpus, 74% of all repairs are marked by a word fragment. Using hand-transcribed prosodic annotations, they trained a classifier on a 172 utterance training set to identify the interruption point (each utterance contained at least one repair). On a test set of 186 utterances each containing at least one repair, they obtained a recall rate of 83.4% and a precision of 93.9% in detecting speech repairs. The clues that they found relevant were duration of pause between words, presence of fragments, and lexical matching within a window of three words.
However, they do not address the problem of determining the correction or distinguishing modification repairs from abridged repairs.

Young and Matessa (1991) have also done work in this area. In their approach, speech repairs are corrected after an opportunistic case-frame parser analyzes the utterance. Their system looks for parts of the input utterance that were not used by the parser, and then uses semantic and pragmatic knowledge (of the limited domain) to correct the interpretation.

The Corpus

As part of the TRAINS project (Allen and Schubert, 1991), which is a long term research project to build a conversationally proficient planning assistant, we are collecting a corpus of problem solving dialogs. The dialogs involve two participants, one who is playing the role of a user and has a certain task to accomplish, and another, who is playing the role of the system by acting as a planning assistant.[4] The entire corpus consists of 112 dialogs totaling almost eight hours in length and containing about 62,000 words, 6300 speaker turns, and 40 different speakers. These dialogs have been segmented into utterance files (cf. Heeman and Allen, 1994b); words have been transcribed and the speech repairs have been annotated. For a training set, we use 40 of the dialogs, consisting of 24,000 words, 725 modification and abridged repairs, and 13 speakers; and for testing, 7 of the dialogs, consisting of 5800 words, 142 modification and abridged repairs, and seven speakers, none of which were included in the training set.

[3] They referred to modification repairs as nontrivial repairs, and to abridged repairs as trivial repairs; however, these terms are misleading. Consider the utterance "send it back to Elmira uh to make OJ". Determining that the corrected text should be "send it back to Elmira to make OJ" rather than "send it back to make OJ" is non-trivial.
[4] Gross, Allen and Traum (1992) discuss the manner in which the first set of dialogues were collected, and provide transcriptions.

The speech repairs in the dialog corpus have been hand-annotated. There is typically a correspondence between the removed text and the resumed text, and following Bear, Dowding and Shriberg (1992), we annotate this using the labels m for word matching and r for word replacements (words of the same syntactic category). Each pair is given a unique index. Other words in the removed text and resumed text are annotated with an x. Also, editing terms (filled pauses and clue words) are labeled with et, and the moment of interruption with int, which will occur before any editing terms associated with the repair, and after the fragment, if present. (Further details of this scheme can be found in (Heeman and Allen, 1994a).) Below is a sample annotation, with removed text "go to oran-", editing term "um", and resumed text "go to" (d93-14.2 utt60).

    go | to | oran- |     | um | go | to | Corning
    m1 | m2 |   x   | int | et | m1 | m2 |

A speech repair can also be characterized by its repair pattern, which is a string that consists of the repair labels (word fragments are labeled as -, the interruption point by a period, and editing terms by e). The repair pattern for the example is mm-.emm.

Repair Indicators

In order to correct speech repairs, we first need to detect them. If we were using prosodic information, we could focus on the actual interruption point (cf. Nakatani and Hirschberg, 1993); however, we are restricting ourselves to lexical clues, and so need to be more lenient. Table 1 gives a breakdown of the modification speech repairs and the abridged repairs, based on the hand-annotations.[5] Modification repairs are broken down into four groups: single word repetitions, multiple word repetitions, one word replacing another, and others. Also, the percentage of each type of repair that includes fragments and editing terms is given.

This table shows that strictly looking for the presence of fragments and editing terms will miss at least 41% of speech repairs. So, we need to look at word correspondences in order to get better coverage of our repairs. In order to keep the false positive rate down, we restrict ourselves to the following types of word correspondences: (1) word matching with at most three intervening words, denoted by m-m; (2) two adjacent words matching two others with at most 6 words intervening, denoted by mm-mm; and (3) adjacent replacement, denoted by rr. Table 2 gives the number of repairs in the
Table 1 gives a breakdown of the modification speech repairs and the abridged repairs, based on the hand- annotations} Modification repairs are broken down into four groups, single word repetitions, multiple word repeti- tions, one word replacing another, and others. Also, the percentage of each type of repair that include fragments and editing terms is given. This table shows that strictly looking for the presence of fragments and editing terms will miss at least 41% of speech repairs. So, we need to look at word correspondences in or- der to get better coverage of our repairs. In order to keep the false positive rate down, we restrict ourselves to the follow- ing types of word correspondences: (1) word matching with at most three intervening words, denoted by m-m; (2) two adjacent words matching two others with at most 6 words intervening, denoted by mm-mm; and (3) adjacent replace- ment, denoted by rr. Table 2 the number of repairs in the 5Eight repairs were excluded from this analysis. These repairs could not be automatically separated from other repairs that over- lapped with them. with with Edit Total Frag. Term Modification Repair 450 14.7% 19.3% Word Repetition 179 16.2% 16.2% Larger Repetition 58 17.2% 19.0% Word Replacement 72 4.2% 13.9% Other 141 17.0% 26.2% Abridged Repair 267 46.4% 54.3% Total 717 26.5% 32.4% Table 1: Occurrence of Types of Repairs training corpus that can be deleted by each clue, based on the hand-annotations. For each clue, we give the number of repairs that it will detect in the first column. In the next three columns, we give a breakdown of these numbers in terms of how many clues apply. As the table shows, most repairs are signal by only one of the 3 clues. Total I 1 clue I 2 clues I 3 clues I Fragment 190 Editing Terms m-m mm-mm IT others Total 232 331 94 412 59 9 717 I 127 58 5 164 63 5 296 111 5 n.a. n.a. n.a. 
587 I 116 I 5 Table 2: Repair Indicators Although the m-m clue and mm-mm clue do not pre- cisely locate the interruption point, we can, by using simple lexical clues, detect 97.7% (708/725) of all the repairs. But, we still will have a problem with false positives, and detect- ing the extent of the repair. Determining the Correction Based on the work done at SRI (Bear, Dowding and Shriberg, 1992), we next looked at the speech repair patterns in our annotated training corpus. If we can automatically determine the pattern, then the deletion of the removed text along with the editing terms gives the correction. Since the size of the pattern can be quite large, especially when editing terms and word fragments are added in, the number of possible templates becomes very large. In our training corpus of 450 modification repairs, we found 72 different patterns (not including variations due to editing terms and fragments). All patterns with at least 2 occurrences are listed in table 3. Adding to the Pattern Rather than doing template matching, we build the repair pattern on the fly. When a possible repair is detected, the detection itself puts constraints on the repair pattern. For instance, if we detect a word fragment, the location of the fragment limits the extent of the editing terms. It also limits 297 m.m 79 r.r t2 mm.mm ll mr.mr L7 mx.m L5 mmm.mmm L4 rm.rm 12 m.xm 6 mmr.rnmr 5 m.xxm 5 x.xx 4 x. 4 mmx.mm mrm.mrm mmmr.mmmr mm.mxm r.xr mxxx.m mx,mx mmrm.mmrm mmmx.mmm mmmm.mmmm m.mx 4 3 3 3 2 2 2 2 2 2 2 Table 3: Repair Patterns and Occurrences the extent of the resumed text and removed text, and so on restricts word correspondences that can be part of the repair. In this section, we present the rules we use for building repair patterns. These rules not only limit the search space, but more importantly, are intended to keep the number of false positives as low as possible, by capturing a notion of 'well-formness' for speech repairs. 
The four rules listed below follow from the model of re- pairs that we presented in the introduction. They capture how a repair is made up of three intervals--the removed text, which can end in a word fragment, possible editing terms, and the resumed text--and how the interruption point is follows the removed text and precedes the editing terms. 1. Editing terms must be adjacent. 2. Editing terms must immediately follow the interrup- tion point. 3. A fragment, if present, must immediately precede the interruption point. 4. Word correspondences must straddle the interruption point and can not be marked on a word labeled as an editing term or fragment. The above rules alone do not restrict the possible word correspondences enough. Based on an analysis of the hand- coded repairs in the training corpus, we propose the following additional rules. Rule (5) captures the regularity that word correspondences of a modification repair are rarely, if ever, embedded in each other. Consider the following exception. how would that - how long that would take In this example, the word correspondence involving "that" is embedded inside of the correspondence on "would". The speaker actually made a uncorrected speech error (and so not a speech repair) in the resumed text, for he should have said "how long would that take." Without this ungrammaticality, the two correspondences would not have been embedded, and so would not be in conflict with the following rule. 5. Word correspondences must be cross-serial; a word correspondence cannot be embedded inside of an- other correspondence. The next rule is used to limit the application of word correspondences when no correspondences are yet in the repair pattern. In this case, the repair would have been detected by the presence of a fragment or editing terms. This rule is intended to prevent spurious word correspondences from being added to the repair. 
For instance in the following example, the correspondence between the two instances of "I" is spurious, since the second "I" in fact replaces "we".

I think we need to uh I need

So, when no correspondences are yet included in the repair, the number of intervening words needs to be limited. From our test corpus, we have found that 3 intervening words, excluding fragments and editing terms, is sufficient.
6. If there are no other word correspondences, there can only be 3 intervening words, excluding fragments and editing terms, between the first part and the second part of the correspondence.
The next two rules restrict the distance between two word correspondences. Figure 1 shows the distance between two word correspondences, indexed by i and j. The intervals x and y are sequences of the words that occur between the marked words in the removed text and in the resumed text, respectively. The word correspondences of interest are those that are adjacent, in other words, the ones that have no labeled words in the x and y intervals.

Figure 1: Distance between correspondences

For two adjacent word correspondences, Rule (7) ensures that there are at most 4 intervening words in the removed text, and Rule (8) ensures that there are at most 4 intervening words in the resumed text.
7. In the removed text, two adjacent matches can have at most 4 intervening words (|x| <= 4).
8. In the resumed text, two adjacent matches can have at most 4 intervening words (|y| <= 4).
The next rule, Rule (9), is used to capture the regularity that words are rarely dropped from the removed text; instead they tend to be replaced.
9. For two adjacent matches, the number of intervening words in the removed text can be at most one more than the number of intervening words in the resumed text (|x| <= |y| + 1).
The last rule, Rule (10), is used to restrict word replacements.
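The distance rules (6)-(9) can likewise be sketched in code. This is our own simplified rendering: Rule (6) is applied only in the single-correspondence case, and "adjacent" matches are approximated by consecutive pairs in sorted order.

```python
def distance_ok(matches, labels):
    """Check rules (6)-(9). Intervening-word counts exclude fragments
    and editing terms (labels 'frag' and 'et')."""
    def gap(a, b):
        return sum(1 for k in range(a + 1, b)
                   if labels[k] not in ('et', 'frag'))
    if len(matches) == 1:                    # Rule (6): gap of at most 3
        i, j = matches[0]
        if gap(i, j) > 3:
            return False
    ms = sorted(matches)
    for (i1, j1), (i2, j2) in zip(ms, ms[1:]):
        x, y = gap(i1, i2), gap(j1, j2)      # removed-text / resumed-text gaps
        if x > 4 or y > 4:                   # Rules (7) and (8)
            return False
        if x > y + 1:                        # Rule (9)
            return False
    return True
```

On the running example, the I-I correspondence alone fails Rule (6), and adding it next to the need-need match fails Rule (9), just as the worked example below describes.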
From an analysis of our corpus, we found that word replacement correspondences are rarely isolated from other word correspondences.
10. A word replacement (except those added by the detection clues) must either have only fragments and editing terms between the two words that it marks, or there must be a word correspondence in which there are no intervening words in either the removed text or the resumed text (|x| = |y| = 0).

An Example

To illustrate the above set of well-formedness constraints on repair patterns, consider the example given above, "I think we need to - uh I need." The detection clues will mark the word "uh" as being a possible editing term, giving the partial pattern given below.

I  think  we  need  to  uh  I  need
                        et

Now let's consider the two instances of "I". Adding this correspondence to the repair pattern will violate Rule (6), since there are four intervening words, excluding the editing terms. The correspondence between the two instances of "need" is acceptable though, since it straddles the editing term, and there are only two intervening words between the corresponding words, excluding editing terms. Even with the correspondence between the two instances of "need", the matching between the "I"s still cannot be added. There are 2 intervening words between "I" and "need" in the removed text, but none on the resumed side, so this correspondence violates Rule (9). The word replacement of "we" by the second instance of "I" does not violate any of the rules, including Rule (10), so it is added, resulting in the following labeling.

I  think  we  need  to  uh  I  need
          r   m         et  r  m

Algorithm

Our algorithm for labeling potential repair patterns encodes the assumption that speech repairs can be processed one at a time. The algorithm runs in lockstep with a part-of-speech tagger (Church, 1988), which is used for deciding possible word replacements. Words are fed in one at a time. The detection clues are checked first.
If one of them succeeds, and there is not a repair being processed, then a new repair pattern is started. Otherwise, if the clue is consistent with the current repair pattern, then the pattern is updated; otherwise, the current one is sent off to be judged, and a new repair pattern is started. When a new repair is started, a search is made to see if any of the text can contribute word correspondences to the repair. Likewise, if there is currently a repair being built, a search is made to see if there is a suitable word correspondence for the current word. Anytime a correspondence is found, a search is made for any additional correspondences that it might sanction. Since there might be a conflict between two possible correspondences that can be added to a labeling, the one that involves the most recent pair of words is preferred. For instance, in the example above, the correspondence between the second instance of "I" and "we" is preferred over the correspondence between the second instance of "I" and the first. The last issue to account for is the judging of a potential repair. If the labeling consists of just cue phrases, then it is judged as not being a repair.6 Otherwise, if the point of interruption of the potential repair is uniquely determined, then it is taken as a repair. This will be the case if there is at least one editing term, a word fragment, or there are no unaccounted-for words between the removed text part of the last correspondence and the resumed text part of the first correspondence.

Results of Pattern Building

The input to the algorithm is the word transcriptions, augmented with turn-taking markers. Since we are not trying to account for fresh starts, break points are put in to denote the cancel, and its editing terms are deleted (this is done to prevent the algorithm from trying to annotate the fresh start as a repair).

6 This prevents phrases such as "I guess" from being marked as
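The one-at-a-time flavor of the algorithm can be caricatured with a much simpler single pass that commits to each candidate repair as soon as it is seen. This sketch handles only the two easiest cases (a filled pause and an immediate word repetition); the function name and scope are our own, and the full algorithm instead builds and judges a labeled repair pattern.

```python
FILLED_PAUSES = {'uh', 'um'}

def correct_simple_repairs(tokens):
    """One-pass caricature of sequential repair processing: delete filled
    pauses (abridged repairs) and collapse immediate word repetitions
    (m.m modification repairs). Longer patterns are not handled."""
    out, i = [], 0
    while i < len(tokens):
        if tokens[i] in FILLED_PAUSES:
            i += 1                      # abridged: delete the editing term
        elif i + 1 < len(tokens) and tokens[i] == tokens[i + 1]:
            i += 1                      # m.m: drop the removed copy
        else:
            out.append(tokens[i])
            i += 1
    return out
```

Because each decision is made locally, the pass never needs to see the whole utterance, which is the property that later makes overlapping repairs tractable.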
The speech is not marked with any intonational information, nor is any form of punctuation inserted. The results are given in Table 4.

                       Training Set   Test Set
Detection Recall          94.9%        91.5%
Detection Precision       55.8%        45.3%
Correction Recall         89.2%        85.9%
Correction Precision      52.4%        42.5%

Table 4: Results of Pattern Matching

The pattern builder gives many false positives in detecting speech repairs due to word correspondences in fluent speech being misinterpreted as evidence of a modification repair. Also, in correcting the repairs, word correspondences across an abridged repair cause the abridged repair to be interpreted as a modification repair, thus lowering the correction recall rate.7 For example, the following abridged repair has two spurious word correspondences, between "need to" and "manage to".

we need to - um manage to get the bananas to Dansville more quickly

This spurious word correspondence will cause the pattern builder to hypothesize that this is a modification repair, and so propose the wrong correction.

Adding A Statistical Filter

We make use of a part-of-speech tagger to not only determine part-of-speech categories (used for deciding possible word replacements), but also to judge modification repairs that are proposed by the pattern builder. For modification repairs, the category transition probabilities from the last word of the removed text to the first word of the resumed text have a different distribution than category transitions for fluent speech. So, by giving these distributions to the part-of-speech tagger (obtained from our test corpus), the tagger can decide if a transition signals a modification repair or not.

6 editing terms when they have a sentential meaning, as in "I guess we should load the oranges."
7 About half of the difference between the detection recall rate and the correction recall rate is due to abridged repairs being misclassified as modification repairs.
Part-of-speech tagging is the process of assigning to a word the category that is most probable given the sentential context (Church, 1988). The sentential context is typically approximated by only a set number of previous categories, usually one or two. Good part-of-speech results can be obtained using only the preceding category (Weischedel et al., 1993), which is what we will be using. In this case, the number of states of the Markov model will be N, where N is the number of tags. By using the Viterbi algorithm, the part-of-speech tags that lead to the maximum probability path can be found in linear time. Figure 2 gives a simplified view of a Markov model for part-of-speech tagging, where Ci is a possible category for the ith word, wi, and Ci+1 is a possible category for word wi+1. The category transition probability is simply the probability of category Ci+1 following category Ci, which is written as P(Ci+1|Ci). The probability of word wi+1 given category Ci+1 is P(wi+1|Ci+1). The category assignment that maximizes the product of these probabilities is taken to be the best category assignment.

Figure 2: Markov Model of Part-of-Speech Tagging

To incorporate knowledge about modification repairs, we let Ri be a variable that indicates whether the transition from word wi to wi+1 contains the interruption point of a modification repair. Rather than tag each word, wi, with just a category, Ci, we will tag it with Ri-1Ci, the category and the presence of a modification repair. So, we will need the following probabilities, P(RiCi+1|Ri-1Ci) and P(wi|Ri-1Ci). To keep the model simple, and ease problems with sparse data, we make several independence assumptions. By assuming that Ri-1 and RiCi+1 are independent, given Ci, we can simplify the first probability to P(Ri|Ci) * P(Ci+1|CiRi); and by assuming that Ri-1 and wi are independent, given Ci, we can simplify the second one to P(wi|Ci).
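A first-order tagger of this kind can be decoded with the standard Viterbi algorithm. The sketch below is a generic implementation with invented toy probabilities; the actual tag set and distributions come from the annotated corpus and are not reproduced here.

```python
import math

def viterbi(words, tags, p_init, p_trans, p_emit):
    """Most probable tag sequence under a first-order Markov model.
    p_init[c], p_trans[(prev, c)] and p_emit[(word, c)] are
    probabilities; unseen events get a small floor."""
    floor = 1e-9
    def lg(p):
        return math.log(max(p, floor))
    V = [{c: lg(p_init.get(c, 0.0)) + lg(p_emit.get((words[0], c), 0.0))
          for c in tags}]
    back = []
    for w in words[1:]:
        col, ptr = {}, {}
        for c in tags:
            prev = max(tags,
                       key=lambda p: V[-1][p] + lg(p_trans.get((p, c), 0.0)))
            col[c] = (V[-1][prev] + lg(p_trans.get((prev, c), 0.0))
                      + lg(p_emit.get((w, c), 0.0)))
            ptr[c] = prev
        V.append(col)
        back.append(ptr)
    tag = max(tags, key=lambda c: V[-1][c])
    path = [tag]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

Extending the state to Ri-1Ci, as described above, only multiplies the number of states by two; the decode itself is unchanged.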
The model that results from this is given in Figure 3. As can be seen, these manipulations allow us to view the problem as tagging null tokens between words as either the interruption point of a modification repair, Ri = Ti, or as fluent speech, Ri = φi. Modification repairs can be signaled by other indicators than just syntactic anomalies. For instance, word matches, editing terms, and word fragments also indicate their presence. This information can be added in by viewing the presence of such clues as the 'word' that is tagged by the repair indicator Ri. By assuming that these clues are independent, given the presence of a modification repair, we can simply use the product of the individual probabilities. So, the repair state would have an output probability of P(Fi|Ri) * P(Ei|Ri) * P(Mi|Ri), where Fi, Ei, and Mi are random variables ranging over fragments, editing terms, and types of word matches, respectively. So, for instance, the model can account for how "uh" is more likely to signal a modification repair than "um". Further details are given in Heeman and Allen (1994c).

Figure 3: Statistical Model of Speech Repairs

Overall Results

The pattern builder on its own gives many false positives due to word correspondences in fluent speech being misinterpreted as evidence of a modification repair, and due to word correspondences across an abridged repair causing the abridged repair to be interpreted as a modification repair. This results in an overall correction recall rate of 86% and a precision rate of 43%. However, the real result comes from coupling the pattern builder with the decision routine, which will eliminate most of the false positives. Potential repairs are divided into two groups. The first includes abridged repairs and modification repairs involving only word repetitions. These are classified as repairs outright. The rest of the modification repairs are judged by the statistical model.
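The product-of-clues output probability can be combined with a prior on Ri to decide between Ti and φi at a given null token. The sketch below shows the shape of that decision; the feature names and all probability tables are invented for illustration, not taken from the corpus.

```python
def classify_transition(clues, tables):
    """Tag the null token between w_i and w_{i+1} as 'T' (interruption
    point of a modification repair) or 'phi' (fluent) by comparing
    P(R) * product over features of P(value | R). `clues` maps feature
    names (e.g. 'editing-term') to observed values."""
    score = {}
    for r in ('T', 'phi'):
        p = tables['prior'][r]
        for feat, val in clues.items():
            p *= tables[feat][r].get(val, 1e-6)
        score[r] = p
    return max(score, key=score.get)
```

With tables in which "uh" is far more likely under a repair than under fluent speech, observing "uh" flips the decision toward Ti despite the low prior on repairs.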
Any potential repair that it rejects, but which contains a word fragment or filled pause, is accepted as an abridged repair. Table 5 gives the results of the combined approach on the training and test sets.

                       Training Corpus   Test Corpus
Detection Recall            91%             83%
Detection Precision         96%             89%
Correction Recall           88%             80%
Correction Precision        93%             86%

Table 5: Overall Results

Comparing our results to others that have been reported in the literature must be done with caution. Such a comparison is limited due to differences in both the type of repairs that are being studied and in the datasets used for drawing results. Bear, Dowding, and Shriberg (1992) use the ATIS corpus, which is a collection of queries made to an automated airline reservation system. As stated earlier, they removed all utterances that contained abridged repairs. For detection they obtained a recall rate of 76% and a precision of 62%, and for correction, a recall rate of 43% and a precision of 50%. It is not clear whether their results would be better or worse if
As for our own data, we used a corpus of natural dialogues that were segmented only by speaker turns, not by individual utterances, and we focused on modification repairs and abridged repairs, with fresh starts being marked in the input so as not to cause interference in detecting the other two types. The performance of our algorithm for correction is sig- nificantly better than other previously reported work, with a recall rate of 80.2% and a precision rate of 86.4% on a fair test. While Nakatani and Hirschberg report comparable detection rates, and Hindle reports better correction rates, neither of these researchers attack the complete problem of both detection and correction. Both of them also depend on externally supplied annotations not automatically derived from the input. As for the SRI work, their parser-first strategy and simple repair patterns cause their rates to be much lower than ours. A lot of speech repairs do not look ill-formed, such as "and a boxcar of- and a tanker of OJ", and "and bring - and then bring that orange juice," and are mainly signaled by either lexical or acoustic clues. Overlapping Repairs Our algorithm is also novel in that it handles overlapping repairs. Two repairs overlap if part of the text is used in both repairs. Such repairs occur fairly frequently in our corpus, and for the most part, our method of processing repairs, even overlapping ones, in a sequential fashion appears success- ful. Out of the 725 modification and abridged repairs in the training corpus, 164 of them are overlapping repairs, and our algorithm is able to detect and correct 86.6% of them, which is just slightly less than the correction recall rate for all modification and abridged repairs in the entire training corpus. Consider the following example (d93-14.2 utt26), which contains four speech repairs, with the last one overlapping the first three. 
and pick up um the en- I guess the entire um p- pick up the load of oranges at Corning

The algorithm is fed one word at a time. When it encounters the first "um", the detection rule for editing terms gets activated, and so a repair pattern is started, with "um" being labeled as an editing term. The algorithm then processes the word "the", for which it can find no suitable correspondences. Next is the fragment "en-". This causes the detection rule for fragments to fire. Since this fragment comes after the editing term in the repair being built, adding it to the repair would violate Rule (2) and Rule (3). So, the algorithm must finish with the current repair, the one involving "um". Since this consists of just a filled pause, it is judged as being an actual repair. Now that the algorithm is finished with the repair involving "um", it can move on to the next one, the one signaled by the fragment "en-". The next words that are encountered are "I guess", which get labeled as an editing phrase. The next token is the word "the", for which the algorithm finds a word correspondence with the previous instance of "the". At this point, it realizes that the repair is complete (since there is a word correspondence and all words between the first marked word and the last are accounted for) and so sends it off to be judged by the statistical model. The model tags it as a repair. Deleting the removed text and the editing terms indicated by the labeling results in the following, with the algorithm currently processing "the".

and pick up the entire um p- pick up the load of oranges at Corning

Continuing on, the next potential repair is triggered by the presence of "um", which is labeled as an editing term. The next token encountered, a fragment, also indicates a potential repair, but adding it to the labeling will violate Rule (2) and Rule (3). So, the pattern builder is forced to finish up with the potential repair involving "um".
Since this consists of just a filled pause, it is accepted. This leaves us with the following text, with the algorithm currently processing "p-", which it has marked as a fragment.

and pick up the entire p- pick up the load of oranges at Corning

The next word it encounters is "pick". This word is too far from the preceding "pick" to allow this correspondence to be added. However, the detection clue mm-mm does fire, due to the matching of the pair of adjacent words "pick up". This clue is consistent with "p-" being marked as the word fragment of the repair, and so these correspondences are added. The next token encountered is "the", and the correspondence for it is found. Then "load" is processed, but no correspondence is found for it, nor for the remaining words. So, the repair pattern that is built contains an unlabeled token, namely "entire". But due to the presence of the word fragment, the interruption point can be determined. The repair pattern is sent off to be judged, which tags it as a repair. This leaves the following text not labeled as the removed text nor as the editing terms of a repair.

and pick up the load of oranges at Corning

Due to the sequential processing of the algorithm and its ability to commit to a repair without seeing the entire utterance, overlapping repairs do not pose a major problem. Some overlapping repairs can cause problems, however. Problems can occur when word correspondences are attributed to the wrong repair. Consider the following example (d93-15.2 utt46).

you have w- one you have two boxcar

This utterance contains two speech repairs, the first is the replacement of "w-" by "one", and the second the replacement of "you have one" by "you have two". Since no analysis of fragments is done, the correspondence between "w-" and "one" is not detected.
So, our greedy algorithm decides that the repair after "w-" also contains the word matches for "you" and "have", and that the occurrence of "one" after the "w-" is an inserted word. Due to the presence of the partial and the word matching, the statistical model accepts this proposal, which leads to the erroneous correction of "one you have two boxcars," which blocks the subsequent repair from being found.

Conclusion

This paper described a method of locally detecting and correcting modification and abridged speech repairs. Our work shows that a large percentage of speech repairs can be resolved prior to parsing. Our algorithm assumes that the speech recognizer produces a sequence of words and identifies the presence of word fragments. With the exception of identifying fresh starts, all other processing is automatic and does not require additional hand-tailored transcription. We will be incorporating this method of detecting and correcting speech repairs into the next version of the TRAINS system, which will use spoken input. There is an interesting question as to how good the performance can get before a parser is required in the process. Clearly, some examples require a parser. For instance, we cannot account for the replacement of a noun phrase with a pronoun, as in "the engine can take as many um - it can take up to three loaded boxcars" without using syntactic knowledge. On the other hand, we can expect to improve on our performance significantly before requiring a parser. The scores on the training set, as indicated in Table 5, suggest that we do not have enough training data yet. In addition, we do not yet use any prosodic cues. We are currently investigating methods of automatically extracting simple prosodic measures that can be incorporated into the algorithm. Given Nakatani and Hirschberg's results, there is reason to believe that this would significantly improve our performance.
Although we did not address fresh starts, we feel that our approach of combining local information from editing terms, word fragments, and syntactic anomalies will be successful in detecting them. However, the problem lies in determining the extent of the removed text. In our corpus of spoken dialogues, the speaker might make several contributions in a turn, and without incorporating other knowledge, it is difficult to determine the extent of the text that needs to be removed. We are currently investigating approaches to automatically segment a turn into separate utterance units by using prosodic information.

Acknowledgments

We wish to thank Bin Li, Greg Mitchell, and Mia Stern for their help in both transcribing and giving us useful comments on the annotation scheme. We also wish to thank Hannah Blau, John Dowding, Elizabeth Shriberg, and David Traum for helpful comments. Funding gratefully received from the Natural Sciences and Engineering Research Council of Canada, from NSF under Grant IRI-90-13160, and from ONR/DARPA under Grant N00014-92-J-1512.

References

Allen, J. F. and Schubert, L. K. (1991). The TRAINS project. Technical Report 382, Department of Computer Science, University of Rochester.
Bear, J., Dowding, J., and Shriberg, E. (1992). Integrating multiple knowledge sources for detection and correction of repairs in human-computer dialog. In Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics, pages 56-63.
Church, K. (1988). A stochastic parts program and noun phrase parser for unrestricted text. In Proceedings of the 2nd Conference on Applied Natural Language Processing, pages 136-143.
Dowding, J., Gawron, J. M., Appelt, D., Bear, J., Cherny, L., Moore, R., and Moran, D. (1993). Gemini: A natural language system for spoken-language understanding. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, pages 54-61.
Gross, D., Allen, J., and Traum, D. (1992).
The TRAINS 91 dialogues. Trains Technical Note 92-1, Department of Computer Science, University of Rochester.
Heeman, P. A. and Allen, J. (1994a). Annotating speech repairs. Unpublished manuscript.
Heeman, P. A. and Allen, J. (1994b). Dialogue transcription tools. Unpublished manuscript.
Heeman, P. A. and Allen, J. (1994c). Tagging speech repairs. In ARPA Workshop on Human Language Technology, Princeton.
Hindle, D. (1983). Deterministic parsing of syntactic nonfluencies. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, pages 123-128.
Labov, W. (1966). On the grammaticality of everyday speech. Paper presented at the Linguistic Society of America Annual Meeting.
Levelt, W. J. M. (1983). Monitoring and self-repair in speech. Cognition, 14:41-104.
Nakatani, C. and Hirschberg, J. (1993). A speech-first model for repair detection and correction. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, pages 46-53.
Wang, M. Q. and Hirschberg, J. (1992). Automatic classification of intonational phrase boundaries. Computer Speech and Language, 6:175-196.
Weischedel, R., Meteer, M., Schwartz, R., Ramshaw, L., and Palmucci, J. (1993). Coping with ambiguity and unknown words through probabilistic models. Computational Linguistics, 19(2):359-382.
Young, S. R. and Matessa, M. (1991). Using pragmatic and semantic knowledge to correct parsing of spoken language utterances. In Proceedings of the 2nd European Conference on Speech Communication and Technology (Eurospeech 91), Genova, Italy.
1994
A COMPUTATIONAL VIEW OF THE COGNITIVE SEMANTICS OF SPATIAL PREPOSITIONS*

Patrick Olivier
Centre for Intelligent Systems
University of Wales Aberystwyth
Dyfed, SY23 3DB, UK
Internet: plo@aber.ac.uk

Jun-ichi Tsujii
Centre for Computational Linguistics
University of Manchester Institute of Science and Technology
Manchester, M60 1QD, UK
Internet: tsujii@ccl.umist.ac.uk

Abstract

This paper outlines the linguistic semantic commitments underlying an application which automatically constructs depictions of verbal spatial descriptions. Our approach draws on the ideational view of linguistic semantics developed by Ronald Langacker in his theory of Cognitive Grammar, and the conceptual representation of physical objects from the two-level semantics of Bierwisch and Lang. In particular the dimensions of the process of conventional imagery are used as a metric for the design of our own conceptual representation.

INTRODUCTION

An increased interest in the semantics of spatial language has accompanied the recent rise in popularity of cognitive linguistics (see [Rudzka-Ostyn1988]), yet computational approaches are thin on the ground. This can in part be accounted for by the rather descriptive and unformalized nature of the theories developed, but is more likely due to the adoption of an ideational view of linguistic meaning which, it seems, is an anathema to computational linguists. In this paper we take a serious, if informal, look at Ronald Langacker's theory of Cognitive Grammar [Langacker1987], [Langacker1988a], [Langacker1988b], more specifically its commitment to conceptualization and the use of conventional imagery. The first section of this paper introduces the semantics of projective prepositions (eg. "in front of", "behind", "left of", "right of"), illustrating that these seemingly simple predicates are surprisingly complex and ambiguous. In the light of this discovery the following sections consider Langacker's view of linguistic meaning, and the design of a conceptual representation for spatial prepositions motivated by the consideration of the various dimensions of conventional imagery. The representation has been implemented for English spatial descriptions and after demonstrating its utility for the automatic depiction of verbal descriptions, we finally contrast our approach against previous attempts.

*This research was kindly funded by the Matsushita Electric Industrial Company Limited.

THE SEMANTICS OF PROJECTIVE PREPOSITIONS

In this section we characterize the components of the spatial meaning of projective prepositions that have motivated our interest in cognitive linguistic approaches. Throughout, the decoding problem, that is, generating adequate meanings for a locative expression in a particular situation, is our benchmark for representational adequacy. The spatial meaning of a projective prepositional predication (eg. "the chair is in front of the desk") can include: a constraint on the proximity of the located (LO) (eg. "the chair") and reference (RO) (eg. "the desk") objects; a directional constraint on the LO relative to the RO; and a relative orientation between the speaker, LO and RO. Constraints are of an intrinsically fuzzy nature such that different relative positions and orientations of the speaker, RO and LO satisfy the predication to different degrees, and combinations of constraints on the RO and LO originating from different predications must be readily accommodated.

PROXIMITY CONSTRAINTS

Projective prepositions necessarily place a constraint on the proximity of the located object and the reference object.
Predications such as "the chair is in front of the desk" constrain the "desk" and "chair", to some degree, to be proximal to each other. Conversely projective prepositions such as "away from" predicate a distal relationship between the located and reference object. The degree of the proximity expressed in any projective prepositional predication varies according to a number of considerations including: the spatial context (the spatial extent and content of the scene described); and the absolute and relative sizes of the LO and RO (eg. a car that is "left of" a lorry is typically less proximal than an apple and orange similarly described).

DIRECTIONAL CONSTRAINTS

In addition to the constraint on the proximity of the LO and RO, projective prepositions place a constraint on the position of the LO relative to a particular side of the RO. In the case of the intrinsic interpretation (see section ) of a predication such as "the stool is in front of the desk", the "stool" is located in some region of the space defined by the half-plane that is the intrinsic front of the "desk". Intuitively, the closer the "stool" is to the region of space defined by the projection of the desk's dimensions into this space, the more the spatial arrangement conforms to the prototypical interpretation of the predication.

REFERENCE FRAMES

Intrinsic, deictic and extrinsic interpretations of projective prepositions differ according to the reference frame with respect to which the directional constraint is characterized [Retz-Schmidt1988]. Figure 1 is an example of a scene that might give rise to predications which invoke each of these reference frames.

Figure 1: Intrinsic, deictic and extrinsic uses of "in front of"

INTRINSIC

In the intrinsic case the reference frame is centered at the RO and adopts the intrinsic orientations of the RO. Thus a LO is deemed to be "in front of" the RO under an intrinsic reading if it is located in the direction defined by the vector that is the half-plane of the front of the RO. In figure 1 stool number 1 is intrinsically "in front of the desk".

DEICTIC

The reference frame for a deictic interpretation is centered at the speaker and adopts the speaker's orientation; deictic readings can be invoked explicitly with qualifications such as "from where we are standing"; when the RO has no intrinsic or extrinsic sideness relating to the preposition used; or when intrinsic or extrinsic interpretations are ruled out on other grounds (eg. the impossibility of spatially arranging the objects as required by the interpretation). In figure 1 stool number 2 is deictically "in front of the desk".

EXTRINSIC

Extrinsic readings can occur when the RO has no intrinsic sides relating to the locative preposition (eg. for objects such as trees) but is in close proximity to another object that is strongly sided (eg. such as a house); in which case the reference frame capturing the intrinsic orientations of the stronger sided object can be adopted by the RO. Referring to figure 1, the chair is extrinsically "in front of stool number 3"; here the stool has inherited an extrinsic front from the right wall.

INTERACTING CONSTRAINTS

Typically an object is located with respect to more than one RO by means of multiple spatial predications. This places a requirement on the meaning representation of spatial predications that they must be capable of being easily combined, to give rise to a cumulative meaning.

COGNITIVE GRAMMAR AND LINGUISTIC MEANING

Cognitive grammar is comprised of five basic claims as to the composition of linguistic meaning; following [Langacker1988b] these are:
1. Meaning reduces to conceptualization.
2.
Polysemy is the norm and can be adequately accommodated by representing the meaning of a lexical item as a network of senses related by categorizing relationships of schematicity or extension.
3. Semantic structures are characterized relative to cognitive domains. Domains are hierarchically organized in terms of conceptual complexity, where the characterization of a concept at one level can draw on lower level concepts. While there need not necessarily be any conceptual primitives, the lowest level domains are termed basic domains and include our experience of time, space, color etc.
4. A semantic structure derives its value through the imposition of a "profile" upon a "base".
5. Semantic structures incorporate conventional "imagery", our ability to construe the same informational content in different ways.
That meaning reduces to conceptualization (thesis 1), is characterized relative to cognitive domains (thesis 3), and incorporates conventional imagery (thesis 5) runs in stark contrast to the heavy emphasis placed on truth conditions and formalization by current computational linguistic approaches. We have attempted to tackle the informality of this ideational view of meaning by addressing one particular basic cognitive domain, that of oriented three-dimensional space, and implementing a restricted version of Langacker's process of conceptualization by means of conventional imagery. To verify the utility of the resulting conceptualization, we use the interpretations of spatial expressions so generated (the resulting images) to automatically construct depictions of the scene. Thesis 2, that prototypes should replace traditional objective categories, lies at the very heart of cognitive semantics [Taylor1989], and though it is widely accepted as true for semantic and most other linguistic categories, prototype theory is not conducive to rigorous formalization and has consequently been ignored by mainstream computational linguistics. Likewise our concern is with meaning variations that originate from different construals of the same information in the process of conventional imagery (thesis 5).

IMAGERY AND ITS IMPLEMENTATION

This special technical use of imagery (not to be confused with the psychological term meaning the formation and manipulation of mental images) refers to "our amazing mental ability to 'structure' or 'construe' a conceived situation in many alternate ways" [Langacker1988b], as opposed to traditional semantic approaches whose concern is with informational content alone. Thus "every conception reflects some particular construal of its content". Langacker identifies six important dimensions of imagery; in our semantic analysis of spatial expressions we are interested in just three of these:
1. level of specificity
2. scale and scope of predication
3. perspective
The remainder of this section is a characterization of each of these dimensions and the consequences that their consideration has with respect to the design of a conceptual representation for spatial expressions.

REPRESENTING 3-D SPACE

The basic cognitive domain relative to which the spatial meaning of projective prepositions is characterized is structured three-dimensional space. In our system space is represented using an orthogonal axis system we refer to as the DCS (Domain Coordinate System). In the process of image construction conceptual objects will be constrained to locations described relative to the DCS. The DCS mirrors the speaker's perceptual assignment of axes to a scene: the x-axis extends from deictic left to deictic right, the y-axis from deictic front to deictic back, and the z-axis extends vertically.

LEVEL OF SPECIFICITY

The level of specificity of conventional imagery addresses the issue of the degree of detail with which an entity is characterized. Specificity has already been mentioned in connection with the construction of the network of polysemous senses of a lexical item; on the other hand, concerning different lexical items, we can readily identify different spatial predications that are schematic with respect to each other. Consider the sentences below.
Likewise our concern is with meaning variations that originate from different construals of the same information in the process of conventional imagery (thesis 5).

IMAGERY AND ITS IMPLEMENTATION

This special technical use of imagery (not to be confused with the psychological term meaning the formation and manipulation of mental images) refers to "our amazing mental ability to 'structure' or 'construe' a conceived situation in many alternate ways" [Langacker1988b], as opposed to traditional semantic approaches whose concern is with informational content alone. Thus "every conception reflects some particular construal of its content". Langacker identifies six important dimensions of imagery; in our semantic analysis of spatial expressions we are interested in just three of these:

1. level of specificity
2. scale and scope of predication
3. perspective

The remainder of this section is a characterization of each of these dimensions and the consequences that their consideration has with respect to the design of a conceptual representation for spatial expressions.

REPRESENTING 3-D SPACE

The basic cognitive domain relative to which the spatial meaning of projective prepositions is characterized is structured three-dimensional space. In our system space is represented using an orthogonal axis system we refer to as the DCS (Domain Coordinate System). In the process of image construction, conceptual objects will be constrained to locations described relative to the DCS. The DCS mirrors the speaker's perceptual assignment of axes to a scene: the x-axis extends from deictic left to deictic right, the y-axis from deictic front to deictic back, and the z-axis extends vertically.

LEVEL OF SPECIFICITY

The level of specificity of conventional imagery addresses the issue of the degree of detail with which an entity is characterized.
Specificity has already been mentioned in connection with the construction of the network of polysemous senses of a lexical item; on the other hand, concerning different lexical items, we can readily identify different spatial predications that are schematic with respect to each other. Consider the sentences below.

(a) The chair is near the desk.
(b) The chair is in front of the desk.
(c) The chair is facing the desk.

Sentence (a) simply predicates proximity; (b) predicates both proximity and a positioning of the LO relative to a particular side of the RO (the issue of which side of the reference object the located object is positioned with respect to is addressed as a consequence of the perspective dimension of conventional imagery); lastly (c) predicates proximity and a relative positioning of the LO with respect to the RO, with the additional anti-alignment of the front face normals of the two objects.

Schematic contrast dictates the minimum degree of detail we must maintain in our computational representation of the conceptual reference and located objects. In sentence (a) the objects can be thought of as structureless points; in (b) the representation of the RO must incorporate the notion of sidedness; and in (c) both the RO and LO are sided. We borrow Lang's conceptual representation of objects termed object schemata [Lang1993], constructed within Bierwisch and Lang's two-level semantics [Bierwisch and Lang1989]. The object schema for a desk is:

a  max        b  vert        c  across
a1 i-left     b1 i-bottom    c1 i-front
a2 i-right    b2 i-top       c2 i-back

In this first schema a, b and c label three orthogonal axes centered at the object, each of which can be instantiated by one or more dimensional assignment parameters (DAPs); a1-a2, b1-b2 and c1-c2 are corresponding half-axes. Each half-axis is labelled either nil or with an intrinsic side (e.g. i-front).
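The object schemata above can be sketched as a simple data structure. This is a minimal illustration only: the class and attribute names are our own, and it encodes just the parts of Lang's schemata discussed here (DAPs per axis, intrinsic and deictic side labels per half-axis, and the position and default extents mentioned below).

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class HalfAxis:
    intrinsic: Optional[str] = None   # e.g. "i-front"; None stands for "nil"
    deictic: Optional[str] = None     # e.g. "d-right", assigned once placed in the DCS

@dataclass
class ObjectSchema:
    daps: Dict[str, str]              # dimensional assignment parameters per axis a, b, c
    half_axes: Dict[str, HalfAxis]    # keys: a1, a2, b1, b2, c1, c2
    position: Optional[Tuple[float, float, float]] = None  # DCS coordinate
    extents: Optional[Tuple[float, float, float]] = None   # default extents along a, b, c

# The desk schema from the text: a max, b vert, c across, with intrinsic sides.
desk = ObjectSchema(
    daps={"a": "max", "b": "vert", "c": "across"},
    half_axes={
        "a1": HalfAxis("i-left"),   "a2": HalfAxis("i-right"),
        "b1": HalfAxis("i-bottom"), "b2": HalfAxis("i-top"),
        "c1": HalfAxis("i-front"),  "c2": HalfAxis("i-back"),
    },
)
```

Characterizing the schema relative to the DCS would then amount to filling in `position` and the `deictic` slots of the half-axes.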
This representation is augmented with both a three-dimensional Cartesian coordinate which, when assigned, locates the conceptual schema relative to the DCS; and the values of the default extents for the object type along the axes a, b and c.

Imagery implies an imager; that is, the image exists in and with respect to the cognitive world of the speaker (by default) and this necessarily has important consequences. With respect to spatial language, issues pertaining to perspective, that is, taking account of the imager, include the speaker's vantage point and orientation.

ORIENTATION

The interpretation of some spatial expressions is dependent on assumptions as to the speaker's orientation with respect to the objects in the scene (e.g. whether A is "to the left of" B in a scene is dependent on the orientation of the speaker/viewer); other expressions are orientation independent, such as "above" and "below", which implicitly refer to the downward pull of gravity (although in space verticality is speaker dependent). When an object schema is characterized relative to the DCS it is both assigned a Cartesian position (as we show later), and its half-axes are assigned deictic sides according to their relative orientation with the observer. For example, if a desk is positioned "against the left wall" as in figure 1, this would result in an instantiated conceptual schema for the "desk" of:

a  max                 b  vert                  c  across
a1 i-left   d-front    b1 i-bottom  d-bottom    c1 i-front  d-right
a2 i-right  d-back     b2 i-top     d-top       c2 i-back   d-left

Here a1 is the intrinsic left side but the deictic front of the desk. (DAPs are not of direct interest here, although they are fundamental to the process of dimensional designation and are important where dimensional assignment might result in a reorientation of the conceptual object, e.g. "the pole is high".)

VANTAGE POINT

The speaker's vantage point is another factor that determines the interpretation of spatial expressions in a scene.
The notions of deictic and intrinsic interpretations of projective prepositions can be accounted for purely by recognizing that in each the speaker adopts a different vantage point. For deictic interpretations the vantage point is the speaker's actual position. The vantage point for intrinsic interpretations is the functionally relevant position with respect to a reference object; for example, "left of the desk" under the intrinsic interpretation uses a vantage point that is directly in front of the desk (the typical configuration when a human uses a desk).

The meaning of a projective preposition is conceptually represented as a spatial constraint on the conceptual schema of the located object which extends out from a particular side of a reference object, the precise nature of which we describe in the next subsection. In our system the lexicalized constraint is of the form of a two place predicate:

< zoneprox X:side Y >

where X is the reference object and Y the located object. The parameter side depends on the preposition. Thus the schematicity we observed earlier is explicitly represented:

(a) Y is near X.
< zoneprox X Y >
Proximity constraint between X and Y.

(b) Y is in front of X.
< zoneprox X:front Y >
Proximity and alignment of Y with the front of X.

(c) Y is facing X.
< zoneprox X:front Y:back >
Proximity, alignment and specific "facing" orientation.

SCOPE OF PREDICATION

Scope refers to exactly how much of a cognitive domain is included in the characterization. Minimally, the scope of an image for "next to" must encompass at least the reference and subject objects and some region of space separating them. We implement the spirit of this concept by realising the lexicalized constraint for a projective preposition as a potential field fixed at the reference object's position in the DCS (a technique borrowed from robot manipulator path-planning [Khatib1986]). The proximity and directional nature of the constraint < zoneprox ... > is captured using a potential field P, where:

d_x = (x - x0)                            (1)
d_y = (y - y0)                            (2)
P = P_prox + P_dir                        (3)
P_prox = (K_prox / 2) (d_x - L_prox)^2    (4)
P_dir = (K_dir / 2) d_y^2                 (5)

Here the x-axis points in the direction of the half-axis of the particular side of the reference object in the DCS; in the case of "in front of", y is the perpendicular direction in the horizontal plane; (x0, y0) is the Cartesian coordinate of the reference object in the DCS; and the lower the value of P for a location (x, y) of the located object, the better the spatial constraint is satisfied. The minimum for the field can be quickly computed using gradual approximation [Yamada et al.1988]. The values of K_prox, L_prox and K_dir are dependent on the located and reference objects and are set on the basis of scale considerations (see below). Multiple spatial predications over an object are simply accommodated within the potential field model by linear addition of component fields.

SCALE OF PREDICATION

The concept of scale relates to the object dependency of the degree of proximity and directional constraint afforded by a preposition: where "X is left of Y", and X and Y are houses, the meaning of this predication would contrast with its meaning if X and Y were pieces of fruit. The concept of proximity and directional constraint predicated by "left of" is apparent in both cases; what differs is the scale relative to which it is characterized. Scale effects are realised in the mechanism by which the constants of the potential field are set. For the potential field P, the effect of the constants on the nature of the constraint is:

1. K_prox: proportional to the range of the possible separations of X and Y that would still satisfy the predication.

2. L_prox: the default separation of X and Y.

3. K_dir: proportional to the range of directions that would still satisfy the predication.
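The potential-field constraint and its minimisation can be sketched in code. This is a minimal illustration under stated assumptions: all function and parameter names are our own, the field's local x-axis is taken to coincide with the DCS x-axis for simplicity, and a naive gradient-descent loop stands in for the gradual-approximation procedure of [Yamada et al.1988].

```python
def zoneprox_field(x0, y0, k_prox, l_prox, k_dir):
    """Potential field for one <zoneprox RO:side LO> constraint.

    Equations (1)-(5): the proximity term has its minimum at the
    default separation l_prox along the side's direction, and the
    directional term penalises perpendicular (lateral) deviation.
    """
    def field(x, y):
        dx = x - x0                                 # eq. (1)
        dy = y - y0                                 # eq. (2)
        p_prox = 0.5 * k_prox * (dx - l_prox) ** 2  # eq. (4)
        p_dir = 0.5 * k_dir * dy ** 2               # eq. (5)
        return p_prox + p_dir                       # eq. (3)
    return field

def minimise(fields, start=(0.0, 0.0), step=0.01, iters=5000):
    """Find a low point of the linearly summed fields (multiple
    predications combine by addition) via numerical gradient descent."""
    x, y = start
    eps = 1e-5
    total = lambda px, py: sum(f(px, py) for f in fields)
    for _ in range(iters):
        gx = (total(x + eps, y) - total(x - eps, y)) / (2 * eps)
        gy = (total(x, y + eps) - total(x, y - eps)) / (2 * eps)
        x, y = x - step * gx, y - step * gy
    return x, y

# A single constraint with reference object at the origin and a default
# separation of 1.0: the located object settles one unit out along the
# constrained side, directly on its axis.
f = zoneprox_field(0.0, 0.0, k_prox=1.0, l_prox=1.0, k_dir=1.0)
x, y = minimise([f], start=(3.0, 2.0))
```

Adding a second field to the list passed to `minimise` models a second predication over the same object; the minimum of the summed field is then a compromise between the two constraints.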
Thus for a reference object that is a house, K_prox, L_prox and K_dir must all be considerably greater than for a piece of fruit. The precise values can only reasonably be set as a result of some experimental investigation; currently K_prox and L_prox are linearly dependent on the sum of the extents of the reference and subject objects in the direction of spatial alignment, and K_dir on the perpendicular extent of the reference object in the plane of the constraint.

GENERATING DEPICTIONS

After using gradual approximation to find the position of the minimum in the potential fields representing the spatial predications over a particular object, this point can be regarded as a probable interpretation. By tying each conceptual object to a graphical model, and interpreting the DCS as the viewer's perceptual axis system, conceptual interpretations can be rendered as scene depictions. Figure 2 illustrates one depiction of the cumulative interpretation of the following verbal description, in which all projective prepositions are viewed intrinsically.

"I am in a room. Against the left wall is a long desk. Against the back wall is a short desk. In front of the long desk is a chair. Another chair is to the left of the long desk. The chair in front of the desk is near the short desk."

OTHER APPROACHES AND CLOSING REMARKS

Nearly all the work in recent years on computing the meanings of spatial prepositions stems from the prototype semantics of either Herskovits [Herskovits1985], [Herskovits1986] or Talmy [Talmy1983]. Schirra [Schirra and Stopp1993] adopts Herskovits' notion of a core meaning, and implements this as a typicality field. The ability to sum fields of different predications satisfies the compositionality requirement.
Yet representational poverty exists with respect to the spatial and perceptual characteristics of the objects: while directionality and proximity constraints are adequately captured for the intrinsic reference frame and set of objects, variation in the degree of constraint (for example, depending on the size of the reference object) and the potential for ambiguity arising from interpretations with respect to different reference frames are not accounted for.

Underlying Kalita's work [Kalita and Badler1991] is a conceptualization of the space around a reference object as six orthogonal rectangular projected regions (based upon an enclosing cuboid idealization of the object) due to Douglas [Douglas and Novick1987]. Using this model and following Talmy's work, the semantics of projective prepositions are lexicalized as geometric-relation schemas. Reference frame ambiguity is not addressed; directionality is too tightly restricted to one of the six rectangular regions, and proximity constraint is left to the "underlying constraint satisfaction techniques and the use of a weight slot in the template for constraint representation".

Within the framework of the LILOG project [Maienborn1991] Ewald Lang implemented the two-level approach to the semantics of dimensional adjectives in which the perceptual and dimensional properties of objects are conceptually represented as object schemata [Bierwisch and Lang1989].

(Natural language sentences are parsed to three-branch quantifiers using a Prolog DCG grammar; the logical predicates are the input to the cognitive semantic processor; and the resulting conceptual representations are converted to depictions by the depiction module. The cognitive semantic processor and the depiction module are implemented in Smalltalk/Objectworks.)

Figure 2: Computer generated depiction of a verbal description
Further developed for projective spatial predications, Lang's object schemata are capable of distinguishing deictic and intrinsic readings, though without explicit reference to a quantitative space (i.e. actual scenes and observers) as in the case of Schirra and Kalita.

Our system represents a first attempt at, and a highly specialized implementation of, the conventional imagery process that is a component of the cognitive grammarian's view of linguistic semantics. Its performance, in terms of generating all possible interpretations, and the quality of the interpretations, constitutes a significant advance on previous approaches.

References

[Bierwisch and Lang1989] M Bierwisch and E Lang. 1989. Dimensional Adjectives: Grammatical Structure and Conceptual Interpretation. Springer-Verlag, Berlin Heidelberg New York.

[Douglas and Novick1987] S Douglas and D Novick. 1987. Consistency and variance in spatial reference. In Proceedings of the Ninth Annual Cognitive Science Society Meeting, pages 417-426.

[Herskovits1985] A Herskovits. 1985. Semantics and pragmatics of locative expressions. Cognitive Science, 9:341-378.

[Herskovits1986] A Herskovits. 1986. Language and spatial cognition -- an interdisciplinary study of the prepositions in English. Cambridge University Press, Cambridge (UK).

[Kalita and Badler1991] J Kalita and B Badler. 1991. Interpreting prepositions physically. In Proceedings AAAI-91, pages 105-110.

[Khatib1986] O Khatib. 1986. Real-time obstacle avoidance for manipulators and mobile robots. The International Journal of Robotics Research, 5(1):90-98.

[Lang1993] E Lang. 1993. A two-level approach to projective prepositions. In C Zelinsky-Wibbelt, editor, The semantics of prepositions: from mental processing to Natural Language processing. Mouton de Gruyter, Berlin.

[Langacker1987] R W Langacker. 1987. Foundations of Cognitive Grammar, Volume I, Theoretical Prerequisites. Stanford University Press, Stanford.
[Langacker1988a] R W Langacker. 1988a. An overview of cognitive grammar. In B Rudzka-Ostyn, editor, Topics in Cognitive Linguistics, pages 3-48. Benjamins, Amsterdam-Philadelphia.

[Langacker1988b] R W Langacker. 1988b. A view of linguistic semantics. In B Rudzka-Ostyn, editor, Topics in Cognitive Linguistics, pages 49-90. Benjamins, Amsterdam-Philadelphia.

[Maienborn1991] J R Maienborn. 1991. Processing spatial knowledge in LILOG. IWBS Report 157, IBM Germany.

[Retz-Schmidt1988] G Retz-Schmidt. 1988. Various views on spatial prepositions. AI Magazine, 9(2):95-105.

[Rudzka-Ostyn1988] B Rudzka-Ostyn, editor. 1988. Topics in Cognitive Linguistics. Benjamins, Amsterdam-Philadelphia.

[Schirra and Stopp1993] J R J Schirra and E Stopp. 1993. ANTLIMA -- a listener model with mental images. In Proceedings of IJCAI, pages 175-180.

[Talmy1983] L Talmy. 1983. How language structures space. In H Pick and L Acredolo, editors, Spatial Orientation: Theory, Research, and Application, pages 225-282. Plenum Press, New York.

[Taylor1989] J R Taylor. 1989. Linguistic categorization: prototypes in linguistic theory. Oxford University Press, Oxford.

[Yamada et al.1988] A Yamada, T Nishida, and S Doshita. 1988. Figuring out most plausible interpretation from spatial descriptions. In Proceedings of the 12th International Conference on Computational Linguistics, pages 764-769.
1994